April 8, 2010
By: Slavik Markovich
It's hard enough to lock down sensitive data when you know exactly which server the database is running on, but what will you do when you deploy virtualization and these systems are constantly moving? Making sure your own database administrators (DBAs) and system administrators aren't copying or viewing confidential records is already a challenge; how will you know whether your cloud computing vendor's staff members are using their privileges appropriately? These are just two of the obstacles that any enterprise must overcome to deploy a secure database platform in a virtual environment or in the cloud. In some cases, these concerns have been preventing organizations from moving to virtualization or cloud computing.
Security in a Dynamic Systems Environment
Whether we're talking about your own VMware data center or an Amazon EC2-based cloud, one of the major benefits is flexibility. Moving servers, and adding or removing resources as needed, allows you to maximize the use of your systems and reduce expense. But it also means that new instances of the databases holding your sensitive data are constantly being provisioned (and de-provisioned). While you gain flexibility, monitoring data access becomes much more difficult. If the information in those applications is subject to regulations like the Payment Card Industry Data Security Standard (PCI DSS) or the Health Insurance Portability and Accountability Act (HIPAA), you need to be able to demonstrate to auditors that it is secure.
As you look at solutions to monitor these "transient" database servers, the key to success will be finding a methodology that can be deployed on new virtual machines (VMs) without manual intervention. Each of these VMs will need a sensor or agent running locally, and this software must be provisioned automatically along with the database software, without intrusive system management such as rebooting whenever you need to install, upgrade, or update the agents. Better still, if the agent can automatically connect to the monitoring server, you'll avoid constantly reconfiguring the management console to add or remove servers. The right architecture will let you see exactly where your databases are hosted at any point in time, while centrally logging all activity and flagging suspicious events across all servers, wherever they are running.
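As a concrete illustration of this hands-off provisioning, the sketch below shows what a sensor's self-registration with a monitoring console might look like. All of the names here (the console host, the database instance, the payload fields) are hypothetical and not any vendor's actual API; the point is simply that the agent can announce itself when the VM comes up, so no one has to touch the console.

```python
import json
import socket
import uuid

def build_registration(console_host: str, db_instance: str) -> dict:
    """Build the self-registration payload a freshly provisioned
    sensor would send to the central monitoring console.

    Field names are illustrative only, not a real product's API.
    """
    return {
        "sensor_id": str(uuid.uuid4()),    # stable ID minted at first boot
        "hostname": socket.gethostname(),  # wherever the VM happens to land
        "db_instance": db_instance,        # the database this sensor watches
        "console": console_host,           # where policy updates and alerts flow
    }

# Hypothetical console and instance names, for illustration:
payload = build_registration("monitor.example.internal", "crm-oracle-01")
print(json.dumps(payload, indent=2))
```

Because the sensor pushes its identity and location to the console rather than waiting to be discovered, the console always knows where each database is hosted, which is exactly the visibility described above.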
Monitoring Intra-Server and WAN Traffic
Data center virtualization and cloud computing architectures differ most significantly in their network topology, so they pose different challenges when it comes to monitoring data access. Many current database activity monitoring solutions rely on a "network sniffing" model to identify malicious queries, but in virtual and cloud environments this will not be sufficient. Nor will simply adding a local agent that forwards all traffic to a server for processing be efficient in these models. You'll need a solution architected for distributed processing, where the local sensor can analyze traffic autonomously. Here's why.
For cloud computing, network bandwidth (and even more importantly, network latency) will render off-host processing too inefficient. The whole concept of cloud computing prevents you from co-locating an analysis server close to your databases; you simply won't know where they are. The time and resources spent sending every transaction to a necessarily remote server for analysis will therefore bog down network performance and prevent timely interruption of malicious activity.
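A rough back-of-envelope calculation makes the latency point concrete. The numbers below are assumptions chosen for illustration (a 50 ms WAN round trip to a remote analyzer, a 0.2 ms on-host check), not measurements of any real deployment:

```python
# If every statement must complete a round trip to a remote analysis
# server before it is allowed to proceed, WAN latency alone caps the
# serial throughput of a single connection.
wan_rtt_s = 0.050        # assumed: 50 ms round trip to a remote analyzer
local_check_s = 0.0002   # assumed: 0.2 ms for an on-host policy check

max_serial_tx_remote = 1 / wan_rtt_s     # roughly 20 statements/s
max_serial_tx_local = 1 / local_check_s  # roughly 5,000 statements/s

print(max_serial_tx_remote, max_serial_tx_local)
```

Even with generous assumptions, the remote-analysis path is orders of magnitude slower per connection, which is why inline interception of malicious activity has to happen locally.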
In the case of data center virtualization, exactly the opposite problem arises. You need to be concerned about local traffic going from one VM to another, potentially on the same physical server. For example, your CRM application may be running on the same physical hardware as the Oracle instances that house your CRM data. This traffic never even hits the network; it goes straight from VM to VM at memory speeds. Attempting to send all of it off-host for analysis will quickly become a processing bottleneck, negating the efficiencies you expected from virtualization in the first place.
In both cases, make sure that whatever solution you choose uses a "smart agent": once security policy is set for a monitored database, the agent or sensor implements the necessary protection and alerting locally from that point on. This ensures the network doesn't become the gating factor for application server performance. For cloud computing (or, in fact, for remote management of distributed data centers), you'll also want to test the WAN (wide area network) capabilities of your chosen software. It should encrypt all traffic between the management console and the sensors to limit exposure of sensitive data, and compression can further ensure that policy updates and alerts are transmitted efficiently.
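To make the "smart agent" idea concrete, here is a minimal sketch of a policy check that runs entirely on the monitored host, shipping only compressed violation alerts back to the console. The policy structure, user names, and table names are invented for illustration; a real product's policy language differs, and the transport would also be encrypted in transit:

```python
import json
import zlib

# Illustrative local policy: flag reads of sensitive tables by any
# account other than the approved application account.
POLICY = {
    "sensitive_tables": {"customers", "card_numbers"},
    "approved_users": {"crm_app"},
}

def evaluate_locally(user: str, tables: set) -> bool:
    """Return True if the statement violates policy.

    This is the point of the smart-agent model: the check runs on the
    monitored VM itself, so routine traffic never leaves the host.
    """
    touched = tables & POLICY["sensitive_tables"]
    return bool(touched) and user not in POLICY["approved_users"]

def package_alert(user: str, tables: set) -> bytes:
    """Serialize and compress a violation alert; only these small
    payloads cross the WAN to the management console."""
    alert = {"user": user, "tables": sorted(tables), "action": "alert"}
    return zlib.compress(json.dumps(alert).encode())

print(evaluate_locally("jdoe", {"card_numbers"}))   # a violation
print(evaluate_locally("crm_app", {"card_numbers"}))  # approved account
```

The key design choice is that the full traffic stream is evaluated where it originates, and only the (rare, small) alerts consume WAN bandwidth.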
Outsiders Are Now Insiders
One of the most difficult elements to monitor in any database implementation is the activity of privileged users. DBAs and system administrators have many options at their disposal to access and copy sensitive information, often entirely undetected or in ways that can be easily covered up. The fact that in a cloud computing environment there will be unknown personnel at unknown sites with these privileges, coupled with the fact that you cannot possibly conduct the same level of background checks on third parties as you do for your own staff, makes this even more difficult.
One way to resolve this is through "separation of duties": essentially, ensuring that the activities of those privileged third parties are monitored by your own staff, and that the pieces of the solution on the cloud side of the network cannot be defeated without raising alerts. Another critical capability is being able to closely monitor individual data assets (for example, a credit card table), regardless of the method used to access them. Sophisticated users with privileges can create new views, insert stored procedures into a database, or generate triggers that compromise information without the SQL command even looking suspicious. Look for a system that knows when data is being accessed in violation of policy, without relying solely on query analytics.
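The view problem above can be sketched simply: enforce policy on the base objects a statement actually reads, not on the SQL text. The view name, table names, and one-level dependency resolution below are all simplifying assumptions for illustration; real dependency resolution is recursive and engine-specific:

```python
# Hypothetical catalog: which base tables each view reads.
VIEW_DEFINITIONS = {
    "v_customer_summary": {"customers", "card_numbers"},  # invented view
}
PROTECTED = {"card_numbers"}  # the monitored data asset

def resolve_base_tables(referenced: set) -> set:
    """Expand views into the base tables they read (one level only,
    for illustration)."""
    base = set()
    for obj in referenced:
        base |= VIEW_DEFINITIONS.get(obj, {obj})
    return base

def touches_protected(referenced: set) -> bool:
    """Policy fires on resolved base objects, not on query text."""
    return bool(resolve_base_tables(referenced) & PROTECTED)

# "SELECT * FROM v_customer_summary" never mentions card_numbers in
# its SQL text, yet it reads the protected table:
print(touches_protected({"v_customer_summary"}))  # True
```

A query-text filter scanning for "card_numbers" would miss this access entirely, which is why object-level monitoring matters against privileged insiders.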
These Architectures Are in Your Future
For some, the complexity of monitoring databases in a virtual or cloud architecture may lead them to conclude that it is simply not worth changing from dedicated systems. However, most enterprises will likely determine that it is simply a matter of time until they deploy applications with sensitive data on one of these models. Leading organizations have already begun to do so, and the tools are now catching up with the customer requirements driven by the issues raised above. If you are ready to try database virtualization, or databases in the cloud, security should not prevent you from moving forward. Just make sure your security methodologies adequately address the special cases outlined here.