Anthem Data Breach: A Wake-up Call for Security and Governance

By McAfee Cloud BU on Feb 09, 2015

The Anthem security breach and massive PII data exposure is an unfortunate reminder that breaches are now routine. Hackers can leverage the most basic vulnerabilities, such as static passwords, to gain access to protected systems. The breach also highlights a troubling trend: rather than devising innovative exfiltration schemes, hackers use unmonitored and unsecured cloud services, particularly unapproved cloud storage and file sync-and-share services, as a front-door data exfiltration vector.

The latest thoughts on how the breach occurred

While details are still emerging, here’s what we currently know based on unconfirmed reports:

  1. The attacker hacked a public facing administrative website and obtained admin credentials.
  2. The attacker queried the customer records database to download all of its PII, including Social Security numbers, names, and addresses.
  3. The attacker then exfiltrated all of this sensitive data by uploading a file to a personal cloud storage service account.

Anthem detected the breach because a database administrator noticed a query downloading all the PII data that no internal party had any knowledge of. While this breach occurred within an on-premises customer database, the same scenario could play out in a critical corporate business cloud service, such as Salesforce, ServiceNow, SuccessFactors, Office 365, or Box, all of which regularly contain sensitive data such as PII, PHI, billing information, IT information, or intellectual property.
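Detection of this kind can be approximated programmatically. The sketch below is a simplified illustration (the function name, thresholds, and data shape are our own assumptions, not Anthem's actual tooling): flag any query whose row count far exceeds the issuing account's historical baseline.

```python
from statistics import mean, stdev

def is_anomalous(history, rows_returned, sigma=3.0, floor=10_000):
    """Return True if a query's row count is far above the account's
    historical baseline (simplified z-score check).

    history       -- row counts of the account's past queries
    rows_returned -- row count of the query under review
    """
    if len(history) < 2:
        # Not enough data to baseline; fall back to a hard floor.
        return rows_returned > floor
    mu, sd = mean(history), stdev(history)
    threshold = mu + sigma * max(sd, 1.0)  # guard against zero variance
    return rows_returned > max(threshold, floor)

# A full-table PII dump dwarfs an admin's typical query sizes.
typical = [100, 120, 90, 110, 130]
is_anomalous(typical, 80_000_000)  # True
is_anomalous(typical, 150)         # False
```

A real deployment would baseline per table and per time window, and feed alerts into a SIEM rather than returning a boolean.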

This problem requires careful consideration

Two common extreme reactions to this type of breach are:

  1. We need to block all unsanctioned cloud services to prevent data exfiltration.
    While this may seem tempting in the face of such an extensive breach, employees rely on hundreds of cloud services every day to get their jobs done. Immediately prohibiting their use would have a catastrophic effect on productivity. Blocking also devolves into a game of whack-a-mole: IT blocks services, employees find new, often riskier services that are not blocked, and when IT detects and blocks those, the cycle begins again.
  2. We need to perform DLP across all cloud services to prevent any sensitive data from reaching the cloud.
    This approach avoids the productivity loss, but scanning all outbound traffic with DLP has limitations, particularly with newer DLP engines. One must first consider whether the DLP solution is tuned to minimize false positives and false negatives. A solution that does not leverage an existing, fully tuned DLP engine can easily introduce a whole new slew of problems through excessive false positives and negatives.

Secondly, one must consider: what happens if the hacker had invested a little more effort to encrypt the exfiltrated data? At McAfee (formerly Skyhigh Networks), we have seen evidence in real-world customer scenarios where hackers use steganography to embed sensitive data into a series of videos and upload them to YouTube. DLP alone would not catch this type of exfiltration.
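To make the limitation concrete, here is a toy sketch (the thresholds, patterns, and verdict names are illustrative assumptions, not a production DLP engine): pattern matching catches cleartext SSNs, but encrypted or compressed payloads carry no recognizable patterns, so a fallback entropy check routes uninspectable files to quarantine instead of silently allowing them.

```python
import math
import re
from collections import Counter

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; 8.0 means indistinguishable from random."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def dlp_verdict(payload: bytes) -> str:
    """Toy DLP check: block cleartext PII, quarantine high-entropy
    (likely encrypted/compressed, hence uninspectable) payloads."""
    if SSN_PATTERN.search(payload.decode("utf-8", errors="ignore")):
        return "block"        # cleartext SSN detected
    if shannon_entropy(payload) > 7.5:
        return "quarantine"   # cannot inspect; do not let it pass
    return "allow"
```

Pattern matching alone would return "allow" for an encrypted PII dump; the entropy fallback is what catches it.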

A better approach to safeguarding against this type of attack

While blocking and DLP are important pieces of the solution, the full solution requires more thought and a more measured approach. Below we’ve outlined a holistic “cloud security and governance lifecycle” that incorporates best practices and industry-standard recommendations from the Verizon Data Breach Investigations Report.

  1. No static passwords: Enforce strict multi-factor authentication and SSO (single sign-on) for your critical business assets, such as databases, and business-critical cloud services, such as Salesforce, ServiceNow, Workday, Box, and SuccessFactors.
  2. Encrypt or tokenize sensitive data at rest: Encrypt (or tokenize) all sensitive data fields inside your databases and cloud services. Administrators should not have access to sensitive fields in the clear, as this presents a major threat when admin credentials are compromised, as they were in this incident.
  3. Establish a comprehensive “cloud security and governance” framework with the following steps:
    1. Discover all the cloud services used in your organization and assess the risk of each service, particularly those that enable data uploads.
    2. Group all cloud services into categories based on business relevance and security risk: a) IT Sanctioned, b) Prohibited, and c) Permitted.
    3. If you do not have an “IT sanctioned” cloud storage service, consider adopting and enabling one for your users and LOBs. Otherwise, your users and LOBs will continue to use all the available cloud services.
    4. Implement an “educate and block” policy for the redundant cloud storage services in the “Prohibited” category, particularly high-risk ones. This should be deployed using “closed-loop” remediation with your existing proxies and firewalls.
    5. Enable monitoring for mis-categorized cloud services on your proxy and firewall, as mis-categorization potentially suggests a proxy-circumvention tactic.
    6. Closely monitor unmatched data uploads to unknown cloud services and assess the source and identity.
    7. Closely monitor data upload and service access count anomalies for all your cloud traffic as they indicate a potential hacker trying to figure out an exfiltration path.
    8. Implement compromised credential anomaly detection for all business critical cloud services.
    9. Enable DLP for all data uploads by integrating with your existing on-premise DLP engine that is tuned for your environment. Make sure you have a clear policy on how to deal with encrypted files and communications. We recommend the “quarantine” action for these files.
    10. Employ a comprehensive audit log mechanism and anomaly detection for high-risk activities such as “administrative usage” and “downloading large amounts of sensitive data in a short time.”
    11. Integrate your cloud service anomalies to your existing SIEM product, such as Splunk.
  4. Leverage the “Network Effect” of your community peers and other customers. It is critical to leverage the intelligence and best practices from your industry peers, other users, and partners so that everyone can share security intelligence and cloud security and governance best practices.
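The monitoring steps above can be sketched as a single triage function. This is a minimal illustration under assumed inputs (the service registry, thresholds, and verdict strings are hypothetical, not a vendor API): prohibited services are blocked, unknown services are investigated, and uploads to sanctioned services are checked against the user's historical volume baseline.

```python
from statistics import mean, stdev

# Hypothetical registry reflecting the categorization in step 2.
SANCTIONED = {"box.com"}
PROHIBITED = {"riskyshare.example"}

def review_upload(user_history_mb, service, upload_mb, sigma=3.0):
    """Toy triage of one upload event, combining categorization
    with the anomaly checks described in the steps above."""
    if service in PROHIBITED:
        return "block-and-educate"            # educate-and-block policy
    if service not in SANCTIONED:
        return "investigate-unknown-service"  # unmatched upload destination
    if len(user_history_mb) >= 2:
        mu, sd = mean(user_history_mb), stdev(user_history_mb)
        if upload_mb > mu + sigma * max(sd, 1.0):
            return "alert-volume-anomaly"     # possible exfiltration probe
    return "allow"

# Typical daily uploads of ~5-9 MB make a 5 GB push stand out.
history = [5, 8, 6, 7, 9]
review_upload(history, "box.com", 5000)  # "alert-volume-anomaly"
```

In practice each verdict would map to closed-loop remediation on existing proxies and firewalls, with anomalies forwarded to the SIEM as in step 11.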

An offer to help

At McAfee, we believe that serious incidents, such as the Anthem breach, provide an opportunity for the industry to take a step back and think strategically and holistically on security measures that can prevent breaches of this magnitude.

Please reach out to the McAfee Cloud Security Experts to discuss some of the aforementioned best practices and how we can help you with security and governance for your cloud services.

Finally, we sincerely appreciate Anthem’s immediate response: notifying all affected users and engaging both the FBI and Mandiant in the investigation. The risk of a breach extends far beyond one company or one incident; it is a collective responsibility of the security industry. The best way to defend data security is to take a holistic and communal approach.




About the Author

McAfee Cloud BU

Learn about cloud threats, the latest cloud security technologies, and the leading approaches for protecting data in cloud services.


Categories: Cloud Security
