Securing the Unsecured: State of Cybersecurity 2019 – Part II (October 10, 2019)


Recently the Straight Talk Insights team at HCL Technologies invited a social panel to discuss a critical question at the center of today’s digital transitions: How do companies target investments and change the culture to avoid being the next victim of a cyberattack?

In Part I of the series, we explored IT security trends for 2019 and ways companies can protect themselves from IoT device vulnerability. Today, we’re continuing the discussion by exploring the threat of cryptocrime, the nature of cybersecurity threats in the near future, and the steps that small- and medium-sized businesses can take to protect themselves.

Q3: How great is the threat to companies of “crypto crime”?

Ransomware is no longer the province of specific groups. At the RSA Conference this year, McAfee's own Raj Samani described the advent of the franchise model in crypto crime. As a result, we are seeing greater reach but fewer unique systems deploying ransomware. Still, we see enterprises failing in the same ways year after year and falling victim to these families of ransomware at scale.

As you work to make incident response an effective plank in mitigating the effect of phishing and initial ransomware infections, I'd ask: how does your incident response change in the cloud? Do you have incident response resources and provisions for SaaS vs. IaaS? How do you get the logs and resources you need from cloud providers to effectively investigate and ensure you have identified all affected nodes, or the initial attack vector? The time to figure that out isn't during time-compressed investigation stages, when everyone is under stress from an active threat.

No More Ransom recently marked its third anniversary. Security leaders like Raj Samani, and the companies that make up partnerships like the No More Ransom website, can offer basic protection against some forms of ransomware. This joint project with Europol and AWS has been an amazing journey to watch, and even to invest in, as it helps protect businesses against ransomware.

Q4: How can small businesses with limited resources protect the privacy of their customers?

The dwell time of threats in small and medium businesses ranges from 45 to 800 days, with the average moving toward the latter. Cloud-based information security SaaS (Software as a Service) is helping to level the playing field. To make continued progress, the venture capital firms backing small companies, and the public buying from those companies, need to assert an expectation of security as part of doing business.

Many restaurants and retail establishments are still small businesses today, run by families and individuals. In many of these stores there is a certain level of distrust of cloud and connected platforms, compared with point-of-sale systems the owners can put their hands on and feel they control. How do we gain the trust and attention of these small stakeholders, and help them either secure things in-house more strongly or make the move to cloud security services? We can't just have an answer that demands $4,000 or $40,000 to make the fix. Instead, we have to find every possible opportunity to go serverless and build more walled-garden capability for things like point of sale or small engineering platforms.

When it comes to small businesses interconnecting systems and moving into consumer cloud services, having these small companies hold identities is a challenge from a trust perspective. Forums and programs like the OpenID technologies—providing standards and enabling identity without spreading authorization infrastructure unnecessarily—have been instrumental in constraining the size of this problem.

Security spans everything. There are basic exercises you can do as a business customer to check your readiness. I am a huge fan of ESG's SOAPA as a method of mapping what assets you have at different levels of the organization. Ask yourself a basic question: can you keep control integrity when you go from one "tower" of connected capability—like on-premises—to the other silos or major cloud environments of your hybrid company? I'd also add that it costs nothing to follow some of your favorite security personalities. I follow people like Cisco's Wendy Nather and Katie Moussouris, the CEO of Luta Security, who is helping even small companies understand the market of bug bounties and vulnerability disclosure.

Here, too, public policy potentially has a natural role. Government requires health training—in a restaurant, for example—but does not necessarily require information security training at small- and medium-sized businesses. The natural consequences and motivations of insurance companies can actually be an ally here, requiring training in basic computer hygiene, security, and privacy as part of issuing liability policies for businesses.

Q5: What are some new cybersecurity threats that we can expect to see in the next year?

I expect to see more significant exploitation of the "seams" in cloud integrations. The recent Capital One breach was relatively benign in the scheme of things—the actor was a braggart hacktivist—but the media coverage emphasized the weakness of cloud integrations to many who might have more capability. We've seen spikes in dark web discussion around this, so the profile of cloud vulnerability is higher, and now we will see how the cat-and-mouse game between offense and defense proceeds.

I think it's worth adding that the next threat isn't as much the challenge, to me, as the enterprise reaching the next rung of maturity in the digital environment. Asset management, vulnerability reduction, and preparing the protection and visibility of cloud operations are all critical disciplines for the enterprise, no matter what the threat is.

Protect your devices. Protect your cloud—not in silos, but with an integrated strategy. Demand from your vendors the ability to integrate, so you can maintain a cohesive threat picture you can easily act on.

To read Part I of this two-part series, click here.

 

Securing the Unsecured: State of Cybersecurity 2019 – Part I (October 8, 2019)


Recently the Straight Talk Insights team at HCL Technologies invited a social panel to discuss a critical question at the center of today’s digital transitions: How do companies target investments and change the culture to avoid being the next victim of a cyberattack?

Alongside some fantastic leaders and technology strategists from HCL, Oracle, Clarify360, Duo Security, and TCDI, we explored the challenges of today’s hyper-connected and stretched security team.

Today, businesses operate in a world where over the last few years, more than 85% of business leaders surveyed by Dell and Dimensional Research say they believe security teams can better enable digital transformation initiatives if they are included early. Moreover, 90% say they can better enable the business if given more resources. Yet most of these same leaders assert that security is being brought in too late to enable digital transformation initiatives! These digital transformation trends—cloud, data, analytics, devices—are critical to the next generation of customer and employee experiences, and for the clear majority of companies, the transition of value chains is already in progress!

Below, we collate the insights from the course of the discussion.

Q1: What are some of the IT security trends for 2019? Are there particular cybersecurity challenges related to digital trends?

Digital isn't one trend—it's many. Plus, we can't stop running the business today. This forces a split in the skill investment available to companies, part of which MSSPs and system integrators can cover. The biggest challenge is extending information security into a multi-cloud world. All large enterprises are multi-cloud and hybrid, yet few security operations teams are prepared for that.

Part of solving that challenge is bringing in nascent ways of identifying anomalies and gaining scale—for example, through graph theory technology, which is critical to finding the little traces that represent defensive capability. Machine learning will soon be present throughout the information security technology stack. This shift must happen, because the challenge is more than new environments. The log volumes in cloud are material—and you pay for them, by the way—the formats are different, the collections are different, and the visibility is fragmented.

The harder thing here is that information security teams must adjust to ALL of this at ONCE. Great, you have AWS CloudTrail. Let me ask you a question: which parts of your security stack can see that, AND are tuned for it, AND can unify the risk identified there with on-premises-derived visibility? And if you can answer that positively, what about when I ask the same thing for Azure? Are you starting to think about the shift to resilience, or are you still thinking about defense and control exclusively?

I'd ask, though: as your team is investing in cloud, is it investing in the understanding and readiness to protect data science? Are you preparing the project cycle for your security team to become iterative as well, so it can even deliver these services? Identity and access management is part of the solution and a critical foundation. Effective governance and strategy can help you figure out which platforms have security-relevant data. While it's easy to say "see and save everything," you quickly find out how expensive that is, and how much trash is in there. At that point, you can start thinking about automation.

Focusing on data storage and data in motion has led us to consider more zero-trust to cut down on the amount of interstitial security complexity. To realize that vision, tokenization and indexing and many other technologies must continue to expand. We face an odd duality between the confidentiality and accessibility of making data useful in digital employee experience and customer experience.

It's about more than adding automation to conquer the complexity. The automation must have intelligence, and it must operate in a way that amounts to more than "I bought tech with buzzwords." So many platforms and products claim they do these things—but as you buy and implement, you need to focus on how, and on how hard they are to build and link together. Plus, how are you going to maintain them? Be careful, as we adjust to keep the pace of digital transformation, that we aren't trading one problem for another.

Finally, I'd note that at every level of the information security organization—not just the CISO—the people need to have a sense of purpose. What value do you add as a security professional to the customer experience? Why do you exist? We need to remember that, as customer journeys are the way digital transformation shows up. We have to think end-to-end.

Q2: What can companies do to protect themselves against vulnerabilities created by IoT devices?

Start with procurement. Look, I’d love to tell you that IoT security is a software problem, but that’s only part of it. It really starts with buying technology that is well-designed, and both the customer and the upstream vendor must enforce Secure Development Life Cycle (SDLC) internally.

To a certain degree, we need to treat IoT as completely untrusted. Google's BeyondCorp is a good goal for an entire organization's high-level vision of zero trust. Data introspection and device behaviors then need heavy inspection rather than assumptions of good behavior. We are advantaged in that encryption overhead is now almost negligible, thanks to RISC-based enhancements at the network interface level. The organization can think differently about data protection in a world where encryption carries a (relatively) cheap cost in latency and performance.

When I think about IoT security, I continue to go back to an example that really made an impression on me a couple years back: If the team at IKEA can sell an IoT lightbar for cheap that has basic randomization, locked services, and minimal platform build … I have to think that certainly we can do better in health technology, industrial control systems, and manufacturing technologies.

When it comes to governance, IoT has the potential to turn asset management issues up to "11" on a 10-point scale of concern. How do you define an authorized device? How do you authorize an untrusted device to send data into the system? What do you recognize as a managed device? How will your organization make conditional access decisions to use, aggregate, and modify data? Enterprise Architecture (EA) needs to be part of the plan for effective governance. In some ways, as an industry, EA got swept up in the boom and bust of specific analyst models of architecture that didn't prove out their value cases at a lot of organizations. In today's iterative digital world, architecture and simplicity have to be part of the IoT project's Minimum Viable Product in order to realize the scale needed later.

We can't manage IoT like laptops—these devices have fewer capabilities. Instead, we need more affirmative approaches that integrate the components of the ecosystem in a predictable and defined way, like trusted cloud. The default expectation for a device intended for a reduced-management environment should be heavy encryption, PKI validation, and locked-down, application-controlled execution built in out of the box.

When you take a step back and look at the problem as societal instead of the microcosm of a specific company’s product or implementation, public policy must enter into the intersection of law and devices at scale. We have to solve difficult questions like the role of liability and commercial incentives to build and deploy device platforms in a responsible way. As one example, when machine learning-led IoT decisions create a catastrophe, who is responsible? The owning company? The software vendor? The system integrator? All the above? In critical spaces like utilities and healthcare, we need the focus of meeting some level of criteria for devices to have minimum reasonable security.

Even at this scale, this, too could be a great place for graph theory and machine learning-led approaches to secure societal level device challenges like elections. It’s easily expressed as math—easily identified for loci and baseline deviations. We need investment, however, from government or non-traditional sources as the state/local government and education sectors have very long buying cycles, and the available budget for this problem hasn’t yet justified the extended R&D costs of these kinds of technological changes.

Even while these public policy shifts are emerging, the growing body of localized privacy law has created operational hurdles for the enterprise. As a microcosm, the privacy safeguards introduced in India's data localization law represent many different interests being balanced in one approach. This has created higher costs for external multinationals as they build duplicative storage, and it has even slowed digital transformation and created a drag on growth for India-based consulting and business process outsourcing economic engines. You could make the same analysis of CCPA or GDPR, but these same measures have, potentially, helped privacy for citizens.

To help companies navigate these challenges, we are seeing organizations like ENISA and the UK's National Cyber Security Centre (NCSC) provide advisory guidance. This leads to the definition of a state of reasonable practice. When we add that kind of practical dimension to ISO standards like the 27000 series, the Top 20 controls from the Center for Internet Security, and others, we help organizations navigate what the basics look like for practical security applicability—in IoT and in security generally.

In Part II of this series, we’ll explore the threat of cryptocrime, the nature of cybersecurity threats in the near future, and the steps that small- and medium-sized businesses can take to protect themselves.

Getting Started with Cloud Governance (July 3, 2019)


Governing cloud security and privacy in the enterprise is hard, but it’s also critical: As recently noted in a blog by Cloud Transformation Specialist Brooke Noelke, security and complexity remain the two most significant obstacles to achieving enterprise cloud goals. Accelerating cloud purchases and tying them together without critical governance has resulted in many of today’s enterprise security executives losing sleep, as minimally secured cloud provider estates run production workloads, and organizations only begin to tackle outstanding SaaS (Software as a Service) footprints.

For security professionals and leaders, the on-premise (or co-location) data center seems simple by comparison: Want to protect applications in the data center? By virtue of the fact that it has a network connection in the data center, there are certain boundaries and processes that already apply. Business unit leaders aren’t exactly standing by with a credit card, trying to load tens of thousands of dollars of 4U Servers, storage racks, and a couple of SAN heads and then trying to expense it. In other words, for a workload in the data center, certain procurement controls must be completed, an IT review established, and implementation steps forced before the servers “light up”—and networking gates must be established for connectivity and publishing.

When it comes to the cloud, however, we’re being asked to fulfill new roles, while continuing to serve as protector of all the organization’s infrastructure, both new and existing. Be the rule setter. Contribute to development practice. Be the enforcer. And do all of this while at the same time making sure all the other projects you already had planned for the next 18 months get accomplished, as well …

Without appropriate controls and expectation-setting, development teams could use a credit card and publish a pre-built workload—from registration to world-accessibility—in hours! Sadly, that’s the reality at many organizations today, in a world where as much as 11% of a company’s published sensitive data is likely to be present in custom/engineered cloud applications.

Simplify Governance – Be Transparent

One of the biggest challenges for today's businesses is understanding what the "sanctioned" path to cloud looks like: Who do they reach out to? Why should they engage the security team and other IT partners when the software vendor is willing to take credit cards directly? At many of today's enterprises, "security awareness" initiatives mean some emails and a couple of training sessions a year on "building block" security measures, with a particular focus on detecting phishing emails. While these measures have their place, security teams should also establish regular partnership meetings at the business unit level to "advertise" the available services that can "accelerate" capabilities into the cloud.

However, instead of communicating what the business will receive or explaining the steps the security team requires in order to complete the process, the emphasis should be on what departments receive by engaging the security team early: Faster funding and procurement approvals. Proactive scheduling of scarce resources for application review. Accelerated provisioning. And ultimately, faster spend and change times, with less risk and hopefully with minimal schedule impact.

The security team also needs to help the business understand that, while they may not see it reflected in direct line items today, there is a cost per application that they are generating for existing/legacy applications. If the perception is that today’s applications are “free,” but the team needs a line item to be created in new projects for cloud security deployments, it encourages people to exit the process or to avoid things that add to the price—or, at least, to fight an internal battle to push back on each line-item add. Our job is to help the organization understand that today’s security spend is around 7% of infrastructure or application spend, and to set the expectation that whatever the next-generation project budget is, an associated investment should be expected—in both technology and people—to secure the platform.

Establish a Goal and Discuss It

Does your business understand what the “goal line” looks like when it comes to putting something into the cloud? Would they know where to go to find the diagram(s) or list(s) that define that? What level of cloud competency and security understanding does someone in the business need in order to consume what your team has published?

If the answer to one or more of these questions is a shrug—or demands a master’s level understanding of technical knowledge—how can we as the leaders of the security space expect the business to readily partner with us in a process they don’t understand?

Published policy with accompanying detailed standards is a start. But the security team has an opportunity to go a step further with very basic conceptual "block" diagrams, which set the "minimum viable protection" that the business's "minimum viable product" must have to go into production.

The easiest way to do this is to take a minimum control set, and then create a few versions of the diagram—in other words, one for the smallest footprint and one or more at larger scale—to explain to the organization how the requirements “flex” according to the size and traffic volume of what has been deployed.

Cloud Governance is Possible

Governance is the initial building block for cloud security. Being successful in protecting cloud applications requires effective technical controls, like MVISION Cloud’s product risk assessment and protection for enterprise data through unified policy. For the organization to mature and further reduce risk, governance must become as much about consulting with businesses regarding cloud consumption as it has been historically about risk meetings and change reviews. With a few simple adjustments and intentional internal marketing investments, your team can start the journey.

Our PaaS App Sprung a Leak (April 22, 2019)


Many breaches start with an “own goal,” an easily preventable misconfiguration or oversight that scores a goal for the opponents rather than for your team. In platform-as-a-service (PaaS) applications, the risk profile of the application can lure organizations into a false sense of security. While overall risk to the organization can be lowered, and new capabilities otherwise unavailable can be unlocked, developing a PaaS application requires careful consideration to avoid leaking your data and making the task of your opponent easier.

PaaS integrated applications are nearly always multistep service architectures, leaving behind the simplicity of yesterday’s three-tier presentation/business/data logic applications and basic model-view-controller architectures. While many of these functional patterns are carried forward into modern applications—like separating presentation functions from the modeled representation of a data object—the PaaS application is nearly always a combination of linear and non-linear chains of data, transformation, and handoffs.

As a simple example, consider a user request to generate a snapshot of some kind of data, like a website. They make the request through a simple portal. The request would start a serverless application, which applies basic logic, completes information validation, and builds the request. The work goes into a queue—another PaaS component. A serverless application figures out the full list of work that needs to be completed and puts those actions in a list. Each of these gets picked up and completed to build the data package, which is finally captured by another serverless application to an output file, with another handoff to the publishing location(s), like a storage bucket.
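To make the handoffs concrete, here is a minimal sketch of that flow written as AWS Lambda-style handlers with an SQS queue and an S3 output bucket. The queue URL, bucket name, and payload fields are illustrative assumptions rather than part of any specific application, and error handling is omitted for brevity.

```python
# Minimal sketch of the snapshot request flow described above (illustrative names only).
import json
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/snapshot-work"  # assumed
OUTPUT_BUCKET = "example-snapshot-output"                                     # assumed

def intake_handler(event, context):
    """Portal-facing function: validate the request, then enqueue the work."""
    request = json.loads(event["body"])
    if "target_site" not in request:
        return {"statusCode": 400, "body": "target_site is required"}
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(request))
    return {"statusCode": 202, "body": "snapshot request accepted"}

def worker_handler(event, context):
    """Queue-triggered function: build the data package and hand it off to storage."""
    for record in event["Records"]:
        request = json.loads(record["body"])
        package = {"site": request["target_site"], "snapshot": "...collected data..."}
        s3.put_object(
            Bucket=OUTPUT_BUCKET,
            Key=f"snapshots/{request['target_site']}.json",
            Body=json.dumps(package).encode(),
        )
```

Each handoff in the chain is a place where data can leak or be tampered with, which is why the exposure at every step deserves explicit planning.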

Planning data interactions and the exposure at each step in the passing process is critical to the application’s integrity. The complexity of PaaS is that the team must consider threats both for each script/step at a basic level individually as well as holistically for the data stores in the application. What if I could find an exploit in one of the steps to arbitrarily start dumping data? What if I found a way to simply output more data unexpectedly than it was designed to do? What if I found a way to inject data instead, corrupting and harming rather than stealing?

The familiar threats of web applications are present, and yet our defensive posture is shaped by which elements of the applications we can see and which we cannot. Traditional edge and infrastructure indicators are replaced by a focus on how we constructed the application and how to use cloud service provider (CSP) logging together with our instrumentation to gain a more holistic picture.

In development of the overall application, the process architecture is as important as the integrity of individual technical components. The team leadership of the application development should consider insider, CSP, and external threats, and consider questions like:

  • Who can modify the configuration?
  • How is it audited? Logged? Who monitors?
  • How do you discover rogue elements?
  • How are we separating development and production?
  • Do we have a strategy to manage exposure for updates through blue/green deployment?
  • Have we considered the larger CSP environment configuration to eliminate public management endpoints?
  • Should I use third-party tools to protect access to the cloud development and production environment's management plane, such as a cloud access broker, together with cloud environmental tools to enumerate accounts and scan for common errors? (A minimal scan sketch follows this list.)
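As one example of the "enumerate and scan for common errors" idea in the last question, here is a minimal sketch using boto3 that flags security groups open to the entire internet. It assumes configured AWS credentials; the region and the single check shown are illustrative, not a complete scanner.

```python
# Minimal sketch: flag security groups that allow inbound traffic from anywhere.
import boto3

def find_open_security_groups(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    open_groups = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                open_groups.append((sg["GroupId"], rule.get("FromPort")))
    return open_groups

if __name__ == "__main__":
    for group_id, port in find_open_security_groups():
        print(f"Security group {group_id} allows 0.0.0.0/0 on port {port}")
```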

In the PaaS application construction, the integrity of basic code quality is magnified. The APIs and/or the initiation processes of serverless steps are the gateway to the data and other functions in the code. Development operations (DevOps) security should use available sources and tools to help protect the environment as new code is developed and deployed. These are a few ways to get your DevOps team started:

  • Use the OWASP REST Security Cheat Sheet for APIs and code making calls to other services directly.
  • Consider deploying tools from your CSP, such as the AWS Well-Architected Tool on a regular basis.
  • Use wrappers and tie-ins to the CSP's PaaS application, such as AWS Lambda Layers, to identify critical operational steps and use them to implement key security checks (see the sketch after this list).
  • Use integrated automated fuzzing/static test tools to discover common missteps in code configuration early and address them as part of code updates.
  • Consider accountability expectations for your development team. How are team members encouraged to remain owners of code quality? What checks are necessary to reduce your risk before considering a user story or a specific implementation complete?
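To illustrate the wrapper idea from the list above, here is a minimal sketch of a decorator that could be packaged in a shared Lambda Layer so every function runs the same basic input checks and security logging before its business logic. The required fields and size limit are assumptions made for illustration, not part of any real contract.

```python
# Minimal sketch of a shared "security checks" wrapper for serverless handlers.
import json
import logging

logger = logging.getLogger("security")
logger.setLevel(logging.INFO)

REQUIRED_FIELDS = {"request_id", "user_id"}   # assumed contract for this app's events
MAX_BODY_BYTES = 64 * 1024                    # reject oversized payloads early

def security_checks(handler):
    def wrapper(event, context):
        body = event.get("body") or "{}"
        if len(body.encode()) > MAX_BODY_BYTES:
            return {"statusCode": 413, "body": "payload too large"}
        payload = json.loads(body)
        missing = REQUIRED_FIELDS - payload.keys()
        if missing:
            logger.warning("rejected request, missing fields: %s", missing)
            return {"statusCode": 400, "body": "invalid request"}
        logger.info("request %s from user %s", payload["request_id"], payload["user_id"])
        return handler(event, payload, context)
    return wrapper

@security_checks
def lambda_handler(event, payload, context):
    # Business logic goes here; payload has already passed the shared checks.
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```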

The data retained, managed, and created by PaaS applications has a critical value—without it, few PaaS applications would exist. Development teams need to work with larger security functions to consider the privacy requirements and security implications and to make decisions on things like data classification and potential threats. These threats can be managed, but the specific countermeasures often require a coordinated implementation between the code to access data stores, the data store configuration itself, and the dedicated development of separate data integrity functions, as well as a disaster recovery strategy.

Based on the identified risks, your team may want to consider:

  • Using data management steps to reduce the threat of data leakage (such as limiting the amount of data or the number of records that can be returned in a given application request); a minimal sketch follows this list.
  • Looking at counters, code instrumentation, and account-based controls to detect and limit abuse.
  • Associating requests to specific accounts/application users in your logging mechanisms to create a trail for troubleshooting and investigation.
  • Recording data access logging to a hardened data store, and if the sensitivity/risk of the data store requires, transition logs to an isolated account or repository.
  • Asking your development team what the business impact would be if the value of your analysis, or the integrity of the data set itself, were corrupted—for example, by an otherwise authorized user injecting trash.
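As a minimal sketch of the first and third ideas above—capping how many records a single request can return, and tying every data access to the requesting account in an audit log—consider the following. The table, field names, and storage backend are illustrative assumptions.

```python
# Minimal sketch: request limiting plus per-account audit logging for data access.
import logging
import sqlite3

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)

MAX_RECORDS_PER_REQUEST = 100  # hard ceiling regardless of what the caller asks for

def fetch_customer_records(db_path, account_id, requested_limit):
    limit = min(int(requested_limit), MAX_RECORDS_PER_REQUEST)
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT id, name FROM customers LIMIT ?", (limit,)
        ).fetchall()
    finally:
        conn.close()
    # Record who asked for what, so investigations have a trail to follow.
    audit_log.info("account=%s requested=%s returned=%s", account_id, requested_limit, len(rows))
    return rows
```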

PaaS applications offer compelling value, economies of scale, new capabilities, and access to advanced processing otherwise out of reach for many organizations in traditional infrastructure. These services require careful planning, coordination of security operations and development teams, and a commitment to architecture in both technical development and managing risk through organizational process. Failing to consider and invest in these areas while rushing headlong into new PaaS tools might lead your team to discover that your app has sprung a leak!

The Exploit Model of Serverless Cloud Applications (February 11, 2019)


Serverless platform-as-a-service (PaaS) offerings are being deployed at an increasing rate for many reasons. They relate to information in a myriad of ways, unlocking new opportunities to collect data, identify data, and ultimately find ways to transform data to value.

Figure 1. Serverless application models.

Serverless applications can cost-effectively respond to requests and process information at scale, returning critical data models and transformations synchronously to browsers or mobile devices. Synchronous serverless applications unlock mobile device interactions and near-real-time processing for on-the-go insights.

Asynchronous serverless applications can create data sets and views on large batches of data over time. We previously needed to have every piece of data and run batch reports, but we now have the ability to stagger events, or even make requests, wait some time to check in on them, and get results that bring value to the organization a few minutes or an hour later.

Areas as diverse as tractors, manufacturing, and navigation are benefiting from the ability to stream individual data points and look for larger relationships. These streams build value out of small bits of data. Individually they’re innocuous and of minimal value, but together they provide new intelligence we struggled to capture before.

The key theme throughout these models is the value of the underlying data. Protecting this data, while still using it to create value becomes a critical objective for the cloud-transforming enterprise. We can start by looking at the model for how data moves into and out of the application. A basic access and data model illustrates the way the application, access medium, CSP provider security, and serverless PaaS application have to work together to balance protection and capability.

Figure 2. Basic access and data model for serverless applications.

A deeper exploration of the security environment—and the shared responsibility in cloud security—forces us to look more carefully at who is involved, and how each party in the cloud ecosystem is empowered to see potential threats to the environment, and to the transaction specifically. When we expand the access and data model to look at the activities in a modern synchronous serverless application, we can see how the potential threats expand rapidly.

Figure 3. Expanded access and data model for a synchronous serverless application.

Organizations using this common model for an integrated serverless PaaS application are also gaining information from infrastructure-as-a-service (IaaS) elements in the environment. This leads to a more specific view of the threats that exist:

Figure 4. Sample threats in a serverless application.

 

By pushing the information security team to more carefully and specifically consider the ways the application can be exploited, they can then take simple actions to ensure that both development activities and the architecture for the application itself offer protection. A few examples:

  • Threat: Network sniffing/MITM
  • Protection: High-integrity TLS, with signed API requests and responses (a request-signing sketch follows these examples)

 

  • Threat: Code exploit
  • Protection: Code review, and SAST/pen testing on regular schedule

 

  • Threat: Data structure exploit
  • Protection: API forced data segmentation and request limiting, managed data model
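To make the "signed API requests" protection concrete, here is a minimal sketch that signs and verifies a request body with an HMAC and a timestamp. It is an illustrative scheme, not a specific cloud provider's signing protocol; key distribution and rotation are out of scope here.

```python
# Minimal sketch of request signing and verification with HMAC plus a freshness window.
import hashlib
import hmac
import json
import time

def sign_request(secret: bytes, body: dict) -> dict:
    payload = json.dumps(body, sort_keys=True)
    timestamp = str(int(time.time()))
    signature = hmac.new(secret, f"{timestamp}.{payload}".encode(), hashlib.sha256).hexdigest()
    return {"body": payload, "timestamp": timestamp, "signature": signature}

def verify_request(secret: bytes, message: dict, max_age_seconds: int = 300) -> bool:
    if int(time.time()) - int(message["timestamp"]) > max_age_seconds:
        return False  # reject stale requests to limit replay
    expected = hmac.new(
        secret, f"{message['timestamp']}.{message['body']}".encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

if __name__ == "__main__":
    key = b"shared-secret-from-a-vault"   # assumption: fetched from a secrets manager
    msg = sign_request(key, {"action": "snapshot", "target": "example.com"})
    print("verified:", verify_request(key, msg))
```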

The organization first must recognize the potential risk, make it part of the culture to ask the question, “What threats to my data does my change or new widget introduce?” and make it an expectation of deployment that privacy and security demand a response.

Otherwise, your intellectual property may just become the foundation of someone else’s profit.

The Shifting Risk Profile in Serverless Architecture (January 11, 2019)


Technology is as diverse and advanced as ever, but as tech evolves, so must the way we secure it from potential threats. Serverless architecture, i.e. AWS Lambda, is no exception. As the rapid adoption of this technology has naturally grown, the way we approach securing it has to shift. To dive into that shift, let’s explore the past and present of serverless architecture’s risk profile and the resulting implications for security.

Past

For the first generation of cloud applications, we implemented “traditional” approaches to security. Often, this meant taking the familiar “Model-View-Controller” view to initially segment the application, and sometimes we even had the foresight to apply business logic separation to further secure the application.

But our cloud security model was not truly “cloud-native.”  That’s because our application security mechanisms assumed that traffic functioned in a specific way, with specific resources. Plus, our ability to inspect and secure that model relied on an intimate knowledge of how the application worked, and the full control of security resources between its layers. In short, we assumed full control of how the application layers were segmented, thus replicating our data center security in the cloud, giving up some of the economics and scale of the cloud in the process.

Figure 2. Simplified cloud application architecture separated by individual functions.

Present

Now, when it comes to the latest generation of cloud applications, most leverage Platform-as-a-Service (PaaS) functions as an invaluable aid in the quest to reduce time-to-market. Essentially, this means getting back to the original value proposition for making the move to cloud in the first place.

And many leaders in the space are already making major headway when it comes to this reduction. Take Microsoft as an example, which cited a 67% reduction in time-to-market for their customer Quest Software by using Microsoft Azure services. Then there’s Oracle, which identified 50% reduction in time-to-market for their customer HEP Group using Oracle Cloud Platform services.

However, for applications built with Platform-as-a-Service, we have to think about risk differently. We must ask ourselves — how do we secure the application when many of the layers between the “blocks” of serverless functions are under cloud service provider (CSP) control and not your own?

Fortunately, there are a few things we can do. We can start by having the architecture of the application become a cornerstone of information security. From there, we must ask ourselves, do the elements relate to each other in a well understood, well-modeled way?  Have we considered how they can be induced to go wrong? Given that our instrumentation is our source of truth, we need to ensure that we’re always in the know when something does go wrong – which can be achieved through a combination of CSP and 3rd party tools.

Additionally, we need to look at how code is checked and deployed at scale and look for opportunities to complete side by side testing. Plus, we must always remember that DevOps, without answering basic security questions, can often unwittingly give away data in any release.

It can be hard to shoot a moving target. But if security strategy can keep pace with the shifting risk profile of serverless architecture, we can reap the benefits of cloud applications without worry. Then, serverless architecture will remain both seamless and secure.

5 Things Your Organization Needs to Know About Multi-Cloud (October 4, 2018)

Cloud awareness and adoption continues to grow as more enterprises take advantage of the benefits that come with multiple cloud platforms. As this trend continues its upward trajectory, we see more tech vendors coming to market with new tools designed to address a variety of different challenges.

Whether you are switching up your multi-cloud strategy or starting from scratch, here are a few things your organization needs to know first about multi-cloud.

Determine what features will either make or break your multi-cloud strategy

When picking the best multi-cloud structure for your business, be bold. Build a vision for what you need cloud services to do for your company; worry less about “how” and more about the “why” and “what” you need from your providers. The reality is that top cloud providers in IaaS/PaaS and, separately, SaaS spaces are offering extremely versatile capabilities and compelling value. It is important to understand what features are make or break and which ones change the way your organization works when it comes to selecting vendors.

Outside of single requests for a new or different capability, your organization needs to rationalize the different needs for each down to “collections” of related needs. For example, consider SaaS for well-known, repeatable needs first, then look to move or re-deploy capability into IaaS or build natively in PaaS for efficient applications.

Security measures that are important when architecting a multi-cloud structure

First and foremost, avoid looking at your new cloud infrastructure as a separate environment. It’s not merely a new data center, so an organization also needs to consider how switching to a cloud infrastructure will shift how the organization secures assets. Consider looking to resources like the MITRE ATT&CK matrix and the Center for Internet Security’s Basic and Foundational Controls list as a guide for answering this question: “In the future, how do I maintain unified visibility and security when I incorporate new cloud providers?”

For a successful multi-cloud migration, use your cloud access security layer and a platform that ultimately unifies your policy and threat identification approaches. Identity is another common challenge area. Moving to the cloud at scale often requires your organization to “clean up” your identity directory to be ready and accommodating of shared sign-on. By using an identity management and/or aggregation platform to expose identity to well-known cloud services, you will be able to ease the cloud implementation burden and threat exposure of any given provider.

Ensure compliance

It’s important to know that your organization’s compliance requirements are not mitigated or transmuted simply because the data has left your internal environment and entered the one your cloud provider(s) uses. As your organization matures, the way you manage and align your cloud provider’s capabilities to your compliance requirements should evolve accordingly.

Initially, ensure that your company requires business unit executives to apply or accept the risk of compliance obligations where service providers may not have every requirement. Your legal team should be a part of the initial purchase decisions, armed with technical knowledge to help identify potential “rogue” cloud services and policy guidelines that dissuade employees from adding services “on a credit card” without appropriate oversight.

As your organization gains more experience with the cloud, request that providers share copies of the SSAE16 attestations / audits. This, together with more formal due diligence processes, should become commonplace.  Organizations looking to advance in this space would be well-advised to look at the Cloud Security Alliance’s STAR attestation and the associated Cloud Controls Matrix as a ready accelerator to benchmark cloud providers.

Approaching buy-in from exec/C-level on a multi-cloud strategy

Use of cloud services should reflect the strategic focus of the business. Technology leaders can leverage the benefits of these services to underpin initiatives in efficiency, bringing innovation to market and controlling costs. To strengthen this message, technology department heads should consider the metrics and operations adjustments that will allow them to demonstrate the enhanced value of the cloud beyond just the bottom line. If you are trying to get exec/C-level buy in, consider the following:

  • How will you measure the speed of introducing new capabilities?
  • Are new areas of value or product enhancement made possible through cloud services?
  • How will the organization measure and control usage to hit your cost targets?
  • How do you know whether your organization is getting what you have contracted for from cloud providers?
  • Do you have a mechanism for commercial coverage of the organization when things go wrong?

Protect your organization and secure the cloud

Organizations will often “upgrade” in some areas of basic security (perimeter, basic request hygiene) when making the move to well-known cloud providers. How the overall security posture is affected depends heavily on the level of diligence that goes into onboarding new cloud providers. Implementing critical technical measures like the Cloud Access Security layer and policy around how the cloud is procured and technically implemented should drive basic control requirements.

We previously discussed the challenges of governing cloud and the maturity model that we use with customers to ascertain their readiness for new cloud providers.

As the number of cloud providers scales in the environment, your organization needs to assess and document them based on how much your organization depends on a given service and the sensitivity of the data those services will hold. Services that are prioritized higher on these two fronts should have increased organizational scrutiny and technical logging integration in order to maintain the overall defensive posture of the company.
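A minimal sketch of that prioritization might look like the following: score each cloud service on business dependence and data sensitivity, then use the combined score to decide which services get extra scrutiny and logging integration first. The services and scores shown are purely illustrative.

```python
# Minimal sketch: rank cloud services by dependence and data sensitivity.
from dataclasses import dataclass

@dataclass
class CloudService:
    name: str
    dependence: int   # 1 (nice to have) .. 5 (business stops without it)
    sensitivity: int  # 1 (public data)  .. 5 (regulated or customer data)

    @property
    def priority(self) -> int:
        return self.dependence * self.sensitivity

services = [
    CloudService("CRM SaaS", dependence=5, sensitivity=5),
    CloudService("Marketing analytics", dependence=2, sensitivity=3),
    CloudService("Internal wiki", dependence=3, sensitivity=2),
]

# The highest-priority services warrant increased scrutiny and log integration first.
for svc in sorted(services, key=lambda s: s.priority, reverse=True):
    print(f"{svc.name:22s} priority={svc.priority}")
```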

As with any other technology trend, the missteps in making the transition to business and consumer cloud services have received outsized coverage. Take the time to dive into the “hows” and “whys” of early cloud breaches to avoid becoming a potential victim. A resource like the Cloud Security Alliance’s “Top Threats to Cloud Computing: Deep Dive” and McAfee’s report on “Practical Guidance and the State of Cloud Security” can be a great place to start.

Learning from someone else’s experiences is always highly preferred, though. After all, learning about cloud incident response after the fact can be a hard, costly lesson!

Moving to a Software-Defined Data Center and Its Impact on Security (August 30, 2018)

For 57% of enterprise organizations in our latest survey on cloud adoption, IT infrastructure took the form of a hybrid cloud, i.e. a mix of public cloud infrastructure-as-a-service (IaaS) and some form of private cloud data center. At McAfee, we spend a lot of time speaking about the benefits of using public cloud infrastructure providers like AWS and Azure. We spend less time discussing private cloud, which today is increasingly software-defined, earning the name “software-defined data center” or SDDC.

Infrastructure designed to operate as an SDDC provides the flexibility of cloud with the most control possible over IT resources. That control enables well-defined security controls with the potential to rise above and beyond what many teams are used to having at their disposal in a traditional data center, particularly when it comes to micro-segmenting policy.

To start, the concept of software-defined data center describes an environment where compute, networking, and often storage are all virtualized and abstracted above the physical hardware they run on. VMware handles the largest share of these virtualized deployments, which is a natural extension of their long history of transforming single-purpose servers into far more cost-effective virtual server infrastructure. The big change here is adding network virtualization through their technology NSX, which frees the network from physical constraints and allows it to be software-defined.

In a physical network, your infrastructure has a perimeter which you allow traffic in/out of. This limits your control to the physical points where you can intercept that traffic. In a software-defined network (a critical part of a software-defined data center) your network can be controlled at every logical point in the virtual infrastructure. For a simple example, say you have 100 VMs running in 3 compliance-based groupings. Here is how your policy could be constructed at a high level in an SDDC:

  1. Group 1: PCI compliant storage. Every VM in this group is tagged for Group 1, and network traffic limited to internal IPs only.
  2. Group 2: GDPR compliant application with customer data. Again, each VM is tagged for its group to share the same policy, this time enforcing encryption and read-only access.
  3. Group 3: Mixed-use, general purpose VMs with varying compliance requirements. In this case, each VM needs its own policy. Some may be limited to single-IP access, others open to the internet. A per-VM policy effectively introduces micro-segmentation to your infrastructure. (A small illustrative sketch follows this list.)
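A small, vendor-neutral sketch of that grouping logic is below: policy is looked up by the compliance tag on each VM, and untagged, mixed-use VMs fall back to a per-VM policy. This models the concept only; it is not NSX syntax or any product's API, and the tags, CIDRs, and VM names are assumptions.

```python
# Minimal sketch of tag-based group policy with a per-VM (micro-segmented) fallback.
GROUP_POLICIES = {
    "pci-storage": {"allowed_sources": ["10.0.0.0/8"],  "encryption": True, "read_only": False},
    "gdpr-app":    {"allowed_sources": ["10.1.0.0/16"], "encryption": True, "read_only": True},
}

PER_VM_POLICIES = {
    "vm-analytics-07": {"allowed_sources": ["203.0.113.10/32"], "encryption": True, "read_only": False},
}

def policy_for(vm_name, tags):
    for tag in tags:
        if tag in GROUP_POLICIES:
            return GROUP_POLICIES[tag]  # group policy shared by every VM carrying the tag
    # Mixed-use VMs get their own entry -- effectively micro-segmentation.
    return PER_VM_POLICIES.get(vm_name, {"allowed_sources": [], "encryption": True, "read_only": True})

print(policy_for("vm-payments-01", ["pci-storage"]))
print(policy_for("vm-analytics-07", []))
```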

The point of these basic examples is to clarify the opportunity that a software-defined data center has to fine-tune policy for your assets held on-premises. If you’re also running in AWS or Azure, then what you’ve kept on-premises likely consists of your most sensitive assets, which require the most stringent protection. Controlling policy down to the individual VM gives you this flexibility. Once you’re controlling policy at the VM-level, you can also monitor and control the communication between those VMs (i.e. east-west or intra-VM), stopping lateral threat movement from one VM to another within your data center.

If you’re in a state where certain assets simply can’t enter the public cloud, and you want to make improvements in your resource efficiency and protection strategy, you should consider building out a plan to completely virtualize your data center, including the network. To help you with that strategy, we partnered with VMware and research firm IDC to write a short paper on the security benefits of adopting a software-defined data center. You can read it here to dive deeper into this topic.

Six Things your Enterprise Needs to Learn from the DNC Hacking Indictment (July 17, 2018)

All politics aside, the United States Department of Justice on Friday unsealed a judicial indictment against a number of individuals alleged to be from Russia’s intelligence services engaged in activities in 2016.

Stepping outside of the context of this party or that party, and politics as a whole – McAfee’s CTO, Steve Grobman noted, “Attribution is amongst the most complex aspects of cyberwar and the US government is in a unique position to make this attribution assessment.  Technical forensics combined with information from trusted intelligence or law enforcement agencies are needed to provide confidence behind identifying actors in an attack or campaign.  These indictments clearly show the US has reason to believe Russia interfered with the election process. “

The level of technical detail also offers practical insight for aspects of organizations’ readiness to react to the threat environment.

1) Nation State Activity is Real

At McAfee, we operate our own Advanced Threat Research team. We employ many professionals whose entire job is to find ways to break things, to learn how others have already broken things, and to make decisions on the level of risk this represents to our customers and future customers. Our hope is that our activity is non-disruptive, ethically conducted, and consistent with our corporate values and our commitments to our customers. In today's threat environment, countries throughout the globe are investing in cyber capabilities to practice intelligence, deception, and counterintelligence, and in the past few years we have documented the crossover from cyber capability into kinetic effects.

Whether one service's actions versus another's are perceived as "good" or "bad", or as a matter of "criminal conspiracy" versus "policy", involves many factors and points of view; as a profession, it is critical that we recognize this rapidly growing reality for what it is.

This judicial action is another breadcrumb reminding us as enterprise leaders that sophisticated adversaries bring real resources to bear, especially against enterprises that provide services to organizations of public importance.  Organizations should evaluate their customer base and the services that they provide for relative risk.  Risk has upside opportunity ("revenue") but should also prompt internal questions as to whether the organization, or a subset of it, requires advanced security controls or more proactive threat detection and resistance measures.

2) Geo-Location is Practically Irrelevant

For many professionals engaged in the early days of information security, we could leverage aspects of connection metadata to make snap judgments about the trustworthiness of requests.  The days of first-jump relays to command-and-control servers going to a given country's public IP space, or to a two-letter country-associated domain, are mostly over.

Instead, the organization needs to transition, looking more directly at the behavior of not just users, but of systems, and the access of resources.  At McAfee, we have evolved our own offerings in this space to establish McAfee Behavioral Analytics to discern elevated risks that break established patterns and to put advanced tools like McAfee Investigator in the hands of threat hunters.

Whether using our products or not, today’s enterprise needs to rely on security behaviors that do not look for traditional geographic or demographic identifiers as a means of making a strong determination of trust for access and/or threat identification.
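As a purely illustrative sketch (not McAfee Behavioral Analytics or any specific product), the Python below builds a per-user baseline of the resources each account normally touches and flags first-seen access, regardless of where in the world the request originates. The users and resources are hypothetical.

```python
# Minimal behavioral-baseline sketch: flag access that breaks a user's
# established pattern, independent of source geography.
from collections import defaultdict

baseline = defaultdict(set)   # user -> resources seen during the learning window

def learn(user: str, resource: str) -> None:
    baseline[user].add(resource)

def is_anomalous(user: str, resource: str) -> bool:
    """True when the user touches a resource outside their established pattern."""
    return resource not in baseline[user]

# Hypothetical history, then two new events to evaluate.
for u, r in [("alice", "payroll-db"), ("alice", "hr-portal"), ("bob", "build-server")]:
    learn(u, r)

print(is_anomalous("alice", "payroll-db"))    # False: matches her pattern
print(is_anomalous("bob", "payroll-db"))      # True: breaks his pattern
```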

When it comes to identity misuse, multi-factor authentication should be implemented wherever possible, with a decreased emphasis on methods that are easily intercepted by opponents (like SMS-based message codes).  YubiKey, TOTP-based generators, and interactive application confirmation from providers like Duo Security are all effective measures that make it more difficult to apply credentials intercepted or cajoled from end users by other means.
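For illustration, here is a minimal sketch of TOTP-based verification using the open-source pyotp library (one option among many). The account name and issuer below are hypothetical; in practice the per-user secret is provisioned once, stored server-side, and never sent over SMS.

```python
# Minimal TOTP sketch using pyotp: generate a per-user secret, produce a
# provisioning URI for an authenticator app, then verify a current code.
import pyotp

secret = pyotp.random_base32()          # per-user shared secret
totp = pyotp.TOTP(secret)

# Hypothetical account and issuer names for the enrollment QR code.
print("Provisioning URI:", totp.provisioning_uri(
    name="user@example.com", issuer_name="ExampleCorp"))

code = totp.now()                       # what the authenticator app would display
print("Code accepted:", totp.verify(code))
```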

3) URL Shorteners can be a Risk Indicator

While for many organizations – especially in the realm of social media analytics – the use of URL shorteners has enabled short-format messaging with business intelligence potential, they are often a means to obscure potentially malicious targets.  The indictment released by the United States Department of Justice highlights the continuing threat that the combination of URL Shortening and the user-focused technique of Spear Phishing continue to present as a means to attack the enterprise.

Aside from education campaigns to help users distinguish legitimate links and to help them become more sensitive to the risk, the organization can also consider web access methods for greater control and recognition of potential threats.

Systems like User and Entity Behavior Analytics (UEBA) can identify outlier websites not otherwise accessed within the organization, and the presence or use of unknown URL shorteners can itself be a risk indicator.  The security operations team may also want to track which URL shorteners keep appearing in the organization's recent incidents, and use that history to decide which ones could or should be managed as part of email and web access hygiene.
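As a rough illustration of that hygiene step, the sketch below flags links whose host appears in a small, assumed list of common shorteners (not a vetted threat feed) and follows redirects to see where a link actually lands before a user ever clicks. The example link is hypothetical.

```python
# Minimal sketch: flag shortened links and resolve their final destination.
import requests
from urllib.parse import urlparse

KNOWN_SHORTENERS = {"bit.ly", "t.co", "tinyurl.com", "goo.gl", "ow.ly"}  # illustrative list

def is_shortened(url: str) -> bool:
    return urlparse(url).netloc.lower() in KNOWN_SHORTENERS

def resolve_destination(url: str) -> str:
    """Follow redirects with a HEAD request to find the final landing page."""
    resp = requests.head(url, allow_redirects=True, timeout=5)
    return resp.url

link = "https://bit.ly/example"  # hypothetical link lifted from a suspect email
if is_shortened(link):
    print("Shortener detected; final destination:", resolve_destination(link))
```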

4) Vulnerability Management is a Key Risk Mitigation

I’ve never known a security professional who skips into the office with their coffee and announces, “I love patching servers.”  Never.  As experienced security leaders, we know how hard it can be to manage the impact to production systems, to identify system owners, to work together to maintain a cadence of patching.  Sometimes, even just the heterogeneous nature of the modern operating environment can be its own challenge!

The alleged activity of the identified conspirators reminds us how critical the public attack surface remains in protecting the enterprise as a whole.  Try as we might, our public infrastructure will always maintain a footprint.  We "leak" details of our enterprise systems as a necessary byproduct of making those systems able to operate.  DNS records.  Public IP block ownership.  Routing advertisements.  Job listings.  Employee CVs.  Employee social media profiles.

Vulnerability management requires an organization to think about more than patching.  Your organization's threat surface has to be considered in this broader sense so that threats can be identified and remediated holistically.  The organization can also use public threat models as a way to check its readiness to defend against new vulnerabilities ahead of patching or other long-term remediation.
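A minimal sketch of that broader view, assuming you maintain your own inventory of public hostnames (the names below are placeholders): resolve each host and confirm which well-known ports actually answer, so that surprises in the external footprint surface on your schedule rather than an adversary's.

```python
# Minimal external-footprint check: resolve each public host and probe a
# handful of well-known ports. Hostnames are placeholders for your own list.
import socket

PUBLIC_HOSTS = ["www.example.com", "mail.example.com"]   # hypothetical inventory
PORTS_TO_CHECK = [22, 80, 443, 3389]

for host in PUBLIC_HOSTS:
    try:
        addr = socket.gethostbyname(host)
    except socket.gaierror:
        print(f"{host}: does not resolve")
        continue
    for port in PORTS_TO_CHECK:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            state = "open" if s.connect_ex((addr, port)) == 0 else "closed/filtered"
        print(f"{host} ({addr}) port {port}: {state}")
```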

5) Response Threat Hunting is Hard – Trust Nothing

Despite the best efforts of technical security teams, sometimes intelligence and cues are missed.  The reality is that sophisticated adversaries have sophisticated skills and multiple means to stay engaged.  They also have reason and/or desire to hide from security teams.  As security professionals, we have to put personal ego and hubris aside.  Threat hunting during an incident is a time for humble approaches that assume the adversaries are at or above our own skill level (and hope that is not the case).

In such a case, we go back to a few core fundamentals: we trust nothing.  We require validation for everything.  Each piece of intelligence goes into the picture, is run through our tools to identify additional leads to pursue, and is evaluated for the remediation actions it makes possible.  While we have talked at length before about the cyber kill chain, a fundamental truth illustrated in today's Department of Justice action is that where advanced activity occurs, the entire environment needs to be treated as suspect and become zero trust.

Can you force each network flow to be validated for a time?  Can someone from the organization vouch for a piece of software or a specific node on the network?  Do your pre-work ahead of time so that, when the company brand is on the line, you can use maintenance windows, incident response policies, and similar corporate buffers to buy the "right" to shut down a segment, temporarily block a network flow and see what happens, and so on.

6) Your organizational data is in the cloud. Your Incident Response needs to be, too.

The cloud was a key avenue through which the organizations compromised in these activities continued to lose information.  Indications are that when the identity compromise and initial incident were addressed on-premises, the cloud systems were not included in those changes.

Your organization has leveraged the advanced capability and time to market of the cloud.  Our recent survey of organizations worldwide indicates that the typical enterprise-class organization has dozens of distinct providers hosting corporate data.  Just as the sensitive information stored with those providers is part of your brand value and your delivery strategy, your response plans need to integrate intelligence from those providers – and to those providers – for investigation and mitigation.

Building unified visibility across cloud providers requires a deliberate approach and investment from the organizations.  Incident response procedures should include looking at cloud sources for activity from potential Indicators of Compromise, as well as an incident step of considering what actions are needed to manage the risk in cloud providers.
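One hedged example of what that incident step can look like in practice: a small sweep over exported, CloudTrail-style JSON audit records for known-bad source IPs. The folder, file names, and indicator values below are hypothetical; the field names follow the standard CloudTrail record format.

```python
# Minimal sketch: scan exported cloud audit records for suspect source IPs.
import json
from pathlib import Path

SUSPECT_IPS = {"203.0.113.15", "198.51.100.7"}   # example indicators of compromise

def scan_audit_file(path: Path):
    """Yield (time, event, actor) for any record originating from a suspect IP."""
    with path.open() as fh:
        records = json.load(fh).get("Records", [])
    for rec in records:
        if rec.get("sourceIPAddress") in SUSPECT_IPS:
            actor = rec.get("userIdentity", {}).get("arn", "unknown")
            yield rec.get("eventTime"), rec.get("eventName"), actor

for log_file in Path("exported-trails").glob("*.json"):   # hypothetical export folder
    for event_time, event_name, actor in scan_audit_file(log_file):
        print(f"{event_time} {event_name} by {actor}")
```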

Your cloud is part of your holistic data and threat stance; it also needs to be part of your remediation and resilience plan.

Nation State Actors Remind us of the Fundamentals

The indictment released by the United States Department of Justice describes a multi-faceted effort that involved target research, user-focused phishing, exploitation of vulnerable software, malware, and use of the disconnect between on-premises and cloud management.

For years, McAfee has focused on a platform approach to security in our products.  We offer software with advancements like OpenDXL and an actively managed ecosystem of Security Innovation Alliance offerings.  We make these investments for a simple reason: to protect against and adapt to continuing threats, your organization needs rapidly available, actionable intelligence.  Your organization's approach to information security should return periodically to verify fundamental information sharing and basic controls, even as advanced capabilities are implemented.

 

The post Six Things your Enterprise Needs to Learn from the DNC Hacking Indictment appeared first on McAfee Blogs.

Cloud is Ubiquitous and Untrusted https://www.mcafee.com/blogs/enterprise/cloud-security/cloud-is-ubiquitous-and-untrusted/ https://www.mcafee.com/blogs/enterprise/cloud-security/cloud-is-ubiquitous-and-untrusted/#respond Mon, 16 Apr 2018 15:00:32 +0000 https://securingtomorrow.mcafee.com/?p=88338 As we release the resulting research and report at the 2018 RSA Conference, the message we learned this year was clear: there is no longer a need to ask whether companies are in the cloud, it’s an established fact with near ubiquitous (97%) acknowledgement.

The post Cloud is Ubiquitous and Untrusted appeared first on McAfee Blogs.

At the end of 2017, McAfee surveyed 1,400 IT professionals for our annual Cloud Adoption and Security research study.  As we release the resulting research and report at the 2018 RSA Conference, the message we learned this year is clear: there is no longer a need to ask whether companies are in the cloud; it is an established fact, with near-ubiquitous (97%) acknowledgement.  And yet, as we dug into the comments and information that industry professionals and executives shared about their use and protection of the cloud, another intriguing theme became clear: companies are investing in the cloud well ahead of their trust in it!

For this year’s report, Navigating a Cloudy Sky, we sought respondents from a market panel of IT and Technical Operations decision makers.  These were selected to represent a diverse set of geography, verticals, and organization sizes.  Fieldwork was conducted from October to December 2017, and the results offered a detailed understanding of the current state and future for cloud adoption and security.

Cloud First

More than in any prior year, the survey indicated that 97% of organizations worldwide are currently using cloud services, up from 93% just one year ago.  In the past year, a majority of organizations in nearly every major geography have even gone so far as to assert a "cloud first" strategy for new initiatives involving infrastructure or technology assets.

Indeed, this cloud-first strategy has driven organizations to take on many different providers in their cloud ecosystems.  As organizations tackle new data-use initiatives, intelligence building, and new capabilities to store and run applications, cloud growth is exploding the number of sanctioned cloud providers that businesses report.

In the survey, enterprises recognized and reported this explosion in provider count at a statistically significant level – and each provider is a source of potential risk and management need for the organization.  The growing provider count requires a governance strategy that joins security capabilities and procurement together to protect the data entrusted to each new cloud deployment.  Security operations teams will need enhanced, unified visibility to compose a picture across so many different environments containing enterprise data.

Data and Trust

This year's report highlights an intriguing trend: companies are investing their data in cloud providers well in advance of their trust in those providers.  An incredible 83% of respondents reported storing sensitive data in the public cloud – with many reporting nearly every major sensitive data type stored in at least one provider.

Despite such a high level of data storage in cloud applications, software, and infrastructure, the same business executives are clearly concerned about their continuing ability to trust cloud providers to protect that data.  While cloud trust continues to gain ground, and respondents indicated continuing buy-in to using providers and trusting them with critical data and workloads, only 23% of those surveyed said they "completely trust" that their data will be secured in the public cloud.

Part of that trust stems from a perception that using public cloud providers is likely to drive use of more proven technologies, and that the risk is not perceived as being any greater than in the private cloud.

As cloud deployment trends continue, IT decision makers have strong opinions on key security capabilities that would increase and speed cloud adoption.

  • 33% would increase cloud adoption with visibility across all cloud services in use
  • 32% would increase cloud adoption with strict access control and identity management
  • 28% would increase cloud adoption with control over cloud application functionality

You can download the full report here, and keep following @mcafee_business for more insights on this research.

The post Cloud is Ubiquitous and Untrusted appeared first on McAfee Blogs.

Do I Even Need to Secure the Cloud? https://www.mcafee.com/blogs/enterprise/even-need-secure-cloud/ https://www.mcafee.com/blogs/enterprise/even-need-secure-cloud/#respond Fri, 22 Sep 2017 15:30:59 +0000 https://securingtomorrow.mcafee.com/?p=78307 You share responsibility for securing your data in the cloud. What does that mean? More than anything else, that you understand where the layers of protection from your cloud provider ends, and your responsibility begins.   A storm awaits many companies as they move infrastructure, applications, and entire portfolios to cloud services.  Yet, the pace […]

The post Do I Even Need to Secure the Cloud? appeared first on McAfee Blogs.

You share responsibility for securing your data in the cloud. What does that mean? More than anything else, it means understanding where the layers of protection from your cloud provider end, and where your responsibility begins.

A storm awaits many companies as they move infrastructure, applications, and entire portfolios to cloud services.  Yet, the pace of digital transformation demands that businesses make the transition.  We all receive the emails: “Deploy with scalability”, “leverage provider security”, “make your operational model more efficient”, and “manage less of the complexity” in your services!  These promises can certainly be realized – on the back of the billions of dollars in cloud investment from Amazon Web Services, Microsoft Azure, and others. To do so without risking the security of your data, however, requires careful planning along the way.

Most companies have become aware of which services they continue to “own” in the basic cloud provider models.

While the "who" of service block ownership has become clearer, the question of security responsibility is a bit more complex. Amazon and Microsoft are spending billions (with a "b") of dollars investing in the technology, people, and governance to protect public cloud services. The recent introduction of services like Amazon's Macie shows, for example, how the stock set of firewall and identity rules is quickly being complemented by deeper levels of data protection.

You, however, still retain something that Amazon and Microsoft simply don’t have: you know how your business works!  You know your people.  You know your data.  Amazon and Microsoft depend on your team, your understanding of what “good” and “bad” look like, and your willingness and ability to put reasonable security controls in place.  Often, those controls require advanced capabilities and visibility that complement the investments of the public provider, allowing you to mitigate your unique risks.

Take a simple communications scenario in Amazon Web Services.  A virtual machine in your cloud deployment makes a request to an S3 bucket to list the contents, which it receives, and then begins to request objects from the bucket.  In the transaction, Amazon's various protective layers are hard at work ensuring that DDoS and other external threats are not immediately involved.  The identity and access management (IAM) system Amazon has invested in, including the tools for policy generation and monitoring, is activated to check the policies that apply and establish a basic authorization context.
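A minimal boto3 sketch of that transaction (the bucket name is hypothetical). Note that nothing in the calls themselves captures why the listing happened or whether the caller should really see every object; that context has to come from your own controls.

```python
# Minimal sketch of the list-then-get transaction described above; whether
# it succeeds depends entirely on the IAM and bucket policies in force.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-sensitive-bucket"   # hypothetical bucket name

listing = s3.list_objects_v2(Bucket=BUCKET)
for obj in listing.get("Contents", []):
    body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
    print(obj["Key"], len(body), "bytes retrieved")
```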

Yet, your enterprise still has outstanding risk in even this basic scenario.  Do you need to know why the list action occurred?  What application called it?  Has the VM been recently seen to engage in other unknown traffic streams?  What type of environment is the VM a part of?  What is the hygiene status / policy compliance of that VM?  Once the list is returned, is the VM allowed to access all of the things in the bucket, or should some of them be restricted?

Your enterprise remains responsible for critical aspects of the risk management of your deployment, including the ability to recognize and detect misconfigurations and to respond to undesired access events.  In these kinds of scenarios, the cloud provider has applied its formidable assets in your defense – but as far as your IAM and bucket configuration have stated, the provider can only treat these events as permitted.

Recent data leaks at a partner of Verizon, Dow Jones, and elsewhere from misconfigured cloud resources have underscored that this is not mere conjecture, confirming that “but, I’m on Amazon” is not a defense for breached data.  Your enterprise should have strong governance, ready discovery tools, the same (or better) identification and investigation tools you had on-premises, and the instrumentation to better assess the risk of individual data access and transmissions to your business.
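As one hedged example of a "ready discovery tool", the sketch below lists your buckets and flags any whose ACL grants read access to the AllUsers group. It checks ACL grants only; bucket policies and Block Public Access settings deserve the same recurring sweep.

```python
# Minimal misconfiguration sweep: flag buckets whose ACL is world-readable.
import boto3

s3 = boto3.client("s3")
PUBLIC_GRANTEE = "http://acs.amazonaws.com/groups/global/AllUsers"

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    is_public = any(
        grant.get("Grantee", {}).get("URI") == PUBLIC_GRANTEE
        for grant in acl["Grants"]
    )
    if is_public:
        print(f"WARNING: bucket {name} is readable by AllUsers")
```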

In today's cloud services, "we are running DevOps", "it's cloud", and "but I'm on [provider]" cannot be our line of defense.  Your enterprise can safely realize the business cases of cloud deployment by remembering the lessons of the last generation, when we incrementally brought first the perimeter and then north-south and east-west traffic under control for risk.  Today, data probably would not transit your hybrid or private cloud without a policy check, inspection, and data loss consideration.  Why would your operations on a cloud service be protected any less?

For more information on cloud security, follow @McAfee_Business.

The post Do I Even Need to Secure the Cloud? appeared first on McAfee Blogs.
