Over the last few months, Zero Trust Architecture (ZTA) conversations have been top-of-mind across the DoD. We have heard the chatter at industry events, where attendees share conflicting interpretations and use varying definitions. In a sense, there is uncertainty around how the security model can and should work. From the chatter, one thing is clear: we need more time. Time to settle on how quickly mission owners can converge on a comprehensive, acceptable definition of Zero Trust Architecture.
Today, most organizations use a multi-phased security approach. Most commonly, the foundation (or first step) of that approach is to implement secure access to confidential resources. With the shift to remote and distributed work, the question arises: "Are my resources and data safe, and are they safe in the cloud?"
Thankfully, the DoD is in the process of developing a long-term strategy for ZTA. Industry partners, like McAfee, have been briefed along the way. It has been refreshing to see the DoD take the initial steps to clearly define what ZTA is, what security objectives it must meet, and the best approach for implementation in the real world. A recent DoD briefing states: "ZTA is a data-centric security model that eliminates the idea of trusted or untrusted networks, devices, personas, or processes and shifts to a multi-attribute based confidence levels that enable authentication and authorization policies under the concept of least privilege access."
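To make the "multi-attribute confidence" idea in that definition concrete, here is a minimal sketch of how such a policy might combine several signals into a single confidence level that gates access under least privilege. The attribute names, weights, and thresholds are illustrative assumptions on my part, not values from any DoD specification.

```python
# Hypothetical sketch of multi-attribute confidence scoring.
# Attribute names and weights are illustrative, not from any DoD document.
ATTRIBUTE_WEIGHTS = {
    "device_compliant": 0.3,       # endpoint meets configuration baseline
    "strong_authentication": 0.3,  # e.g., MFA or PKI credential used
    "known_location": 0.2,         # request originates from an expected network
    "normal_behavior": 0.2,        # activity matches the user's usual pattern
}

def confidence_score(attributes: dict) -> float:
    """Combine boolean per-attribute signals into a 0..1 confidence level."""
    return sum(weight for name, weight in ATTRIBUTE_WEIGHTS.items()
               if attributes.get(name, False))

def authorize(attributes: dict, required: float) -> bool:
    """Grant access only when confidence meets the resource's threshold."""
    return confidence_score(attributes) >= required

# A session that satisfies three of the four attributes (confidence 0.8):
session = {"device_compliant": True, "strong_authentication": True,
           "known_location": False, "normal_behavior": True}
print(authorize(session, required=0.7))  # True: enough for routine read access
print(authorize(session, required=0.9))  # False: sensitive action needs more
```

The point of the sketch is that trust is never binary: the same session can be sufficient for one resource and insufficient for another, which is exactly the least-privilege behavior the briefing describes.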
What stands out to me is the data-centric approach to ZTA. Let us explore this concept a bit further. Conditional access to resources (such as networks and data) is a well-recognized challenge. In fact, there are several approaches to solving it, whether the end goal is to limit access or simply segment access. The tougher question we need to ask (and ultimately answer) is how do we limit contextual access to cloud assets? What data security models should we consider when our traditional security tools and methods do not provide adequate monitoring? And is securing data, or at least watching user behavior, enough when the data stays within multiple cloud infrastructures or transfers from one cloud environment to another?
Increased usage of collaboration tools like Microsoft 365 and Teams, Slack, and WebEx offers an easily relatable example of data moving from one cloud environment to another. The challenge with this type of data exchange is that the data flows stay within the cloud, following an East-West traffic model. Similarly, would you know if sensitive information created directly in Office 365 were uploaded to a different cloud service? Collaboration tools by design encourage sharing data in real time between trusted internal users and, more recently with telework, even external or guest users. Take for example a supply chain partner collaborating with an end user. Trust and conditional access potentially create a risk to both parties, inside and outside of their respective organizational boundaries. A data breach, whether intentional or not, can easily occur because of the pre-established trust and access. Few default protection capabilities prevent this situation from occurring without intentional design. Data loss protection, activity monitoring, and rights management all come into question. Clearly, new data governance models, tools, and policy enforcement capabilities are required for even this simple collaboration example to meet the full objectives of ZTA.
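A data-centric policy for this scenario might look something like the following sketch: a check applied at the moment a document is shared across cloud services, rather than at a network boundary. The sensitivity labels, trusted domain, and service names here are hypothetical examples, not the API of any real collaboration or DLP product.

```python
# Hypothetical sketch of a data-centric sharing check for cross-cloud
# collaboration. Labels, domains, and service names are illustrative.
SENSITIVE_LABELS = {"CUI", "PII"}      # labels that trigger stricter handling
TRUSTED_DOMAINS = {"example.mil"}      # recipient domains considered internal

def sharing_allowed(doc_labels: set, recipient_email: str,
                    destination_service: str, approved_services: set) -> bool:
    """Allow a share only if sensitive data stays within approved services
    and trusted recipient domains; unlabeled data is not restricted here."""
    domain = recipient_email.split("@")[-1]
    if doc_labels & SENSITIVE_LABELS:
        return (destination_service in approved_services
                and domain in TRUSTED_DOMAINS)
    return True

# Internal share of labeled data over an approved service: permitted.
print(sharing_allowed({"CUI"}, "analyst@example.mil", "teams", {"teams"}))
# Same labeled data to an external supply chain partner over an
# unapproved service: blocked, despite pre-established collaboration trust.
print(sharing_allowed({"CUI"}, "partner@vendor.com", "slack", {"teams"}))
```

The decision travels with the data (its labels) rather than with the network segment, which is the governance shift the paragraph above argues ZTA requires.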
So, as the communities of interest continue to refine the definitions of Zero Trust Architecture based upon deployment, usage, and experience, I believe we will find ourselves shifting from a Zero Trust model to an Advanced Adaptive Trust model. Our experience with multi-attribute-based confidence levels will evolve and so will our thinking around trust and data-centric security models in the cloud.