FREMONT, CA: Over the past few years, the concept of "zero trust" architecture has gone through several stages of development. It went from a fashionable new idea, to somewhat passé (largely due to heavy marketing by those hoping to profit from the trend), and has now settled into what it arguably should have been all along: a reliable, workmanlike security option with discrete, observable advantages and disadvantages that can be incorporated into an organization's security approach.

As the name implies, zero trust is a security model in which all assets - even the managed endpoints you provisioned and the local network you configured - are considered hostile, untrustworthy, and potentially already compromised by an attacker. Where the traditional security model distinguishes a "trusted" internal network from an untrusted external one, zero trust instead assumes that all networks and hosts are equally untrustworthy.

Once that fundamental assumption changes, you can start to make different decisions about what to trust, whom to trust, and when - and about which validation methods are acceptable for confirming a request or transaction.
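To make that concrete, here is a minimal, hypothetical sketch (plain Python, with invented names such as AccessRequest and evaluate_request) of what a per-request trust decision might look like under these assumptions: the verdict depends on identity, device posture, and resource sensitivity, and deliberately ignores which network the request came from.

```python
# Hypothetical sketch of zero-trust-style request evaluation.
# All names are illustrative only, not any specific product's API.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_authenticated: bool      # valid, unexpired credential presented
    mfa_passed: bool              # strong confirmation of the user's identity
    device_compliant: bool        # managed endpoint reporting a healthy posture
    source_network: str           # recorded for auditing, NOT used for trust
    resource_sensitivity: str     # "low", "medium", or "high"


def evaluate_request(req: AccessRequest) -> bool:
    """Decide whether to allow a single request.

    Note that the decision never consults req.source_network: a request
    from the office LAN is treated exactly like one from a coffee shop.
    """
    if not req.user_authenticated:
        return False
    if not req.device_compliant:
        return False
    # Higher-sensitivity resources demand stronger confirmation (MFA).
    if req.resource_sensitivity in ("medium", "high") and not req.mfa_passed:
        return False
    return True


# Example: an authenticated user on a compliant device, coming from the
# internal network, is still denied a high-sensitivity resource without MFA.
print(evaluate_request(AccessRequest(True, False, True, "corp-lan", "high")))  # False
print(evaluate_request(AccessRequest(True, True, True, "home-wifi", "high")))  # True
```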

As a security idea, this has advantages and disadvantages.

One advantage is that you can apply security resources strategically, where they are needed most, and increase resistance to an attacker's lateral movement (because each resource must be compromised anew once a beachhead is established).

There are also disadvantages. For example, policy must be enforced on every system and application, and older components built under different security assumptions - such as the assumption that the internal network is trustworthy - may not fit the model.
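Both points - resistance to lateral movement and the burden of enforcing policy in every component - can be seen in a small, hypothetical sketch. The decorator, token set, and service functions below are invented for illustration and are not any real framework's API.

```python
# Hypothetical sketch: every service re-verifies the caller itself, rather
# than trusting traffic because it arrived from "inside".
import functools

VALID_TOKENS = {"token-alice"}  # stand-in for a real token/identity service


def require_verified_caller(handler):
    """Each protected operation re-checks the caller's credential.

    Benefit: compromising one service yields no implicit access to the next.
    Cost: every service and legacy component must carry this enforcement.
    """
    @functools.wraps(handler)
    def wrapper(token: str, *args, **kwargs):
        if token not in VALID_TOKENS:
            raise PermissionError("caller could not be verified")
        return handler(token, *args, **kwargs)
    return wrapper


@require_verified_caller
def read_payroll(token: str) -> str:
    return "payroll data"


@require_verified_caller
def read_inventory(token: str) -> str:
    return "inventory data"


# A caller must pass verification at every resource, not just at a perimeter.
print(read_inventory("token-alice"))      # allowed
try:
    read_payroll("stolen-session")        # denied, even "inside" the network
except PermissionError as exc:
    print(exc)
```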

One of the biggest potential problems relates to validating the security posture - that is, what happens when the model is adopted retroactively by an older organization with a larger legacy footprint. The dynamic is unfortunate: the organizations that might find the model most compelling are often the ones least prepared for the challenges of adopting it.

For an organization making heavy use of cloud services, it can be logical to decide that sensitive data belongs in the cloud - subject to appropriate access controls, of course - in services purpose-built for the task, with security measures and operational staff that the organization could not afford to deploy or maintain for its own use alone.

For example, consider a hypothetical young mid-market organization - "young" meaning it was founded only a few years ago. Assume the organization is "cloud-native": all of its business applications are externalized and its operations are structured entirely around cloud usage.

For such an organization, zero trust is compelling. Because it is 100% externalized, it has no data center or internal servers and retains only a minimal internally deployed technology footprint. The organization might explicitly require that no sensitive data "reside" on endpoints or within its office network; instead, all such data must live in a defined subset of cloud services explicitly approved for that purpose.

This means the organization can focus all of its resources on hardening its cloud footprint - gating services so that all access, regardless of source, is reliably verified - while de-prioritizing things such as physical security, hardening the internal network (assuming there even is one), and implementing internal monitoring mechanisms. Provided a sound, professional process is in place to secure the use of cloud components, this approach helps concentrate limited resources where they matter most.
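As a rough, hypothetical illustration of such a policy (the service hostnames and function names below are invented), the organization's rules might reduce to two checks: sensitive data may only be stored in approved cloud services, and the gateway's access decision never consults the source network.

```python
# Hypothetical sketch of the "100% externalized" policy described above:
# sensitive data may live only in explicitly approved cloud services, and
# access is gated on identity and device posture, never on network location.

APPROVED_SENSITIVE_STORES = {"crm.example-saas.com", "files.example-saas.com"}


def storage_location_allowed(service_host: str, data_classification: str) -> bool:
    """Sensitive data must reside in an approved cloud service, never locally."""
    if data_classification != "sensitive":
        return True
    return service_host in APPROVED_SENSITIVE_STORES


def gateway_allows(identity_verified: bool, device_compliant: bool,
                   source_network: str) -> bool:
    """The gateway applies the same checks to every request; source_network
    is logged for auditing but deliberately ignored in the trust decision."""
    return identity_verified and device_compliant


# The office LAN gets no special treatment versus an unknown network.
print(gateway_allows(True, True, "office-lan"))                     # True
print(gateway_allows(True, False, "office-lan"))                    # False
print(storage_location_allowed("laptop-local-disk", "sensitive"))   # False
```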
