What Exposed OPA Servers Can Tell You About Your Applications

With the proper request or token, an attacker could obtain even more information about these services and look for vulnerabilities or other entry points into an organization’s systems. We highly recommend that companies currently leveraging OPA as their policy-as-code solution ensure that they are not unwittingly exposing their APIs and policies online. In some cases, companies could be using OPA without even realizing it: multiple providers of managed Kubernetes services rely on OPA for policy enforcement.

Keep in mind that, for ethical reasons, we queried only the list-policies endpoint of the REST API. However, many other available endpoints and methods not only list sensitive information but also allow an attacker to edit or even delete data and objects on an exposed OPA server. Some of these are:

  • Create or update a policy: PUT /v1/policies/&lt;id&gt;
  • Delete a policy: DELETE /v1/policies/&lt;id&gt;
  • Patch a document (Data API): PATCH /v1/data/{path:.+}
  • Delete a document (Data API): DELETE /v1/data/{path:.+}

All of these can be found in the OPA REST API Documentation.
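As a hedged illustration, the state-changing endpoints above are plain HTTP calls. The host, policy ID, and policy body below are placeholders; against an exposed, unauthenticated OPA server, a request like this would silently overwrite a policy. The request is only constructed here, deliberately never sent:

```python
import urllib.request

# Placeholder values for illustration only.
BASE = "http://localhost:8181"
POLICY_ID = "example"
POLICY_BODY = b"package example\n\nallow := true\n"

# PUT /v1/policies/<id> creates or overwrites the policy with that ID.
req = urllib.request.Request(
    f"{BASE}/v1/policies/{POLICY_ID}",
    data=POLICY_BODY,
    headers={"Content-Type": "text/plain"},
    method="PUT",
)
# urllib.request.urlopen(req) would send it; intentionally not executed here.
print(req.get_method(), req.full_url)
# → PUT http://localhost:8181/v1/policies/example
```

The same pattern with `method="DELETE"` or `method="PATCH"` maps to the other endpoints listed above, which is why unauthenticated exposure is so dangerous.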

Protecting OPA servers

First and foremost, OPA servers should not be exposed to the internet; access must be restricted so that no one can poke around your OPA configuration via the REST API. The standard deployment model for the authorization use case is to run OPA on the same machine as the application that asks it for decisions. This way, organizations do not need to expose OPA to the internet or to the internal network, as communication happens over the localhost interface. Deploying OPA this way also means that organizations usually won’t need authentication and authorization enabled for the REST API, since only a process running on the same machine can query the OPA instance. To do this, start OPA with “opa run --addr localhost:8181” so that it binds only to the localhost interface.
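To illustrate the co-located pattern, an application on the same host would ask OPA for a decision over the loopback interface. The decision path and input document below are hypothetical; a real application would use the path of its own policy package:

```python
import json
import urllib.request

def build_decision_request(path: str, input_doc: dict) -> urllib.request.Request:
    """Build a POST /v1/data/<path> request against a loopback-only OPA."""
    return urllib.request.Request(
        f"http://localhost:8181/v1/data/{path}",
        data=json.dumps({"input": input_doc}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical policy path and input document.
req = build_decision_request("httpapi/authz/allow", {"user": "alice", "method": "GET"})
# urllib.request.urlopen(req) would return a decision such as {"result": true};
# since OPA listens only on localhost, no remote party can issue this request.
print(req.get_method(), req.full_url)
# → POST http://localhost:8181/v1/data/httpapi/authz/allow
```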

Secondly, when using a policy-as-code tool such as OPA, it is important to store and protect policies in a source code management (SCM) system, and to use proper access controls, such as branch protection and code owners, to govern who can change what in those policies. With the SCM system in place, organizations can build a streamlined process of reviews and approvals for any changes made to these policies, ensuring that whatever is in the source code is also what is reflected in the production OPA servers.

TLS and HTTPS

As seen in Figure 4, most of the exposed OPA servers found on Shodan were not using any sort of encryption for communication, as this is not enabled by default. To configure TLS and HTTPS, system administrators need to create a certificate and a private key, and provide the following command-line flags:

  • The path of the TLS certificate: --tls-cert-file=&lt;path&gt;
  • The path of the TLS private key: --tls-private-key-file=&lt;path&gt;

For up-to-date information regarding this process, please consult the OPA documentation on TLS and HTTPS.
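Once the server presents a certificate, clients should verify it rather than disable verification. As a minimal sketch (the host name is a placeholder, and an internal CA bundle would be loaded with `load_verify_locations`), Python’s default TLS context already enforces both certificate and hostname checks:

```python
import ssl
import urllib.request

# Default context: certificate verification and hostname checking are on.
ctx = ssl.create_default_context()
# ctx.load_verify_locations("ca.pem")  # trust an internal CA if one signed the cert

# Placeholder host; urllib.request.urlopen(req, context=ctx) would list
# policies over HTTPS with the server certificate verified.
req = urllib.request.Request("https://opa.internal.example:8181/v1/policies")
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)
# → True True
```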

Authentication and authorization

By default, OPA’s authentication and authorization mechanisms are turned off. This is described in OPA’s official documentation, and it is vital that system administrators and DevOps engineers enable these mechanisms immediately after installation.

Both mechanisms can be configured via the following command line flags according to the OPA documentation:

  • Authentication: --authentication=&lt;scheme&gt;.
    This can be bearer tokens (--authentication=token) or client TLS certificates (--authentication=tls).
  • Authorization: --authorization=&lt;scheme&gt;.
    This uses Rego policies to decide who can do what in OPA. It can be enabled by setting the --authorization=basic flag during OPA startup and providing a minimal authorization policy.
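A minimal authorization policy following the pattern shown in the OPA documentation looks like the sketch below; the token value is a placeholder, and in production it should be a strong, centrally managed secret:

```rego
package system.authz

default allow = false              # Reject requests by default.

allow {                            # Allow a request only when the caller
    input.identity == "my-secret"  # presented the expected bearer token.
}
```

With --authentication=token and --authorization=basic set, OPA binds the bearer token of each REST API request to input.identity and evaluates this policy before serving the request.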

More details pertaining to this process can be found in the OPA official documentation on authentication and authorization.

Cloud security recommendations

Kubernetes is one of the most popular platforms among developers, as shown by its high adoption rate, which shows no signs of slowing down. With an ever-expanding user base, Kubernetes deployments need to be kept secure from threats and risks. To do this, developers can turn to policy-as-code tools, which can help implement controls and validate procedures in an automated manner.

Aside from diligently applying some basic housekeeping rules to keep Kubernetes clusters secure, organizations can also benefit from cloud-specific security solutions such as Trend Micro™ Hybrid Cloud Security and Trend Micro Cloud One™.

Trend Micro helps DevOps teams build securely, ship fast, and run anywhere. The Trend Micro™ Hybrid Cloud Security solution provides powerful, streamlined, and automated security within the organization’s DevOps pipeline and delivers multiple XGen threat defense techniques for protecting runtime physical, virtual, and cloud workloads. It is powered by the Cloud One™ platform, which provides organizations with a single-pane-of-glass view of their hybrid cloud environments and real-time security through Network Security, Workload Security, Container Security, Application Security, File Storage Security, and Conformity services.

For organizations looking for runtime workload, container image, and file and object storage security as software, Deep Security™ scans workloads and container images for malware and vulnerabilities at any interval in the development pipeline to prevent threats before they are deployed.

Trend Micro™ Cloud One™ is a security services platform for cloud builders. It provides automated protection for cloud migration, cloud-native application development, and cloud operational excellence. It also helps identify and resolve security issues sooner and improves delivery time for DevOps teams.
