Why this latest cloud security incident highlights the need to eliminate misconfigurations and understand the ‘shared responsibility’ cloud security model.
This week’s news of the data breach at financial services company Capital One has generated several high-profile articles discussing the state of security in the cloud. As a result, I wanted to look a little closer at the details of the incident and see what conclusions and recommendations could be drawn from it.
The criminal complaint made against the individual charged with causing the breach gives some useful information. It states that a ‘firewall misconfiguration’ left one of Capital One’s cloud servers vulnerable, allowing the hacker to send commands that gave access to sensitive data.
However, in its press release about the breach, Capital One stated that: “This type of vulnerability is not specific to the cloud. The elements of infrastructure involved are common to both cloud and on-premises data center environments.” Amazon Web Services (AWS), the cloud service used by Capital One, confirmed that no AWS infrastructure or services were compromised.
So it isn’t cloud security itself that’s being called into question here. But the incident does highlight the difficulty of securing cloud infrastructures, which have many moving parts and change rapidly according to business need. It’s important to remember that, despite the security measures offered by the cloud provider, cloud servers and resources are far more exposed to risk than physical, on-premises servers.
If you make a mistake when configuring security for an on-premises server, it is still likely to be protected by other measures (such as the main corporate network gateway). But a server provisioned in the public cloud is designed to be accessible from any computer, anywhere in the world, and apart from a password it may have no other default protections in place. It is therefore up to customers to deploy the appropriate security controls to protect the cloud servers they use, along with the relevant applications and data.
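To make the exposure concrete, here is a minimal sketch of the kind of check a team might run over its ingress rules. The rule format is a simplified, hypothetical stand-in for what a cloud provider’s API (for example, AWS security groups) would return; it is an illustration, not any product’s actual schema.

```python
# Illustrative check for overly permissive ingress rules.
# Each rule here is a simplified dict: {"port": ..., "cidr": ...}.

def find_open_ingress(rules):
    """Return the rules that expose a port to the entire internet."""
    return [r for r in rules if r["cidr"] in ("0.0.0.0/0", "::/0")]

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},   # public HTTPS: often intended
    {"port": 22,  "cidr": "10.0.0.0/8"},  # SSH from an internal range only
    {"port": 22,  "cidr": "0.0.0.0/0"},   # SSH open to the world: risky
]

for rule in find_open_ingress(rules):
    print(f"Port {rule['port']} is open to the internet")
```

A check like this catches only one class of mistake, of course; the point is that in the cloud, one permissive line is often the only thing between a server and the internet.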
Enterprise security teams therefore need to establish perimeters, define security policies and implement controls that manage connectivity to those cloud servers and protect data in the cloud. Meeting these requirements across both on-premises and cloud environments adds significant complexity to security management, forcing teams to use multiple different tools to make network changes and enforce security.
Doing this manually using different consoles means there is a very real risk of misconfigurations introducing security holes or causing application outages. Teams also need a logging capability that records actions for management and audit purposes, giving a comprehensive record of what changes were made and who made them.
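The audit requirement above amounts to capturing who changed what, where, and when. The following sketch shows the shape of such a change log; the field names and helper are illustrative assumptions, not any particular product’s schema.

```python
# Minimal sketch of a change-audit log: each network change is recorded
# with who made it, which device it touched, what changed, and when.
from datetime import datetime, timezone

audit_log = []

def record_change(user, device, change):
    """Append an immutable record of a network change to the log."""
    entry = {
        "user": user,
        "device": device,
        "change": change,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    return entry

record_change("jsmith", "edge-fw-01", "allow tcp/443 from 10.1.0.0/16")
record_change("akhan", "cloud-sg-web", "remove ssh from 0.0.0.0/0")

# A typical audit query: every change a given user made.
by_jsmith = [e for e in audit_log if e["user"] == "jsmith"]
```

In practice this record would be written to tamper-resistant storage and correlated with ticketing data, but the essential fields are the same.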
The breach also raises once again the issues around the ‘shared responsibility’ cloud security model. This means the cloud provider updates, patches and secures their infrastructure, and the customer is responsible for protecting what they run in that cloud infrastructure.
As we noted in a recent blog, this model is similar to the way an auto manufacturer installs locks and alarms in its cars: the security features only offer protection if the vehicle owner actually uses them. The onus for using those controls, and for ensuring that applications and data are protected, rests entirely on the customer. So how do security teams integrate the management of those controls with their existing on-premises estate?
Using a network security management solution greatly simplifies all of these processes, giving teams visibility of their entire network and letting them make changes seamlessly and consistently across public clouds and on-premises networks from a single console.
The solution’s network simulation capabilities can answer questions such as: ‘is my application server secure?’, or ‘is the traffic between these workloads protected by security gateways?’ It can also quickly identify issues that might block an application’s connectivity (such as incorrect routes, or misconfigured or missing security rules) and then plan how to correct the issue across the relevant security controls, both cloud and on-premises, reducing the scope for human error. The solution also keeps an audit trail of every change for compliance reporting.
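At its core, a simulation query like ‘is this traffic allowed?’ evaluates a packet’s source, destination and port against an ordered rule base where the first match wins. The toy evaluator below illustrates that idea under simplified assumptions; real tools model many gateways, NAT and routing as well.

```python
# Toy "is this traffic allowed?" query against an ordered rule base.
# First matching rule wins; anything unmatched is implicitly denied.
import ipaddress

def is_allowed(rules, src_ip, dst_ip, port):
    """Evaluate a flow against ordered rules, first match wins."""
    for rule in rules:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(rule["dst"])
                and port in rule["ports"]):
            return rule["action"] == "allow"
    return False  # implicit deny when no rule matches

rules = [
    {"src": "10.1.0.0/16", "dst": "10.2.0.0/16", "ports": {443}, "action": "allow"},
    {"src": "0.0.0.0/0",   "dst": "10.2.0.0/16", "ports": {22},  "action": "deny"},
]

print(is_allowed(rules, "10.1.5.9", "10.2.0.4", 443))      # True
print(is_allowed(rules, "198.51.100.7", "10.2.0.4", 22))   # False
```

Running the same query across every enforcement point on a path, cloud and on-premises alike, is what lets a management tool spot the missing or misconfigured rule before it breaks connectivity or opens a hole.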
This enables configuration errors in the cloud to be identified automatically and swiftly remedied – as Capital One was able to do in this case. The solution can also be linked to SIEM and other monitoring tools to flag anomalies or threats to business applications, whether they are accidental errors or the result of malicious activity, so they can be addressed quickly.
In conclusion, this incident should not be used to argue that the public cloud is insecure. Instead, it highlights the need for centralized management and control over security, both to remove the risk of misconfigurations and to enable a rapid response to any eventuality. With the right approach to security automation, that control is easy to achieve.