AlgoBuzz Blog

Everything you ever wanted to know about security policy management, and much more.

How to stop small misconfigurations becoming big security problems


Security is a balancing act.  On one hand, it needs to protect the organization and prevent disruptive cyberattacks and breaches.  On the other hand, security needs to serve the business, supporting both existing and new business applications by ensuring the right connectivity is in place, and that continuity is maintained.  In both cases, there is no room for mistakes.

A simple error made during a routine change management process could open up a vulnerability that an attacker can exploit. And given the pressure from the business to make changes quickly – such as spinning up new servers or resources rapidly to serve a business need – those errors are all too easy to make. In 2018, misconfigured systems and cloud servers were responsible for the exposure of more than 2 billion personally-identifiable records, or nearly 70% of the total number of compromised records tracked in the IBM X-Force Threat Intelligence Index.

Misconfigurations can also cause significant business issues by simply disrupting application connectivity and causing an outage. A router misconfiguration at United Airlines grounded nearly 100 flights, causing widespread disruption and negative publicity. A recent AlgoSec survey found that two-thirds of application outages take more than an hour to resolve, and in 10% of cases they take longer than a full working day. Given the impact that these unintentional errors can have on an organization’s network and business, how can companies ensure they avoid making them?

Analyzing errors

First, let’s look at how these misconfigurations happen using a simple example. Imagine you have a web server, and you want it to be able to access database servers in your on-premises data center. The web server and database servers are separated by a firewall, so you need to reconfigure the firewall to allow the traffic.

You manually key in the new rule for the firewall – but by mistake, you type ‘neq’ (‘not equal to’), which allows access to any service, instead of ‘eq’ (‘equal to’), which gives access only to a single, specified network port. It’s just one misplaced letter, but the rule you’ve keyed in has exactly the opposite effect of what you intended. Instead of enabling a single new service, the web server can now access any other service on that network. Tens of thousands of ports are suddenly available; the misconfigured rule has essentially blown your network wide open. A recent survey conducted by AlgoSec and CSA found that the chief contributor to application and network outages is human error.
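The scale of that one-letter mistake is easy to see if you model the port-matching logic directly. The sketch below is illustrative Python, not any vendor’s firewall syntax; the function and port numbers are assumptions chosen for the example.

```python
# Illustrative sketch of how an 'eq' vs 'neq' port operator matches traffic.
# Not real firewall syntax; names and the database port are hypothetical.

def rule_matches(operator: str, rule_port: int, packet_port: int) -> bool:
    """Return True if a packet's destination port matches the rule."""
    if operator == "eq":       # intended rule: allow only this one port
        return packet_port == rule_port
    if operator == "neq":      # the typo: allow every port EXCEPT this one
        return packet_port != rule_port
    raise ValueError(f"unknown operator: {operator}")

DB_PORT = 3306  # hypothetical database port for the example

allowed_eq = [p for p in range(1, 65536) if rule_matches("eq", DB_PORT, p)]
allowed_neq = [p for p in range(1, 65536) if rule_matches("neq", DB_PORT, p)]

print(len(allowed_eq))   # 1 port opened, as intended
print(len(allowed_neq))  # 65534 ports opened by the one-letter typo
```

One character changes the rule from opening a single port to opening every port but one.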

These problems are not limited to on-premises environments. Similar configuration and routing mistakes can occur in the cloud, because the same principles regarding blocking and allowing traffic underpin cloud security. Misconfigurations can allow incoming traffic to bypass the security controls intended to protect the cloud environment. In addition, there may be no obvious signs that a security hole has opened up: the application relying on that traffic will still function as normal, but it is no longer secured.
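Because the application keeps working, the only reliable way to catch this kind of silent hole is to inspect the rules themselves. The sketch below is a hedged illustration of such a check; the rule format is an assumption loosely modeled on cloud security-group rules, not any provider’s actual API.

```python
# Hypothetical sketch: scan security-group-style rules for sensitive ports
# exposed to the whole internet. Rule fields are assumptions for illustration.

rules = [
    {"name": "web-https",  "source": "0.0.0.0/0",   "port": 443},   # intended
    {"name": "db-ingress", "source": "10.0.0.0/16", "port": 5432},  # intended
    {"name": "debug-tmp",  "source": "0.0.0.0/0",   "port": 5432},  # forgotten!
]

SENSITIVE_PORTS = {22, 3306, 5432}  # example: SSH and common database ports

def overly_permissive(rule) -> bool:
    """Flag rules that open a sensitive port to the entire internet."""
    return rule["source"] == "0.0.0.0/0" and rule["port"] in SENSITIVE_PORTS

flagged = [r["name"] for r in rules if overly_permissive(r)]
print(flagged)  # the app still works, but the database is exposed
```

Note that nothing breaks from the application’s point of view: the `debug-tmp` rule only widens access, which is exactly why this class of misconfiguration goes unnoticed.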

Another common problem, in both cloud and on-premises environments, occurs when organizations undertake periodic manual clean-ups of security rules and remove a rule they believe is obsolete. Unfortunately, if that rule turns out to be associated with a critical application, key business services can go down for significant periods of time.
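A safer clean-up checks for evidence that a rule is genuinely unused before deleting it. The sketch below illustrates that idea in Python; fields such as `last_hit_days` and `tagged_app` are assumptions for the example, not a real product’s data model.

```python
# Hedged sketch: only treat a rule as removable if it is long-unused AND
# not mapped to any business application. Field names are hypothetical.

rules = [
    {"id": "r1", "last_hit_days": 400, "tagged_app": None},       # truly stale
    {"id": "r2", "last_hit_days": 200, "tagged_app": "payroll"},  # rare batch job
    {"id": "r3", "last_hit_days": 5,   "tagged_app": "crm"},      # in daily use
]

def safe_to_remove(rule, stale_after_days: int = 365) -> bool:
    """A rule is a removal candidate only with no recent hits and no app mapping."""
    return rule["last_hit_days"] > stale_after_days and rule["tagged_app"] is None

removable = [r["id"] for r in rules if safe_to_remove(r)]
print(removable)  # only the rule with no usage and no application owner
```

The key design choice is requiring both conditions: hit counters alone miss rules used by infrequent but critical jobs, which is exactly how “obsolete” rule deletions take applications down.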

Avoiding misconfigurations

So how should teams change their practices to stop these network misconfigurations and problems from happening? There are basic principles that can be applied, such as ensuring that only authorized, qualified personnel can make network or device changes, and following a clearly defined change process, with mandatory review and approval at each stage.

These are all good practices, but they are extremely resource-intensive when done manually and do not easily scale. The connectivity requirements for business applications change all the time as employees come and go, new users are added, and databases are moved. The result is that even well-resourced IT and security teams become easily overwhelmed by the volume of changes. This puts the brakes on business agility.

Why automation matters

As such, all network configuration and change processes – whether planned or unplanned – need to be fully automated, to eliminate guesswork and error-prone manual input. Each change should follow these four stages, to minimize the risk of errors and misconfigurations:

  1. Planning the change: You need to identify which security devices are in the path of the proposed change, and the security policies associated with them. This demands full visibility across the entire network estate.
  2. Understanding the risk: Will the change cause undue security, compliance or business risk to the network environment or application? If so, the change should be re-planned.
  3. Making the change: Here, you must consider how best to implement the change: should a new rule be added to the device’s current policy, or an existing one modified, to avoid duplications? The changes should then be automatically pushed to the relevant devices, and every change documented.
  4. Validating the change: This involves checking that the change was implemented exactly as requested, and that the applications or services work as they should, with no overly-permissive or risky rules remaining.
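The four stages above can be sketched as a simple pipeline, where each stage must succeed before the next one runs. This is a minimal illustration of the workflow’s shape; the stage functions and their checks are placeholder assumptions, not a real product’s API.

```python
# Minimal sketch of the four-stage change workflow. All stage logic here
# (device names, the risk check) is hypothetical, for illustration only.

def plan(change):
    # Stage 1: identify devices and policies in the traffic path.
    change["devices_in_path"] = ["fw-dmz-01"]  # assumed device name
    return change

def assess_risk(change):
    # Stage 2: reject changes that introduce undue risk (toy example:
    # flag cleartext telnet on port 23).
    change["risk_ok"] = change.get("port") != 23
    return change

def implement(change):
    # Stage 3: push the rule to the devices in the path, and document it.
    change["pushed"] = change["risk_ok"]
    return change

def validate(change):
    # Stage 4: confirm the change works exactly as requested.
    change["validated"] = change.get("pushed", False)
    return change

change = {"source": "web-srv", "dest": "db-srv", "port": 3306}
for stage in (plan, assess_risk, implement, validate):
    change = stage(change)

print(change["validated"])  # the change passed every stage
```

In a real deployment each stage would of course involve far richer analysis, but the structure is the point: no stage is skipped, and a failed risk assessment stops the change before it reaches a device.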

Security policy management solutions deliver this end-to-end visibility and automation, showing how application connectivity needs map onto the IT infrastructure, minimizing the risks of human error when planning and making changes, and documenting the entire change process to ensure compliance. Automating each stage of the change process goes a long way to eradicating the risk of accidental misconfigurations, and the business disruptions and security holes they can cause. It also ensures that the balancing act between security and business continuity is maintained, reducing risks while enhancing overall speed and agility.

Find out how to eliminate the risks of small misconfigurations causing big problems for your organization here.
