As security threats become ever more advanced, configuring your firewall correctly has never been more important. Many IT professionals spend much of their time worrying about new and emerging firewall vulnerabilities, yet according to Gartner research, 95% of all firewall breaches are actually caused by misconfigurations, not flaws.
In my work I find many mistakes in firewall configurations. Following on from Kevin Beaver’s recent post on Top 10 Common Firewall Flaws: What You Don’t Know Can Hurt You!, here are the most common types of firewall misconfigurations that I encounter and how you can avoid them:
Go-anywhere policy configurations
Firewalls are often set up with an open policy of allowing traffic from any source to any destination. This is because IT teams don’t know exactly what they need at the outset, and decide to start with this broad rule and then work backwards. However, the reality is that, due to time pressures or simply not regarding it as a priority, they never get round to defining those policies, leaving the network in a perpetually exposed state.
This demonstrates the need for proper documentation – ideally, mapping out the flows that your applications actually require instead of granting open access up front. Organizations should follow the principle of least privilege – that is, giving the minimum level of privilege that the user or service needs to function normally, thereby limiting the potential damage caused by a breach. It’s also a good idea to regularly revisit your firewall policies to look at application usage trends, identify new applications being used on the network, and determine what connectivity they actually require.
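As a first step, even a simple audit script can flag go-anywhere rules so they can be reviewed and scoped down. The sketch below is purely illustrative – the rule fields and the sample policy are hypothetical, and you would need to map them to whatever format your firewall’s policy export actually produces:

```python
# Illustrative sketch: flag overly permissive any-to-any rules in a
# firewall policy export. The Rule fields are hypothetical; adapt them
# to your vendor's actual export format.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    source: str
    destination: str
    service: str
    action: str

def find_open_rules(rules):
    """Return allow rules that permit traffic from any source to any destination."""
    return [
        r for r in rules
        if r.action == "allow" and r.source == "any" and r.destination == "any"
    ]

# Hypothetical policy: one leftover "temporary" open rule, one scoped rule.
policy = [
    Rule("temp-setup", "any", "any", "any", "allow"),
    Rule("web-to-db", "10.1.2.0/24", "10.1.3.5", "tcp/1433", "allow"),
]

for rule in find_open_rules(policy):
    print(f"Review rule '{rule.name}': any-to-any access should be scoped down")
```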
Risky rogue services
Another mistake I often find is services left running on the firewall that don’t need to be. The two main culprits here are dynamic routing, which as a best practice should not be enabled on security devices, and “rogue” DHCP servers on the network distributing IPs, which can lead to availability issues as a result of IP conflicts. It’s also surprising how many devices still use unencrypted protocols like Telnet, despite the protocol being over thirty years old!
The solution is to harden devices and ensure that configurations are compliant before devices are promoted into production environments – something a lot of organizations struggle with. By configuring each device based on the function you actually want it to fulfil, and applying the principle of least privilege before deployment in production, you will improve security and reduce the chances of accidentally leaving a risky service running on your firewall.
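To make this concrete, here is a minimal sketch of a pre-deployment check that scans a configuration for risky services. The keywords assume a Cisco-IOS-style config and are illustrative assumptions only – other vendors will need different patterns:

```python
# Illustrative sketch: scan a device configuration for services that are
# often left enabled by accident. The patterns assume Cisco-IOS-style
# syntax and are assumptions for the example, not a complete rule set.
RISKY_PATTERNS = {
    "transport input telnet": "unencrypted Telnet management access",
    "router rip": "dynamic routing enabled on a security device",
    "ip dhcp pool": "device is acting as a DHCP server",
}

def audit_config(config_text):
    findings = []
    for line in config_text.splitlines():
        stripped = line.strip().lower()
        for pattern, description in RISKY_PATTERNS.items():
            if stripped.startswith(pattern):
                findings.append((stripped, description))
    return findings

sample = """
line vty 0 4
 transport input telnet
router rip
 network 10.0.0.0
"""

for line, why in audit_config(sample):
    print(f"'{line}' -> {why}")
```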
Non-standard authentication mechanisms
During my work, I often find organizations using routers that don’t follow the enterprise standard for authentication. One example I encountered was a large bank that had all the devices in its primary data centers controlled by a central authentication mechanism, yet its remote offices didn’t use the same mechanism. Because corporate authentication standards weren’t enforced, staff in the remote branch could use local accounts with weak passwords, and the devices there had a different limit on login failures before account lockout.
This weakens security and creates additional attack vectors, as it’s easier for attackers to access the corporate network via the remote office. Organizations should therefore ensure that all remote offices follow the same central authentication mechanism as the rest of the company.
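One practical way to enforce this is to audit every device configuration for the central authentication markers and flag any local accounts. The sketch below assumes a Cisco-IOS-style AAA setup with TACACS+ – the required statements and the sample config are assumptions for illustration, not a definitive compliance check:

```python
# Illustrative sketch: verify that device configs reference central AAA
# rather than local accounts. The expected statements assume a
# Cisco-IOS-style TACACS+ setup; adjust for your platform and standard.
REQUIRED_MARKERS = ["aaa new-model", "tacacs server"]

def check_central_auth(hostname, config_text):
    lines = [l.strip().lower() for l in config_text.splitlines()]
    missing = [m for m in REQUIRED_MARKERS
               if not any(l.startswith(m) for l in lines)]
    local_accounts = [l for l in lines if l.startswith("username ")]
    if missing:
        print(f"{hostname}: missing central auth statements: {missing}")
    if local_accounts:
        print(f"{hostname}: {len(local_accounts)} local account(s) defined")

# Hypothetical branch router relying on a local account.
check_central_auth("branch-rtr-01", """
username admin privilege 15 password 7 0822455D0A16
line vty 0 4
 login local
""")
```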
Test systems using production data
Most organizations have governance guidelines stating that test systems should not connect to production systems and collect production data, but in practice this is often not enforced. In reality, the people working in testing see production data as the best, most accurate way to test. However, when you allow test systems to collect data from production, you’re likely bringing that data into an environment with much lower levels of security. That data could be highly sensitive, as well as subject to regulatory compliance. So if you do use production data in a test environment, make sure you apply the security controls required by the classification of that data.
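If production data must be used, masking or pseudonymizing sensitive fields before the data leaves production is a sensible baseline. The sketch below shows the idea with hypothetical field names; note that a real masking pipeline should use salted or keyed tokenization, since the bare hash shown here can be reversed by dictionary attack for low-entropy values:

```python
# Illustrative sketch: pseudonymize sensitive fields before copying
# production records into a test environment. Field names are
# hypothetical; a production-grade pipeline should use salted or keyed
# tokenization rather than this bare hash.
import hashlib

SENSITIVE_FIELDS = {"email", "ssn", "card_number"}

def mask(value):
    """Replace a sensitive value with a stable pseudonymous token."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def sanitize(record):
    return {k: mask(v) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

prod_record = {"id": 42, "email": "alice@example.com", "ssn": "123-45-6789"}
print(sanitize(prod_record))
# -> {'id': 42, 'email': '<12-char token>', 'ssn': '<12-char token>'}
```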
Logging matters
The final issue that I see more often than I should is organizations not analyzing the log output from their security devices, or not doing so with enough granularity. This is one of the worst things you can do in terms of network security; not only will you not be alerted when you’re under attack, but you’ll have little or no traceability when you’re investigating post-breach.
The reason I often hear for not logging properly is that logging infrastructure is expensive, and hard to deploy, analyze and maintain. However, the costs of being breached without being alerted or being able to trace the attack are surely far higher.
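And the analysis doesn’t have to start with a heavyweight platform. As a purely illustrative example (the log format here is invented for the sketch), even a few lines of scripting can surface repeated denied connections that suggest a scan in progress – though for real alerting and traceability you’ll want proper log-management or SIEM tooling:

```python
# Illustrative sketch: count denied connections per source from firewall
# logs. The log format is invented for this example; real deployments
# should ship logs to a log-management or SIEM platform.
import re
from collections import Counter

LOG_LINES = [
    "2024-01-10T09:12:01 DENY src=203.0.113.9 dst=10.1.3.5 dport=22",
    "2024-01-10T09:12:02 DENY src=203.0.113.9 dst=10.1.3.5 dport=23",
    "2024-01-10T09:12:07 ALLOW src=10.1.2.14 dst=10.1.3.5 dport=1433",
    "2024-01-10T09:12:09 DENY src=203.0.113.9 dst=10.1.3.6 dport=22",
]

deny_pattern = re.compile(r"DENY src=(\S+)")
denied_sources = Counter(
    m.group(1) for line in LOG_LINES
    if (m := deny_pattern.search(line))
)

for src, count in denied_sources.most_common(3):
    print(f"{src}: {count} denied connections - possible scan or probe")
```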
By taking note of these misconfiguration issues, organizations can quickly improve their overall security posture – and dramatically reduce their risk of a breach.