Managing ever-growing network security policies is not getting any easier. We are facing more threats, greater complexity and increased demand for both security and application connectivity. However, many companies are failing to update their approach to security policy management to keep up with these challenges. In my years of interactions with companies across pretty much every geography and industry vertical (many of which have become AlgoSec customers) I’ve identified what I call the “Seven Deadly Sins” of security policy management. I am sure none of them take place in YOUR organization (fingers crossed) but just in case you want to help err… a friend… read on, and check out this new Infographic:
- Focusing on the “plumbing” instead of on the business applications: Often, when we think about network security, we are quick to adopt a network-centric (instead of an application-centric) approach – the IP addresses, ports, protocols, VPN tunnels etc. We look at the path, rather than the purpose. Documentation often focuses on the plumbing level too, and frequently the reason why network access was granted appears as an afterthought in the comments field or on an auxiliary spreadsheet—or may not be noted at all. Only at a later stage do we think about what should in fact be the most important question: why is this rule actually in place? What business application is it supporting?
- Not removing firewall rules for decommissioned applications: When we deploy an application, we create rules and define access rights. When we decommission an application, though, the reverse seldom happens. Access that is no longer required is often kept in place because of the fear that removing it from the network could break something else. While that may seem to fall into the “if it ain’t broke, don’t fix it” category, the opposite is actually true. Open access that serves no business purpose piles up and creates unnecessary clutter – making an attacker’s life a lot easier, and your audit preparation team’s life a lot harder.
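One practical way to hunt for rules left over from decommissioned applications is to look at hit counts: a rule no traffic has matched in months is a strong decommissioning candidate. Here is a minimal sketch of that idea; the rule records and dates are made up for illustration, since in practice you would pull last-hit data from your firewall’s logs or management tools:

```python
from datetime import datetime, timedelta

# Hypothetical rule records; real last-hit data would come from
# your firewall's hit-count logs, not a hard-coded list.
rules = [
    {"name": "allow-crm-db",    "last_hit": datetime(2014, 1, 5)},
    {"name": "allow-legacy-app", "last_hit": datetime(2012, 3, 1)},
    {"name": "allow-web",       "last_hit": datetime(2014, 6, 1)},
]

def stale_rules(rules, now, max_idle_days=180):
    """Return names of rules with no traffic hits within max_idle_days."""
    cutoff = now - timedelta(days=max_idle_days)
    return [r["name"] for r in rules if r["last_hit"] < cutoff]

print(stale_rules(rules, now=datetime(2014, 7, 1)))
# Only the long-idle legacy rule is flagged for review
```

A flagged rule still deserves a human review before removal – some legitimate rules (disaster recovery paths, annual batch jobs) are hit rarely by design.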
- Tolerating or encouraging ineffective communication between teams: Maintaining a large IT infrastructure requires multiple teams—a security team to define and enforce policies, an operations team to make sure that the network is available and operates properly, and an applications team to support the business applications. Typically these teams care very little about each other – they speak very different languages, and do a terrible job of communicating with each other. There are a lot of reasons for this: different reporting structures, cultural differences and different goals. That makes it hard for one team to see the network and its challenges the same way as another, which introduces mistakes and makes for lengthy lead times for processing changes.
- Failing to document enough (or at all!): Let’s face it, documenting is probably the least enjoyable part of IT work for most people, but it is critical. If a rule isn’t documented, we won’t know (or won’t remember in four years’ time) why it’s in place. And, if we don’t know why it’s there to begin with, it will be a challenge to know how to manage changes that affect it. In addition, poor documentation makes for very awkward audits. You can’t say to an auditor…“well Bob wrote it, and he left 2 years ago”. You must to be able to answer with certainty when asked why a rule exists. Trying to figure it out months or years after implementation with someone looking over your shoulder is even less fun that documenting it initially.
- Not recycling existing firewall rules and objects: This happens all the time. One person calls all the database farm IP addresses “DB_srv.” A few weeks later, someone else creates “dbserver” for the same addresses and, a couple months after that, someone creates “databasesrv” for the same purpose. Not only does all that duplication create clutter, it confuses the heck out of your teammates who may try to figure out why all three were needed and what differences exist between them.
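Duplicate objects like these are easy to surface programmatically: group object names by the set of addresses they resolve to, and any group with more than one name is a cleanup candidate. A minimal sketch, using the made-up object names from the example above (a real version would parse your firewall configuration instead):

```python
from collections import defaultdict

# Hypothetical object definitions, as they might be pulled from a config.
objects = {
    "DB_srv":      {"10.0.1.5", "10.0.1.6"},
    "dbserver":    {"10.0.1.5", "10.0.1.6"},
    "databasesrv": {"10.0.1.6", "10.0.1.5"},
    "web_farm":    {"10.0.2.10"},
}

def find_duplicates(objects):
    """Group object names that resolve to the identical address set."""
    groups = defaultdict(list)
    for name, addrs in objects.items():
        # frozenset makes the address set usable as a dict key,
        # and ignores the order addresses were listed in.
        groups[frozenset(addrs)].append(name)
    return [sorted(names) for names in groups.values() if len(names) > 1]

print(find_duplicates(objects))
# The three database objects surface as one duplicate group
```

Once duplicates are identified, the safe path is to consolidate references onto one canonical object before deleting the others.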
- Permitting “cowboy” changes: Every organization has “cowboys” – you know, those administrators who fire up the firewall console and make a change – completely out of process, and without the required approvals. While this may be done with the best intentions (e.g. an ad-hoc change that needs to be performed quickly) it can have disastrous repercussions (there is a reason the process was put in place). According to AlgoSec’s 2014 State of Network Security Survey, 82% of organizations suffered an application or network outage as a result of an out-of-process security change.
- Making manual “fat finger” input mistakes: To err is human but to forgive is not in the nature of IT systems, so if you’re manually coding or processing changes, you run the risk of making mistakes that could leave you vulnerable to attacks or outages. An administrator in one company that we worked with accidentally typed port 433 instead of port 443 when making a firewall rule change – let’s just say it was not a good day for him. Without a way to catch errors or automate processes, you run the risk of introducing mistakes and wasting time on activities that could be quickly done by software.
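The 433-vs-443 story illustrates why even a simple validation step catches typos before they reach the firewall. A minimal sketch, assuming a hypothetical allow-list of approved service ports (a real change workflow would load this from policy data rather than hard-code it):

```python
# Hypothetical allow-list of approved service ports for this environment.
APPROVED_PORTS = {80: "http", 443: "https", 22: "ssh"}

def check_port(port):
    """Reject any port not on the approved list before the change is pushed."""
    if port not in APPROVED_PORTS:
        raise ValueError(f"Port {port} is not an approved service port")
    return APPROVED_PORTS[port]

print(check_port(443))   # passes: 'https'
try:
    check_port(433)      # the fat-finger typo from the story above
except ValueError as err:
    print(err)           # the typo is caught before it ships
```

The same pattern – validate structured input against known-good values before applying it – is exactly what automated change tools do at scale.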
Mitigating these deadly sins requires process, visibility and automation. It’s a journey well worth embarking on to improve both security and business agility. To learn more about these deadly sins, and how you can address them with automated security policy management, check out our latest webinar Shift Happens: Eliminating the Risks of Network Security Policy Change.