Intelligent network segmentation is a key strategy for reducing the attack surface of data center networks. Just as the watertight compartments in an ocean-going ship are designed to contain flooding if the hull is breached, segmentation isolates servers and systems into separate zones, preventing intruders or malware from moving from one zone to the next and limiting the potential damage from a security breach or incident.
So, it’s no surprise that the use of network segmentation as a defense-in-depth strategy for data center networks is growing. However, deciding exactly where to place the boundaries that will separate those network segments isn’t always easy, especially in complex, multi-network, multi-vendor environments. Here’s how security teams can simplify the task of deciding where to place the borders between segments.
Finding the flows
The first step is to identify all the application network flows within the data center. A good way of doing this is to use a NetFlow source that feeds a discovery engine.
An intelligent discovery engine can then identify and group together those flows which have a logical connection to each other – such as those based on shared IP addresses, which can indicate that the flows support the same business application.
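As a rough sketch of this grouping step – not the actual algorithm of any particular discovery engine – assume each flow is a (source IP, destination IP) pair. Flows that share an endpoint address can be clustered together with a simple union-find structure:

```python
from collections import defaultdict

def group_flows(flows):
    """Group (src_ip, dst_ip) flow tuples that share an endpoint IP,
    on the assumption that shared addresses hint at one business app."""
    parent = {}

    def find(x):
        # Follow parent pointers to the root, compressing the path.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for src, dst in flows:
        union(src, dst)

    # Collect flows under the root of their source IP's cluster.
    groups = defaultdict(list)
    for flow in flows:
        groups[find(flow[0])].append(flow)
    return list(groups.values())
```

For example, flows `("10.0.0.1", "10.0.0.2")` and `("10.0.0.2", "10.0.0.3")` share the address `10.0.0.2`, so they land in one group, while `("10.1.1.1", "10.1.1.2")` ends up in a group of its own.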
This information can be augmented with additional data, such as labels for device object names or application names that are relevant to the flows. When collated, this creates a complete map of the flows, servers and devices that your business applications rely on in order to function.
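The labeling step can be sketched as a simple lookup, assuming you have an inventory source (a CMDB export, for instance – the `names` mapping here is hypothetical) that maps IP addresses to device or application names:

```python
def label_flows(flows, names):
    """Attach human-readable labels to raw (src_ip, dst_ip) flows.
    `names` maps an IP address to a device or application name;
    addresses missing from the inventory are marked 'unknown'."""
    return [
        {
            "src": src,
            "dst": dst,
            "src_name": names.get(src, "unknown"),
            "dst_name": names.get(dst, "unknown"),
        }
        for src, dst in flows
    ]
```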
Once you have identified your network and application flows, you’re ready to start creating your network segmentation scheme. This means choosing the best place on the internal data center network to place your traffic filters and create the borders between segments. To do this, you need to establish exactly what will happen to the application flows once those filters are introduced.
It’s important to remember that when you place a filtering device (or activate a virtualized micro-segmentation technology) within your internal data center network to create a border between segments, some of the application traffic flows will need to cross that border. These flows will therefore need explicit filtering policy rules that allow them to cross the border – otherwise, the flows will be blocked, and the applications that rely on these flows could fail.
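One way to see which flows a proposed border would affect is to assign each endpoint to a candidate segment and flag every flow whose endpoints fall in different segments. This is a minimal sketch under that assumption, not a substitute for testing against the real filtering device:

```python
def flows_crossing_border(flows, segment_of):
    """Return the flows whose endpoints land in different segments
    under a proposed segmentation scheme. Each of these will need an
    explicit allow rule at the border, or the application behind it
    may break. `segment_of` maps an IP address to a segment name;
    unmapped addresses compare as None, so map all endpoints first."""
    crossing = []
    for src, dst in flows:
        if segment_of.get(src) != segment_of.get(dst):
            crossing.append((src, dst))
    return crossing
```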
Enforcing border controls
So how do you know when you need to add specific rules to security controls, and what those rules should be? The critical step is to examine the application flows that were identified in your initial discovery process. When you do this, you should note whether or not the flow already passes through an existing firewall, filter or security control.
If the discovery process shows that a given flow is already being filtered by a firewall, then there is usually no need to add another explicit firewall rule for that flow when you start to segment your network.
However, the discovery process may show that some application flows do not currently pass through any traffic filter. So, if you plan to place a filter in order to create a new network segment, you need to consider whether these unfiltered flows might get blocked when the new filtering is applied. If so, you will need to add new, explicit rules to allow those flows through the segment border.
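The rule of thumb in the last two paragraphs can be expressed in a few lines. This is an illustrative sketch, assuming you already have the set of flows that will cross the new border and the set of flows your discovery data shows are already filtered:

```python
def new_rules_required(border_crossing_flows, filtered_flows):
    """A flow crossing the new segment border needs a new explicit
    allow rule only if it is not already passing through an existing
    firewall or filter (where a rule for it should already exist)."""
    return [f for f in border_crossing_flows if f not in filtered_flows]
```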
The key takeaway is this: use your discovery system to identify application flows, and combine that information with information from your firewalls and security devices to recognize which flows are currently being filtered, and which are unfiltered. This will greatly assist you in deciding exactly where to place the segment borders in your data center.
Watch me explain this in detail in my new whiteboard video lesson.