
  • Optimizing Network Security and Accelerating Operations for a Major Telecommunications Provider - AlgoSec


  • AlgoSec | How AppSec Network Engineers Can Align Security with the Business

    Application Connectivity Management | How AppSec Network Engineers Can Align Security with the Business
    Eric Jeffery · 2 min read · Published 7/13/22

    Eric Jeffery, AlgoSec’s regional solutions engineer, gives his view on the pivotal role of AppSec network engineers and how they can positively impact the business.

    It may surprise many people, but the number one skills gap hampering today’s application security network engineers centers on soft skills: communication, writing, presentation, team building, and critical thinking. Why is this so important? Because first and foremost, their goal is to manage the organization’s security posture by deploying the application security tools and technologies best suited to the specific security and growth needs of the business.

    Keep things safe, but don’t get in the way of revenue generation

    What an application security network engineer should not do is get in the way of developing new business-critical or revenue-generating applications. At the same time, they need to understand that they have a leadership role to play in steering a safe and profitable course for the business. Starting with an in-depth understanding of all wired traffic, AppSec network engineers need to know what applications are running on the network, how they communicate, who they communicate with, and how to secure the traffic and connectivity flow associated with each of them. An AppSec network engineer’s expertise should extend well beyond mastering simple protocols such as FTP and SSH; business traffic continuity should sit at the pinnacle of their responsibilities.
    There’s a lot of revenue-generating traffic that they need to understand and put the right guardrails around to protect. Equally important, they need to make sure the traffic is not hindered by outdated or irrelevant rules and policies, to avoid any negative financial impact on the organization.

    Layers of expertise beyond the OSI model

    A good starting point for any AppSec network engineer is a commanding knowledge of the seven layers of the OSI model. In practical terms, this means a thorough understanding of the network and transport layers: knowing what traffic is going across the network and why. It’s also helpful to have basic scripting knowledge, such as setting up a cron job for scheduling tasks, and some basic programming in a language like Perl or PHP can also be useful. Beyond the network skills, AppSec network engineers should grasp the business vertical in which they operate. Once they understand the business’s DNA and the applications that make it tick, they can add real value to their organizations.

    What’s on the network vs. what should be on the network

    Should AppSec network engineers be expected to understand business and applications? Absolutely. With this level of skill and knowledge, they can help the business progress securely by correlating what is actually in the network environment with what should be in the environment. Once they have a clear understanding, they can clean up the environment and optimize network performance with enhanced security. This becomes more critical as organizations grow and develop, often allowing too much unnecessary traffic into the environment. Typically, the scenario plays out like this: applications are added or removed (decommissioned), or a new vendor or solution is brought on board, and the firewall turns into a de facto router.
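Correlating what is actually on the network with what should be there can be sketched as a simple set difference between observed traffic flows and an approved application inventory. A minimal sketch in Python; all application names, hosts, and ports below are hypothetical:

```python
# Sketch: compare what is actually on the network with what should be there.
# All application names, destinations, and ports are hypothetical examples.

# Flows the business has approved, e.g. taken from an application inventory.
approved_flows = {
    ("billing-app", "db-server", 5432),
    ("crm-app", "mail-gateway", 25),
}

# Flows actually observed, e.g. from firewall or NetFlow logs.
observed_flows = {
    ("billing-app", "db-server", 5432),
    ("crm-app", "mail-gateway", 25),
    ("legacy-ftp", "file-server", 21),  # decommissioned app still talking
}

def audit(observed, approved):
    """Return flows to investigate and approved flows that never appear."""
    unapproved = observed - approved  # traffic with no business case
    missing = approved - observed     # expected traffic that is absent
    return unapproved, missing

unapproved, missing = audit(observed_flows, approved_flows)
for src, dst, port in sorted(unapproved):
    print(f"Investigate: {src} -> {dst}:{port} has no approved business case")
```

Even a toy diff like this makes the "firewall as de facto router" problem visible: every unapproved flow is a candidate rule to clean up.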
    The end result often includes new vulnerabilities and too many unnecessary threat vectors. This is precisely where the aforementioned soft skills come in: an AppSec network engineer should be able to call out practices that don’t align with business goals. It’s also incumbent upon organizations to offer soft-skills training to help their AppSec network engineers become more valuable to their teams.

    You need an application view to be effective in securing the business

    When firewalls become de facto routers, organizations end up relying on other areas for security. However, security needs to be aligned with the applications to prevent cyber attacks from getting onto the network and, should they manage to bypass the firewalls, from moving laterally across it. All too often, east-west security is inadequate, so AppSec network engineers need to look at network segmentation and application segmentation as part of a holistic network security strategy. The good news is that some great new technologies can help with segmenting an internal network. The less good news is that there’s a danger in thinking that bolting on new tools will solve the problem. Too often these tools are only partially deployed before the team moves on to the next “latest and greatest” solution. When exploring new technologies, AppSec network engineers must ask themselves: Is there a matching use case for each solution? Will procuring another tool actually secure the environment, or will it just be another useless “flavor of the month”? Regardless, once a new technology is acquired, it is imperative to align the right skilled people with it so the organization can intelligently secure the whole environment before moving on to a new tool.
    To further hone this point: celebrating the introduction of a new firewall is pointless if, at the end of the day, it does not use the right rules and policies. Ushering in new technologies without proper deployment will only leave gaping holes and give organizations a false sense of security, exposing them to continuous risk.

    Don’t put the cloud-native cart before the horse

    The role of an AppSec network engineer becomes even more critical when moving to the cloud. It starts with asking probing questions: What are the applications in the business, and why are we moving them to the cloud? Is it for scalability, speed of access, or to update a legacy system? Will the business benefit from the investment and the potential performance impact? It’s also important to consider the architecture in the cloud: Is it containerized, public cloud, private cloud, or hybrid? Once you get definitive answers to these questions, create reference architectures and get senior-level buy-in. Finally, think about the order in which the enterprise migrates applications to the cloud; consider starting with some non-critical applications that affect only a small number of locations or people before risking critical revenue-generating applications. Don’t put the cart before the horse.

    DevSecOps: we should be working together; you can be sure the criminals are

    Network application security is complicated enough without introducing internal squabbles over resources or sacrificing security for speed. Security teams and development teams need to work together and focus on what is best for the business. Again, this is where soft skills like teamwork, communication, and project management come into play. The bottom line is this: understand bad actors and prepare for the worst. The bad guys are champing at the bit, waiting for your organization to make its next mistake. To beat them, DevSecOps teams must leverage all the resources they have available.
    Future promise or false sense of security?

    There are some exciting new technologies on the horizon to help secure the application environment. Areas like quantum computing, machine learning, AI, and blockchain show great promise in outfoxing cyber criminals in the healthcare and financial services industries. The AppSec network engineer is expected to play a vital role in the viability of these new technologies. Yet the right technology will still need to be applied to the right use case correctly, and then fully deployed, in order to see any effective results.

    The takeaway

    So much of the role of the AppSec network engineer is about taking a cold, hard look at the goals of the business and asking some challenging questions. It all starts with “what’s right for the business?” rather than “what’s the latest technology we can get our hands on?” To be an effective AppSec network engineer, individuals should not only know the corporate network inside out, but must also have an overall grasp of the applications and the business cases they support. Furthermore, collaboration with developers and operations (DevOps) becomes an agent for rapid deployment of revenue-generating or mission-critical applications. But it still comes back to the soft skills. To protect the business from needless security risks and earn a seat at the decision-making table, AppSec network engineers need to apply strong leadership, project management, and communication skills.

  • Check Point and AlgoSec | AlgoSec

    AlgoSec & Check Point: AlgoSec seamlessly integrates with Check Point’s NGFWs to automate application- and user-aware security policy management and ensure that Check Point devices are properly configured. AlgoSec supports the entire security policy management lifecycle, from application connectivity discovery, through ongoing management and compliance, to rule recertification and secure decommissioning.

  • Measures that actually DO reduce your hacking risk | AlgoSec

    Webinars | Measures that actually DO reduce your hacking risk

    Learn from the best how to defeat hackers and ransomware. As incidents of ransomware attacks become more common, the time has come to learn from the best how to defeat hackers. Join us as Robert Bigman, the former CISO of the CIA, presents his webinar Measures that Actually Do Reduce Your Hacking Risk. Robert Bigman is uniquely equipped to share actionable tips for hardening your network security against vulnerabilities. Don’t miss this opportunity to learn about the latest threats and how to handle them.

    April 20, 2022 · Robert Bigman, Consultant; Former CISO of the CIA

    Relevant resources: Ensuring critical applications stay available and secure while shifting to remote work · Reducing risk of ransomware attacks - back to basics · Ransomware Attack: Best practices to help organizations proactively prevent, contain and

  • AlgoSec | The Application Migration Checklist

    Firewall Change Management | The Application Migration Checklist
    Asher Benbenisty · 2 min read · Published 10/25/23

    All organizations eventually inherit outdated technology infrastructure. As new technology becomes available, old apps and services become increasingly expensive to maintain. That expense can come in a variety of forms:

    • Decreased productivity compared to competitors using more modern IT solutions.
    • Greater difficulty scaling IT asset deployments and managing the device life cycle.
    • Security and downtime risks from new vulnerabilities and emerging threats.

    Cloud computing is one of the most significant developments of the past decade. Organizations are increasingly moving their legacy IT assets to environments hosted on cloud services like Amazon Web Services or Microsoft Azure. Cloud migration projects enable organizations to dramatically improve productivity, scalability, and security by transforming on-premises applications into cloud-hosted solutions. However, cloud migration projects are among the most complex undertakings an organization can attempt. Some reports state that nine out of ten migration projects experience failure or disruption at some point, and only one out of four meet their proposed deadlines. The better prepared you are for your application migration project, the more likely it is to succeed. Keep the following migration checklist handy while pursuing this kind of initiative at your company.

    Step 1: Assessing Your Applications

    The more you know about your legacy applications and their characteristics, the more comprehensive you can be with pre-migration planning.
    Start by identifying the legacy applications that you want to move to the cloud. Pay close attention to the dependencies your legacy applications have. You will need to ensure the availability of those resources in an IT environment that is very different from the typical on-premises data center. You may need to configure cloud-hosted resources to meet needs that are unique to your organization and its network architecture.

    Evaluate the criticality of each legacy application you plan to migrate. You will have to prioritize certain applications over others, minimizing disruption while ensuring the cloud-hosted infrastructure can support the workload you are moving. There is no one-size-fits-all solution to application migration. The inventory assessment may bring new information to light and force you to change your initial approach. It’s best to make these accommodations now rather than halfway through the migration project.

    Step 2: Choosing the Right Migration Strategy

    Once you know which applications you want to move to the cloud and which additional dependencies must be addressed for them to work properly, you’re ready to select a migration strategy. These are generalized models that describe how you’ll transition on-premises applications to cloud-hosted ones in the context of your specific IT environment. Some of the options you should become familiar with include:

    • Lift and shift (rehosting). This option enables you to automate the migration process using tools like CloudEndure Migration and AWS VM Import/Export. The lift-and-shift model is well suited to organizations that need to migrate compatible large-scale enterprise applications without too many additional dependencies, or to organizations that are new to the cloud.

    • Replatforming. This is a modified version of the lift-and-shift model.
    Essentially, it introduces an additional step where you change the configuration of legacy apps to make them better suited to the cloud environment. By adding a modernization phase to the process, you can leverage more of the cloud’s unique benefits and migrate more complex apps.

    • Refactoring/re-architecting. This strategy involves rewriting applications from scratch to make them cloud-native, allowing you to reap the full benefits of cloud technology. Your new applications will be as scalable, efficient, and agile as possible. However, it’s a time-consuming, resource-intensive project that introduces significant business risk.

    • Repurchasing. This is where the organization implements a fully mature cloud architecture as a managed service. It typically relies on a vendor offering cloud migration through the software-as-a-service (SaaS) model. You will need to pay licensing fees, but the technical details of the migration process will largely be the vendor’s responsibility. This is an easy way to add cloud functionality to existing business processes, but it also comes with the risk of vendor lock-in.

    Step 3: Building Your Migration Team

    The success of your project relies on creating and leading a migration team that can respond to the needs of the project at every step. There will be obstacles and unexpected issues along the way; a high-quality team with great leadership is crucial for handling those problems when they arise. Before going into the specifics of assembling a great migration team, you’ll need to identify the key stakeholders who have an interest in seeing the project through. This is extremely important because those stakeholders will want to see their interests represented at the team level. If you neglect to represent a major stakeholder, you run the risk of having major, expensive project milestones rejected later on.
    Not all stakeholders will have the same level of involvement, and few will share the same values and goals. Managing them effectively means prioritizing the values and goals they represent and choosing team members accordingly. Your migration team will consist of systems administrators, technical experts, and security practitioners, with input from many other departments. You’ll need to formalize a system for communicating inside the core team and for messaging stakeholders outside of it. You may also wish to involve end users as a distinct part of your migration team and dedicate time to addressing their concerns throughout the process. Keep team members’ stakeholder alignments and interests in mind when assigning responsibilities. For example, if a particular configuration step requires approval from the finance department, make sure someone representing that department is involved from the beginning.

    Step 4: Creating a Migration Plan

    It’s crucial that every migration project follows a comprehensive plan informed by the needs of the organization itself. Organizations pursue cloud migration for many different reasons; your plan should address the problems you expect cloud-hosted technology to solve. This might mean focusing on reducing costs, enabling entry into a new market, or increasing business agility (or all three). You may have additional reasons for pursuing an application migration plan. The plan should also include data mapping. Choosing the right application performance metrics now will make decision-making much easier down the line. Some of the data points that cloud migration specialists recommend capturing include:

    • Duration highlights the value of employee labor-hours as they perform tasks throughout the process. Operational duration metrics can tell you how much time project managers spend planning the migration, or whether one phase is taking much longer than another, and why.
    • Disruption metrics can help identify user-experience issues that become obstacles to onboarding and full adoption. Collecting data about the availability of critical services and the number of service tickets generated throughout the process can help you gauge the overall success of the initiative from the user’s perspective.

    • Cost includes more than data transfer rates. Application migration initiatives also require creating dependency mappings, changing applications to make them cloud-native, and significant administrative overhead. Up to 50% of a migration’s costs pay for labor, and you’ll want to keep close tabs on those costs as the process goes on.

    • Infrastructure metrics like CPU usage, memory usage, network latency, and load balancing are best captured both before and after the project takes place. This will let you understand and communicate the value of the project using straightforward before-and-after comparisons.

    • Application performance metrics like availability figures, error rates, time-outs, and throughput will help you calculate the value of the migration as a whole. This is another metric best captured both before and after the migration.

    You will also want to establish a series of cloud service-level agreements (SLAs) that ensure a predictable minimum level of service is maintained. This is an important guarantee of the reliability and availability of the cloud-hosted resources you expect to use on a daily basis.

    Step 5: Mapping Dependencies

    Mapping dependencies completely and accurately is critical to the success of any migration project. If you haven’t correctly identified every element in your software ecosystem, you won’t be able to guarantee that your applications will work in the new environment. Application dependency mapping will help you pinpoint which resources your apps need and allow you to make those resources available.
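Dependency mapping often boils down to a graph problem: applications that share dependencies form clusters that must migrate together. A minimal sketch, assuming dependency edges have already been collected (all application names are hypothetical):

```python
# Sketch: group apps into migration units via connected components.
# Dependency edges are assumed to come from a prior discovery step.
from collections import defaultdict

# Undirected dependency edges (hypothetical apps and services).
dependencies = [
    ("web-frontend", "orders-api"),
    ("orders-api", "orders-db"),
    ("reporting", "warehouse-db"),
]

def migration_groups(edges):
    """Return connected components; each component migrates as one unit."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:  # iterative depth-first search over the component
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            group.add(cur)
            stack.extend(graph[cur] - seen)
        groups.append(group)
    return groups

groups = migration_groups(dependencies)
# Here: one group for the orders stack, one for the reporting stack,
# suggesting two separate migration windows.
```

Real dependency maps are far larger, but the principle is the same: components of the graph define which apps share a migration window.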
    You’ll need to discover and assess every workload your organization undertakes and map out the resources and services it relies on. This process can be automated, which helps large-scale enterprises create accurate maps of complex interdependent processes. In most cases, the mapping process will reveal clusters of applications and services that need to be migrated together. You will have to identify the appropriate windows of opportunity for performing these migrations without disrupting the workloads they process. This often means managing data transfer and database migration tasks and carrying them out in a carefully orchestrated sequence.

    You may also discover connectivity and VPN requirements that need to be addressed early on. For example, you may need to establish protocols for private access and delegate responsibility for managing connections to someone on your team. Project stakeholders may have additional connectivity needs, like VPN functionality for securing remote connections. These should be reflected in the application dependency mapping process. Multi-cloud compatibility is another issue that will demand your attention at this stage. If your organization plans to use multiple cloud providers and configure them to run platform-specific workloads, you will need to make sure that the results of these processes are communicated and stored in compatible formats.

    Step 6: Selecting a Cloud Provider

    Once you fully understand the scope and requirements of your application migration project, you can begin comparing cloud providers. Amazon, Microsoft, and Google account for the majority of all public cloud deployments, and the vast majority of organizations start their search with one of these three.

    Amazon AWS has the largest market share, thanks to starting its cloud infrastructure business several years before its major competitors did.
    Amazon’s head start makes finding specialist talent easier, since more potential candidates will have familiarity with AWS than with Azure or Google Cloud. Many different vendors offer services through AWS, making it a good choice for cloud deployments that rely on multiple services and third-party subscriptions.

    Microsoft Azure has a longer history of serving enterprise customers, even though its cloud computing division is smaller and younger than Amazon’s. Azure offers a relatively easy transition path that helps enterprise organizations migrate to the cloud without adding a large number of additional vendors. This can streamline complex cloud deployments, but it also increases your reliance on Microsoft as your primary vendor.

    Google Cloud is third in market share. It continues to invest in cloud technologies and is responsible for a few major innovations in the space, like the Kubernetes container orchestration system. Google integrates well with third-party applications and provides a robust set of APIs for high-impact processes like translation and speech recognition.

    Your organization’s needs will dictate which of the major cloud providers offers the best value. Each provider has a different pricing model, which will affect how your organization arrives at a cost-effective solution. Cloud pricing varies based on customer specifications, usage, and SLAs, which means no single provider is necessarily “the cheapest” or “the most expensive”; it depends on the context. Additional cost considerations include scalability and uptime guarantees. As your organization grows, you will need to expand its cloud infrastructure to accommodate more resource-intensive tasks, which will affect the cost of your cloud subscription in the future. Similarly, your vendor’s uptime guarantee can be a strong indicator of how invested it is in your success.
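Uptime guarantees are easier to compare once translated into a downtime budget. A quick sketch of the arithmetic (the percentages are illustrative, not any particular vendor's actual SLA):

```python
# Sketch: convert an uptime percentage into an allowed-downtime budget,
# a simple way to compare vendors' SLA numbers side by side.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def allowed_downtime_minutes(uptime_pct: float) -> float:
    """Monthly downtime implied by an uptime percentage."""
    return MINUTES_PER_MONTH * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    mins = allowed_downtime_minutes(pct)
    print(f"{pct}% uptime allows ~{mins:.1f} min/month of downtime")
```

The jump from 99.9% (about 43 minutes a month) to 99.99% (about 4 minutes) is exactly the kind of difference that matters for revenue-generating applications.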
    Given that all vendors work on the shared responsibility model, it may be prudent to consider an enterprise data backup solution for peace of mind.

    Step 7: Application Refactoring

    If you choose to invest time and resources into refactoring applications for the cloud, you’ll need to consider how this affects the overall project. Modifying existing software to take advantage of cloud-based technologies can dramatically improve the efficiency of your tech stack, but it involves significant risk and up-front cost.

    Some of the advantages of refactoring include:

    • Reduced long-term costs. Developers refactor apps with a specific context in mind, so the refactored app can be configured to accommodate the resource requirements of the new environment precisely. This boosts the long-term return on the refactoring investment and makes the deployment more scalable overall.

    • Greater adaptability when requirements change. If your organization frequently adapts to changing business requirements, refactored applications may provide a flexible platform for accommodating unexpected changes. This makes refactoring attractive for businesses in highly regulated industries, or in scenarios with heightened uncertainty.

    • Improved application resilience. Your cloud-native applications will be decoupled from their original infrastructure, so they can take full advantage of the benefits cloud-hosted technology offers. Features like low-cost redundancy, high availability, and security automation are much easier to implement with cloud-native apps.

    Some of the drawbacks you should be aware of include:

    • Vendor lock-in risk. As your apps become cloud-native, they will naturally draw on cloud features that enhance their capabilities, ending up tightly coupled to the cloud platform you use. You may reach a point where withdrawing those apps and migrating them to a different provider becomes infeasible, or impossible.
    • Time and talent requirements. Refactoring takes a great deal of time and specialist expertise. If your organization doesn’t have ample amounts of both, the process may take too long and cost too much to be feasible.

    • Errors and vulnerabilities. Refactoring involves making major changes to the way applications work. If errors creep in at this stage, they can deeply affect the usability and security of the workload itself.

    Organizations can use cloud-based templates to address some of these risks, but it takes comprehensive visibility into how applications interact with cloud security policies to close every gap.

    Step 8: Data Migration

    There are many factors to consider when moving data from legacy applications to cloud-native apps. Some of the things you’ll need to plan for include:

    • Selecting the appropriate data transfer method. This depends on how much time you have available for completing the migration and how well you plan for potential disruptions. If you are moving significant amounts of data over the public internet, saturating your regular internet connection may be unwise. Offline transfer doesn’t carry this risk, but it does add costs.

    • Ensuring data center compatibility. Whether transferring data online or offline, compatibility issues can lead to complex problems and expensive downtime if not properly addressed. Your migration strategy should include a data migration testing plan that ensures all of your data is properly formatted and ready to use the moment it arrives in the new environment.

    • Utilizing migration tools for smooth data transfer. The three major cloud providers all offer cloud migration tools with multiple tiers and services. You may need to use these tools to guarantee a smooth transfer experience, or rely on a third-party partner for this step in the process.
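The choice between online and offline transfer often comes down to simple arithmetic on data volume and bandwidth. A rough sketch (the data size, link speed, and utilization figure are assumptions for illustration):

```python
# Sketch: estimate online transfer time to decide whether an offline
# (shipped-appliance) transfer is worth the extra cost.

def transfer_days(data_tb: float, bandwidth_mbps: float,
                  utilization: float = 0.8) -> float:
    """Days to move data_tb terabytes over a link at the given utilization."""
    bits = data_tb * 1e12 * 8                         # terabytes -> bits
    seconds = bits / (bandwidth_mbps * 1e6 * utilization)
    return seconds / 86400                            # seconds -> days

# Example assumption: 50 TB over a 1 Gbps link, using 80% of the link so
# regular business traffic is not starved.
days = transfer_days(50, 1000)
```

Under these assumptions the transfer takes the better part of a week; at tens of terabytes and modest bandwidth, offline transfer quickly becomes attractive.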
    Step 9: Configuring the Cloud Environment

    By the time your data arrives in its new environment, you will need to have virtual machines and resources set up to seamlessly take over your application workloads and processes. At the same time, you’ll need a comprehensive set of security policies, enforced by firewall rules, that address the risks unique to cloud-hosted infrastructure. As with many other steps in this checklist, you’ll want to carefully assess, plan, and test your virtual machine deployments before putting them into a live production environment. Gather information about your source and target environments and document the workloads you wish to migrate. Set up a test environment you can use to make sure your new apps function as expected before clearing them for live production.

    Similarly, you may need to configure and change firewall rules frequently during the migration. Make sure your new deployments are secured with reliable, well-documented security policies. If you skip the documentation phase of building your firewall policy, you risk introducing security vulnerabilities into the cloud environment that will be very difficult to identify and address later on. You will also need to configure and deploy the network interfaces that dictate where and when your cloud environment interacts with other networks, both inside and outside your organization. This is your chance to implement secure network segmentation that protects mission-critical assets from advanced and persistent cyberattacks. It is also the best time to implement disaster recovery mechanisms you can rely on for business continuity even if mission-critical assets and apps experience unexpected downtime.

    Step 10: Automating Workflows

    Once your data and apps are fully deployed on secure cloud-hosted infrastructure, you can begin taking advantage of the suite of automation features your cloud provider offers.
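The well-documented firewall policy called for in Step 9 can be enforced mechanically by treating each rule as a record that must carry an owner and a justification. A minimal sketch; the field names and rules are hypothetical, not any product's actual data model:

```python
# Sketch: represent firewall rules as documented, reviewable records and
# flag any rule missing an owner or justification. All values hypothetical.
from dataclasses import dataclass

@dataclass
class FirewallRule:
    name: str
    source: str
    destination: str
    port: int
    owner: str = ""          # who requested the rule
    justification: str = ""  # business reason the rule exists

rules = [
    FirewallRule("allow-web-to-api", "10.0.1.0/24", "10.0.2.10", 443,
                 owner="app-team", justification="Orders API traffic"),
    FirewallRule("temp-debug", "0.0.0.0/0", "10.0.2.10", 22),  # undocumented
]

def undocumented(rules):
    """Return names of rules missing an owner or a justification."""
    return [r.name for r in rules if not (r.owner and r.justification)]

flagged = undocumented(rules)
```

Running a check like this in review gates keeps "temporary" undocumented rules from surviving the migration.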
Depending on your choice of migration strategy, you may be able to automate repetitive tasks, streamline post-migration processes, or enhance the productivity of entire departments using sophisticated automation tools.

In most cases, automating routine tasks will be your first priority. These automations are among the simplest to configure because they largely involve high-volume, low-impact tasks. Ideally, these tasks are also isolated from mission-critical decision-making processes. If you established a robust set of key performance indicators earlier in the migration project, you can also automate post-migration processes that involve capturing and reporting those data points.

Your apps will need to continue ingesting and processing data, making data validation another prime candidate for workflow automation. Cloud-native apps can ingest data from a wide range of sources, but they often need some form of validation and normalization to produce predictable results. Ongoing testing and refinement will help you make the most of your migration project moving forward.

How AlgoSec Enables Secure Application Migration

Visibility and Discovery: AlgoSec provides comprehensive visibility into your existing on-premises network environment. It automatically discovers all network devices, applications, and their dependencies. This visibility is crucial when planning a secure migration, ensuring no critical elements get overlooked in the process.

Application Dependency Mapping: AlgoSec's application dependency mapping capabilities allow you to understand how different applications and services interact within your network. This knowledge is vital during migration to avoid disrupting critical dependencies.

Risk Assessment: AlgoSec assesses the security and compliance risks associated with your migration plan. It identifies potential vulnerabilities, misconfigurations, and compliance violations that could impact the security of the migrated applications.
Security Policy Analysis: Before migrating, AlgoSec helps you analyze your existing security policies and rules. It ensures that security policies are consistent and effective in the new cloud or data center environment. Misconfigurations and unnecessary rules can be eliminated, reducing the attack surface.

Automated Rule Optimization: AlgoSec automates the optimization of security rules. It identifies redundant rules, suggests rule consolidations, and ensures that only necessary traffic is allowed, helping you maintain a secure environment during migration.

Change Management: During the migration process, changes to security policies and firewall rules are often necessary. AlgoSec facilitates change management by providing a streamlined process for requesting, reviewing, and implementing rule changes. This ensures that security remains intact throughout the migration.

Compliance and Governance: AlgoSec helps maintain compliance with industry regulations and security best practices. It generates compliance reports, ensures rule consistency, and enforces security policies, even in the new cloud or data center environment.

Continuous Monitoring and Auditing: Post-migration, AlgoSec continues to monitor and audit your security policies and network traffic. It alerts you to any anomalies or security breaches, ensuring the ongoing security of your migrated applications.

Integration with Cloud Platforms: AlgoSec integrates seamlessly with various cloud platforms such as AWS, Microsoft Azure, and Google Cloud. This ensures that security policies are consistently applied in both on-premises and cloud environments, enabling a secure hybrid or multi-cloud setup.

Operational Efficiency: AlgoSec's automation capabilities reduce manual tasks, improving operational efficiency. This is essential during the migration process, where time is often of the essence.
Real-time Visibility and Control: AlgoSec provides real-time visibility and control over your security policies, allowing you to adapt quickly to changing migration requirements and security threats.
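The data-validation automation mentioned under Step 10 can be sketched as a small schema check that every ingested record passes before it reaches the application. The schema and records here are invented for illustration:

```python
def validate_record(record: dict, schema: dict) -> list:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for name, expected_type in schema.items():
        if name not in record:
            problems.append(f"missing field: {name}")
        elif not isinstance(record[name], expected_type):
            problems.append(f"wrong type for {name}")
    return problems

SCHEMA = {"customer_id": int, "email": str, "balance": float}

good = {"customer_id": 7, "email": "a@example.com", "balance": 12.5}
bad = {"customer_id": "7", "email": "a@example.com"}

# validate_record(good, SCHEMA) -> []
# validate_record(bad, SCHEMA)  -> ["wrong type for customer_id", "missing field: balance"]
```

Running every inbound record through a check like this, and quarantining failures, is the kind of high-volume, low-impact task that is safe to automate first.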

  • Master the Zero Trust strategy for improved cybersecurity | AlgoSec

    Learn best practices to secure your cloud environment and deliver applications securely Webinars Master the Zero Trust strategy for improved cybersecurity Learn how to implement zero trust security into your business In today’s digital world, cyber threats are becoming more complex and sophisticated. Businesses must adopt a proactive approach to cybersecurity to protect their sensitive data and systems. This is where zero trust security comes in – a security model that requires every user, device, and application to be verified before granting access. If you’re looking to implement zero trust security in your business or want to know more about how it works, you’ll want to watch this webinar. AlgoSec co-Founder and CTO Avishai Wool will discuss the benefits of zero trust security and provide you with practical tips on how to implement this security model in your organization. March 15, 2023 Prof. Avishai Wool CTO & Co Founder AlgoSec Relevant resources Protecting Your Network’s Precious Jewels with Micro-Segmentation, Kyle Wickert, AlgoSec Watch Video Professor Wool - Introduction to Microsegmentation Watch Video Five Practical Steps to Implementing a Zero-Trust Network Keep Reading

  • Partner solution brief AlgoSec & Zscaler - AlgoSec

    Partner solution brief AlgoSec & Zscaler Download PDF

  • Stop Putting out Fires. Pass Network Security Audits – Every Time | AlgoSec

    Webinars Stop Putting out Fires. Pass Network Security Audits – Every Time Compliance with network and data security regulations and internal standards is vital and mission-critical. But with increasing global regulations and network complexities, it’s harder than ever to keep up. Firewall management and network security policies are critical components in achieving compliance. Firewall audits are complex and demanding, and documentation of current rules is lacking. There’s no time and resources to find, organize, and inspect all your firewall rules. Instead of being proactive and preventative, network security teams are constantly putting out fires. In this webinar, you will learn: The golden rules for passing a network security audit Best practices to maintain continuous compliance How to conduct a risk assessment and fix issues Learn how to prevent fires and pass network security audits every time. Tal Dayan, AlgoSec’s product manager, will reveal the Firewall Audit Checklist, the six best practices to ensure successful audits. By adopting these best practices, security teams will significantly improve their network’s security posture and reduce the pain of ensuring compliance with regulations, industry standards and corporate policies. October 29, 2019 Tal Dayan AlgoSec security expert Relevant resources Network firewall security management See Documentation Firewall policy management Automate firewall rule changes See Documentation Securing & managing hybrid network security See Documentation

  • AlgoSec | AlgoSec and Zero-Trust for Healthcare

    Before I became a Sales Engineer I started my career working in operations and I don’t remember the first time I heard the term zero trust... Zero Trust AlgoSec and Zero-Trust for Healthcare Adolfo Lopez 2 min read 2/26/24 Published Before I became a Sales Engineer, I started my career working in operations. I don’t remember the first time I heard the term zero trust, but all I knew was that it was very important and everyone was striving to reach that level of security. Today I’ll get into how AlgoSec can help achieve those goals, but first let’s have a quick recap on what zero trust is in the first place. There are countless whitepapers and frameworks that define zero trust much better than I can, but they are also multiple pages long, so I’ll do a quick recap. Traditionally, when designing a network you may have different zones, and each zone might have different levels of access. In many of these designs, a lot of trust is granted once a device is in a certain zone. For example, once someone gets to their workplace at the hospital, the nursing home, the dental center or any other medical office and completes all the necessary authentication steps (proper company laptop, credentials, etc.), they potentially have free rein over everything. This is a very simple example, and in a real-world scenario there would hopefully be many more safeguards in place. But what does happen in real-world scenarios is that devices still manage to get trusted more than they should. From my own experience and from working with customers, this happens far too often. Especially in the healthcare industry, this is becoming more and more important.
These days there are many different types of medical devices: some that hold sensitive information, some scanning instruments, and some that might even be critical to patient support. More importantly, many are connected to some type of network. Because of this level of connectivity, we do need to start shifting toward this idea of zero trust. In healthcare, cybersecurity isn’t just a matter of maintaining the network; it’s about keeping hospitals’ critical operations running smoothly and patient data safe and secure. Maintaining security policies is critical to achieving zero trust. Below you can see some of the key AlgoSec features that can help achieve that goal.

Security Policy Analysis – Analyze existing security policy sets across all parts of the network (on-premises and cloud) with various vendors.
Policy Cleanup – Identify and remove redundant rules, duplicate rules, and more from the first report.
Specific Recommendations – Over time, recommendations become more specific, such as identifying unnecessary rules (e.g., a printer talking to a medical device without actual use).
Application Perspective – Tie firewall rules to actual applications to understand the business function they support, leading to more targeted security policies.
Granularity & Visibility – Higher level of visibility and granularity in security policies, focusing on specific application flows rather than broad network access.
Security Posture by Application – View and assess security risks and vulnerabilities at the application level, improving overall security posture.

One of my favorite aspects of the AlgoSec platform is that we not only help optimize your security policies, but we also start to look at security from an application perspective.
Traditionally, firewall change requests come in asking for very specific things: “Source A to Destination B using Protocol C.” Using AlgoSec, we tie those rules to actual applications to see what business function they support. Knowing the specific flows and tying them to a specific application allows us to keep a closer eye on the security policies we need to create. This helps with the zero trust journey because that higher level of visibility and granularity keeps the rules more specific. Instead of a change request allowing wide-open access between two subnets, the application can be designed for only the access that is required. It also allows for an overall better view of the security posture. Zero trust, like many other ideas and frameworks in our industry, might seem farfetched at first. We ask ourselves how we get there, or how we implement it without it becoming so cumbersome that we give up on it. I think it’s normal to be a bit pessimistic about achieving the goal, and it’s completely fine to look at some projects as moving targets without a hard deadline. There usually isn’t a magic bullet that accomplishes our goals, especially something like achieving zero trust. Multiple initiatives and projects are necessary. With AlgoSec’s expertise in application connectivity and policy management, we can be a key partner in that journey.
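The application-centric view described above can be sketched as grouping narrow flow rules under the business application they serve, so a recertification pass reviews applications rather than thousands of individual rules. The data model below is invented for illustration; it is not AlgoSec's API:

```python
from collections import defaultdict

def rules_by_application(rules):
    """Group flow rules by the application they support;
    rules with no application tag surface as 'unassigned'."""
    grouped = defaultdict(list)
    for rule in rules:
        grouped[rule.get("application", "unassigned")].append(rule)
    return dict(grouped)

rules = [
    {"src": "10.1.0.5", "dst": "10.2.0.8", "svc": "tcp/1433", "application": "billing"},
    {"src": "10.1.0.5", "dst": "10.2.0.9", "svc": "tcp/443", "application": "billing"},
    {"src": "10.3.0.2", "dst": "10.2.0.8", "svc": "tcp/22"},  # no owner application
]
# rules_by_application(rules) groups two rules under "billing"
# and surfaces the untagged SSH rule under "unassigned" for review.
```

Rules that end up in the "unassigned" bucket are exactly the ones most likely to represent the over-broad trust the article warns about.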

  • Environmental, Social responsibility, and Governance (ESG) - AlgoSec

    Environmental, Social responsibility, and Governance (ESG) Download PDF

  • AlgoSec | Sunburst Backdoor: A Deeper Look Into The SolarWinds’ Supply Chain Malware

    Update : Next two parts of the analysis are available here and here . As earlier reported by FireEye, the actors behind a global... Cloud Security Sunburst Backdoor: A Deeper Look Into The SolarWinds’ Supply Chain Malware Rony Moshkovich 2 min read 12/15/20 Published Update : Next two parts of the analysis are available here and here . As earlier reported by FireEye, the actors behind a global intrusion campaign have managed to trojanise SolarWinds Orion business software updates in order to distribute malware. The original FireEye write-up already provides a detailed description of this malware. Nevertheless, as the malicious update SolarWinds-Core-v2019.4.5220-Hotfix5.msp was still available for download for hours after FireEye’s post, it makes sense to have another look into the details of its operation. The purpose of this write-up is to provide new information not covered in the original write-up. Any overlaps with the original description provided by FireEye are not intentional. To start, the malicious component SolarWinds.Orion.Core.BusinessLayer.dll inside the MSP package is a non-obfuscated .NET assembly. It can easily be reconstructed with a .NET disassembler, such as ILSpy , and then fully reproduced in C# code, using Microsoft Visual Studio. Once reproduced, it can be debugged to better understand how it works. In a nutshell, the malicious DLL is a backdoor. It is loaded into the address space of the legitimate SolarWinds Orion process SolarWinds.BusinessLayerHost.exe or SolarWinds.BusinessLayerHostx64.exe . The critical strings inside the backdoor’s class SolarWinds.Orion.Core.BusinessLayer.OrionImprovementBusinessLayer are encoded with the DeflateStream class of .NET’s System.IO.Compression library, coupled with the standard base64 encoder.
Initialisation

Once loaded, the malware checks if its assembly file was created earlier than 12, 13, or 14 days ago. The exact number of hours it checks against is a random number from 288 to 336. Next, it reads the application settings value ReportWatcherRetry . This value keeps the reporting status, and may be set to one of the states: New (4), Truncate (3), or Append (5). When the malware runs for the first time, its reporting status variable ReportWatcherRetry is set to New (4) . The reporting status is an internal state that drives the logic. For example, if the reporting status is set to Truncate , the malware will stop operating by first disabling its networking communications, and then disabling other security tools and antivirus products. In order to stay silent, the malware periodically falls asleep for a random period of time that varies between 30 minutes and 2 hours. At the start, the malware obtains the computer’s domain name . If the domain name is empty, the malware quits. It then generates an 8-byte User ID, which is derived from the system footprint. In particular, it is generated from the MD5 hash of a string that consists of three fields: the physical address of the first or default operational network interface (one that can transmit data packets), the computer’s domain name, and the UUID created by Windows during installation (the machine’s unique ID). Even though it looks random, the User ID stays constant as long as the networking configuration and the Windows installation stay the same.

Domain Generation Algorithm

The malware relies on its own CryptoHelper class to generate a domain name. This class is instantiated from the 8-byte User ID and the computer’s domain name, encoded with a substitution table: “rq3gsalt6u1iyfzop572d49bnx8cvmkewhj” . For example, if the original domain name is “ domain “, its encoded form will look like: “ n2huov “. To generate a new domain, the malware first attempts to resolve the domain name “ api.solarwinds.com “. If it fails to resolve it, it quits.
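The substitution encoding of the victim's domain name can be reproduced from the details above: each character found in the table is replaced by the character four positions later, wrapping around. The fixed offset of four is inferred from the "domain" → "n2huov" example rather than stated explicitly, and the real sample has extra handling for characters outside the table, which this sketch leaves unchanged:

```python
TABLE = "rq3gsalt6u1iyfzop572d49bnx8cvmkewhj"

def encode_domain(name: str) -> str:
    """Apply the observed substitution: shift each in-table character
    four positions forward within the table (inferred offset)."""
    out = []
    for ch in name:
        i = TABLE.find(ch)
        # Characters outside the table are left as-is in this sketch.
        out.append(TABLE[(i + 4) % len(TABLE)] if i >= 0 else ch)
    return "".join(out)

# encode_domain("domain") reproduces the "n2huov" example above.
```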
The first part of the newly generated domain name is a random string, produced from the 8-byte User ID and a random seed value, and encoded with a custom base64 alphabet “ph2eifo3n5utg1j8d94qrvbmk0sal76c” . Because it incorporates a random seed, this first part differs on each generation. For example, it may look like “ fivu4vjamve5vfrt ” or “ k1sdhtslulgqoagy “. To produce the domain name, this string is then appended with the earlier encoded domain name (such as “ n2huov “) and a random string, selected from the following list: .appsync-api.eu-west-1[.]avsvmcloud[.]com .appsync-api.us-west-2[.]avsvmcloud[.]com .appsync-api.us-east-1[.]avsvmcloud[.]com .appsync-api.us-east-2[.]avsvmcloud[.]com For example, the final domain name may look like: fivu4vjamve5vfrtn2huov[.]appsync-api.us-west-2[.]avsvmcloud[.]com or k1sdhtslulgqoagyn2huov[.]appsync-api.us-east-1[.]avsvmcloud[.]com Next, the domain name is resolved to an IP address, or to a list of IP addresses. For example, it may resolve to 20.140.0.1 . The resolved domain name will be returned into an IPAddress structure that contains an AddressFamily field – a special field that specifies the addressing scheme. If the host name returned in the IPAddress structure differs from the queried domain name, the returned host name will be used as a C2 host name for the backdoor.
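The assembly of the final C2 domain from the three parts described above can be sketched as follows. The suffix strings are kept defanged with "[.]" as in the text; the random first part is passed in as a precomputed argument since its exact generation is seed-dependent:

```python
import random

# Defanged suffix list, as quoted in the analysis above.
SUFFIXES = [
    ".appsync-api.eu-west-1[.]avsvmcloud[.]com",
    ".appsync-api.us-west-2[.]avsvmcloud[.]com",
    ".appsync-api.us-east-1[.]avsvmcloud[.]com",
    ".appsync-api.us-east-2[.]avsvmcloud[.]com",
]

def build_domain(random_part: str, encoded_victim_domain: str) -> str:
    """Concatenate random part + encoded victim domain + a random suffix,
    mirroring the structure described in the text."""
    return random_part + encoded_victim_domain + random.choice(SUFFIXES)

# build_domain("fivu4vjamve5vfrt", "n2huov") yields a domain of the same
# shape as the fivu4vjamve5vfrtn2huov[.]appsync-api... example above.
```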
Otherwise, the malware will check if the resolved IP address matches one of the patterns below, in order to return an ‘address family’:

IP Address      Subnet Mask      ‘Address Family’
10.0.0.0        255.0.0.0        Atm
172.16.0.0      255.240.0.0      Atm
192.168.0.0     255.255.0.0      Atm
224.0.0.0       240.0.0.0        Atm
fc00::          fe00::           Atm
fec0::          ffc0::           Atm
ff00::          ff00::           Atm
41.84.159.0     255.255.255.0    Ipx
74.114.24.0     255.255.248.0    Ipx
154.118.140.0   255.255.255.0    Ipx
217.163.7.0     255.255.255.0    Ipx
20.140.0.0      255.254.0.0      ImpLink
96.31.172.0     255.255.255.0    ImpLink
131.228.12.0    255.255.252.0    ImpLink
144.86.226.0    255.255.255.0    ImpLink
8.18.144.0      255.255.254.0    NetBios
18.130.0.0      255.255.0.0      NetBios
71.152.53.0     255.255.255.0    NetBios
99.79.0.0       255.255.0.0      NetBios
87.238.80.0     255.255.248.0    NetBios
199.201.117.0   255.255.255.0    NetBios
184.72.0.0      255.254.0.0      NetBios

For example, if the queried domain resolves to 20.140.0.1 , it will match the table entry 20.140.0.0 , for which the returned ‘address family’ is ImpLink . The returned ‘address family’ invokes additional logic in the malware.

Disabling Security Tools and Antivirus Products

If the returned ‘address family’ is ImpLink or Atm , the malware will enumerate all processes and, for each process, check if its name matches one of the pre-defined hashes. Next, it repeats this process for services and for the drivers installed in the system. If a process name or the full path of an installed driver matches one of the pre-defined hashes, the malware will disable it. For hashing, the malware relies on the Fowler–Noll–Vo algorithm. For example, the core process of Windows Defender is MsMpEng.exe . The hash value of the “ MsMpEng ” string is 5183687599225757871 . This value is enlisted in the malware’s source under a variable named timeStamps : timeStamps = new ulong[1] { 5183687599225757871uL } The service name of Windows Defender is windefend – the hash of this string ( 917638920165491138 ) is also present in the malware body.
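The Fowler–Noll–Vo hashing mentioned above can be sketched as standard 64-bit FNV-1a. Note that public analyses report the sample XORs the FNV result with an additional constant before comparison; that constant is omitted here, so this sketch illustrates the matching scheme rather than reproducing the exact values quoted above:

```python
def fnv1a64(data: bytes) -> int:
    """Standard 64-bit FNV-1a hash."""
    h = 0xCBF29CE484222325          # FNV-1a 64-bit offset basis
    for b in data:
        h ^= b
        h = (h * 0x100000001B3) % 2**64  # multiply by FNV prime, keep 64 bits
    return h

# Matching a process or service name then reduces to a hash comparison,
# which keeps the target names out of the binary's strings.
BLOCKLIST = {fnv1a64(b"MsMpEng")}

def is_blocked(process_name: str) -> bool:
    return fnv1a64(process_name.encode()) in BLOCKLIST
```

Storing only hashes is a common obfuscation trick: an analyst must brute-force candidate names against the hash list to recover what the malware targets.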
As a result, the malicious DLL will attempt to stop the Windows Defender service. In order to disable various security tools and antivirus products, the malware first grants itself the SeRestorePrivilege and SeTakeOwnershipPrivilege privileges, using the native AdjustTokenPrivileges() API. With these privileges enabled, the malware takes ownership of the service registry keys it intends to manipulate. The new ownership is first attempted to be set explicitly to the Administrator account. If such an account is not present, the malware enumerates all user accounts, looking for a SID that represents the administrator account. The malware uses the Windows Management Instrumentation query “ Select * From Win32_UserAccount ” to obtain the list of all users. For each enumerated user, it makes sure the account is local, and then, once it obtains its SID, it makes sure the SID begins with S-1-5- and ends with -500 in order to locate the local administrator account. Once such an account is found, it is used as the new owner for the registry keys responsible for manipulation of the services of various security tools and antivirus products. With the new ownership set, the malware then disables these services by setting their Start value to 4 (Disabled): registryKey2.SetValue(“Start”, 4, RegistryValueKind.DWord); HTTP Backdoor If the returned ‘address family’ for the resolved domain name is NetBios , as specified in the lookup table above, the malware will initialise its HttpHelper class, which implements an HTTP backdoor. The backdoor commands are covered in the FireEye write-up, so let’s check only a couple of commands to see what output they produce. One of the backdoor commands is CollectSystemDescription . As its name suggests, it collects system information. By running the code reconstructed from the malware, here is an actual example of the data collected by the backdoor and delivered to the attacker’s C2 with a separate backdoor command UploadSystemDescription : 1. 
%DOMAIN_NAME% 2. S-1-5-21-298510922-2159258926-905146427 3. DESKTOP-VL39FPO 4. UserName 5. [E] Microsoft Windows NT 6.2.9200.0 6.2.9200.0 64 6. C:\WINDOWS\system32 7. 0 8. %PROXY_SERVER% Description: Killer Wireless-n/a/ac 1535 Wireless Network Adapter #2 MACAddress: 9C:B6:D0:F6:FF:5D DHCPEnabled: True DHCPServer: 192.168.20.1 DNSHostName: DESKTOP-VL39FPO DNSDomainSuffixSearchOrder: Home DNSServerSearchOrder: 8.8.8.8, 192.168.20.1 IPAddress: 192.168.20.30, fe80::8412:d7a8:57b9:5886 IPSubnet: 255.255.255.0, 64 DefaultIPGateway: 192.168.20.1, fe80::1af1:45ff:feec:a8eb NOTE: Field #7 specifies the number of days (0) since the last system reboot. GetProcessByDescription command will build a list of processes running on a system. This command accepts an optional argument, which is one of the custom process properties enlisted here . If the optional argument is not specified, the backdoor builds a process list that looks like: [ 1720] svchost [ 8184] chrome [ 4732] svchost If the optional argument is specified, the backdoor builds a process list that includes the specified process property in addition to parent process ID, username and domain for the process owner. For example, if the optional argument is specified as “ ExecutablePath “, the GetProcessByDescription command may return a list similar to: [ 3656] sihost.exe C:\WINDOWS\system32\sihost.exe 1720 DESKTOP-VL39FPO\UserName [ 3824] svchost.exe C:\WINDOWS\system32\svchost.exe 992 DESKTOP-VL39FPO\UserName [ 9428] chrome.exe C:\Program Files (x86)\Google\Chrome\Application\chrome.exe 4600 DESKTOP-VL39FPO\UserName Other backdoor commands enable deployment of the 2nd stage malware. 
For example, the WriteFile command will save the file: using (FileStream fileStream = new FileStream(path, FileMode.Append, FileAccess.Write)) { fileStream.Write(array, 0, array.Length); } The downloaded 2nd-stage malware can then be executed with the RunTask command: using (Process process = new Process()) { process.StartInfo = new ProcessStartInfo(fileName, arguments) { CreateNoWindow = false, UseShellExecute = false }; if (process.Start()) … Alternatively, it can be configured to be executed at system restart, using registry manipulation commands such as SetRegistryValue .

  • Radically reduce firewall rules with application-driven rule recertification | AlgoSec

    Webinars Radically reduce firewall rules with application-driven rule recertification Does your network still have obsolete firewall rules? Do you often feel overwhelmed with the number of firewall rules in your network? To make sure your network is secure and compliant, you need to regularly review and recertify firewall rules. However, manual firewall rule recertification is complex, time-consuming and error-prone, and mistakes may cause application outages. Discover a better way to recertify your firewall rules with Asher Benbenisty, AlgoSec’s Director of Product Marketing, as he discusses how associating application connectivity with your firewall rules can radically reduce the number of firewall rules on your network as well as the efforts involved in rule recertification. In this webinar, we will discuss: The importance of regularly reviewing and recertifying your firewall rules Integrating application connectivity into your firewall rule recertification process Automatically managing the rule-recertification process using an application-centric approach October 14, 2020 Asher Benbenisty Director of product marketing Relevant resources Changing the rules without risk: mapping firewall rules to business applications Keep Reading AlgoSec AppViz – Rule Recertification Watch Video
