Providing Out-of-Band Connectivity to Mission-Critical IT Resources


Zombie Servers: The Hidden Energy Drainers in Data Centers

As enterprises adopt AI, cloud computing, and data analytics, one thing lurks in the shadows of their data centers: zombie servers. These inactive or severely underutilized servers take a big bite out of operations, drawing power and resources without contributing meaningful work. Research from the Uptime Institute indicates that as much as 30% of servers may be idle at any given time, suggesting enterprises could save millions each year by identifying and eliminating these “zombies.”

The Cost of Zombie Servers

When it comes to cost, zombie servers can devour more than their fair share. Each idle server draws approximately 200 to 400 watts, resulting in annual power costs of $400 to $600 per server. In large data centers housing thousands of servers, wasted energy expenses can easily scale into the millions. Currently, U.S. data centers account for over 4% of the nation’s total electricity consumption, a figure projected to rise to 6% by 2026 due to growing demands from AI and cloud computing applications.
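As a rough illustration of the arithmetic above (the 300 W draw and $0.12/kWh electricity rate are assumptions for the sketch, not measured figures):

```python
# Back-of-the-envelope cost of an idle server.
IDLE_WATTS = 300          # assumed midpoint of the 200-400 W range
RATE_PER_KWH = 0.12       # assumed utility rate in USD
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float, rate: float = RATE_PER_KWH) -> float:
    """Annual electricity cost in USD for a constant draw in watts."""
    kwh_per_year = watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * rate

print(f"One idle server: ${annual_cost(IDLE_WATTS):,.0f}/year")
print(f"1,000 idle servers: ${annual_cost(IDLE_WATTS) * 1000:,.0f}/year")
```

At these assumed figures, a single idle server wastes a few hundred dollars per year, and the cost scales linearly with fleet size.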

How ZPE Systems’ Nodegrid Fights Zombie Servers

Out-of-band management (OOBM) solutions like ZPE Systems’ Nodegrid provide an effective way to monitor, manage, and optimize data center infrastructure, even when the primary network is down. When combined with ServerTech Intelligent PDUs, data center admins can remotely identify and address zombie servers to keep operations running at peak efficiency.

Key Features of Nodegrid’s Out-of-Band Management for Zombie Server Management

  • 24/7 Monitoring and Real-Time Insights: Nodegrid allows IT teams to continuously monitor server performance, making it easy to detect underutilized or idle servers. Real-time metrics show server activity, power usage, and health, so teams can pinpoint servers that may need to be repurposed or removed.
  • Detailed Power Usage Data: The combined Nodegrid and ServerTech solution provides comprehensive energy usage data, so teams can see inefficiencies and where power is consumed most. This is essential for high-density data centers, where wasting even a little bit of power adds up to substantial costs. These insights help data center operators pinpoint zombie servers, reducing energy costs and freeing up space.
  • Enhanced Automation and Management Control: With automation features, Nodegrid simplifies the complex task of managing server lifecycles. For instance, automated alerts can notify teams when a server reaches a specific threshold of low utilization, enabling quicker action to reassign or shut down the server.
  • Increased Security and Resilience: Nodegrid enhances security by providing direct access to infrastructure via isolated management. Teams can access critical systems even during network failures, to ensure servers remain compliant, functional, and secure.
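The alert-threshold idea in the list above can be sketched as follows; the metric names, thresholds, and data model here are hypothetical illustrations, not Nodegrid’s actual API:

```python
from dataclasses import dataclass

@dataclass
class ServerSample:
    name: str
    avg_cpu_pct: float      # average CPU utilization over the sampling window
    avg_net_kbps: float     # average network throughput over the same window

# Hypothetical thresholds: flag servers sustaining under 5% CPU and
# under 10 kbps of traffic as zombie candidates worth investigating.
CPU_THRESHOLD = 5.0
NET_THRESHOLD = 10.0

def zombie_candidates(samples):
    return [s.name for s in samples
            if s.avg_cpu_pct < CPU_THRESHOLD and s.avg_net_kbps < NET_THRESHOLD]

fleet = [
    ServerSample("app-01", 42.0, 900.0),
    ServerSample("legacy-07", 1.2, 3.5),   # idle on both axes: candidate
    ServerSample("batch-03", 3.0, 250.0),  # low CPU but busy network: not flagged
]
print(zombie_candidates(fleet))  # ['legacy-07']
```

Requiring both metrics to be low avoids false positives like batch or storage servers that are CPU-idle but network-busy.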

Benefits of Removing Zombie Servers

AI and other resource-intensive applications mean data centers need to be as efficient as possible. Zombie servers are not just an energy problem; they impact a data center’s ability to scale and meet demand for high-performance computing. Here are some benefits of removing or repurposing zombie servers:

  • Energy Efficiency: Data centers can significantly lower energy costs and reduce environmental impact by shutting down idle servers.
  • Cost Savings: Operating more efficiently by removing zombie servers can lead to substantial annual savings, freeing up resources for necessary expansions.
  • Optimized AI-Ready Infrastructure: Freeing up resources allows data centers to repurpose space and energy toward servers that can support AI and other high-density applications.

Get Help Fighting Zombie Servers

Set up a call with one of ZPE Systems’ engineers, and we’ll show you how to get zombie servers out of your data center. Click the button below to schedule your call.

Watch a Walkthrough Demo

Watch this 20-minute video where Marcel van Zwienen (Senior Sales Engineer) demonstrates the remote management capabilities of Nodegrid and ZPE Cloud.

Marcel van Zwienen gives a walkthrough of ZPE Cloud for remote device management.

More Valuable Resources for Remote Monitoring

Check out these resources to help fight zombie servers and other inefficiencies lurking in your data center:

Data Center Environmental Sensors: Everything You Need to Know

According to a recent Uptime Institute survey, severe outages can cost more than $1 million USD and lead to reputational loss as well as business and customer disruption. Humidity, air particulates, and other problems could shorten the lifetime of critical equipment or cause outages. Unfortunately, much of a business’s critical digital infrastructure and services are housed in remote data centers, making it difficult for busy IT teams to keep eyes on the environmental conditions.

Data center environmental sensors can help teams prevent downtime by monitoring conditions in remote infrastructure deployments and alerting administrators to any problems before they lead to equipment failure. This blog explains how environmental sensors work and describes the ideal environmental monitoring solution for minimizing outages.

How data center environmental sensors reduce downtime

Data center environmental sensors are deployed around the rack, cabinet, or cage to collect information about various conditions that could negatively affect equipment like routers, servers, and switches. 

Mitigating environmental risks with data center environmental sensors

| Environmental Risk | Description | How Environmental Sensors Help |
|---|---|---|
| Temperature | All data center equipment has an optimal operating temperature range, as well as a maximum temperature threshold above which devices may overheat. | Environmental sensors monitor ambient temperatures and trigger automated alerts when the data center gets too hot or too cold. |
| Humidity | If the air in the data center gets too humid, moisture may collect on the internal components of devices and cause corrosion, shorts, or other failures. | Environmental sensors monitor the relative humidity in the DC and alert administrators when there’s a danger of moisture accumulation. |
| Fire | A fire in the data center could burn equipment, raise the ambient temperature beyond acceptable limits, or activate automatic fire suppression controls that damage devices. | Environmental sensors detect the heat and smoke from fires, giving DC teams time to shut down systems before they’re damaged. |
| Tampering | A malicious actor who’s able to get past data center security (such as an insider threat) could potentially tamper with equipment to damage or breach it. | Tamper detection sensors alert remote teams when data center cabinet doors are opened or a device is physically moved. |
| Air Particulates | Smoke, ozone, and other air particulates could potentially damage data center infrastructure by oxidizing components or clogging vents. | Environmental sensors monitor air quality and automatically alert teams when particulates are detected. |

These sensors report back to monitoring software that’s either deployed on-premises in the data center or hosted in the cloud. Administrators use this software to view real-time conditions or to configure automated alerts.
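A minimal sketch of the alerting logic such monitoring software applies; the safe ranges below are illustrative (the temperature band loosely follows ASHRAE’s recommended envelope), not vendor defaults:

```python
# Map each sensor reading to an alert when it leaves its safe range.
# Real limits come from equipment specs and site policy.
SAFE_RANGES = {
    "temperature_c": (18.0, 27.0),   # roughly the ASHRAE-recommended band
    "humidity_pct": (40.0, 60.0),
}

def check_readings(readings: dict) -> list[str]:
    alerts = []
    for metric, value in readings.items():
        lo, hi = SAFE_RANGES.get(metric, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            alerts.append(f"{metric}={value} outside [{lo}, {hi}]")
    return alerts

print(check_readings({"temperature_c": 31.5, "humidity_pct": 55.0}))
```

In practice the alert would be pushed to administrators (email, SMS, webhook) rather than printed, but the threshold comparison is the core of it.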

Environmental monitoring sensors help reduce outages by giving remote IT teams advance warning that something is wrong with conditions in the data center, enabling them to potentially fix the problem before any systems go down. However, traditional monitoring solutions suffer from a number of limitations.

  1. They need a stable internet connection to allow remote access, so if there’s an ISP outage or unknown failure, teams lose their ability to monitor the situation.
  2. Many of them use on-premises software that requires administrators to connect via VPN to monitor or manage the solution, creating security risks and management hurdles.
  3. Most environmental monitoring systems don’t easily integrate with other remote management tools, leaving administrators with a disjointed patchwork of platforms to wrestle with.

The ideal data center environmental monitoring solution

The Nodegrid data center environmental monitoring platform overcomes these challenges with a combination of out-of-band management, cloud-based software, and a vendor-agnostic architecture.

Nodegrid environmental sensors work with Nodegrid serial consoles to provide remote teams with a virtual presence in the data center. These devices create an instant out-of-band network that uses a dedicated internet connection to provide continuous remote access to all connected sensors and infrastructure. This network doesn’t rely on the primary ISP or production network resources, giving administrators a lifeline to monitor and recover remote data center devices during an outage. The addition of Nodegrid Data Lake also allows teams to collect environmental monitoring data, discover trends and insights, and create better automation to address issues.

Nodegrid’s data center environmental monitoring and infrastructure management software is available on-premises or in the cloud, allowing teams to access critical equipment and respond to alerts from anywhere in the world. Plus, all Nodegrid hardware and software is vendor-neutral, supporting seamless integrations with third-party tools for automation, security, and more.

Schedule a free Nodegrid demo to see our data center environmental sensors and vendor-neutral management platform in action!

American Water Cyberattack: Another Wake-Up Call for Critical Infrastructure

The October 2024 cyberattack on American Water, one of the largest water and wastewater utility companies in the U.S., signals yet another wake-up call for critical infrastructure security. Because millions of people rely on this critical service for safe drinking water and sanitation, this attack highlights why it’s so important to address cyber vulnerabilities.

Let’s trace the timeline of the attack, how it likely started, and the best practice architecture that could have mitigated or prevented the American Water cyberattack.

Timeline of the October 2024 American Water Cyberattack

  • Initial Intrusion (October 5, 2024)
    The attack on American Water was first detected in early October, when cybersecurity monitoring tools flagged suspicious activity within the company’s IT systems. Employees reported an unusual system slowdown, and automated alerts indicated possible unauthorized access.
  • Rapid Escalation (October 6-7, 2024)
    Within 24 hours of detection, the attackers had moved deeper into the company’s IT environment. In response, American Water initiated emergency protocols, including isolating key systems to prevent further damage. To contain the breach, critical operational technology (OT) systems — responsible for managing water treatment and distribution — were temporarily shut down.
  • Public Notification and Response (October 8, 2024)
    American Water notified federal authorities, including the Cybersecurity and Infrastructure Security Agency (CISA), state regulators, and the public. The company reassured customers that water quality had not been compromised, but certain automated operations had been affected, leading to temporary disruptions in water distribution.
  • Ongoing Recovery (October 2024 – Present)
    As the investigation continued, third-party cybersecurity firms were brought in to assess the extent of the breach and assist in recovery. Manual operations were implemented in areas where automated systems were impacted. While the threat was contained, the company faced a lengthy process of system restoration and reconfiguration.

Impact of the Attack

The impact of the American Water cyberattack appears minimal. A class-action lawsuit was recently filed seeking $5 million in damages on behalf of affected customers, but this is typical fallout from a breach. American Water did not shut down any treatment plants, and although it was forced to temporarily shut down its customer portal, pause billing, and revert to some manual processes, the attack caused no water contamination or public health risks. Per American Water’s FAQ page, business appears to be nearly back to normal.

However, this shouldn’t diminish the need for utility providers to shore up their defenses and ensure the resilience of their IT architectures. The Oldsmar, Florida incident is an example of how an error or breach can change water treatment chemistry (in that case, adding too much lye to the water supply) and poison a population. U.S. adversaries have also made many attempts to change water chemistry or disrupt automated operations.

Government agencies like the EPA have been warning that attacks on water treatment utilities are increasing. Lawmakers are also calling for inspections of IT systems to ensure best practices are followed for managing passwords and keeping remote access off the public internet, and are considering civil and criminal penalties for those who don’t comply.

How the Attack Likely Happened

The American Water cyberattack is still under investigation. Specifics of how it occurred haven’t been released, but several likely scenarios have emerged based on trends in similar attacks:

  • Phishing or Social Engineering:
    Employees may have unknowingly opened a malicious email attachment or clicked a harmful link, allowing attackers access to the internal network, similar to 2023’s Ragnar Locker attacks. Water utilities and other public services often have large workforces, which makes them susceptible to phishing campaigns.
  • Ransomware:
    There are indications that ransomware may have encrypted key files and systems, similar to what happened during the MGM hack. Ransomware attacks on critical infrastructure have increased in recent years, with attackers locking companies out of their own data and demanding payment to restore access.
  • IT/OT Integration Vulnerabilities:
    Water utilities often rely on a hybrid network where both information technology (IT) systems and operational technology (OT) systems are integrated to monitor and control water purification, distribution, and wastewater management. While this setup improves efficiency, it can also create additional vulnerabilities if the two environments are not properly segregated. Once attackers gain access to the IT network, they can use it as a bridge to reach OT systems, which are typically less secure.
  • Internet-Facing Systems:
    In the past, the Chinese-sponsored hacker group Volt Typhoon took advantage of firewalls that were connected both to the internet and to critical control systems. This approach also takes advantage of a lack of control plane segregation, as hackers can remote-in via internet-facing systems and gain management access to critical systems.

The Solution: Isolated Management Infrastructure (IMI)

As with the global CrowdStrike outage, the most important takeaway from the American Water cyberattack is that organizations need the ability to recover fast. Remote access solutions help with this, but it matters how these solutions are architected and which capabilities they offer.

The traditional approach is to gain remote access via a direct link to the affected systems. The problem is that when these systems are breached, encrypted, or offline, it’s impossible to remote into them. Teams must then physically connect to and revive systems (as with the CrowdStrike incident), or worse, completely replace their infrastructure, as Merck did during the 2017 NotPetya breach.

Image: Traditional remote management via direct link.
Instead, organizations are turning to a best practice architecture that has been used by hyperscalers and large enterprises for years. This solution is called Isolated Management Infrastructure. IMI creates a management network that is connected to but completely independent of production network equipment, an architecture that resembles out-of-band (OOB) management. This gives teams a lifeline to their main IT and OT systems, including servers, switches, sensors, controllers, and other critical assets, even when their main systems are offline.
Image: IMI is a lifeline to production assets.

Here’s how IMI and out-of-band management could have helped mitigate the effects of the American Water attack:

  • Enhanced Containment: By isolating the network used for system control and monitoring, OOB management could have ensured that even if the primary network was compromised, attackers would not have been able to access or disable key operational systems. This would have limited the need to shut down OT systems and prevented widespread operational disruption.
  • Faster Recovery: With isolated management infrastructure, administrators would have been able to access critical systems remotely, even during the attack. This capability enables faster diagnosis of the issue and restoration of services without relying on compromised networks. In the case of a ransomware attack, for example, OOB management can help initiate recovery operations from backups, minimizing downtime.
  • Reduced Attack Surface: By creating an independent network with fewer access points and stricter controls, OOB infrastructure reduces the chances of attackers exploiting vulnerabilities. It’s an additional layer of security that complicates attempts to breach sensitive control systems.
Image: IMI with Nodegrid.

30-year cybersecurity expert James Cabe recently published a walkthrough of how to do this. Read his article, What to do if you’re ransomware’d, to see how to deploy the Gartner-recommended Isolated Recovery Environment that lets you fight through an active attack.

Get the Blueprint for Building IMI

The American Water cyberattack is another wake-up call for critical infrastructure providers to rethink their cybersecurity strategies. Isolated Management Infrastructure is the key approach to retaining control during an attack, but requires the robust capabilities of Generation 3 out-of-band to ensure rapid recovery. To help utilities and essential services fortify their infrastructure, ZPE Systems recently created a blueprint for building IMI. Download the blueprint now to follow the best practices architecture and become resilient against cyberattacks.

Using Isolated Management Infrastructure to Access the Debug Port of Open Compute Project (OCP) Devices in AI Deployments


As artificial intelligence (AI) workloads grow more demanding, data centers are turning to specialized hardware like Open Compute Project (OCP) cards to meet their needs.

OCP cards, known for their open-source architecture and scalability, have become popular in AI-driven infrastructures due to their flexibility and cost-efficiency.

However, managing and troubleshooting these cards — especially in large-scale AI deployments — can pose significant challenges, particularly when it comes to accessing debug ports for diagnostics.

In this post, we’ll explore how isolated management infrastructure (IMI) offers a secure and reliable solution for accessing the debug ports of OCP cards used in AI systems. We’ll also discuss the importance of debugging in AI, the obstacles that come with large-scale deployments, and the role of IMI in overcoming those hurdles.

OCP Cards in AI: A High-Performance Solution

Open Compute Project cards have become central to AI and machine learning (ML) environments due to their powerful compute capabilities, scalability, and open-source design. These cards are often integrated into large data centers tasked with training AI models, running inference operations, and handling massive data streams.

With OCP cards, companies can optimize their data center hardware for specific workloads without being tied to proprietary solutions. This open-source approach allows for flexibility in AI infrastructure, but it also introduces challenges when managing such hardware at scale, especially when components fail or need troubleshooting.

The Importance of Debugging and Monitoring in AI

Debugging and monitoring are critical components of maintaining AI infrastructure. AI model training, in particular, places heavy demands on hardware, making performance consistency a key factor. Any malfunction at the hardware or software level needs to be identified and resolved quickly to avoid costly downtime.

One way to troubleshoot hardware-related problems is by accessing the debug ports of OCP cards. Debug ports provide administrators with direct access to diagnostics, enabling them to monitor system health and perform necessary repairs. However, accessing these ports can be difficult, particularly in AI deployments where hardware is distributed across large data centers.

The Challenges of Accessing Debug Ports in AI Deployments

In a large AI deployment, accessing the debug ports of individual OCP cards can present several obstacles:

  • Physical Access: High-density data centers make it challenging for technicians to reach hardware components physically. In many cases, the OCP cards are housed in remote locations, requiring specialized tools for diagnostics.
  • Security Risks: Allowing unrestricted access to debug ports can introduce security vulnerabilities. If these ports are not properly secured, cyber attackers could exploit them to gain control of critical infrastructure.
  • Network Disruptions: During system failures, it can be difficult to access the network and troubleshoot the issue. When the primary network goes down, relying on that same network to manage hardware can delay recovery efforts and worsen the outage.

These challenges make it essential to adopt a secure, remote solution for managing OCP cards and their debug ports, especially when it comes to AI environments where any downtime can disrupt business-critical operations.

How Isolated Management Infrastructure (IMI) Works

Isolated management infrastructure (IMI) is a dedicated, separate network used exclusively for system management and maintenance. Unlike the primary network that handles day-to-day operations, the management network is isolated to ensure uninterrupted access to critical systems, even during outages or security incidents.


Image: Isolated Management Infrastructure physically separates management access from production assets.

By implementing IMI, administrators can remotely access the debug ports of OCP cards without affecting the main production network. This setup not only secures the debug ports but also ensures that troubleshooting can be done in real-time, even if the primary network is down.

Benefits of Using IMI for OCP Debug Ports:

  • Secure, Controlled Access: Since the management network is isolated, it limits access to only authorized personnel. This reduces the chances of an attacker compromising critical hardware through exposed debug ports.
  • Reduced Downtime: IMI enables administrators to access, troubleshoot, and repair systems quickly, minimizing downtime during failures or performance issues. Even during major network outages, IMI ensures out-of-band (OOB) access to the OCP cards’ debug ports.
  • Lower Security Risks: By separating management traffic from regular operations, IMI reduces the attack surface. It becomes more difficult for hackers to use network vulnerabilities to gain unauthorized access to critical infrastructure.

Image: Out-of-band management for OCP servers.

Implementing Isolated Management for OCP Debug Access

To implement isolated management infrastructure for accessing the debug ports of OCP cards, follow these steps:

  • Network Segmentation: Physically separate your management network from the production network. Ensure that management traffic is not routed through the same pathways used for regular operations.
  • Use Out-of-Band Management Devices: Deploy dedicated OOB management hardware that allows for remote access and control of the OCP cards, even when the primary network is unavailable. This can include IPMI (Intelligent Platform Management Interface) or SSH (Secure Shell) for secure communication.
  • Integrate with Monitoring Systems: Combine IMI with automated monitoring and alerting systems. This way, any anomaly detected in the AI environment will trigger a response, allowing administrators to quickly access the OCP card’s debug port for diagnostics.
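The segmentation step above can be sanity-checked programmatically, for example by confirming that the management subnet overlaps no production subnet. The addresses here are placeholders for whatever your site actually uses:

```python
import ipaddress

# Placeholder subnets: a dedicated management network that must never
# overlap with the production networks it oversees.
MGMT_NET = ipaddress.ip_network("10.99.0.0/24")
PROD_NETS = [ipaddress.ip_network("10.0.0.0/16"),
             ipaddress.ip_network("192.168.10.0/24")]

def is_isolated(mgmt, prod_nets):
    """True if the management subnet overlaps no production subnet."""
    return not any(mgmt.overlaps(p) for p in prod_nets)

print(is_isolated(MGMT_NET, PROD_NETS))  # True
```

Address-space separation alone isn’t isolation, of course: the management network also needs its own physical paths and dedicated connectivity, which is what the OOB hardware in the second step provides.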

Security Benefits of Isolated Management Infrastructure

In addition to improving accessibility, IMI enhances security across the board in AI environments. Here’s how: 

  • Limited Access Points: Isolating management infrastructure limits the number of entry points for attackers, significantly reducing the attack surface.
  • Controlled User Access: Only authorized users can access the isolated network, meaning that internal threats and insider attacks are also mitigated.
  • Compliance and Auditing: For industries with strict regulatory requirements, IMI provides clear documentation and control over system access, helping organizations meet compliance standards and pass security audits.

Real-World Example

Consider a scenario in a data center where an AI model’s training process experiences sudden instability. The system administrator, located remotely, uses IMI to securely access the OCP card’s debug port through an OOB management interface.

The problem is quickly diagnosed and resolved without needing physical access to the hardware, minimizing downtime and ensuring that the AI model’s training can continue uninterrupted.

Deploy IMI with Nodegrid to Strengthen AI Environments

As AI infrastructures grow, so do the risks and complexities associated with managing them. The October 2024 cyberattack on American Water, which impacted their operational technology and water distribution, highlights the need for robust, secure, and isolated management networks to avoid large-scale disruptions.

By integrating isolated management infrastructure into your AI data center, you can ensure quick access to critical systems like OCP devices, reduce the impact of system failures, and improve security. ZPE Systems’ Nodegrid is a Gen 3 out-of-band management platform that allows you to deploy IMI in your data center environment, and it’s the only out-of-band management platform built to manage OCP cards. It can integrate or directly host third-party applications for automation, security, and much more, consolidating an entire tech stack into a single, cost-efficient solution.

Schedule a demo to see how Nodegrid gives remote access to OCP cards and strengthens your AI deployments.

Network Virtualization Platforms: Benefits & Best Practices


Image: Simulated network virtualization platforms overlaying physical network infrastructure.

Network virtualization decouples network functions, services, and workflows from the underlying hardware infrastructure and delivers them as software. In the same way that server virtualization makes data centers more scalable and cost-effective, network virtualization helps companies streamline network deployment and management while reducing hardware expenses.

This guide describes several types of network virtualization platforms before discussing the benefits of virtualization and the best practices for improving efficiency, scalability, and ROI.

What do network virtualization platforms do?

There are three forms of network virtualization that are achieved with different types of platforms. These include:

| Type of Virtualization | Description | Examples of Platforms |
|---|---|---|
| Virtual Local Area Networking (VLAN) | Creates an abstraction layer over physical local networking infrastructure so the company can segment the network into multiple virtual networks without installing additional hardware. | SolarWinds Network Configuration Manager, ManageEngine Network Configuration Manager |
| Software-Defined Networking (SDN) | Decouples network routing and control functions from the actual data packets so that IT teams can deploy and orchestrate workflows across multiple devices and VLANs from one centralized platform. | Meraki, Juniper |
| Network Functions Virtualization (NFV) | Separates network functions like routing, switching, and load balancing from the underlying hardware so teams can deploy them as virtual machines (VMs) and use fewer physical devices. | Red Hat OpenStack, VMware vCloud NFV |

While network virtualization is primarily concerned with software, it still requires a physical network infrastructure to serve as the foundation for the abstraction layer (just like server virtualization still requires hardware in the data center or cloud to run hypervisor software). Additionally, the virtualization software itself needs storage or compute resources to run, either on a server/hypervisor or built-in to a networking device like a router or switch. Sometimes, this hardware is also referred to as a network virtualization platform.

The benefits of network virtualization

Virtualizing network services and workflows with VLANs, SDN, and NFVs can help companies:

  • Improve operational efficiency with automation. Network virtualization enables the use of scripts, playbooks, and software to automate workflows and configurations. Network automation boosts productivity so teams can get more work done with fewer resources.
  • Accelerate network deployments and scaling. Legacy deployments involve configuring and installing dedicated boxes for each function. Virtualized network functions and configurations can be deployed in minutes and infinitely copied to get new sites up and running in a fraction of the time.
  • Reduce network infrastructure costs. Decoupling network functions, services, and workflows from the underlying hardware means you can run multiple functions from one device, saving money and space.
  • Strengthen network security. Virtualization makes it easier to micro-segment the network and implement precise, targeted Zero-Trust security controls to protect sensitive and valuable assets.
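The automation point above can be made concrete with a tiny config-generation sketch: rendering switch VLAN stanzas from a data table instead of hand-typing each one. The VLAN table and generic Cisco-style syntax are invented for illustration:

```python
# Generate VLAN configuration stanzas from structured data.
# The output mimics a generic vendor CLI and is illustrative only.
VLANS = [
    {"id": 10, "name": "users"},
    {"id": 20, "name": "voice"},
    {"id": 30, "name": "mgmt"},
]

def render_vlan_config(vlans):
    lines = []
    for v in vlans:
        lines.append(f"vlan {v['id']}")
        lines.append(f" name {v['name']}")
    return "\n".join(lines)

print(render_vlan_config(VLANS))
```

Real deployments would use an automation framework and templating engine rather than string formatting, but the principle is the same: the configuration lives as data, so it can be versioned, reviewed, and pushed to many devices at once.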

Network virtualization platform best practices

Following these best practices when selecting and implementing network virtualization platforms can help companies achieve the benefits described above while reducing hassle.

Vendor neutrality

Ensuring that the virtualization software works with the underlying hardware is critical. The struggle is that many organizations use devices from multiple vendors, which makes interoperability a challenge. Rather than using different virtualization platforms for each vendor, or replacing perfectly good devices with ones that are all from the same vendor, it’s much easier and more cost-effective to use virtualization software that interoperates with any networking hardware. This type of software is called ‘vendor neutral.’

To improve efficiency even more, companies can use vendor-neutral networking hardware to host their virtualization software. Doing so eliminates the need for a dedicated server, allowing SDN software and virtualized network functions (VNFs) to run directly from a serial console or router that’s already in use. This significantly consolidates deployments, which saves money and reduces the amount of space needed. This can be a lifesaver in branch offices, retail stores, manufacturing sites, and other locations with limited space.

A diagram showing how multiple VNFs can run on a single vendor-neutral platform.

Virtualizing the WAN

We’ve mostly discussed virtualization in a local networking context, but it can also be extended to the WAN (wide area network). For example, SD-WAN (software-defined wide area networking) streamlines and automates the management of WAN infrastructure and workflows. WAN gateway routing functions can also be virtualized as VNFs that are deployed and controlled independently of the physical WAN gateway, significantly accelerating new branch launches.

Unifying network orchestration

The best way to maximize network management efficiency is to consolidate the orchestration of all virtualization with a single, vendor-neutral platform. For example, the Nodegrid solution from ZPE Systems uses vendor-neutral hardware and software to give networking teams a single platform to host, deploy, monitor, and control all virtualized workflows and devices. Nodegrid streamlines network virtualization with:

  • An open, x86-64 Linux-based architecture that can run other vendors’ software, VNFs, and even Docker containers, eliminating the need for dedicated virtualization appliances.
  • Multi-functional hardware devices that combine gateway routing, switching, out-of-band serial console management, and more to further consolidate network deployments.
  • Vendor-neutral orchestration software, available in on-premises or cloud form, that provides unified control over both physical and virtual infrastructure across all deployment sites for a convenient management experience.

Want to see vendor-neutral network orchestration in action?

Nodegrid unifies network virtualization platforms and workflows to boost productivity while reducing infrastructure costs. Schedule a free demo to experience the benefits of vendor-neutral network orchestration firsthand.

Schedule a Demo