What Is Edge Computing for Machine Learning?

Edge computing and machine learning technologies are helping organizations use their data more efficiently and effectively. In this blog, we’ll explain what edge computing is and discuss how it’s used with machine learning to improve performance and keep data secure. We’ll also explore two different edge computing deployment models for machine learning and provide advice on how to manage them.

What is edge computing?

For many modern enterprises, most of their data is no longer generated in a centralized data center or office building. These days, that data comes from IoT devices, “smart” industrial systems, and other remote locations around the globe. Transferring all that data to and from a central data center for processing can introduce latency and negatively impact performance. Transmitting sensitive data over the internet also increases the risk of interception by hackers.

Edge computing moves computational power closer to the source of data so that data doesn’t need to be sent to a separate location for processing. The benefit of edge computing is that data doesn’t need to travel as far, which translates to less latency and improved application performance. Plus, the data stays behind the firewall on the local network, reducing security risks.

What is edge computing for machine learning?

Machine learning (ML) is powered by data, and with data moving to the edge of enterprise networks, machine learning needs to decentralize as well. Edge computing for machine learning places ML applications closer to remote sources of data. The benefits of edge computing for machine learning are the same as for edge computing in general, just supercharged.

Machine learning requires data to make intelligent predictions and decisions. In many cases, that data originates from the edge of the network. For example, the healthcare industry uses ML algorithms to analyze health data from smart devices in hospitals and clinics around the world, sometimes in hard-to-reach and politically unstable regions.

Getting patient health data from these remote facilities back to a centralized data center for machine learning processing can be very challenging, especially if the internet infrastructure is outdated or inconsistent. In addition, this data is personal and sensitive, and healthcare organizations are obligated to ensure its protection, so transferring it over uncertain internet connections is too risky.

Instead, organizations can install the ML algorithm on servers in each remote facility, or even on the smart devices themselves. This drastically reduces their reliance on outside network infrastructure for running machine learning workloads, which improves performance and ensures patient health data stays private.
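To make this concrete, here's a minimal sketch of what on-device inference can look like. Everything in it is illustrative: the vital-sign features, the hand-rolled logistic model, its weights, and the alert threshold are all hypothetical stand-ins, not a real clinical model. The point is simply that raw readings are scored locally and never leave the device.

```python
import math

# Illustrative sketch: a tiny risk check running directly on an edge
# device, so raw patient vitals never leave the local network.
# Feature names, weights, and the 0.5 threshold are made up for demonstration.

def predict_risk(vitals):
    """Score one reading with a hand-rolled logistic model (hypothetical weights)."""
    weights = {"heart_rate": 0.03, "spo2": -0.08, "temp_c": 0.12}
    bias = -1.5
    z = bias + sum(weights[k] * vitals[k] for k in weights)
    return 1 / (1 + math.exp(-z))

def process_locally(readings):
    """Run inference at the edge; only flagged readings would be forwarded."""
    return [r for r in readings if predict_risk(r) > 0.5]

readings = [
    {"heart_rate": 72, "spo2": 98, "temp_c": 36.8},   # normal vitals
    {"heart_rate": 140, "spo2": 88, "temp_c": 39.5},  # concerning vitals
]
alerts = process_locally(readings)
```

In a real deployment the scoring function would be a trained model served on the edge hardware, but the data flow is the same: compute happens where the data originates, and only the (much smaller) results ever cross the network.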

How to deploy edge computing for machine learning

There are two basic deployment models for machine learning at the edge.

A traditional edge machine learning deployment uses one or more racks of heavy-duty servers with high-performance machine learning processing units. This deployment model is best suited to large ML workloads that process massive amounts of edge data.

A “thin” or “nano” edge machine learning deployment runs on smaller servers or multi-purpose devices that share rack space with other edge infrastructure. This deployment model is more cost-effective and works best for smaller ML workloads in buildings where space is limited.

For either deployment model, you need a remote management solution in place so administrators can maintain and troubleshoot edge infrastructure without traveling on-site. The best way to ensure reliable management access is through out-of-band (OOB) management. OOB management creates a separate network dedicated to remote management and troubleshooting, providing an alternative path to remote infrastructure (typically via cellular LTE) in case the primary ISP or WAN link goes down.

Through that OOB management network, you can orchestrate workloads, push out security patches, and monitor the health and performance of edge infrastructure.

Deploy and manage edge computing machine learning infrastructure with Nodegrid

The Nodegrid Net Services Router (NSR) from ZPE Systems supports both traditional and nano edge computing machine learning deployment models. The NSR is a modular and customizable solution that delivers OOB management, cellular failover, edge routing and switching, and automation in a single device.

You can use the NSR’s serial console modules to monitor, manage, and orchestrate an entire rack of edge machine learning servers. For less intensive workloads, you can use the edge compute module to host ML applications, virtual machines, and Docker images.

Either way, you can take advantage of 5G/4G LTE to ensure fast and reliable OOB access and cellular failover. The NSR is also secured by Zero Trust features like SAML 2.0 integration and BIOS protection to keep edge machine learning data protected.

Ready to learn more about edge computing for machine learning?

To learn more about edge computing for machine learning with Nodegrid, contact ZPE Systems today.

Contact Us

Comparing Cellular Failover Router, Gateway & Bridge for Business Continuity

Cellular failover is critical for business continuity because it ensures uninterrupted internet access even if the primary ISP connection goes down. When looking for an enterprise cellular failover solution, you’re likely to see the terms “cellular failover router,” “cellular failover bridge,” and “cellular failover gateway” used somewhat interchangeably. These three types of devices offer similar (and often overlapping) capabilities, which can make it difficult to tell which is the best option for your particular use case. In this post, we’ll define and compare these different cellular failover devices before discussing the best option for business continuity.

Cellular failover router vs. cellular failover bridge vs. cellular failover gateway

Let’s define the network functions provided by these devices and describe how cellular failover fits in. We’ll start with bridges, which provide the least amount of functionality.

What is a cellular failover bridge?

A network bridge connects multiple local area networks (LANs) into a single domain but is not capable of moving data outside of the domain. It forwards frames on the data link layer of the OSI model (layer 2) using MAC addresses (also known as physical or hardware addresses).

A cellular failover bridge is essentially a device that connects the primary network to the cellular failover network so LAN devices can access that network if the ISP connection goes down. Usually, these come in the form of cellular modems configured in bridge mode. A cellular modem in bridge mode provides internet access via the cellular LTE network and gives devices on the LAN a link (or “bridge”) to cross over to that cellular network. It does not provide routing functionality itself, however, so it needs the primary router for that.

Fig. 1: A basic network topology using a cellular modem in bridge mode for failover.

What is a cellular failover router?

A router connects multiple LANs together but can also forward traffic to and from locations outside the domain. It forwards packets on the network layer of the OSI model (layer 3) using IP addresses. A basic router does not provide access to the internet—it must route traffic through a modem to forward packets outside of the LAN.

A cellular failover router provides a secondary internet connection over which traffic can be routed if the primary ISP link goes down. It includes a cellular modem for internet access as well as IP routing capabilities, giving it more functionality than a cellular failover bridge. Since cellular failover routers combine a modem and router into a single device, they’re often referred to as cellular failover gateways.

What is a cellular failover gateway?

A gateway is a device that connects multiple networks with different transmission protocols together. All traffic flowing into and out of an enterprise network must pass through a gateway. Network gateways combine the functionality of a modem and a router, so they provide both an internet connection and the ability to route packets to and from IP addresses. That’s why cellular failover routers—which combine a cellular internet connection and IP routing—are frequently called cellular failover gateways.

Fig. 2: A basic network topology using a cellular failover gateway router.

Cellular failover routers/gateways also function as cellular failover bridges, but the reverse is not true. A cellular failover bridge must rely on an external router for IP-based packet forwarding.
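The failover decision itself is simple to reason about: prefer the primary ISP link while it is healthy, and shift traffic to the cellular path when it is not. Here's a small sketch of that selection logic; the link names, priorities, and health flags are hypothetical simplifications of what a real gateway tracks with active probes.

```python
from dataclasses import dataclass

# Illustrative sketch of the failover decision an integrated cellular
# gateway makes: use the highest-priority WAN link that is currently up.
# Link names and the up/down flags are hypothetical placeholders for
# real link-health probes.

@dataclass
class Link:
    name: str
    up: bool
    priority: int  # lower number = preferred path

def select_active_link(links):
    """Pick the most-preferred link that is currently up."""
    candidates = [l for l in links if l.up]
    if not candidates:
        raise RuntimeError("no WAN path available")
    return min(candidates, key=lambda l: l.priority)

links = [
    Link("primary-isp", up=False, priority=1),
    Link("cellular-lte", up=True, priority=2),
]
active = select_active_link(links)  # primary is down, so cellular wins
```

When the primary link recovers, the same selection naturally fails traffic back, since the lower-priority cellular path only wins while the ISP link is down.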

Why choose an integrated cellular failover device?

Generally speaking, the terms cellular failover router, bridge, and gateway refer to standalone devices. Often, they’re designed to provide simple cellular connectivity and rely on the primary router/gateway to function, as with Cradlepoint cellular failover adapters. Another option is a standalone cellular gateway, such as a Meraki device, deployed alongside your primary router so that traffic fails over to it when the primary connection goes down.

In both cases, you’re investing in a single-purpose cellular failover device that must be purchased, installed, and managed in addition to the primary gateway router, network switches, serial consoles, etc. While this may not seem like a big deal in a single-site, centralized enterprise LAN, it grows much more onerous in a large and distributed network with many remote sites and a complicated SD-WAN architecture.

A much better option is to buy an all-in-one device that combines many networking capabilities into one, such as a Nodegrid Services Router. Nodegrid devices include production gateway, routing, and switching functionality in addition to cellular failover, remote out-of-band (OOB) management over serial, and more.

Fig. 3: A basic network topology using a Nodegrid Net Services Router with integrated networking, cellular failover, hosted firewall solution, and a serial console module.

The Nodegrid solution is highly customizable, with six integrated routers to choose from depending on your deployment size and use case. The most flexible option is the Nodegrid Net Services Router (NSR) with a modular design that lets you swap out expansion modules to get the exact functionality you need without paying for extras that you don’t. A single Nodegrid device can replace an entire rack of networking equipment, simplifying deployments in branch offices, edge computing data centers, manufacturing plants, and other remote sites.

Plus, Nodegrid’s vendor-neutral hardware and software allow you to consolidate infrastructure management behind a single pane of glass. You can use Nodegrid to orchestrate cellular failover, SD-WAN, DCIM, and more over a dedicated, reliable, and blazing-fast OOB management network.

Ready to learn more about Nodegrid cellular failover router connectivity?

To learn more or see a demo of Nodegrid in action, contact ZPE Systems today.

Contact Us

The Growing Role of Hybrid Cloud in Digital Transformation

Digital transformation is a broad term for the act of changing and improving your business processes through the implementation of new technologies. The cloud plays a major role in digital transformation because it provides a flexible, scalable, and accessible environment that’s ideal for a wide range of business applications. However, there are still many processes that are better suited for a traditional, on-prem data center or colocation infrastructure due to cost, security, or performance concerns.

Combining public cloud platforms with private infrastructure is known as hybrid cloud infrastructure, and it allows organizations to map their business processes and applications to the environments best suited to run them. In this post, we’ll discuss the role of hybrid cloud in digital transformation and provide tips for managing and orchestrating a hybrid infrastructure.

The importance of hybrid cloud in digital transformation

While the public cloud offers many advantages, there are a variety of reasons why an organization would want or need to keep some services private.

For example, a company doing business in an industry that’s subject to strict data privacy regulations—like finance, defense, or healthcare—may want to keep sensitive data in an on-premises data center so they can maintain complete control over the security and access control measures. At the same time, they might have other processes and applications that aren’t as high-risk and could benefit from the flexibility of cloud infrastructure.

Sometimes, an organization will migrate a workload to the cloud, only to bring it back in-house later. For instance, cloud services can reduce costs for certain applications but increase costs for others. Most public cloud providers charge extra for data egress: transferring data out of their systems to another cloud or an on-premises environment. That means applications that require a lot of data egress can be much more expensive to run in the cloud. That cost increase may be worthwhile in the long run to achieve optimal scalability and flexibility, but with a recession looming, many organizations are sacrificing those big-picture goals to cut costs for short-term survival.

One of the biggest use cases for hybrid cloud in digital transformation is a gradual cloud migration. Digital transformation is a journey, and along the way, many organizations end up in a hybrid state because they’ve successfully moved some of their processes to the cloud but have others that still live in the data center. For example, a business may send some of their data analysis workflows to a business intelligence application in the cloud but then have an on-premises DCIM tool analyzing the same data in the data center. They eventually transition from hybrid cloud to a pure cloud or multi-cloud environment once they’ve finished migrating all their workloads to the cloud.

Hybrid cloud is one of the most popular enterprise infrastructure models because it’s flexible and affordable, allowing organizations to make the digital transformation journey at their own pace and in their own way.

Tips for managing hybrid cloud infrastructure

The most effective hybrid cloud deployment provides a single, seamless digital environment for business applications and resources, with centralized workload and infrastructure orchestration that works across all platforms and data centers. Let’s discuss how to achieve this ideal hybrid cloud deployment.

Vendor-agnostic platforms

To create a seamless environment in which workflows move effortlessly between the cloud and the data center to deliver a simple and unified experience to end-users, you need all your public cloud, private cloud, and data center solutions to work together. The best way to ensure this is by only using vendor-agnostic (vendor-neutral) hardware and software from the very beginning, but for most organizations that ship has already sailed. The next best option is to use a vendor-agnostic management platform that’s able to hook into all those closed solutions and control them equally. These solutions allow you to orchestrate workloads across public cloud, private cloud, and legacy environments without needing to replace all the systems and software already in place.

SD-WAN

A hybrid cloud deployment can create some networking challenges because of the need to orchestrate WAN (wide area networking) connections across multiple clouds and data centers, each of which may have a different networking infrastructure in place. Software-defined wide area networking, or SD-WAN, helps to reduce the complexity of hybrid cloud networking by separating the control and management processes from the underlying WAN hardware.

SD-WAN virtualizes network management functions as software or script-based configurations, which enables centralized and automated deployment. With the aid of a vendor-agnostic management platform, SD-WAN benefits hybrid cloud infrastructure by consolidating control behind a single pane of glass. This gives administrators the ability to easily orchestrate, optimize, and secure the entire distributed network.
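As a rough illustration of what "configuration as software" means in practice, the sketch below renders one centrally defined WAN policy per site, with optional per-site overrides. The site names, policy fields, and override mechanism are all hypothetical; real SD-WAN controllers use their own schemas, but the pattern of one template, many consistent rendered configs is the same.

```python
# Illustrative sketch of script-based network configuration: a single
# base policy rendered per site, so every branch gets a consistent,
# centrally defined WAN config. Site names and fields are hypothetical.

BASE_POLICY = {
    "failover": {"primary": "isp", "secondary": "lte"},
    "qos": {"voice": "high", "bulk": "low"},
}

def render_site_config(site, overrides=None):
    """Copy the base policy for one site, then apply any per-site overrides."""
    config = {"site": site}
    for section, values in BASE_POLICY.items():
        config[section] = dict(values)  # copy so the template stays untouched
    for section, values in (overrides or {}).items():
        config[section].update(values)
    return config

sites = ["branch-nyc", "branch-sfo", "edge-dc-1"]
configs = [render_site_config(s) for s in sites]
```

Because every site's config is generated from the same template, a policy change is made once centrally and redeployed everywhere, rather than hand-edited on each device.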

Automation

Automation plays a key role in digital transformation because it can speed up workflows while reducing the risk of human error. For example, using automation to deploy new infrastructure means administrators can provision many resources in a short amount of time while ensuring consistent configurations.

Automation also improves security, both by reducing the rate of misconfigurations and by ensuring all infrastructure is patched as soon as possible. Unpatched infrastructure leaves you vulnerable to hacks and ransomware, but keeping track of updates for so many vendor solutions in so many different places can be challenging. Automation can help by ensuring patches are pushed out to hybrid cloud infrastructure solutions as soon as they become available. 
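The core of that patch automation is unglamorous but important: compare each device's installed version against the latest release and queue updates only where needed. A minimal sketch, with a hypothetical device inventory and version strings:

```python
# Illustrative sketch of automated patch rollout: find devices whose
# installed firmware is behind the latest release. The inventory and
# version numbers are hypothetical.

def parse_version(v):
    """Turn '5.10.0' into (5, 10, 0) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def devices_needing_patch(inventory, latest):
    """Return device names whose installed version is older than `latest`."""
    return [name for name, installed in inventory.items()
            if parse_version(installed) < parse_version(latest)]

inventory = {
    "gw-branch-1": "5.8.1",
    "gw-branch-2": "5.10.0",
    "console-dc": "5.9.3",
}
outdated = devices_needing_patch(inventory, "5.10.0")
```

Note the numeric parsing: comparing the raw strings would wrongly rank "5.9.3" above "5.10.0", which is exactly the kind of subtle human-error-prone detail automation gets right consistently.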

Vendor-agnostic platforms, SD-WAN, and automation are key tools that help organizations use hybrid cloud more effectively on their digital transformation journey.

The role of ZPE Systems in digital transformation

ZPE Systems offers a range of vendor-agnostic network management solutions to help your organization achieve digital transformation. The Nodegrid platform can dig its hooks into your legacy and mixed-vendor infrastructure to provide a common interface from which to manage and orchestrate your entire network architecture. Plus, Nodegrid can host or integrate with your choice of SD-WAN solutions to help you consolidate your tech stack while delivering optimized performance and security.

Contact ZPE Systems today

To learn more about the role of hybrid cloud in digital transformation, contact ZPE Systems today.

Contact Us

The Importance of Out-of-Band Data Center Connectivity

Data center connectivity is more crucial than ever. Data, applications, and digital services power every aspect of business, which means your infrastructure needs to be available 24/7. However, according to the Uptime Institute’s 2022 Outage Analysis report, outages are still a frequent problem for enterprises and data centers, and the financial consequences of the resulting business interruptions are staggering.

One of the best tools for maintaining data center connectivity is remote out-of-band (OOB) management. OOB management creates an alternative path to remote infrastructure on a dedicated management network. An OOB management solution uses serial consoles and data center infrastructure management (DCIM) software to give administrators the ability to monitor and control remote data center infrastructure. With OOB, you can recover from outages faster and regain control over remote data center infrastructure even when the main network is down.

The importance of out-of-band data center connectivity

The first major takeaway from the Uptime Institute report is that outage rates have remained high over recent years. Twenty percent of responding organizations experienced a serious outage in the last three years, which is slightly higher than in the 2021 report. It was noted that 80% of data centers reported an outage of some kind (with varying severity), which hasn’t changed much since previous reports. The implication here is that businesses and data centers are both still struggling to maintain the 24/7 availability expected by their customers. Let’s dig deeper into the causes and effects of data center outages and discuss how out-of-band management can help.

  1. Network issues are the biggest cause of downtime
    According to the 2022 report, networking problems were the single largest cause of outages over the last three years. These issues are frequently due to the complexity of distributed and software-defined network architectures, especially in cloud or hybrid cloud deployments.
    Out-of-band data center connectivity solutions use serial consoles that connect directly to other data center devices through the serial port. That means administrators can access and manage those devices without needing to use their IP addresses. So, if a configuration mistake causes the production LAN to go down, administrators can still remotely fix the problem, shortening the duration of the outage. And, since OOB serial consoles provide a secondary network interface—often an LTE cellular modem—you’ll still have remote access even if human error brings down the WAN or SD-WAN architecture.
  2. Power failures are another leading cause of outages
    Respondents reported that 43% of significant outages—ones that resulted in business interruption and financial loss—were caused by power issues. Many of those incidents were due to uninterruptible power supply (UPS) failures.
    As part of a data center infrastructure management (DCIM) solution, an OOB serial console gives administrators the ability to remotely monitor and manage UPS devices in the rack. Admins get alerts when devices aren’t performing efficiently or begin to show signs of imminent failure. That means organizations can proactively schedule repairs or deploy replacements before a power outage occurs.
  3. Out-of-band data center connectivity shortens recovery time
    One of the most alarming statistics from the report is the percentage of public outages lasting more than 24 hours. In 2017, just 8% of outages lasted longer than a day, but that increased to nearly 30% in 2021.
    Out-of-band data center connectivity can significantly reduce the time to recovery by ensuring administrators always have remote access to data center infrastructure. That means your organization will waste less time waiting for on-site managed services to arrive or for in-house technicians to travel to the data center. As soon as DCIM monitoring alerts them to an issue, admins can begin diagnosing and fixing the problem from their remote desktop.
  4. Outages are more expensive than ever
    Over 60% of reported outages resulted in at least $100,000 in losses, an increase of 21% since 2019. The number of outages costing more than $1 million also increased by 4%.
    OOB management gives teams the ability to remotely troubleshoot and recover from many issues, so you don’t need to pay for truck rolls or on-site managed services. If remote troubleshooting reveals that the problem requires an on-site fix, technicians can go in already knowing the source of the issue and with all the necessary tools to repair it. Either way, your organization saves time and money.

Out-of-band data center connectivity gives organizations reliable access to remote infrastructure even during a network outage. OOB serial consoles also provide visibility into the health and performance of critical data center devices like UPSs, so you can proactively address issues and prevent downtime from occurring. Through 24/7 remote access, monitoring, and management, you can reduce the incidence, duration, cost, and impact of data center downtime.
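The proactive UPS monitoring described above boils down to threshold checks on device telemetry. Here's a minimal sketch of that idea; the metric names, thresholds, and alert labels are hypothetical, and a real DCIM platform would poll these values from the devices over the OOB network.

```python
# Illustrative sketch of proactive UPS health alerting: flag units whose
# telemetry suggests inefficiency or imminent failure so repairs can be
# scheduled before an outage. Metric names and thresholds are hypothetical.

THRESHOLDS = {
    "battery_health_pct": 70,  # alert when battery health drops below this
    "load_pct": 90,            # alert when sustained load rises above this
}

def ups_alerts(metrics):
    """Return a list of alert labels for one UPS's current telemetry."""
    alerts = []
    if metrics["battery_health_pct"] < THRESHOLDS["battery_health_pct"]:
        alerts.append("battery degraded")
    if metrics["load_pct"] > THRESHOLDS["load_pct"]:
        alerts.append("overloaded")
    return alerts

failing = ups_alerts({"battery_health_pct": 62, "load_pct": 95})
healthy = ups_alerts({"battery_health_pct": 90, "load_pct": 40})
```

The value of wiring checks like these into the OOB/DCIM layer is that they keep working even when the production network is down, which is exactly when you need the data most.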

Gen 3 OOB data center connectivity with Nodegrid

The Nodegrid Serial Console Plus (NSCP) is a Gen 3 out-of-band data center connectivity solution that delivers reliable and blazing-fast remote access to up to 96 data center devices from a single 1U rack-mounted box. Nodegrid’s vendor-neutral OOB DCIM platform supports integrations with your choice of infrastructure solutions and automation tools, giving you total and efficient control over your data center infrastructure.

Ready to learn more about out-of-band data center connectivity?

To learn more about out-of-band data center connectivity with Nodegrid, contact ZPE Systems today.

Contact Us

What Is uCPE, and How Does It Benefit Enterprise Customers?

uCPE stands for universal Customer Premises Equipment. A uCPE box is a general purpose networking device used to run virtual network functions, or VNFs. VNFs are essentially software versions of network devices such as routers, switches, and firewalls. That means you can consolidate an entire networking tech stack into a single uCPE box, saving money and reducing management complexity.

Despite the promise of uCPE, the technology has been slow to catch on. In this article, we’ll explore the reasons for the lack of popularity of early uCPE before discussing how newer generations overcome these issues to deliver cost savings, simplified management, and other benefits to enterprise customers.

The shortcomings of gen 1 uCPE

Early uCPE devices were generally provided by telecoms and ISPs to host their specific networking software. Customers didn’t get to choose the software or virtualization solutions—they had to use whatever the vendor gave them. That meant enterprises didn’t have the flexibility to swap out VNFs and software to get the specific features or pricing they wanted, and they couldn’t continue using existing solutions that they really liked.

However, the larger issue was that the virtualization technology itself was ahead of its time. Many organizations still didn’t have use cases that justified the business disruption and expense of swapping out networking infrastructure with virtualized solutions. Plus, software-based networking was so new that many network administrators and engineers didn’t have the skills and experience needed to configure, deploy, and manage fully virtualized tech stacks.

Due to these limitations, enterprises showed minimal interest in uCPE for a long time, leading many to believe that the technology would die out entirely. Instead, forward-thinking hardware and software vendors continued to improve uCPE technology to overcome these shortcomings. In addition, enterprises have been pushing their computing and business operations out to remote locations at the network edge, resulting in the rapid adoption of SD-WAN (software-defined wide-area networking) solutions for distributed network management. A greater interest in software-based networking technology, and a need for hardware capable of running that software, has led to renewed enthusiasm for uCPE.

The next evolution of uCPE

The current generation of uCPE focuses on delivering a truly universal, vendor-neutral platform from which to host, manage, and troubleshoot an entire consolidated tech stack. This is provided in two parts:

  1. The device itself, which runs on an open, Linux-based operating system and supports multiple pinout standards.
  2. An orchestration platform which consolidates the monitoring and management of all uCPE solutions behind a single pane of glass.

Through vendor-agnostic hardware, software, and an orchestration platform, uCPE benefits enterprise customers in numerous ways, including:

Vendor freedom

Next-gen uCPE devices are capable of hosting any software or virtualization solution from any vendor. This gives enterprise customers the ability to shop around for the best features and pricing for their particular use case. If customers already have a software-based networking solution that works well for them, they can simply migrate it to the uCPE with minimal hassle.

Tech consolidation

A single uCPE box can take the place of an entire rack of networking equipment, reducing the number of devices to install, license, and maintain. This is especially vital for organizations that want to expand their operations to branch offices, edge data centers, and even hard-to-reach locations like oil rigs and research stations. Tech consolidation reduces the time and expense required to deploy remote infrastructure.

Centralized management

The current generation of uCPE includes an orchestration platform capable of observing and controlling the entire distributed network of uCPE boxes and connected infrastructure. Enterprises can deploy hundreds or even thousands of uCPE boxes to locations all over the globe, but they only need to log in to one platform to manage them all. uCPE gives organizations the ability to orchestrate network functions, monitor remote infrastructure, and troubleshoot and respond to issues from behind a single pane of glass, which results in simplified and optimized network management.
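The "single pane of glass" idea is, at its core, an aggregation problem: many boxes each report status, and the orchestration layer rolls those reports up into one fleet-wide view. A small sketch of that roll-up, with hypothetical device names and status values:

```python
from collections import Counter

# Illustrative sketch of a single-pane-of-glass roll-up: combine status
# reports from many distributed uCPE boxes into one fleet summary.
# Device names and status values are hypothetical.

def fleet_summary(reports):
    """Count how many devices are in each reported state."""
    return dict(Counter(r["status"] for r in reports))

reports = [
    {"device": "ucpe-tokyo", "status": "healthy"},
    {"device": "ucpe-berlin", "status": "healthy"},
    {"device": "ucpe-rig-7", "status": "degraded"},
]
summary = fleet_summary(reports)
```

A real orchestration platform layers alerting, drill-down, and remediation on top, but the operational win is the same: one query answers "how is the whole fleet doing?" instead of thousands of per-device logins.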

SD-WAN capabilities

As organizations have sped up their SD-WAN adoption plans in response to the rise of remote work, edge computing, and distributed network management, the need for universal networking hardware has also quickly increased. Next-gen uCPE devices are the perfect hosts for SD-WAN software solutions because they allow for easy integration with the underlying WAN infrastructure, which runs as VNFs on the same box. That means enterprises don’t need to invest in new SD-WAN-capable routers and gateways for each remote site. Plus, with a uCPE orchestration platform, it’s easier to view and control the entire SD-WAN architecture.

To take advantage of the benefits promised by uCPE technology, you need to ensure that you choose a platform that’s truly vendor-neutral to support your choice of SD-WAN and VNF solutions. The hardware also needs to be powerful enough to run your entire edge networking stack from a single box.

Universal network management with Nodegrid

Nodegrid is a next-gen uCPE platform that delivers universal infrastructure orchestration for enterprise customers. Nodegrid’s flexible hardware and open OS give you the freedom to bring your choice of networking devices, SD-WAN solutions, and VNFs. Nodegrid devices are built with CPU and memory headroom and expansive storage options so you can run your entire branch from a single box. Plus, the ZPE Cloud infrastructure orchestration platform gives you complete control over your distributed network, including third-party automation playbooks and workflows.

Ready to learn more?

To learn more about Nodegrid next-gen uCPE, contact ZPE Systems today.

Contact Us