2025 global network outage report and internet health check | Network World (updated March 11, 2025)

The reliability of services delivered by ISPs, cloud providers and conferencing services is critical for enterprise organizations. ThousandEyes, a Cisco company, monitors how providers are handling any performance challenges and provides Network World with a weekly roundup of events that impact service delivery. Read on to see the latest analysis, and stop back next week for another update on the performance of cloud providers and ISPs.

Note: We have archived prior-year outage updates, including our 2024 report, 2023 report and Covid-19 coverage.

Internet report for March 3-9

ThousandEyes reported 425 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of March 3-9. That’s down 5% from 447 outages the week prior. Specific to the U.S., there were 199 outages, which is up 5% from 189 outages the week prior. Here’s a breakdown by category:

ISP outages: Globally, total ISP outages decreased from 261 to 219 outages, a 16% decrease compared to the week prior. In the U.S., ISP outages increased from 73 to 81, an 11% increase.

Public cloud network outages: Globally, cloud provider network outages decreased from 120 to 111 outages. In the U.S., cloud provider network outages decreased from 82 to 69 outages.

Collaboration app network outages: Globally, collaboration application network outages increased from zero to two outages. In the U.S., collaboration application network outages remained at zero for the second week in a row.
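The week-over-week percentages above are simple relative changes against the prior week, rounded to the nearest whole percent. A minimal Python sketch using this week’s figures:

```python
# Week-over-week change as reported above: (current - prior) / prior,
# rounded to the nearest whole percent.
def pct_change(current: int, prior: int) -> int:
    return round((current - prior) / prior * 100)

assert pct_change(425, 447) == -5    # global outages, March 3-9
assert pct_change(199, 189) == 5     # U.S. outages, March 3-9
assert pct_change(219, 261) == -16   # global ISP outages
assert pct_change(81, 73) == 11      # U.S. ISP outages
```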

Two notable outages

On March 3, Microsoft experienced an outage on its network that impacted some downstream partners and access to services running on Microsoft environments in multiple regions, including the U.S., Canada, Costa Rica, Egypt, South Africa, Saudi Arabia, Germany, the Netherlands, France, Sweden, Brazil, Singapore, India, and Mexico. The outage, which lasted a total of one hour and 22 minutes over a two-hour period, was first observed around 11:05 AM EST and appeared to initially center on Microsoft nodes located in Toronto, Canada, and Cleveland, OH. Around 20 minutes after appearing to clear, the Toronto and Cleveland nodes were joined by nodes located in Newark, NJ, in exhibiting outage conditions. A further ten minutes later, the Newark, NJ, nodes were replaced by nodes located in New York, NY. Around ten minutes further into the outage, the New York, NY, nodes appeared to clear and were replaced by nodes located in Los Angeles, CA, and Des Moines, IA. Around fifty-five minutes after first being observed, the Los Angeles, CA, and Des Moines, IA, nodes appeared to clear and were replaced by nodes located in Hamburg, Germany, which were themselves replaced five minutes later by nodes located in Des Moines, IA. A further twenty-five minutes later, the Des Moines, IA, nodes were replaced by nodes located in Paris, France, before those too cleared five minutes later, leaving just the nodes located in Toronto, Canada, and Cleveland, OH, exhibiting outage conditions. Fifteen minutes after appearing to clear, nodes located in Cleveland, OH, and New York, NY, once again exhibited outage conditions. The outage was cleared around 1:05 PM EST. Click here for an interactive view.

On March 5, Arelion, a global Tier 1 provider headquartered in Stockholm, Sweden, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., Japan, the Netherlands, Brazil, Australia, Costa Rica, the U.K., Colombia, and Germany. The disruption, which lasted 35 minutes, was first observed around 2:10 AM EST and appeared to center on nodes located in Ashburn, VA.  Around 30 minutes after first being observed, the number of nodes exhibiting outage conditions located in Ashburn, VA, appeared to increase. This rise in nodes exhibiting outage conditions also appeared to coincide with an increase in the number of downstream customers, partners, and regions impacted. The outage was cleared around 2:50 AM EST. Click here for an interactive view.

Internet report for Feb. 24-March 2

ThousandEyes reported 447 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Feb. 24-March 2. That’s up 13% from 397 outages the week prior. Specific to the U.S., there were 189 outages, which is down 5% from 199 outages the week prior. Here’s a breakdown by category:

ISP outages: Globally, total ISP outages increased from 190 to 261 outages, a 37% increase compared to the week prior. In the U.S., ISP outages increased from 64 to 73, a 14% increase.

Public cloud network outages: Globally, cloud provider network outages decreased from 137 to 120 outages. In the U.S., cloud provider network outages decreased from 96 to 82 outages.

Collaboration app network outages: Both globally and in the U.S., collaboration application network outages dropped back down to zero.

Two notable outages

On February 28, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Japan, the Philippines, the U.K., Romania, Thailand, South Korea, Hong Kong, New Zealand, Australia, Germany, Mexico, the Netherlands, South Africa, France, Luxembourg, India, Singapore, and Canada. The outage, which lasted 29 minutes, was first observed around 1:05 AM EST and initially appeared to center on Cogent nodes located in Los Angeles, CA, and San Jose, CA. Around five minutes after first being observed, the nodes located in Los Angeles, CA, appeared to clear and were replaced by nodes located in Washington, D.C., in exhibiting outage conditions. The outage was resolved around 1:25 AM EST. Click here for an interactive view.

On February 24, Arelion (formerly known as Telia Carrier), a global Tier 1 provider headquartered in Stockholm, Sweden, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., India, France, Ireland, Spain, Kenya, Singapore, the Netherlands, Mexico, Belgium, Romania, Germany, New Zealand, Hungary, Thailand, Australia, and Hong Kong. The disruption, which lasted a total of 18 minutes, was first observed around 1:00 PM EST and appeared to initially center on nodes located in Los Angeles, CA. Ten minutes after first being observed, the nodes located in Los Angeles, CA, appeared to clear and were replaced by nodes located in Ashburn, VA, in exhibiting outage conditions. A further five minutes later, the number of nodes exhibiting outage conditions located in Ashburn, VA, appeared to increase. This rise in nodes exhibiting outage conditions also appeared to coincide with an increase in the number of downstream customers, partners, and regions impacted. The outage was cleared around 1:20 PM EST. Click here for an interactive view.

Internet report for Feb. 17-23

ThousandEyes reported 397 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Feb. 17-23. That’s nearly even with the week prior, when there were 398 outages. Specific to the U.S., there were 199 outages, which is up 2% from 196 outages the week prior. Here’s a breakdown by category:

ISP outages: Globally, total ISP outages decreased from 205 to 190 outages, a 7% decrease compared to the week prior. In the U.S., ISP outages decreased from 88 to 64, a 27% decrease.

Public cloud network outages: Globally, cloud provider network outages increased from 96 to 137 outages. In the U.S., cloud provider network outages increased from 69 to 96 outages.

Collaboration app network outages: Globally, there was one collaboration application network outage, same as the week prior. In the U.S., there was one collaboration application outage, ending a four-week run of zero outages.

Two notable outages

On February 17, UUNET, acquired by Verizon in 2006 and now operating as Verizon Business, experienced an outage that affected customers and partners across multiple regions, including the U.S., Singapore, the Netherlands, the Philippines, Brazil, Germany, Switzerland, Canada, the U.K., Ireland, Japan, South Korea, Australia, France, and India. The outage lasted a total of an hour over a one-hour-and-15-minute period. It was first observed around 2:00 PM EST and initially centered on Verizon Business nodes in Washington, D.C. Five minutes into the outage, the nodes located in Washington, D.C., were joined by nodes located in Brooklyn, NY, in exhibiting outage conditions. A further five minutes later, the nodes located in Brooklyn, NY, were replaced by nodes located in New York, NY, in exhibiting outage conditions. The outage was cleared around 3:15 PM EST. Click here for an interactive view.

On February 18, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers as well as Cogent customers across various regions, including the U.S., Brazil, Japan, the Philippines, Ghana, Hong Kong, India, the U.K., Singapore, Indonesia, Canada, South Africa, Spain, Mexico, and Taiwan. The outage, which lasted 20 minutes, was first observed around 8:15 AM EST and initially appeared to center on Cogent nodes located in Washington, D.C., Los Angeles, CA, and Dallas, TX. Around ten minutes after first being observed, the nodes located in Washington, D.C., and Dallas, TX, appeared to clear. Around five minutes later, the nodes exhibiting outage conditions expanded to include nodes in Dallas, TX, San Francisco, CA, and Phoenix, AZ. This increase in the number of nodes and locations exhibiting outage conditions appeared to coincide with an increase in the number of impacted regions, downstream partners, and customers. A further five minutes later, nodes located in Phoenix, AZ, and Dallas, TX, appeared to clear, leaving only the nodes located in San Francisco, CA, and Los Angeles, CA, exhibiting outage conditions. The outage was resolved around 8:40 AM EST. Click here for an interactive view.

Internet report for Feb. 10-16

ThousandEyes reported 398 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Feb. 10-16. That’s up 13% from 353 outages the week prior. Specific to the U.S., there were 196 outages, which is down 7% from 210 outages the week prior. Here’s a breakdown by category:

ISP outages: Globally, total ISP outages increased from 173 to 205 outages, an 18% increase compared to the week prior. In the U.S., ISP outages increased slightly from 86 to 88, a 2% increase.

Public cloud network outages: Globally, cloud provider network outages decreased from 124 to 96 outages. In the U.S., cloud provider network outages decreased from 96 to 69 outages.

Collaboration app network outages: Globally, collaboration application network outages increased to one outage. In the U.S., levels remained at zero for the third week in a row. 

Two notable outages

On February 12, GTT Communications, a Tier 1 provider headquartered in Tysons, VA, experienced an outage that impacted some of its partners and customers across multiple regions, including the U.S., Germany, the Dominican Republic, Canada, the U.K., Australia, Mexico, Spain, Singapore, Taiwan, Colombia, and Japan. The outage, which lasted 39 minutes, was first observed around 3:05 AM EST and appeared to initially be centered on GTT nodes located in Washington, D.C.  Around ten minutes into the outage, nodes located in Washington, D.C., were joined by GTT nodes located in New York, NY, and Frankfurt, Germany, in exhibiting outage conditions. This increase in the number of nodes and locations exhibiting outage conditions appeared to coincide with an increase in the number of impacted regions, downstream partners, and customers. A further five minutes later, the nodes located in New York, NY, and Frankfurt, Germany, appeared to clear. The outage was cleared around 3:45 AM EST. Click here for an interactive view.

On February 12, Lumen, a U.S.-based Tier 1 carrier, experienced an outage that affected customers and downstream partners across the U.S. The outage, lasting 40 minutes, was first observed around 3:10 AM EST and appeared to initially be centered on Lumen nodes located in Kansas City, MO. Around 15 minutes after first being observed, the nodes located in Kansas City, MO, were joined by nodes located in Dallas, TX, in exhibiting outage conditions. This increase appeared to coincide with an increase in the number of impacted downstream partners and customers. The outage was cleared around 3:55 AM EST. Click here for an interactive view.

Internet report for Feb. 3-9

ThousandEyes reported 353 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Feb. 3-9. That’s up 7% from 331 outages the week prior. Specific to the U.S., there were 210 outages, which is up 12% from 188 outages the week prior. Here’s a breakdown by category:

ISP outages: Globally, total ISP outages increased from 126 to 173 outages, a 37% increase compared to the week prior. In the U.S., ISP outages increased from 65 to 86, a 32% increase.

Public cloud network outages: Globally, cloud provider network outages decreased from 144 to 124 outages. In the U.S., however, cloud provider network outages increased from 88 to 96 outages.

Collaboration app network outages: Both globally and in the U.S., collaboration application network outages remained at zero for the second week in a row.

Two notable outages

On February 5, Lumen, a U.S.-based Tier 1 carrier, experienced an outage that affected customers and downstream partners across multiple regions including the U.S., Canada, and Singapore. The outage, lasting a total of 35 minutes over a 45-minute period, was first observed around 3:30 AM EST and appeared to initially be centered on Lumen nodes located in Seattle, WA. Around five minutes into the outage, the nodes located in Seattle, WA, were joined by nodes located in Los Angeles, CA, in exhibiting outage conditions. This increase in the number and location of nodes exhibiting outage conditions appeared to coincide with the peak number of impacted regions, downstream partners, and customers. A further five minutes later, the nodes located in Los Angeles, CA, appeared to clear, leaving only the nodes located in Seattle, WA, exhibiting outage conditions. The outage was cleared around 4:15 AM EST. Click here for an interactive view.

On February 6, Internap, a U.S.-based cloud service provider, experienced an outage that impacted many of its downstream partners and customers within the U.S. The outage, lasting a total of one hour and 14 minutes over a one-hour-and-28-minute period, was first observed around 12:15 AM EST and appeared to be centered on Internap nodes located in Boston, MA. The outage was at its peak around one hour and 10 minutes after first being observed, with the highest number of impacted partners and customers. The outage was cleared around 1:45 AM EST. Click here for an interactive view.

Internet report for Jan. 27-Feb. 2

ThousandEyes reported 331 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Jan. 27-Feb. 2. That’s down 16% from 395 outages the week prior. Specific to the U.S., there were 188 outages, which is down 4% from 195 outages the week prior. Here’s a breakdown by category:

ISP outages: Globally, total ISP outages decreased from 199 to 126 outages, a 37% decrease compared to the week prior. In the U.S., ISP outages decreased slightly from 67 to 65, a 3% decrease.

Public cloud network outages: Globally, cloud provider network outages increased slightly from 142 to 144 outages. In the U.S., however, cloud provider network outages decreased from 110 to 88 outages.

Collaboration app network outages: Both globally and in the U.S., collaboration application network outages dropped down to zero. 

Two notable outages

On January 29, Arelion (formerly known as Telia Carrier), a global Tier 1 provider headquartered in Stockholm, Sweden, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., Australia, Argentina, Belgium, Bahrain, Germany, France, Brazil, India, Peru, Mexico, and Guatemala. The disruption, which lasted a total of 24 minutes over a 55-minute period, was first observed around 12:40 PM EST and appeared to initially center on nodes located in Dallas, TX, and Ghent, Belgium. Fifteen minutes after appearing to clear, the nodes located in Dallas, TX, began exhibiting outage conditions again. Around 1:20 PM EST, the nodes located in Dallas, TX, were joined by nodes located in Atlanta, GA, in exhibiting outage conditions. This rise in nodes and locations exhibiting outage conditions also appeared to coincide with an increase in the number of downstream customers, partners, and regions impacted. The outage was cleared around 1:35 PM EST. Click here for an interactive view.

On February 2, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that affected customers and downstream partners across multiple regions including the U.S., Poland, and Spain. The outage, lasting a total of 22 minutes, was first observed around 3:10 AM EST and appeared to initially center on nodes located in Washington, D.C. Fifteen minutes after first being observed, the nodes located in Washington, D.C., appeared to clear and were replaced by nodes located in Miami, FL, in exhibiting outage conditions. A further five minutes later, the nodes located in Miami, FL, were joined by nodes located in Atlanta, GA, in exhibiting outage conditions. This increase in nodes exhibiting outage conditions appeared to coincide with an increase in the number of impacted downstream partners and customers. The outage was cleared around 3:55 AM EST. Click here for an interactive view.

Internet report for Jan. 20-26

ThousandEyes reported 395 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Jan. 20-26. That’s up 20% from 328 outages the week prior. Specific to the U.S., there were 195 outages, which is up 24% from 157 outages the week prior. Here’s a breakdown by category:

ISP outages: Globally, total ISP outages increased slightly from 186 to 199 outages, a 7% increase compared to the week prior. In the U.S., ISP outages increased from 53 to 67, a 26% increase.

Public cloud network outages: Globally, cloud provider network outages jumped from 76 to 142 outages. In the U.S., cloud provider network outages increased from 69 to 110 outages.

Collaboration app network outages: Globally, collaboration application network outages remained unchanged from the week prior, recording one outage. In the U.S., collaboration application network outages dropped to zero.

Two notable outages

On January 24, Lumen, a U.S.-based Tier 1 carrier, experienced an outage that affected customers and downstream partners across multiple regions including the U.S., Italy, Canada, France, India, the U.K., Germany, and the Netherlands. The outage, lasting a total of 37 minutes over a period of 45 minutes, was first observed around 1:20 AM EST and appeared to be centered on Lumen nodes located in New York, NY. Around five minutes into the outage, the number of Lumen nodes exhibiting outage conditions in New York, NY, appeared to drop. This decline appeared to coincide with a decrease in the number of impacted downstream partners and customers. The outage was cleared around 2:05 AM EST. Click here for an interactive view.

On January 23, AT&T, a U.S.-based telecommunications company, experienced an outage on its network that impacted AT&T customers and partners across the U.S. The outage, lasting a total of 13 minutes over a 20-minute period, was first observed around 10:35 AM EST and appeared to center on AT&T nodes located in Dallas, TX. Around 15 minutes after first being observed, the number of nodes exhibiting outage conditions in Dallas, TX, appeared to drop. This decrease appeared to coincide with a drop in the number of impacted partners and customers. The outage was cleared at around 10:55 AM EST. Click here for an interactive view.

Internet report for Jan. 13-19

ThousandEyes reported 328 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Jan. 13-19. That’s up 11% from 296 outages the week prior. Specific to the U.S., there were 157 outages, which is up 34% from 117 outages the week prior. Here’s a breakdown by category:

ISP outages: Globally, total ISP outages increased slightly from 182 to 186 outages, a 2% increase compared to the week prior. In the U.S., ISP outages increased from 40 to 53, a 33% increase.

Public cloud network outages: Globally, cloud provider network outages increased from 72 to 76 outages. In the U.S., cloud provider network outages increased from 54 to 69 outages.

Collaboration app network outages: Globally, and in the U.S., collaboration application network outages dropped from two outages to one.

Two notable outages

On January 15, Lumen, a U.S.-based Tier 1 carrier (previously known as CenturyLink), experienced an outage that affected customers and downstream partners across multiple regions including the U.S., Hong Kong, Germany, Canada, the U.K., Chile, Colombia, Austria, India, Australia, the Netherlands, Spain, France, Singapore, Japan, South Africa, Nigeria, China, Vietnam, Saudi Arabia, Israel, Peru, Norway, Argentina, Turkey, Hungary, Ireland, New Zealand, Egypt, the Philippines, Italy, Sweden, Bulgaria, Estonia, Romania, and Mexico. The outage, lasting a total of one hour and 5 minutes over a nearly three-hour period, was first observed around 5:02 AM EST and appeared to initially be centered on Lumen nodes located in Dallas, TX. Around one hour after appearing to clear, the nodes located in Dallas, TX, began exhibiting outage conditions again, this time joined by Lumen nodes located in San Jose, CA, Washington, D.C., Chicago, IL, New York, NY, London, England, Los Angeles, CA, San Francisco, CA, Sacramento, CA, Fresno, CA, Seattle, WA, Santa Clara, CA, and Colorado Springs, CO. This increase in the number and location of nodes exhibiting outage conditions appeared to coincide with the peak number of impacted regions, downstream partners, and customers. The outage was cleared around 7:25 AM EST. Click here for an interactive view.

On January 16, Hurricane Electric, a network transit provider headquartered in Fremont, CA, experienced an outage that impacted customers and downstream partners across multiple regions, including the U.S., Malaysia, Singapore, Indonesia, New Zealand, Hong Kong, the U.K., Canada, South Korea, Japan, Thailand, and Germany. The outage, lasting 22 minutes, was first observed around 2:28 AM EST and initially appeared to center on Hurricane Electric nodes located in Chicago, IL. Five minutes into the outage, the nodes located in Chicago, IL, were joined by Hurricane Electric nodes located in Portland, OR, Seattle, WA, and Ashburn, VA, in exhibiting outage conditions. This coincided with an increase in the number of downstream partners and countries impacted. Around 12 minutes into the outage, all nodes, except those located in Chicago, IL, appeared to clear. The outage was cleared at around 2:55 AM EST. Click here for an interactive view.

Internet report for Jan. 6-12

ThousandEyes reported 296 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Jan. 6-12. That’s double the number of outages the week prior (148). Specific to the U.S., there were 117 outages, which is up 50% from 78 outages the week prior. Here’s a breakdown by category:

ISP outages: Globally, total ISP outages increased from 80 to 182 outages, a 127% increase compared to the week prior. In the U.S., ISP outages increased from 25 to 40, a 60% increase.

Public cloud network outages: Globally, cloud provider network outages increased from 34 to 72 outages. In the U.S., cloud provider network outages increased from 31 to 54 outages.

Collaboration app network outages: Globally, and in the U.S., there were two collaboration application network outages, up from one a week earlier.

Two notable outages

On January 8, Cogent Communications, a multinational transit provider based in the U.S., experienced an outage that impacted multiple downstream providers and customers across various regions, including the U.S., India, Canada, Mexico, Singapore, South Africa, Indonesia, Sweden, the U.K., Honduras, Japan, Vietnam, Thailand, Poland, the Netherlands, Australia, the Philippines, Greece, Germany, Argentina, New Zealand, France, Malaysia, Taiwan, and Colombia. The outage lasted a total of one hour and nine minutes, distributed across a series of occurrences over a period of three hours and 50 minutes. The first occurrence of the outage was observed around 6:00 AM EST and initially seemed to be centered on Cogent nodes located in Los Angeles, CA. Around three hours and 20 minutes after first being observed, nodes in Los Angeles, CA, began exhibiting outage conditions again, this time accompanied by nodes in Chicago, IL, El Paso, TX, and San Jose, CA. This increase in nodes experiencing outages appeared to coincide with a rise in the number of affected downstream customers, partners, and regions. Five minutes later, the nodes located in Chicago, IL, and El Paso, TX, appeared to clear, leaving only the nodes in Los Angeles, CA, and San Jose, CA, exhibiting outage conditions. The outage was cleared around 9:50 AM EST. Click here for an interactive view.

On January 10, Lumen, a U.S.-based Tier 1 carrier (previously known as CenturyLink), experienced an outage that affected customers and downstream partners across multiple regions including Switzerland, South Africa, Egypt, the U.K., the U.S., Spain, Portugal, Germany, the United Arab Emirates, France, Hong Kong, and Italy. The outage, lasting a total of 19 minutes, was first observed around 9:05 PM EST and appeared to be centered on Lumen nodes located in London, England, and Washington, D.C. Around 25 minutes from when the outage was first observed, the nodes located in London, England, appeared to clear, leaving only Lumen nodes located in Washington, D.C., exhibiting outage conditions. This drop in the number of nodes and locations exhibiting outage conditions appeared to coincide with a decrease in the number of impacted downstream partners and customers. The outage was cleared around 9:55 PM EST. Click here for an interactive view.

Internet report for Dec. 30, 2024-Jan. 5, 2025

ThousandEyes reported 148 global network outage events across ISPs, cloud service provider networks, collaboration app networks and edge networks (including DNS, content delivery networks, and security as a service) during the week of Dec. 30, 2024-Jan. 5, 2025. That’s up 95% from 76 outages the week prior. Specific to the U.S., there were 78 outages, which is up nearly threefold from 28 outages the week prior. Here’s a breakdown by category:

ISP outages: Globally, total ISP outages increased from 46 to 80 outages, a 74% increase compared to the week prior. In the U.S., ISP outages increased from 10 to 25, a 150% increase.

Public cloud network outages: Globally, cloud provider network outages increased from 18 to 34 outages. In the U.S., cloud provider network outages increased from 13 to 31 outages.

Collaboration app network outages: There was one collaboration application network outage globally and in the U.S., which is an increase from zero in the previous week.

Two notable outages

On December 30, Neustar, a U.S.-based technology service provider headquartered in Sterling, VA, experienced an outage that impacted multiple downstream providers, as well as Neustar customers, within multiple regions, including the U.S., Mexico, Taiwan, Singapore, Canada, the U.K., Spain, Romania, Germany, Luxembourg, France, Costa Rica, Ireland, Japan, India, Hong Kong, and the Philippines. The outage, lasting a total of one hour and 40 minutes, was first observed around 2:00 PM EST and appeared to initially center on Neustar nodes located in Los Angeles, CA, and Washington, D.C. Around 10 minutes into the outage, the nodes located in Washington, D.C., were replaced by nodes located in Ashburn, VA, in exhibiting outage conditions. Around 10 minutes later, the nodes located in Ashburn, VA, and Los Angeles, CA, appeared to clear and were replaced by nodes located in Dallas, TX, and San Jose, CA, in exhibiting outage conditions. Five minutes later, these nodes were replaced by nodes located in London, England, Ashburn, VA, New York, NY, and Washington, D.C. A further five minutes later, these nodes were joined by nodes located in Dallas, TX, in exhibiting outage conditions. This increase in nodes exhibiting outage conditions also appeared to coincide with an increase in the number of downstream partners and regions impacted. The outage was cleared around 3:40 PM EST. Click here for an interactive view.

On January 4, AT&T experienced an outage on its network that impacted AT&T customers and partners across multiple regions including the U.S., Ireland, the Philippines, the U.K., France, and Canada. The outage, lasting around 23 minutes, was first observed around 3:35 AM EST, appearing to initially center on AT&T nodes located in Phoenix, AZ, Los Angeles, CA, San Jose, CA, and New York, NY. Around ten minutes into the outage, nodes located in Phoenix, AZ, and San Jose, CA, appeared to clear, leaving just nodes located in Los Angeles, CA, and New York, NY, exhibiting outage conditions. This decrease in nodes exhibiting outage conditions appeared to coincide with a drop in the number of impacted partners and customers. The outage was cleared at around 4:00 AM EST. Click here for an interactive view.

Altera targets low-latency AI edge applications with new FPGA products (March 11, 2025)

Altera has introduced the latest family of Agilex FPGAs, along with its Quartus Prime Pro software and FPGA AI Suite, to enable the rapid development of highly customized embedded systems for use in robotics, factory automation systems, and medical equipment.

Altera was acquired by Intel in 2015, but last year Intel spun the FPGA maker out as a standalone business. The vendor has spent the better part of the past year building out functions such as accounting, human resources, and other general business operations.

The Agilex family of FPGAs uses the same naming scheme as Intel’s consumer Core brand, with tiers 3, 5, 7, and 9: 3 sits at the low end of the spectrum, 9 is the top of the line, and the other two fall in between.

The announcement, made at the Embedded World conference, features the low-power, cost-optimized Agilex 3 FPGAs, which the vendor says deliver nearly double the fabric performance of the previous generation at up to 38% lower power. The FPGAs let businesses modernize their edge and embedded infrastructure by deploying customized AI solutions that deliver the low latency, energy efficiency and agility needed for system longevity, the company said.

“Having an AI infused fabric that allows you to configure that FPGA with the precise algorithms and capabilities and resources of the underlying platform to deliver on those AI tasks is really one of the benefits and appeals of an FPGA, that flexibility, that reprogrammability and the ability to run many different algorithms and customize the data paths that you need for your applications,” said Sandra Rivera, CEO of Altera, in a conference call with journalists.

Agilex 3 FPGAs also support robots with multi-axis arms by using machine learning capabilities, and they support smart factory cameras that improve defect detection by using fine-grained parallel processing and convolutional neural networks (CNNs) trained for object recognition to analyze vast amounts of data.

Support for Agilex 3 and other Agilex product lines is available through Altera’s free Quartus software suite. Quartus is a design software suite for programmable logic devices. It allows engineers to design, analyze, optimize, and program Intel FPGAs, CPLDs, and SoCs using system-level design techniques and advanced place-and-route algorithms.

For AI developers, Altera has upgraded its FPGA AI Suite to release 25.1, adding support for Agilex 3 and Agilex 5 FPGA development for AI inferencing using familiar industry-standard frameworks like TensorFlow and PyTorch along with OpenVINO. “We’re just making it easier for our embedded customers to deploy more AI machine learning into their embedded platforms with FPGAs inside,” said Rivera.
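As an illustration of that workflow, here is a minimal sketch of OpenVINO-style inference in Python. The model file is a placeholder, and targeting an Agilex part would go through the FPGA AI Suite’s own tooling and device plugin rather than the generic “CPU” device used here; those specifics are assumptions, not details from Altera.

```python
# Hedged sketch of OpenVINO inference. "model.xml" is a placeholder for a
# model converted from TensorFlow or PyTorch; "CPU" stands in for the real
# device target (FPGA AI Suite supplies its own plugin, an assumption here).
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")           # OpenVINO IR: model.xml + model.bin
compiled = core.compile_model(model, "CPU")    # swap in the FPGA AI Suite target

shape = tuple(compiled.input(0).shape)         # assumes a static input shape
dummy = np.zeros(shape, dtype=np.float32)
results = compiled([dummy])                    # run one inference request

output = compiled.output(0)
print(results[output].shape)                   # shape of the first output tensor
```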

In other Agilex news, the first wave of Agilex 5 E-Series FPGA devices is now fully qualified and released for high-volume production. Compared to the Agilex 5 D-Series FPGAs, the E-Series FPGAs are optimized for more power-sensitive applications that require high performance in smaller form factors and logic densities.

Altera is also expanding the MAX 10 FPGA family with new package options. The MAX 10 10M40 and 10M50 product lines are now offered in variable pitch BGA packages. This new package option significantly increases the value of these highly integrated devices by reducing form factor while maintaining a high I/O count, resulting in a lower total cost of ownership for users, the company claims.

Fortinet reinforces OT network security platform (March 11, 2025)

Fortinet has bolstered its OT Security Platform to help customers more effectively protect industrial control systems and other operational technology networks from cyberattacks.

Fortinet’s OT Security Platform includes firewalls, switches, network access control, security information and event management, analytics and AI management capabilities. New to the upgraded platform are ruggedized switches and next-generation firewalls (NGFWs), services to increase threat detection, and improved AI capabilities to better delve into cyber threats.

“Asset and network visibility is a basic challenge for any organization with an OT environment,” wrote Nirav Shah, senior vice president and head of products and solutions at Fortinet, in a blog about the news.

“As OT infrastructure transforms and connects to more external networks, such as enterprise IT, the internet, and the cloud, visibility into OT networks is often extremely limited or nonexistent,” Shah wrote. “The unique assets typically found in OT networks operate on unique protocols. Traditional IT visibility solutions can’t see the assets, their vulnerabilities, or the traffic traversing the OT network, which makes OT security challenging to plan or implement.”

On the hardware side, Fortinet rolled out the FortiSwitch Rugged 108F and FortiSwitch Rugged 112F-POE. The Layer 2 108F and 112F-POE switches expand the vendor’s entry-level secure switch family and support port-level security enforcement that prevents unauthorized lateral movement across OT networks, Shah stated.

“These FortiSwitch Rugged models come in a small form factor and are DIN-rail mountable to fit most deployment scenarios. These products are designed to withstand extreme temperatures, vibration, and humidity,” Shah stated.

The company also added a ruggedized wireless WAN extender for customers looking to bring 5G connectivity to industrial sites: the FortiExtender Rugged 511G features embedded Wi-Fi 6 and new eSIM capabilities, removing the need for physical SIM cards.

“When connected to a FortiGate NGFW, a FortiSwitch essentially becomes a secure switch, with firewall protections and security policies implemented at each port and deep visibility provided for each asset, traffic, user, and activity enabled through the switch,” Shah wrote.

The company built out its NGFW offerings as well, adding the FortiGate Rugged 70G and FortiGate Rugged 50G-5G to provide advanced security and networking performance thanks to proprietary security and networking ASICs, according to Shah. “These devices also have an advanced digital I/O port. This feature allows the firewall to automate and secure digital and physical processes on site,” Shah wrote.

On the software side, Fortinet has enhanced the FortiGuard OT Security Service to deepen visibility and asset discovery capabilities.

“OT asset owners can now add known exploited vulnerabilities (KEVs) information to Internet-of-Things and OT vulnerabilities in the user and device store. They can also display KEV counts and warnings on the GUI Asset Identity Center page and see OT protocol bandwidth traffic and inbound connections. These enhancements to the OT Security Service can help OT security teams better understand the assets, traffic, and users on their OT networks,” Shah wrote.

Other enhancements let customers garner information about security threats and simplify compliance reporting for OT security teams, according to Shah.

Specifically, the vendor enhanced the AI support in its FortiAnalyzer security analytics and log management platform to better learn about and detect network problems. The package already applies AI to help manage configurations, events, and alerts, along with advanced threat visualization, according to the company. The AI support also improves the company’s FortiDeceptor package, designed to detect and stop active in-network attacks, the vendor stated.

Observe links end-user experience with back-end troubleshooting (March 11, 2025)

Observe has bolstered its observability platform with frontend monitoring capabilities that it says will enable developers and IT teams to gain visibility into application performance on end-user browsers and mobile applications.

Frontend Observability is designed to give IT teams the ability to identify and diagnose application performance issues based on how the apps behave for end users. Using open-source agents and OpenTelemetry-based software development kits (SDKs) to collect data from browsers and mobile applications, Frontend Observability can monitor application performance and collect data that will help IT teams correlate frontend performance problems with backend services, according to Observe.

“To deliver great user experiences, DevOps teams need to see the big picture of how people interact with their applications, and how this relates to backend systems,” Observe CEO Jeremy Burton said in a statement. “Observe’s new Frontend Observability is based on OpenTelemetry and open-source agents so there is no vendor lock-in. Observability now starts with the moment a customer interacts with an application so DevOps teams can pull a thread through the entire stack in order to determine impact and root cause of any issues.”

Frontend Observability uses a capability called Browser Real User Monitoring (RUM) to enable IT and developer teams to quickly identify and diagnose performance issues across browsers, devices, and locations. For instance, RUM identifies anomalies in page load times, core web vitals, and JavaScript or HTTP errors. RUM also provides developers visibility into mobile app performance and mobile user experiences.

“Developers increasingly view end-user experience as essential to an application’s success,” said Kate Holterhoff, senior analyst at industry analyst firm RedMonk, in a statement. “With the shift toward client-side interactivity, they now demand broader insights into user interactions in order to optimize performance.”

Observe is a SaaS platform, and customers deploy Observe agents to collect telemetry data. The agents can collect data from a variety of sources, including infrastructure such as Kubernetes, databases such as MongoDB or Snowflake, and other applications. The agents collect time-series data, logs, traces/spans, and performance data from these various sources and send the data to Observe’s platform. Observe then takes the raw telemetry data, curates and normalizes it, and structures it to make it more easily navigable and usable for troubleshooting by customer teams.

“Modern web applications have gotten super complex. Users access your site from countless different devices, browsers, or geo locations,” wrote Amit Sharma, vice president of product marketing at Observe, in a blog post announcing the news. “Frontend Observability bridges this gap by connecting what happens in your users’ browsers and mobile apps with what’s happening on your backend services and infrastructure.”
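The glue in that pipeline is OpenTelemetry’s vendor-neutral trace format: frontend RUM spans and backend spans can be joined through shared trace context. As a minimal sketch of the backend half (this uses the generic open-source OpenTelemetry Python SDK, not Observe’s own agents; the collector endpoint and service name are invented for illustration):

```python
# Minimal OpenTelemetry trace pipeline: instrument a backend service and
# ship spans over OTLP/HTTP. The endpoint is hypothetical; a vendor's docs
# define the real ingest URL and auth headers.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPTraceExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout-api"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPTraceExporter(endpoint="https://collector.example.com/v1/traces"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("process-order"):
    pass  # handler work; spans correlate with frontend sessions via trace context
```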

According to Observe, Browser RUM is available today, Mobile RUM is available in private preview, and Observe customers can begin using Frontend Observability at no additional licensing cost.

ServiceNow to pay $2.85B for Moveworks’ AI tools (March 10, 2025)

ServiceNow announced plans to purchase Moveworks and its front-end AI assistant and enterprise search technology for $2.85 billion in cash and stock.

ServiceNow expects the deal to close in the second half of this year, and initial technology integrations between the two companies will provide customers with a universal AI assistant and agentic AI capabilities, according to ServiceNow. The acquisition will build on the agentic AI capabilities of the ServiceNow Platform with Moveworks’ front-end AI agent and enterprise search technology, expanding ServiceNow’s reach to every requestor in an organization, the company said in a statement.

“With the acquisition of Moveworks, ServiceNow will take another giant leap forward in agentic AI-powered business transformation,” said Amit Zavery, president, chief operating officer, and chief product officer at ServiceNow, in a statement. “As agentic AI and enterprise-grade search forever change how we work, ServiceNow moved early to empower employees through AI.”

ServiceNow and Moveworks will deliver a unified, end‑to‑end search and self‑service experience for all employee requestors across every workflow, according to ServiceNow. A majority of Moveworks’ current customer deployments already use ServiceNow in their environments to access enterprise AI, data, and workflows. ServiceNow said this acquisition will build upon the existing synergies to enable employee engagement with “more perceptive AI-based enterprise search,” find fast answers to requests, automate and complete everyday tasks, and increase productivity. Moveworks lists companies such as Hearst, Instacart, Palo Alto Networks, Siemens, Toyota, Unilever, and others among its customers.

“Moveworks’ talented team and elegant AI-first experience, combined with ServiceNow’s powerful AI-driven workflow automation will supercharge enterprise-wide AI adoption and deliver game-changing outcomes for employees and their customers,” Zavery added.

Moveworks was founded in 2016 after its founders recognized a need for an AI assistant that could understand natural, conversational language, according to its website. The company provides customers with AI assistants to deal with employee requests, from IT tickets to human resources requests and policy questions. Moveworks last raised $200 million in Series C venture funding in 2021, with investors Tiger Global and Alkeon Capital. The last round of funding brought the company’s total to $315 million.

“Moveworks hides the complexity employees face at work by giving them an intuitive, engaging starting place to search and drive action across any enterprise system,” said Bhavin Shah, co‑founder and CEO, Moveworks, in a statement. “Becoming part of ServiceNow presents an incredible opportunity to accelerate our innovation and deliver on our promise through their AI agent‑fueled platform to redefine the user experience for employees and customer service teams.”

IBM wins UK lawsuit against LzLabs for mainframe intellectual property theft (March 10, 2025)

When it comes to protecting its mainframe technology, IBM wields a pretty big sword. This week it prevailed in court, winning a judgement against LzLabs for violating Big Blue’s intellectual property rights.

“IBM is delighted that the Court has upheld our claims against Winsopia, LzLabs GmbH and John Moores,” IBM wrote in a statement about the verdict. 

Entrepreneur Moores is the owner of Switzerland-based LzLabs and well known for founding BMC Software in 1980.

“The Court found that these parties had conspired to breach Winsopia’s license agreement in a deliberate, systematic and intentionally hidden effort to unlawfully reverse engineer critical IBM mainframe technology. This technology represents billions of dollars of IBM investment,” IBM wrote.

In the case, which IBM filed in England and was decided by the London Technology & Construction Court (TCC), IBM alleged that LzLabs’ UK subsidiary Winsopia acquired an IBM mainframe and then illegally reverse-engineered the Big Iron software to build LzLabs’ core Software Defined Mainframe (SDM) package. 

The TCC is a specialized court within the High Court of Justice in England and Wales and is designed to settle technically complex cases. In its decision released March 10, the court included background on the dispute, noting that the purpose of the SDM is to allow IBM mainframe customers to run their existing applications, written for a mainframe, without mainframe hardware or software: “The SDM comprises a number of programs which can run on conventional x86 hardware (used by most laptops, PCs and servers) using Linux or other open-source operating systems and open-source database products. The aim of the SDM is to migrate existing applications which have been written to run on a mainframe and enable such programs to be run on the x86 runtime environment without recompilation,” the judgement reads.

IBM licensed its mainframe software to Winsopia beginning in 2013, according to the court documents. “IBM’s primary case is that the defendants breached, or procured breach of, the ICA [IBM customer agreement], using Winsopia’s access to the IBM mainframe software to develop the SDM by unlawful reverse engineering of the licensed software,” the court wrote.

LzLabs deliberately misappropriated IBM trade secrets by reverse engineering, reverse compiling and translating IBM software, IBM claimed. IBM also alleged that LzLabs has made false and misleading claims about the capabilities of LzLabs’ products. 

In the court filing, the judge wrote that “Winsopia breached that ICA and that LzLabs and Moores unlawfully procured the above breaches of the ICA by Winsopia.”

The March 10 ruling followed a 2024 trial. Another hearing at an undetermined date will determine damages or further actions, the court stated.

IBM previously noted LzLabs is owned and run by some of the same individuals who owned and ran Neon Enterprise Software, LLC of Austin, Texas.

“Neon previously attempted to free ride on IBM’s mainframe business, and prior litigation between IBM and Neon ended with a U.S. District Court permanently barring Neon and certain of its key employees from, among other things, reverse engineering, reverse compiling and translating certain IBM software, and also from continuing to distribute certain Neon software products,” IBM wrote.

More IBM mainframe litigation

The LzLabs ruling came on the same day IBM won another legal battle with its mainframe technology. In this case, the U.S. Supreme Court declined to take up a $1.6 billion contract dispute between IBM and BMC software.

BMC had asked the justices to reconsider a U.S. appeals court’s decision last year that overturned its win against IBM, according to a Reuters report. BMC had persuaded a lower court judge that IBM unlawfully replaced BMC’s mainframe software at AT&T, but the 5th U.S. Circuit Court of Appeals threw out the award last year, ruling that BMC had “lost out to IBM fair and square.”

AT&T had hired IBM to run its mainframe operations. BMC filed a lawsuit in Houston federal court accusing IBM of breaching their contract when AT&T abandoned its software for IBM’s, Reuters stated.

Nvidia GTC 2025: What to expect from the AI leader (March 10, 2025)

No company has both driven and benefited from AI advancements more than Nvidia. Last year, Nvidia’s GTC 2024 grabbed headlines with its introduction of the Blackwell architecture and the DGX systems powered by it. With GTC 2025 right around the corner (it runs March 17-21 in San Jose, Calif.), the tech world is eager to see what Nvidia – and its partners and competitors – will unveil next.

Expect GTC 2025 to further solidify Nvidia’s position as an AI leader as it showcases practical applications of generative AI, moving beyond theoretical concepts to real-world implementations. With the evolution of large language models (LLMs), Nvidia will likely demonstrate how these technologies are transforming industries, from healthcare and finance to manufacturing and entertainment.

Center stage, of course, will be Nvidia’s founder and CEO, Jensen Huang. Known for his captivating presentation style and bold pronouncements, Huang is expected to set the tone for the conference with a keynote offering a glimpse into Nvidia’s vision for the future of AI.

Given the increasing demand for AI workloads, expect to see advancements in Nvidia GPUs aimed at addressing power efficiency and scalability, enabling more complex and demanding AI applications.

You can also expect a focus on edge AI. Given the proliferation of IoT devices and the need for real-time data processing, Nvidia will likely bring AI capabilities closer to the data source.

Nvidia will likely introduce updates to its CUDA (compute unified device architecture) platform, which the company developed to expand the capabilities of GPU acceleration. CUDA is designed to give developers access to the computing power of Nvidia GPUs, and it offers libraries and frameworks built to simplify AI development and deployment.
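For readers new to the platform, the core model is a kernel executed by a grid of GPU threads. CUDA development is typically done in C/C++, but as a minimal sketch of that model from Python (using the third-party Numba compiler’s CUDA bindings, which are not part of CUDA itself; assumes an Nvidia GPU and driver are present):

```python
# Minimal CUDA-style kernel via Numba: each GPU thread scales one element.
import numpy as np
from numba import cuda

@cuda.jit
def scale(out, x, a):
    i = cuda.grid(1)           # absolute index of this thread in the launch grid
    if i < x.size:             # guard: the grid may be larger than the array
        out[i] = a * x[i]

x = np.arange(1024, dtype=np.float32)
out = np.zeros_like(x)
scale[8, 128](out, x, 2.0)     # launch 8 blocks of 128 threads (8 * 128 = 1024)
print(out[:4])                 # [0. 2. 4. 6.]
```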

In addition to Nvidia, the exhibitor list is a who’s who of the tech industry, featuring companies like AWS, Dell Technologies, HPE, Microsoft Azure, Google Cloud, Databricks, Cisco, Cloudflare, Snowflake, and Equinix, to name a few.

Follow this page for previews and coverage from Nvidia GTC 2025, and follow Network World’s ongoing coverage of Nvidia throughout the year.

Cisco, Nvidia expand AI partnership to include Silicon One technology

February 18, 2025:  Cisco and Nvidia expanded their collaboration to support enterprise AI implementations by tying Cisco’s Silicon One technology to Nvidia’s Ethernet networking platform.

Nvidia forges healthcare partnerships to advance AI-driven genomics, drug discovery

February 14, 2025: Through new partnerships with industry leaders, Nvidia aims to help advance practical use cases for AI in healthcare and life sciences.

Nvidia partners with cybersecurity vendors for real-time monitoring

February 12, 2025: Nvidia has partnered with cybersecurity firms to provide real-time security protection using its accelerator and networking hardware in combination with its AI software. Nvidia will partner to integrate its BlueField hardware and Morpheus AI framework with cyber defense software from Armis, Check Point Software Technologies, CrowdStrike, Deloitte and World Wide Technology (WWT).

Nvidia claims near 50% boost in AI storage speed

February 5, 2025: Nvidia is touting a 50% gain in storage read bandwidth thanks to intelligence in its Spectrum-X Ethernet networking equipment. Spectrum-X is a combination of the company’s Spectrum-4 Ethernet switch and BlueField-3 SuperNIC smart networking card, which supports RoCE v2 for remote direct memory access (RDMA) over Converged Ethernet.

F5, Nvidia team to boost AI, cloud security

October 24, 2024: F5 and Nvidia are expanding their partnership to help enterprises build AI infrastructure and bolster cloud-based application security. Specifically, the companies will be integrating F5’s BIG-IP Next for Kubernetes platform and Nvidia BlueField-3 DPUs to offer customers a package capable of supporting AI networking and security duties while ensuring traffic management for cloud-based Kubernetes applications. 

Nvidia contributes Blackwell rack design to Open Compute Project

October 15, 2024:  Nvidia contributed its Blackwell GB200 NVL72 electro-mechanical designs – including the rack architecture, compute and switch tray mechanicals, liquid cooling and thermal environment specifications, and Nvidia NVLink cable cartridge volumetrics – to the Open Compute Project (OCP).

Coverage of Nvidia GTC 2024

Nvidia GTC 2024 wrap-up: Blackwell not the only big news

March 29, 2024: As Nvidia GTC 2025 approaches, let’s look back at news from GTC 2024. While the introduction of Blackwell architecture and the massive new DGX systems were the stars of the show, here’s a rundown of some of the other announcements.

Nvidia launches Blackwell GPU architecture

March 18, 2024: Nvidia kicked off its GTC 2024 conference with the formal launch of Blackwell, its next-generation GPU architecture. Blackwell uses a chiplet design, to a point. Whereas AMD’s designs have several chiplets, Blackwell has two very large dies that are tied together as one GPU with a high-speed interlink that operates at 10 terabytes per second, according to Ian Buck, vice president of HPC at Nvidia.

Nvidia debuts massive Blackwell-powered systems

March 18, 2024: Along with its new Blackwell architecture, Nvidia unveiled new DGX systems that offer significant performance gains compared to the older generation. Iterations of Nvidia’s existing DGX servers range from 8 Hopper processors to 256 processors. Nvidia is following a similar configuration structure for the Blackwell generation.

Sovereign European Cloud API claims to offer interoperability without lock-in (March 7, 2025)

Europe’s efforts to free itself from the domination of US cloud platforms have reached an important first milestone with the announcement of the Sovereign European Cloud API (SECA).

A collaboration between European providers Aruba and IONOS, and cloud marketplace Dynamo, the SECA API is being positioned as a building block in the continent’s larger EuroStack initiative, an ambitious project to challenge the economic domination and standards-setting power of mainly US tech companies and hyperscalers.

Today’s cloud market is often inconvenient for customers: large cloud platforms lack interoperability, which leads to data silos as well as rising costs when data is moved around. The complaint is that this lack of interoperability produces vendor lock-in, where organizations get stuck inside discrete platforms.

The result is that life is harder for organizations adopting a hybrid cloud or multicloud approach. The suspicion is that large cloud platforms — read US hyperscalers — aren’t in any hurry to address this issue.

SECA addresses the interoperability issue head on, claiming to make it easier for rival cloud providers to offer customers the ability to run applications and workloads across different clouds.

It does this, its backers said, while removing the problem of lock-in and remaining compliant with European rules on data sovereignty, AI, and data protection.
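
What might that look like in practice? The SECA specification isn’t reproduced here, so the following Python sketch is purely illustrative: the endpoint path, payload fields, and provider URLs are all hypothetical stand-ins for whatever the published API actually defines. What it illustrates is the portability claim, that the same request should work against any compliant provider.

    import json
    import urllib.request

    # Hypothetical SECA-style request: the same payload and path would work
    # against any compliant provider; only the base URL and token change.
    PAYLOAD = {"name": "demo-vm", "cores": 2, "memoryGiB": 4, "region": "eu-central"}

    def create_vm(base_url: str, token: str) -> dict:
        """POST an identical VM spec to a (hypothetical) SECA-compliant endpoint."""
        req = urllib.request.Request(
            url=f"{base_url}/v1/compute/instances",  # illustrative path, not the real spec
            data=json.dumps(PAYLOAD).encode(),
            headers={"Authorization": f"Bearer {token}",
                     "Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # The interoperability claim, in code: swap the provider, keep everything else.
    # create_vm("https://api.provider-a.example", token_a)
    # create_vm("https://api.provider-b.example", token_b)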

“AI and Cloud are transforming the global economy, and Europe cannot afford to be left behind. Europe needs a strong, sovereign digital ecosystem. SECA is a critical step in building a secure, independent, and future-proof digital infrastructure — one that keeps Europe strong, competitive, and in control,” IONOS CEO Achim Weiss said in a statement about the project’s launch.

This was echoed by Aruba CEO Stefano Cecconi: “The creation of these common APIs — with Aruba and IONOS as first movers — marks a pivotal and voluntary step for the European cloud industry towards enhanced interoperability, strengthening the continent’s cloud services ecosystem.”

SECA is also a critical building block for the emerging EuroStack initiative, an attempt to carve out alternatives to the standards and technologies that cement US tech domination across multiple fields from microprocessors to computing standards.

Not long ago, EuroStack would have been viewed as worthy but unlikely to go anywhere quickly, not least because of its estimated €300 billion ($325 billion) cost. Europe seemed too competitive and fragmented to get its act together. But a few weeks of US President Donald Trump’s second term of office has changed that. Suddenly, US tech domination is no longer viewed as entirely benign.

“There is a growing desire among European organizations to have data sovereignty. There are concerns about the growing dependence on non-European cloud providers, and if you combine that with the current political climate, you have a strong case for SECA being adopted,” said Jason Wingate of Emerald Ocean Ltd, which, as a Canadian company, could also have an interest in reducing its reliance on US technology vendors.

However, SECA still faces formidable obstacles: “The biggest challenge will be legal,” said Wingate. “The EU is a patchwork of national laws and regulations. It’s going to be complicated to navigate this and still be EU compliant, all the while being compliant with nation-level data and privacy laws.”

For cloud providers, security won’t be far behind as a concern, he said: “On paper is one thing, but the real test for adoption will be how secure it is. It can pass all the regulations and paperwork in the world, but if it’s insecure, no one will adopt it.”

The proof will be in the speed of SECA’s uptake. If European providers take to it, this could give smaller European companies an edge over the larger hyperscalers that dominate the market today.

Proprietary initiatives such as Microsoft’s EU Data Boundary overlap with some of SECA’s aims. However, the two initiatives are otherwise very different. SECA is about broad data interoperability, whereas Microsoft’s EU Data Boundary is about making it less tortuous for its customers to comply with the EU’s complex rules on data residency.

One is trying to foment an independence movement while the other is more of a convenience for people already inside Microsoft’s tent.

“Microsoft is centralized and proprietary. SECA is unproven but is open and offers greater flexibility,” said Wingate.

https://www.networkworld.com/article/3841550/sovereign-european-cloud-api-claims-to-offer-interoperability-without-lock-in.html
Lenovo introduces compact, liquid-cooled AI edge server Fri, 07 Mar 2025 16:33:08 +0000

Lenovo has announced the ThinkEdge SE100, an entry-level AI inferencing server that’s designed to make edge AI affordable for enterprises as well as small and medium-sized businesses.

AI systems are not normally small and compact; they’re big, decked-out servers with lots of memory, GPUs, and CPUs. But the SE100 is built for inferencing, the less compute-intensive portion of AI processing, Lenovo stated. Full-size GPUs are considered overkill for inferencing, and multiple startups are making small PC cards with inferencing chips on them instead of the more power-hungry CPUs and GPUs.

[Related: What is an AI server?]

This design brings AI to the data rather than the other way around. Instead of sending the data to the cloud or data center to be processed, edge computing uses devices located at the data source, reducing latency and the amount of data being sent up to the cloud for processing, Lenovo stated. 

Rolled out at the Mobile World Congress show, the SE100 forms part of Lenovo’s family of new ThinkSystem V4 servers, with the V4 serving as the on-premises training system and the SE100 placed at the edge, for hybrid cloud deployments. Like the V4, the SE100 comes with Intel Xeon 6 processors and the company’s Neptune liquid cooling technology.

But it is also compact. Lenovo says the SE100 is 85% smaller than a standard 1U server. Its power draw is designed to be under 140W, even in a GPU-equipped configuration, according to Lenovo.

The ThinkEdge SE100 is designed for constrained spaces, and because it uses liquid cooling instead of fans, it can go into public places without being exceptionally noisy. The company said the server has been specifically engineered to reduce airflow requirements while also lowering fan speed and power consumption, keeping parts cooler in order to preserve system health and extend its lifespan.

[ Related: Networking terms and definitions ]

“Lenovo is committed to bringing AI-powered innovation to everyone with continued innovation that simplifies deployment and speeds the time to results,” said Scott Tease, vice president of Lenovo infrastructure solutions group, products, in a statement. “The Lenovo ThinkEdge SE100 is a high-performance, low-latency platform for inferencing. Its compact and cost-effective design is easily tailored to diverse business needs across a broad range of industries. This unique, purpose-driven system adapts to any environment, seamlessly scaling from a base device, to a GPU-optimized system that enables easy-to-deploy, low-cost inferencing at the Edge.”

https://www.networkworld.com/article/3841518/lenovo-introduces-entry-level-liquid-cooled-ai-edge-server.html
Networking terms and definitions Fri, 07 Mar 2025 13:55:04 +0000

To find a brief definition of the networking term you are looking for, use your browser’s “Find” feature, then follow links to a fuller explanation.

AI networking

AI networking refers to the application of artificial intelligence (AI) technologies to network management and optimization. It involves using AI algorithms and machine learning techniques to analyze network data, identify patterns and make intelligent decisions to improve network performance, security and efficiency.

AIOps

AIOps (AI for IT operations) refers to the application of AI and machine learning technologies to automate and improve the management and operations of IT systems, particularly in networking. By analyzing vast amounts of data generated by network devices, applications, and users, AIOps leverages AI/ML algorithms to identify and resolve issues, automate routine tasks, enhance network visibility, and improve overall operational efficiency. This enables IT teams to shift their focus from reactive problem-solving to proactive maintenance and strategic initiatives.
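
As a toy illustration of the pattern-spotting such platforms automate, the Python sketch below flags latency samples that sit more than three standard deviations from a learned baseline. Real AIOps systems apply far richer models to the same underlying idea; the numbers here are invented.

    from statistics import mean, stdev

    # Baseline learned from a known-good window of probe measurements (ms).
    baseline = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 12.1, 11.7, 12.0]
    mu, sigma = mean(baseline), stdev(baseline)

    # New samples arriving from the network probe.
    incoming = [12.2, 11.9, 48.7, 12.1, 51.2]

    # Flag anything more than three standard deviations from the baseline mean.
    for i, sample in enumerate(incoming):
        status = "ANOMALY" if abs(sample - mu) > 3 * sigma else "ok"
        print(f"sample {i}: {sample:5.1f} ms  {status}")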

AI server

An AI server is a specialized computing system designed to handle demanding tasks required for artificial intelligence (AI) applications. AI servers are optimized with advanced hardware and software components to process large amounts of data and execute complex algorithms. An AI server relies on high-performance CPUs and GPUs, such as those from Nvidia, to handle complex computations, large memory capacity to store and process large datasets,  fast storage solutions like SSDs for quick data access, and advanced networking capabilities to move data for AI workloads. AI servers are used for training and deploying machine learning models, executing neural networks for tasks like image and speech recognition, analyzing and understanding human language, and processing and analyzing large datasets.

5G

5G is fast cellular wireless technology for enterprise IoT, IIoT, and phones that can boost wireless throughput by a factor of 10.

Private 5G

Private 5G: a dedicated mobile network built and operated within a private environment, such as a business campus, factory or stadium. Unlike public 5G networks, which are shared by multiple users, private 5G networks are exclusively used by a single organization or entity. While private 5G offers significant advantages, it requires specialized expertise and investment to build and manage.

Network slicing

Network slicing can make efficient use of carriers’ wireless capacity to enable 5G virtual networks that exactly fit customer needs.

Open RAN (O-RAN)

O-RAN is a wireless-industry initiative for designing and building 5G radio access networks using software-defined technology and general-purpose, vendor-neutral hardware.

Beamforming

Beamforming is a technique that focuses a wireless signal towards a specific receiving device rather than having the signal spread in all directions, as with a broadcast antenna. The resulting connection is faster and more reliable than it would be without beamforming.

Data Center

Data centers are physical facilities that enterprises use to house business-critical applications and information and which are evolving from centralized, on-premises facilities to edge deployments and public-cloud services.

Power usage effectiveness (PUE)

Power usage effectiveness (PUE) is a metric that measures the energy efficiency of a data center: the total energy the facility consumes divided by the energy consumed by its IT equipment alone, so a PUE of 1.0 would mean every watt goes to computing.
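
The calculation itself is straightforward, as this minimal sketch shows (the kWh figures are invented for the example):

    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        """PUE = total facility energy / IT equipment energy; 1.0 is the ideal."""
        return total_facility_kwh / it_equipment_kwh

    # A facility drawing 1,500 kWh overall to support 1,000 kWh of IT load:
    print(pue(1500, 1000))  # 1.5 -- half a unit of overhead per unit of compute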

Data center automation

Data center automation is the process of using technology to automate routine data center tasks and workflows. By leveraging software and automation tools, data center operators can streamline operations, reduce human error, improve efficiency and enhance overall performance. Areas where data center automation is often deployed include provisioning, monitoring, network orchestration and maintenance. Benefits include increased efficiency, reduced costs, improved reliability, enhanced scalability and improved security. Data center automation can be implemented using scripting languages (e.g., Python, PowerShell), automation platforms (e.g., Ansible, Puppet, Chef), and cloud-based management tools.
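
To give a flavor of the scripting approach, here is a minimal Python sketch that automates one routine task: verifying that management interfaces across an inventory are reachable. The hostnames are placeholders, and a production setup would more likely use an automation platform than a bare script.

    import socket

    # Placeholder inventory: (hostname, port) pairs for management interfaces.
    INVENTORY = [("switch01.example.net", 22), ("pdu01.example.net", 443)]

    def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host, port in INVENTORY:
        state = "up" if is_reachable(host, port) else "DOWN"
        print(f"{host}:{port} {state}")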

Data center infrastructure management

Data center infrastructure management (DCIM) is a comprehensive approach to managing all aspects of a data center, encompassing both IT equipment and supporting infrastructure. It’s a holistic system that helps data center operators keep their facilities running efficiently and effectively. 

DCIM provides a centralized platform for managing all aspects of a data center, enabling operators to make informed decisions, optimize performance, and ensure the reliable operation of their critical infrastructure. 

Here’s what DCIM does:

  • Monitoring: DCIM tools provide real-time visibility into the data center environment, tracking metrics like power consumption, temperature, humidity, and equipment status.  
  • Management: DCIM enables administrators to control and manage various aspects of the data center, including power distribution, cooling systems, and IT assets. 
  • Planning: DCIM facilitates capacity planning, helping data center operators understand current resource utilization and forecast future needs. 
  • Optimization: DCIM helps identify areas for improvement in energy efficiency, resource allocation, and overall operational efficiency. 

Data center sustainability

Data center sustainability is the practice of designing, building and operating data centers in a way that minimizes their environmental impact by reducing energy consumption, water usage and waste generation, while also promoting sustainable practices such as renewable energy and efficient resource management.

Hyperconverged infrastructure (HCI)

Hyperconverged infrastructure combines compute, storage and networking in a single system and is used frequently in data centers. Enterprises can choose an appliance from a single vendor or install hardware-agnostic hyperconvergence software on white-box servers.

Edge computing

Edge computing is a distributed computing architecture that brings computation and storage closer to the sources of data. That is, instead of sending all data to a centralized cloud or data center, processing occurs at or near the edge of the network, where devices like sensors, IoT devices, or local servers are located to process, analyze and retain the data. In short, it’s about processing data closer to where it’s generated, which is designed to minimize latency, reduce bandwidth usage, and enable real-time responses.

Edge AI

Edge AI is the deployment and execution of artificial intelligence (AI) algorithms on edge devices or local servers, rather than relying solely on cloud-based, more centralized, AI processing. This involves running machine learning models and AI applications directly on devices at the edge of the network. Some key aspects of edge AI include the following:

  • Local processing: AI calculations happen on the device.
  • Reduced latency: Faster responses due to not sending all data to a data center or cloud.
  • Privacy: Sensitive data can be processed locally.
  • Offline capabilities: AI functions can work even without constant internet connectivity.

Think of edge computing as the infrastructure and edge AI as the intelligence at the edge of the network.

Firewall

Network firewalls were created as the primary perimeter defense for most organizations, but since their creation the technology has spawned many iterations: proxy, stateful, web app, next-generation.

Next-generation firewall (NGFW)

Next-generation firewalls defend network perimeters and include features to inspect traffic at a fine level including intrusion prevention systems, deep-packet inspection, and SSL inspection all integrated into a single system.

Infiniband

InfiniBand is a high-performance interconnect technology designed to provide low-latency, high-bandwidth communication between servers, storage devices, and other high-performance computing (HPC) components. Although highly specialized, its performance and scalability make it a valuable tool for organizations that require the highest levels of network performance. It’s particularly well-suited for applications that require rapid data transfer, such as scientific computing, financial modeling and video rendering, and it is commonly used in HPC clusters, data centers, supercomputers and scientific research.

Ethernet

Ethernet is one of the original networking technologies, invented 50 years ago. Despite its age, the communications protocol can incorporate modern advancements without losing backwards compatibility, and Ethernet continues to reign as the de facto standard for computer networking. As artificial intelligence (AI) workloads increase, network industry giants are teaming up to ensure Ethernet networks can keep pace and satisfy AI’s high-performance networking requirements. At its core, Ethernet is a protocol that allows computers (from servers to laptops) to talk to each other over wired networks that use devices like routers, switches and hubs to direct traffic. Ethernet works seamlessly with wireless protocols, too.

Internet

The internet is a global network of computers using internet protocol (IP) to communicate globally via switches and routers deployed in a cooperative network designed to direct traffic efficiently and to provide resiliency should some part of the internet fail.

Internet backbone

Tier 1 internet service providers (ISP) mesh their high-speed fiber-optic networks together to create the internet backbone, which moves traffic efficiently among geographic regions.

IP address

An IP address is a unique set of numbers or combination of letters and numbers that are assigned to each device on an IP network to make it possible for switches and routers to deliver packets to the correct destinations.
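
Python’s standard library makes the mechanics concrete; this short sketch checks whether an address falls inside a subnet, the same decision switches and routers make when delivering packets:

    import ipaddress

    addr = ipaddress.ip_address("192.168.1.25")
    net = ipaddress.ip_network("192.168.1.0/24")

    # Does the destination fall inside a known subnet?
    print(addr in net)  # True

    # The same module handles IPv6 addresses.
    print(ipaddress.ip_address("2001:db8::1").version)  # 6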

PaaS, NaaS, IaaS and IDaaS

Platform as a service (PaaS): In PaaS, a cloud provider delivers a platform for developers to build, run and manage applications. It includes the operating system, programming languages, database and other development tools. This allows developers to focus on building applications without worrying about the underlying infrastructure.

Network as a service (NaaS): NaaS is a cloud-based service that provides network infrastructure, such as routers, switches and firewalls, as a service. This allows organizations to access and manage their network resources through a cloud-based platform.

Infrastructure as a service (IaaS): IaaS provides the building blocks of cloud computing — servers, storage and networking. This gives users the most control over their cloud environment, but it also requires them to manage the operating system, applications, and other components.

Identity as a service (IDaaS): In IDaaS, providers maintain cloud-based user profiles that authenticate users and enable access to resources or applications based on security policies, user groups, and individual privileges. The ability to integrate with various directory services (Active Directory, LDAP, etc.) and provide single sign-on across business-oriented SaaS applications is essential.

IPv6

IPv6 is the latest version of the internet protocol, expanding the number of possible IP addresses from the 4.3 billion available with IPv4 to roughly 340 undecillion (3.4 x 10^38) in order to accommodate unique addresses for every device likely to be attached to the public internet.
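
The scale is easier to grasp when computed directly from the address widths, as in this quick sketch:

    # IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
    ipv4_space = 2 ** 32    # 4,294,967,296 addresses
    ipv6_space = 2 ** 128   # roughly 3.4 x 10**38 addresses

    print(f"IPv4: {ipv4_space:,}")
    print(f"IPv6: {ipv6_space:.2e}")
    print(f"IPv6 addresses per IPv4 address: {ipv6_space // ipv4_space:,}")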

Internet of things (IoT)

The internet of things (IoT) is a network of connected smart devices providing rich operational data to enterprises. It is a catch-all term for the growing number of electronics that aren’t traditional computing devices, but are connected to the internet to gather data, receive instructions or both.

Industrial internet of things (IIoT)


The industrial internet of things (IIoT) connects machines and devices in industries. It is the application of instrumentation and connected sensors and other devices to machinery and vehicles in the transport, energy and manufacturing sectors.

Industry 4.0

Industry 4.0 blends technologies to create custom industrial solutions that make better use of resources. It connects the supply chain and the ERP system directly to the production line to form integrated, automated, and potentially autonomous manufacturing processes that make better use of capital, raw materials, and human resources.

IoT standards and protocols

There’s an often-impenetrable alphabet soup of protocols, standards and technologies around the Internet of Things, and this is a guide to essential IoT terms.

Narrowband IoT (NB-IoT)


NB-IoT is a communication standard designed for IoT devices to operate via carrier networks, either within an existing GSM bandwidth used by some cellular services, in an unused “guard band” between LTE channels, or independently.

IP


Internet protocol (IP) is the set of rules governing the format of data sent over IP networks. 

DHCP

DHCP stands for dynamic host-configuration protocol, an IP-network protocol used by a server to automatically assign IP addresses to networked devices on the fly and to share other configuration information so those devices can communicate efficiently with other endpoints.

DNS

The Domain Name System (DNS) resolves the common names of websites to their underlying IP addresses, adding efficiency and even security in the process.
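
The lookup itself is visible in a couple of lines of Python; this sketch asks the system resolver the same question a browser asks before every connection (it needs network access to run):

    import socket

    # Resolve a hostname to its IP addresses via the system's DNS resolver.
    infos = socket.getaddrinfo("example.com", None)
    addresses = sorted({sockaddr[0] for *_, sockaddr in infos})
    print(addresses)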

IPv6


IPv6 is the latest version of internet protocol that identifies devices across the internet so they can be located but also can handle packets more efficiently, improve performance and increase security.

IP address

An IP address is a number or combination of letters and numbers used to label devices connected to a network on which the Internet Protocol is used as the medium for communication. IP addresses give devices on IP networks their own identities so they can find each other.

Network management

Network management is the process of administering and managing computer networks.

Intent-based networking

Intent-based networking (IBNS) is network management that gives network administrators the ability to define what they want the network to do in plain language and have a network-management platform automatically configure devices on the network to create the desired state and enforce policies.

Microsegmentation

Microsegmentation is a way to create secure zones in networks, in data centers, and cloud deployments by segregating sections so only designated users and applications can gain access to each segment.

Software-defined networking (SDN)

Software-defined networking (SDN) is an approach to network management that enables dynamic, programmatically efficient network configuration in order to improve network performance and monitoring. It operates by separating the network control plane from the data plane, enabling network-wide changes without manually reconfiguring each device.
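
A toy sketch of the control-plane idea, with all names and config syntax invented for the example: a central controller holds the desired state and renders each device’s configuration, instead of administrators configuring every box by hand.

    # Toy SDN-flavored example: a central "controller" holds desired state
    # and renders per-device configuration. Everything here is invented.
    DESIRED_STATE = {
        "vlan": 42,
        "switches": ["leaf1", "leaf2", "spine1"],
    }

    def render_config(switch: str, state: dict) -> str:
        """Translate network-wide intent into one device's configuration."""
        return f"{switch}: create vlan {state['vlan']}; tag uplink ports"

    # One change to DESIRED_STATE propagates everywhere -- no box-by-box edits.
    for switch in DESIRED_STATE["switches"]:
        print(render_config(switch, DESIRED_STATE))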

Network security

Network security consists of the policies, processes, and practices adopted to prevent, detect, and monitor unauthorized access, misuse, modification, or denial of service on a computer network and network-accessible resources.

Identity-based networking

Identity-based networking ties a user’s identity to the networked services that user can receive.

Microsegmentation

Microsegmentation is a way to create secure zones in networks, in data centers, and cloud deployments by segregating sections so only designated users and applications can gain access to each segment.

Network access control (NAC)

Network Access Control is an approach to computer security that attempts to unify endpoint-security technology, user or system authentication, and network security enforcement.

SASE

Secure access service edge (SASE) is a network architecture that rolls software-defined wide area networking (SD-WAN) and security into a cloud service, promising simplified WAN deployment, improved efficiency and security, and appropriate bandwidth per application. SASE, a term coined by Gartner in 2019, offers a comprehensive solution for securing and optimizing network access in today’s hybrid work environment. Its core elements include the following:

Secure web gateway (SWG): Filters and inspects web traffic, blocking malicious content and preventing unauthorized access to websites.  
Cloud access security broker (CASB): Enforces security policies and controls for cloud applications, protecting data and preventing unauthorized access. 
Zero trust network access (ZTNA): Grants access to applications based on user identity and device posture, rather than relying on network location. 
Firewall-as-a-service (FWaaS): Provides a cloud-based firewall that protects networks from threats and unauthorized access. 
Unified management: A centralized platform for managing and monitoring both network and security components.  
Automation: Automated workflows and policies to simplify operations and improve efficiency. 
Analytics: Advanced analytics to provide insights into network and security performance. 

Multivendor SASE

Multivendor SASE refers to a SASE platform whose components are provided by multiple vendors. This means you’d source the different components of the platform, such as the secure web gateway (SWG), cloud access security broker (CASB), and zero-trust network access (ZTNA), from different vendors, allowing you to choose best-of-breed solutions for each component. By using a multivendor SASE platform, you avoid being tied to a single vendor and reduce the risk of vendor lock-in. On the negative side, managing multiple vendors is more time-consuming than managing a single-vendor solution, and issues among vendors can impact the performance, efficiency and reliability of the SASE solution.

Single-vendor SASE

Single-vendor SASE refers to a solution that is provided by a single vendor. This means that all of the components of the SASE platform, such as the secure web gateway (SWG), cloud access security broker (CASB), and zero-trust network access (ZTNA) are delivered by a single vendor. Advantages of single-vendor SASE include simplified management, smoother integration and enhanced support. Disadvantages include vendor lock-in, more limited capabilities compared to multivendor platforms, and higher costs for large organizations.

Network switch

A network switch is a device that operates at the Data Link layer of the OSI model (Layer 2). It takes in packets sent by devices connected to its physical ports and sends them out again, but only through the ports that lead to the devices the packets are intended to reach. Switches can also operate at the network layer (Layer 3), where routing occurs.

Open systems interconnection (OSI) reference model

The Open Systems Interconnection (OSI) reference model is a framework for structuring messages transmitted between any two entities in a network.

Power over Ethernet (PoE)

PoE is the delivery of electrical power to networked devices over the same data cabling that connects them to the LAN. This simplifies the devices themselves by eliminating the need for an electric plug  and power converter, and makes it unnecessary to have separate AC electric wiring and sockets installed near each device.

Routers

A router is a networking device that forwards data packets between computer networks. Routers operate at Layer 3 of the OSI model and perform traffic-directing functions between subnets within organizations and on the internet.

Border-gateway protocol (BGP)

Border Gateway Protocol is a standardized protocol designed to exchange routing and reachability information among the large, autonomous systems on the internet.

UDP port

UDP (User Datagram Protocol) is a communications protocol primarily used for establishing low-latency and loss-tolerant connections between applications on the internet. It speeds up transmissions by enabling the transfer of data before the receiving device agrees to the connection.
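
That “transfer before the receiver agrees” behavior is easy to see with Python’s standard library; this loopback demo sends a datagram with no handshake at all:

    import socket

    # A UDP listener on loopback; port 0 lets the OS pick a free port.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    port = server.getsockname()[1]

    # The client fires a datagram immediately -- no connection setup.
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"hello", ("127.0.0.1", port))

    data, addr = server.recvfrom(1024)
    print(f"received {data!r} from {addr}")
    client.close()
    server.close()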

Storage networking

Storage networking is the process of interconnecting external storage resources over a network to all connected computers/nodes.

Network attached storage (NAS)

Network-attached storage (NAS) is a category of file-level storage that’s connected to a network and enables data access and file sharing across a heterogeneous client and server environment.

Non-volatile memory express (NVMe)

A communications protocol developed specifically for all-flash storage, NVMe enables faster performance and greater density compared to legacy protocols. It’s geared for enterprise workloads that require top performance, such as real-time data analytics, online trading platforms, and other latency-sensitive workloads.

Solid-state drive (SSD)

A solid-state drive (SSD) is a storage device that uses flash memory to store data. Unlike traditional hard disk drives (HDDs), SSDs have no moving parts, making them faster, more reliable, and quieter.

Storage-area network (SAN)

A storage-area network (SAN) is a dedicated, high-speed network that provides access to block-level storage. SANs were adopted to improve application availability and performance by segregating storage traffic from the rest of the LAN. 

Virtualization

Virtualization is the creation of a virtual version of something, including virtual computer hardware platforms, storage devices, and computer network resources. This includes virtual servers that can co-exist on the same hardware, but behave separately.

Hypervisor

A hypervisor is software that separates a computer’s operating system and applications from the underlying physical hardware, allowing the hardware to be shared among multiple virtual machines.

Network virtualization

Network virtualization is the combination of network hardware and software resources with network functionality into a single, software-based administrative entity known as a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization.

Network function virtualization (NFV)

Network functions virtualization (NFV) uses commodity server hardware to replace specialized network appliances for more flexible, efficient, and scalable services.

Application-delivery controller (ADC)

An application delivery controller (ADC) is a network component that manages and optimizes how client machines connect to web and enterprise application servers. In general, an ADC is a hardware device or a software program that can manage and direct the flow of data to applications.

Virtual machine (VM)

A virtual machine (VM) is software that runs programs or applications without being tied to a physical machine. In a VM instance, one or more guest machines can run on a physical host computer.

VPN (virtual private network)

A virtual private network (VPN) can create secure remote-access and site-to-site connections inexpensively, is a stepping stone to software-defined WANs, and is proving useful in IoT.

Split tunneling

Split tunneling is a device configuration that ensures that only traffic destined for corporate resources goes through the organization’s VPN, with the rest of the traffic going outside the VPN, directly to other sites on the internet.

WAN

A WAN, or wide-area network, uses various links to connect organizations’ geographically distributed sites: private lines, Multiprotocol Label Switching (MPLS), virtual private networks (VPNs), wireless (cellular) and the internet. In an enterprise, a WAN could connect branch offices and individual remote workers with headquarters or the data center.

Data deduplication

Data deduplication, or dedupe, is the identification and elimination of duplicate blocks within a dataset, reducing the amount of traffic that must go on WAN connections. Deduplication can find redundant blocks of data within files from different directories, different data types, even different servers in different locations.
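
The core mechanism, fingerprinting blocks and storing each fingerprint once, fits in a short sketch. This simplified version uses fixed-size blocks; production deduplication typically uses variable-size chunking and much more careful bookkeeping.

    import hashlib

    def dedupe(data: bytes, block_size: int = 4096):
        """Split data into fixed-size blocks and keep one copy per unique hash."""
        store = {}   # hash -> block contents, stored only once
        recipe = []  # ordered list of hashes to reconstruct the original
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)
            recipe.append(digest)
        return store, recipe

    data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # redundant blocks
    store, recipe = dedupe(data)
    print(f"{len(recipe)} blocks reduced to {len(store)} unique blocks")  # 4 -> 2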

MPLS

Multi-protocol label switching (MPLS) is a packet protocol that ensures reliable connections for real-time applications, but it’s expensive, leading many enterprises to consider SD-WAN as a means to limit its use.

SASE

Secure access service edge (SASE) is a network architecture that rolls software-defined wide area networking (SD-WAN) and security into a cloud service, promising simplified WAN deployment, improved efficiency and security, and appropriate bandwidth per application. For a breakdown of SASE’s core elements, see the SASE entry in the network security section above.

SD-WAN

Software-defined wide-area networking (SD-WAN) is software that can manage and enforce the routing of WAN traffic to the appropriate wide-area connection based on policies that can take into consideration factors including cost, link performance, time of day, and application needs. Like its bigger technology brother, software-defined networking, SD-WAN decouples the control plane from the data plane.
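
In spirit, the policy engine reduces to something like the sketch below, where each application is routed over the cheapest link that still meets its requirements. The links, metrics, and policy fields are all invented for illustration.

    # Toy SD-WAN path selection. Links, metrics, and policies are invented.
    LINKS = {
        "mpls":      {"latency_ms": 20, "cost_per_gb": 5.00},
        "broadband": {"latency_ms": 45, "cost_per_gb": 0.50},
    }
    POLICIES = {
        "voip":   {"max_latency_ms": 30},   # latency-sensitive
        "backup": {"max_latency_ms": 500},  # cost-sensitive
    }

    def pick_link(app: str) -> str:
        """Choose the cheapest link that meets the app's latency requirement."""
        candidates = [name for name, m in LINKS.items()
                      if m["latency_ms"] <= POLICIES[app]["max_latency_ms"]]
        return min(candidates, key=lambda name: LINKS[name]["cost_per_gb"])

    print(pick_link("voip"))    # mpls -- broadband is too slow for voice
    print(pick_link("backup"))  # broadband -- cheapest link that qualifies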

VPN

Virtual private networks (VPNs) can create secure remote-access and site-to-site connections inexpensively, can be an option in SD-WANs, and are proving useful in IoT.

Wi-Fi

Wi-Fi refers to the wireless LAN technologies that utilize the IEEE 802.11 standards for communications. Wi-Fi products use radio waves to transmit data to and from devices with Wi-Fi software clients to access points that route the data to the connected wired network.

802.11ad

802.11ad is an amendment to the IEEE 802.11 wireless networking standard, developed to provide a multiple gigabit wireless system standard at 60 GHz frequency, and is a networking standard for WiGig networks.

802.11ay

802.11ay is a proposed enhancement to the current (2021) technical standards for Wi-Fi. It is the follow-up to IEEE 802.11ad, quadrupling the bandwidth and adding MIMO up to 8 streams. It will be the second WiGig standard.

802.11ax (Wi-Fi 6)

802.11ax, officially marketed by the Wi-Fi Alliance as Wi-Fi 6 and Wi-Fi 6E, is an IEEE standard for wireless local-area networks and the successor of 802.11ac. It is also known as High Efficiency Wi-Fi, for the overall improvements to Wi-Fi 6 clients under dense environments.

Wi-Fi 6E

Wi-Fi 6E is an extension of Wi-Fi 6 unlicensed wireless technology operating in the 6GHz band, and it provides lower latency and faster data rates than Wi-Fi 6. The spectrum also has a shorter range and supports more channels than bands that were already dedicated to Wi-Fi, making it suitable for deployment in high-density areas like stadiums.

Beamforming

Beamforming is a technique that focuses a wireless signal towards a specific receiving device, rather than having the signal spread in all directions from a broadcast antenna, as it normally would. The resulting more direct connection is faster and more reliable than it would be without beamforming.

Controllerless Wi-Fi

It’s no longer necessary for enterprises to install dedicated Wi-Fi controllers in their data centers because that function can be distributed among access points or moved to the cloud, but it’s not for everybody.

MU-MIMO

MU-MIMO stands for multi-user, multiple input, multiple output, and is wireless technology supported by routers and endpoint devices. MU-MIMO is the next evolution from single-user MIMO (SU-MIMO), which is generally referred to as MIMO. MIMO technology was created to help increase the number of simultaneous users a single access point can support, which was initially achieved by increasing the number of antennas on a wireless router.

OFDMA

Orthogonal frequency-division multiple-access (OFDMA) provides Wi-Fi 6 with high throughput and more network efficiency by letting multiple clients connect to a single access point simultaneously.

Wi-Fi 6 (802.11ax)

See 802.11ax (Wi-Fi 6) above.

Wi-Fi 7

Wi-Fi 7 is currently the leading edge of wireless internet standards, providing more bandwidth, lower latency and more resiliency than prior standards. A year ago, there was some speculation that 2024 would be the breakout year for Wi-Fi 7. While some Wi-Fi 7 gear began to emerge in 2024, it looks like 2025 will be the year for Wi-Fi 7 rollouts. 

Wi-Fi standards and speeds

Ever-improving Wi-Fi standards make for denser, faster Wi-Fi networks.

WPA3

The WPA3 Wi-Fi security standard tackles WPA2 shortcomings to better secure personal, enterprise, and IoT wireless networks.

https://www.networkworld.com/article/970224/networking-terms-and-definitions.html
HPE cuts 2,500 jobs, remains committed to Juniper buy, faces tariff issues Fri, 07 Mar 2025 00:27:05 +0000

After sharing a mostly positive revenue report for the first quarter of its 2025 fiscal year, HPE executives detailed a number of challenges the company will face in the coming months, including layoffs, a court case over its proposed buy of Juniper Networks, and the U.S. government’s tariff plan.

Revenue was $7.9 billion, up 16% from the prior-year period, CEO Antonio Neri told Wall Street analysts. Still, “We could have executed better,” Neri said.

At the same time, Neri said the company would begin implementing a cost-cutting program involving layoffs of about 2,500 employees over the next 18 months. HPE employs about 61,000 people worldwide.

“Corporate cost actions will further strengthen our financial profile,” Neri said. “These are not easy decisions to make, as they directly affect the lives of our team members. We will treat all those transitions with the highest level of care and compassion.”

“This tough decision will help streamline our organization, improve productivity and speed up decision making,” added Marie Myers, executive vice president and CFO of HPE. “We expect to achieve at least $350 million in gross savings by fiscal 2027, with about 20% of the savings achieved by the end of this year. The timing of reductions will vary by geography.”

In terms of challenges, Neri said HPE was dealing with a higher-than-normal AI server inventory driven by the fact that the company didn’t respond quickly enough to the shift to next-generation Blackwell GPUs from Nvidia. That situation has been remedied, Neri said. HPE partners with Nvidia to resell a packaged AI server offering.

AI systems backlog rose 29% quarter over quarter to $3.1 billion, and total server revenue totaled $4.29 billion, Myers said.

The company reported Intelligent Edge revenue was down 5% from the prior-year period to $1.1 billion, but Hybrid Cloud revenue was $1.4 billion, up 10% from the prior-year period.

Then there’s the matter of HPE’s proposed $14 billion buy of Juniper Networks that is now being held up by the U.S. Justice Department. A trial has been set for July 9, Neri said.

“The DOJ analysis of the market is fundamentally flawed. We strongly believe this transaction will positively change the dynamics in the networking market by enhancing competition. HPE and Juniper remain fully committed to the transaction, which we expect will deliver at least $450 million in gross annual run-rate synergies to shareholders within three years of the deal closing,” Neri said.

“We believe we have a compelling case and expect to be able to close the transaction before the end of fiscal 2025.”

Like other industry players, HPE is trying to negotiate the tariffs that the U.S. has threatened to or implemented against China, Mexico, and other countries.

“In anticipation of this decision, we have been evaluating numerous scenarios and mitigation strategies since December to assess the potential net impact,” Myers said. “We intend to leverage our global supply chain to mitigate aspects of the expected impact with pricing adjustments. Our outlook for the balance of the year reflects our best estimate of the net impact from this tariff policy,” Myers said. 

“We build products around the globe close to the customer, because obviously we need to be close to them to meet the turnaround times that they require, but we’re able to shift production from one side to the other,” Neri said. “What I don’t know, and this is a best guess for everyone, is what the overall price increases will be and how that will materialize and what that means for the market and demand in the second half of the year.”

https://www.networkworld.com/article/3840596/hpe-cuts-2500-workers-expects-juniper-buy-to-close-end-of-25-faces-tariff-issues.html
Top network and data center events 2025 Thu, 06 Mar 2025 19:33:36 +0000

Ready to travel to gain hands-on experience with new networking and infrastructure tools? Tech conferences – in person and virtual – give attendees a chance to access product demos, network with peers, earn continuing education credits, and catch a celebrity keynote or live entertainment.

Check out our calendar of upcoming network, I&O, and data center conferences, and let us know if we’re missing any of your favorites.

March 2025

April 2025

May 2025

June 2025

July 2025

August 2025

September 2025

October 2025

November 2025

December 2025

https://www.networkworld.com/article/2138316/top-network-and-data-center-events.html
Seven important trends in the server sphere Thu, 06 Mar 2025 18:58:11 +0000

The pace of change in server technology is accelerating, driven by hyperscalers but spilling over into the on-premises world as well. There are numerous overall trends, experts say, including:

  • AI everything: AI mania is everywhere, and without high-powered hardware to run it, it’s just vapor. But it’s more than a buzzword; it is a very real and measurable trend. AI servers are notable because they are decked out with high-end CPUs, GPU accelerators, and oftentimes a SmartNIC network controller. All the major players (Nvidia, Supermicro, Google, Asus, Dell, Intel and HPE) as well as smaller vendors are offering purpose-built AI hardware, according to a recent Network World article.
  • AI edge server growth: There is also a trend toward deploying AI edge servers. The global edge AI servers market is expected to be worth around $26.6 billion by 2034, up from $2.7 billion in 2024, according to a Market.US report. Considerable amounts of data are collected at the edge. Edge servers do the job of culling the useless data and sending only the necessary data back to data centers for processing. The market is rapidly expanding as industries such as manufacturing, automotive, healthcare, and retail increasingly deploy IoT devices and require immediate data processing for decision-making and operational efficiency, according to the report.
  • Liquid cooling gains ground: Liquid cooling is inching its way in from the fringes into the mainstream of data center infrastructure. What was once a difficult add-on is now becoming a standard feature, says Jeffrey Hewitt, vice president and analyst with Gartner. “Server providers are working on developing the internal chassis plumbing for direct-to-chip cooling with the goal of supporting the next generation of AI CPUs and GPUs that will produce high amounts of heat within their servers,” he said. 
  • New data center structures: Not so much a server trend as a data center trend, but data center layouts are changing to accommodate AI server hardware. AI hardware is extremely dense and runs very hot, more so than typical server systems. Data center operators of every type deploying AI hardware have to be mindful of where they place it, says Naveen Chhabra, senior analyst with Forrester Research.

    “You need to identify the zones in which you can put that power,” he said. “You can’t simply concentrate the power into a particular zone in the data center and say here is where I’m going to run all my AI applications. That may not be the most pragmatic architecture.”
  • Virtualization land grab: Broadcom’s handling of the VMware acquisition has soured many potential customers and they are looking elsewhere, says Hewitt. “I would say that some server OEMs have been moving to support additional server virtualization options since the acquisition of VMware by Broadcom. This last trend is intended to support other virtualization choices if their clients are seeking those,” he said.
  • InfiniBand starts to fade: InfiniBand will start to fade as an option for high speed interconnectivity in favor of Ethernet, Chhabra said. “The way Ethernet is evolving, expectations are that in two to three years it would have the capability to handle high speed interconnect.  Organizations would not want to maintain two different stacks of connectivity when one would be able to do the job,” he said.
  • Component shortages drive people to the cloud: Chhabra says the current component shortage and demand for data center equipment might drive people to the cloud rather than on-premises. “I can tell you that if you want, let’s say, 20 server units with Nvidia GPUs, you are going to wait at least a year, year and a half, to effectively get that shipped to your doors. And that is forcing companies to think about, for that interim, can I go source it from somewhere? And people are exploring all those options,” he said.
https://www.networkworld.com/article/3840436/seven-important-trends-in-the-server-sphere.html
Network jobs watch: Hiring, skills and certification trends Thu, 06 Mar 2025 17:42:14 +0000

Network and infrastructure roles continue to shift as enterprises adopt technologies such as AI-driven network operations, multicloud networking, zero trust network access (ZTNA), and SD-WAN. Here’s a recap of some of the latest industry research, hiring statistics, and certification trends that impact today’s network professionals, infrastructure and operations (I&O) leaders, and data center teams. Check back for regular updates.

Companies struggle to retain tech talent as IT pros switch jobs

A recent ISACA study found that nearly three-fourths (74%) of companies surveyed are concerned about retaining technology talent. The same study also found that one in three IT professionals switched jobs in the past two years.

The global ISACA Tech Workplace and Culture study surveyed 7,726 technology professionals in the fourth quarter of 2024 to learn more about career satisfaction, compensation, and more. The study found that a majority (79%) of IT pros experience stress on the job, and respondents identified the main work-related stressors as:

  • Heavy workloads: 54%
  • Long hours: 43%
  • Tight deadlines: 41%
  • Lack of resources: 41%
  • Unsupportive management: 41%

Survey respondents also cited the top reasons for leaving a job as the following:

  • Desire for higher compensation
  • Desire for better career prospects
  • Desire for more interesting work

“A robust and engaged tech workforce is essential to keeping enterprises operating at the highest level,” said Julia Kanouse, Chief Membership Officer at ISACA, in a statement. “In better understanding IT professionals’ motivations and pain points, including how these may differ across demographics, organizations can strengthen the resources and support these employees need to be effective and thrive, making strides in improving retention along the way.”

March 2025

Network pros: Upskill in AI and automation

Networking skills must advance alongside emerging technologies, according to industry watchers. Networking professionals should get training around artificial intelligence and automation to design, build, and manage the networks businesses need to succeed today. Networking pros must have the skills to enable the integration of new AI applications with the underlying AI infrastructure and enable AI to assist with networking tasks.

“Networking roles are undergoing significant evolution, with a key emphasis on the integration of emerging technologies such as AI and automation,” says Joost Heins, head of intelligence at Randstad Enterprise, a global talent solutions provider.

By developing skills in network monitoring, performance management, and cost optimization through automation and AI-powered tools, networking pros can become more adept at troubleshooting while offloading repetitive tasks such as copy-pasting configurations. Over time, they can gain the skills to better understand which behaviors and patterns to automate.

Read the full story here.

February 2025

CompTIA launches CloudNetX certification

The vendor-neutral CompTIA CloudNetX certification is now available, targeted at senior-level tech pros who want to validate that they’ve got the skills to design, engineer, and integrate networking solutions from multiple vendors in hybrid cloud environments. Professionals should have a minimum of ten years of experience in the IT field and five years of experience in a network architect role, with specific experience in a hybrid cloud environment, CompTIA recommends.

“The demand for highly skilled network architects has surged as organizations increasingly adopt hybrid cloud solutions,” said Katie Hoenicke, senior vice president, product development at CompTIA, in a statement. “For seasoned network professionals, CompTIA CloudNetX can help them enhance and validate the advanced skills needed to excel in these complex environments.”

CompTIA says the exam covers:

  • Technologies such as container networking, software-defined cloud interconnect, and generative AI for automation and scripting.
  • Network security, including threats, vulnerabilities, and mitigations; identity and access management; wireless security and appliance hardening; and zero-trust architecture.
  • Business requirements analysis to design and implement network solutions, ensuring candidates can align technical skills with organizational goals.

 Read more about CompTIA’s Xpert Series certifications here.

February 2025

Tech skills gap worries HR, IT leaders

An overwhelming majority (84%) of HR and IT leaders surveyed by technology talent-as-a-service provider Revature reported that they are concerned with finding tech talent in the coming year. The survey polled some 230 HR and IT decision-makers, and more than three-quarters (77%) said that their team has been affected by the current IT skills gap. While 56% of respondents said upskilling/reskilling is their strategy for closing the IT skills gap, many reported ongoing challenges. Among the challenges survey respondents have experienced are:

  • Finding qualified talent with the necessary skills: 71%
  • IT staffing companies can’t deliver talent quickly: 57%
  • Upskilling/reskilling in-house talent: 53%
  • Learning Management Systems are ineffective: 30%
  • Overall cost of training and staffing: 23%

When asked which technical skills are important, 29% of respondents pointed to artificial intelligence, generative AI and machine learning skills. And 75% of respondents believe they are highly prepared or prepared for the influx of new technologies such as genAI, with 63% believing genAI will positively impact training and 56% saying it will help with hiring and retention in 2025.

February 2025

CompTIA releases AI Essentials program

CompTIA recently launched its AI Essentials program that promises to help professionals develop skills in AI fundamentals.

The CompTIA AI Essentials program provides self-paced lessons with videos, activities, reflection questions, and assessments. The training will help professionals distinguish AI from other types of intelligence and computing and teach them how to communicate about AI effectively. Students will also learn how to create AI prompts and navigate the privacy and security concerns that AI technology presents.

The program uses both realistic scenarios and practice activities to experience how AI is applied in real-world situations. According to CompTIA, topics covered in the training include: AI Unveiled; Generative AI Frontiers; Engineering Effective Prompts; Balancing Innovation and Privacy; and Future Trends and Innovations in AI.

Available now, CompTIA AI Essentials costs $129 and includes a license that would be valid for 12 months. Read the full story here.

January 2025

Mixed bag for IT, tech jobs

Industry watchers continue to keep close tabs on the IT workforce as some research shows 70,900 tech jobs were cut from the economy, while other organizations report that the unemployment rate for technology workers has dropped to 2%, the lowest level in more than a year.

Janco Associates reports that 48,600 jobs were lost in 2023 along with 22,300 positions eliminated in 2024, based on U.S. Bureau of Labor Statistics data. “In 2023 and 2024, there was a major re-alignment in the way things are done within the IT function. With all the new advances in technology, many jobs have been eliminated or automated out of existence,” said M. Victor Janulaitis, CEO of Janco.

Separately, CompTIA recently reported that the tech unemployment rate dropped to 2%, while the national unemployment rate remained unchanged at 4.1% for December. CompTIA reported that the base of tech employment throughout the economy increased by a net new 7,000 positions, putting the total number of tech workers at about 6.5 million.

January 2025

CompTIA updates penetration testing cert

CompTIA recently announced it had upgraded its PenTest+ certification program to educate professionals on cybersecurity penetration testing with training on artificial intelligence (AI), scanning and analysis, and vulnerability management, among other things.

PenTest+ will help cybersecurity professionals demonstrate their competency, prove they are up to date on the latest trends, and show they can perform hands-on tasks. According to CompTIA, professionals completing the PenTest+ certification course will learn the following skills: engagement management, attacks and exploits, reconnaissance and enumeration, vulnerability discovery and analysis, and post-exploitation and lateral movement.

The PenTest+ exam features a maximum of 90 performance-based and multiple-choice questions and runs 165 minutes. Testers must receive a score of 750 or higher to pass the certification test. CompTIA recommends professionals taking the certification course and exam also have Network+ and/or Security+ certifications or equivalent knowledge, and three to four years of experience in a penetration testing job role. Pricing for the exam has yet to be determined. Read the full story here.

January 2025

CompTIA launches SecurityX cert

CompTIA this week made available its SecurityX certification, which it had announced as part of its Xpert Series of certifications. SecurityX is designed for IT professionals with multiple years of work experience as security architects and senior security engineers who want to validate their expert-level knowledge of business-critical technologies. The program will cover the technical knowledge and skills required to architect, engineer, integrate, and implement enterprise security solutions across complex environments. CompTIA expects to release another expert-level certification program, CompTIA CloudNetX, in the coming months. Read the full story here.

December 2024

CompTIA unveils starter courses for network, security certs

The new CompTIA a+ Network and CompTIA a+ Cyber courses aim to provide newcomers with the knowledge they need to start a tech career in networking and security. The skills gained will help people to train for higher-level certifications, according to CompTIA. CompTIA a+ Network includes 31 hours of instruction and teaches individuals to set up and support networks, troubleshoot issues, and manage Linux and Windows systems. CompTIA a+ Cyber covers the skills to secure devices and home networks. The price for each course is $499. Read the full story here.

A third new certification from CompTIA aims to teach newcomers tech foundations. CompTIA Tech+ is designed to provide a “spectrum of tech knowledge and hands-on skills” to students looking to ultimately work in tech-based roles, according to the provider. The Tech+ certification covers basic concepts from security and software development as well as information on emerging technologies such as artificial intelligence, robotics, and quantum computing. Specific details on the exam for the CompTIA Tech+ certification are not yet available. Read the full story here.

December 2024

AI helps drive IT job growth

Artificial intelligence is driving growth for the IT workforce at some enterprises. Nearly half (48%) of organizations polled for the Motion Recruitment 2025 Tech Salary Guide said they plan to add workers due to an increase in AI investments, compared to 19% that said they would downsize in relation to the technology. AI is also credited with transforming existing roles, with 23% of organizations shifting existing staff positions into roles that directly address AI, according to Motion Recruitment.

Also of note: The number of fully remote tech positions is decreasing as the average time spent in office has grown from 1.1 days per week to 3.4 days per week. Read the full story here.

December 2024

New OpenTelemetry certification

A new certification program aims to validate the skills needed to use OpenTelemetry, which helps IT teams gain visibility across distributed systems of cloud-native and microservices-based applications. Created by the Cloud Native Computing Foundation (CNCF) and Linux Foundation, the OpenTelemetry Certified Associate (OTCA) certification is designed for application engineers, DevOps engineers, system reliability engineers, platform engineers, or IT professionals looking to increase their abilities to leverage telemetry data across distributed systems.

Telemetry data is critical to observability technologies because it provides raw, detailed information about system behavior to provide insights beyond basic monitoring metrics. Telemetry data can also be used to enable proactive problem detection and resolution in distributed systems.
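For readers who haven't worked with it, here is a minimal sketch of what emitting trace telemetry with the OpenTelemetry Python SDK looks like; the service, span, and attribute names are illustrative, and a production setup would export spans to a collector rather than the console.

```python
# Minimal OpenTelemetry tracing sketch (requires opentelemetry-sdk).
# Service, span, and attribute names below are illustrative placeholders.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print finished spans to stdout; real deployments would export to an
# OpenTelemetry Collector or observability backend instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

def handle_order(order_id: str) -> None:
    # Each unit of work becomes a span; attributes carry the raw,
    # detailed context that distinguishes telemetry from bare metrics.
    with tracer.start_as_current_span("handle_order") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_payment"):
            pass  # a downstream service call would be traced here

handle_order("A-1001")
```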

The OTCA exam is priced at $250, which includes a 12-month window to schedule and take the exam as well as two exam attempts. Read the full story here.

November 2024

AI, cybersecurity top skill shortages for 2025

IT leaders are planning investments for 2025, and they expect to be putting budget dollars toward technologies such as artificial intelligence (AI), machine learning (ML), cybersecurity, cloud, and more. Yet while investing in innovative technologies is part of future planning, IT decision-makers also expect to struggle to staff certain roles due to an ongoing tech skills shortage.

According to a Skillsoft survey of 5,100 global IT decision-makers, the most difficult technology areas to hire for included cybersecurity/information security (38%), cloud computing (22%), and AI/ML (20%), among several others. As new technologies emerge, IT leaders must take inventory of the skills they have in-house, and this survey found that 19% of respondents believe there is a “high risk of organizational objectives not being met due to skills gaps.” Read the full story here.

November 2024

Tech employment remains flat in October

Tech employment experienced little to no change in October, indicating that by year-end there will not be enough roles available for the number of unemployed technology professionals. The U.S. Bureau of Labor Statistics (BLS) monthly jobs report shows that the unemployment rate remained mostly unchanged, and separate analysis of the findings reveals that unemployment for technology professionals also remained flat.

“The job market for IT Pros had a major shift losing an average of 4,983 jobs per month over the past 12 months,” said M. Victor Janulaitis, CEO of Janco Associates, in a statement. “According to the latest BLS data analyzed, there are now approximately 4.18 million jobs for IT Professionals in the US. Layoffs at big tech companies continued to hurt overall IT hiring. Large high-tech firms continue to lay off to have better bottom lines. Included in that group of companies that have recently announced new layoffs are Intel, Microsoft, and Google.”

According to CompTIA’s analysis of the BLS data, technology roles increased by 70,000 in October to about 6.5 million workers, and CompTIA pointed to job posting data that showed broad-based hiring across software, cybersecurity, support, data, and infrastructure. Still, CompTIA reports that tech industry employment declined by more than 4,000 jobs in October.

“Despite the higher than usual noise in this month’s labor market data, there are a number of positives to point to on the tech employment front. The data indicates employers continue a balanced approach to hiring across core tech job roles and innovation enabling roles,” said Tim Herbert, chief research officer at CompTIA, in a statement.

November 2024

Cloud certifications bring in big dollars

Skillsoft’s most recent ranking of the highest-paid IT certifications shows that IT professionals with certs in AWS, Google, and Nutanix earn more on average in the U.S.—some more than $200,000. According to Skillsoft’s tally, the top five highest-paying certifications are:

  • AWS Certified Security – Specialty: $203,597
  • Google Cloud – Professional Cloud Architect: $190,204
  • Nutanix Certified Professional – Multicloud Infrastructure (NCP-MCI) v6.5: $175,409
  • CCSP – Certified Cloud Security Professional: $171,524
  • CCNP Security: $168,159

“Overall, the IT job market is characterized by a significant imbalance between supply and demand, which continues to drive salaries higher. Our data suggests that tech professionals skilled in cloud computing, security, data privacy, and risk management, as well as able to handle complex, multi-faceted IT environments, will be well-positioned for success,” says Greg Fuller, vice president of Codecademy Enterprise. “This year’s list shows that cloud computing skills remain in high demand and can be quite lucrative for tech professionals.” Read the full story here.

October 2024

Cybersecurity skills shortage persists

There are not enough cybersecurity workers to fill the current number of open roles in the U.S. or globally as an ever-increasing threat landscape demands more security professionals. Recent data from CyberSeek shows that 265,000 more cybersecurity workers are needed to meet current staffing demands. In addition, ISC2 research shows that 90% of organizations have skill gaps within their security teams in areas that include AI/ML (34%), cloud computing security (30%), and zero trust implementation (27%). Read the full story here.

October 2024

Women in IT report gender bias in the workplace

A recent survey revealed that 71% of 327 full-time female IT respondents said they work longer hours in hopes of more quickly advancing their careers. In addition, 70% of respondents said men in IT were likely to advance their careers or receive promotions more quickly than women. Some 31% of those surveyed said they believe that men are promoted faster. And almost two-thirds said their workplaces are not doing enough to promote or achieve gender equality, according to Acronis.

To help foster more gender diversity, survey respondents said they could benefit from training and other courses, including: master classes, learning courses, and workshops (63%); networking events (58%); and memberships in professional organizations (44%). On the employer side, respondents said they believe organizations can help foster more gender equality in the workplace by offering mentorship opportunities (51%), actively hiring more diverse candidates (49%), and ensuring pay equity (49%). Read the full story here.

October 2024

Tech unemployment decreases in September

Technology occupation employment increased by 118,000 new positions in September, according to CompTIA’s analysis of recent data released by the U.S. Bureau of Labor Statistics (BLS). The job growth pushed the tech unemployment rate down to 2.5%, while the tech industry itself added 8,583 net new positions for the month.

The CompTIA Tech Jobs Report shows that job postings for future tech hiring grew to more than 516,000 active postings, including 225,000 new listings added in September. The jobs that saw the largest percentage growth in September were tech support specialists and database administrators. New hiring was driven by the cloud infrastructure, data processing and hosting, and tech services and custom software development sectors, CompTIA concluded from the BLS data.

“It was never really a question of if, but when employers were going to resume hiring,” Tim Herbert, chief research officer, CompTIA, said in a statement. “A broad mix of companies viewed recent economic developments as the green light to move forward in addressing their tech talent needs.”

October 2024

CompTIA bolsters Cloud+ certification

CompTIA has updated its Cloud+ professional certification to include DevOps, combining software development know-how with network operations experience, and other areas of expertise such as troubleshooting common cloud management issues.

The updated certification course will cover cloud architecture, design, and deployment; security; provisioning and configuring cloud resources; managing operations throughout the cloud environment life cycle; automation and virtualization; backup and recovery; high-availability; fundamental DevOps concepts; and cloud management. The program will also include expertise on technologies such as machine learning, artificial intelligence, and the Internet of Things, according to CompTIA.

“Businesses need to ensure that their teams have the skills to manage cloud and hybrid environments,” said Teresa Sears, senior vice president of product management at CompTIA, in a statement. “CompTIA Cloud+ gives team members the ability to manage complex migrations, oversee multi-cloud environments, secure data, and troubleshoot while maintaining cost-effective operations.”

Technology professionals with CompTIA Cloud+ or CompTIA Network+ certifications can further their skills and validate their knowledge with the CompTIA CloudNetX certification, which is scheduled to be released early next year and is part of the CompTIA Xpert Series, CompTIA says.

October 2024

Pearson debuts genAI certification

There’s a new genAI certification from Certiport, a Pearson VUE business. This week the provider unveiled its Generative AI Foundations certification, which is designed to equip professionals and students with the skills needed to work with genAI technologies. The certification will validate an individual’s knowledge in areas such as:

  • Understanding generative AI methods and models
  • Mastering the basics of prompt engineering and prompt refinement
  • Grasping the societal impact of AI, including recognizing bias and understanding privacy concerns

The Generative AI Foundations certification is available now through Mindhub and Certiport as well as Pearson VUE’s online testing platform, OnVUE, and in test centers within the Certiport network.

October 2024

Mixed bag for network, system admin jobs

Recent data from the U.S. Bureau of Labor Statistics (BLS) shows that while there will be growth for many IT positions between now and 2033, some network and computer systems administrator roles are expected to decline. The number of computer network architects will climb 13.4%, and computer network support specialists will see a 7.3% gain in jobs. Network and computer systems administrators will see a decline of 2.6%, however.

Overall, the market segment that BLS calls “computer and mathematical occupations” is projected to grow 12.9% between 2023 and 2033, increasing by 699,000 jobs. That makes it the second fastest growing occupational group, behind healthcare support occupations (15.2%).

Read the full story here: 10-year forecast shows growth in network architect jobs while sysadmin roles shrink

September 2024

IT employment ticks down in August

IT employment ticked down 0.05% in August, resulting in the loss of 2,400 jobs, month-over-month, according to an analysis of the high-tech employment market by TechServe Alliance. On a yearly basis, the IT job market shrank by 0.33%, with a loss of 17,500 positions. On a more positive note, the staffing company noted that engineering positions saw a more than 1% increase in a year-over-year comparison, adding 29,800 jobs in the same period.

“As the overall job market softened in August, IT employment continued to struggle to gain momentum,” said Mark Roberts, TechServe’s CEO, in a statement. “Throughout 2024, job growth in IT has been effectively flat after 23 consecutive months of job losses. I continue to see IT employment moving sideways until the fog of uncertainty lifts over the economy, the national election, and ongoing geopolitical turbulence.”

September 2024

Employee education holding back AI success

Employee education and training around AI will become increasingly critical as research reveals that a majority of employees do not know how to apply the technology to their jobs.

According to Slingshot’s 2024 Digital Work Trends Report, 77% of employees said they don’t feel completely trained, or that they have adequate training, on the AI tools offered to them by managers. For the most part, managers agree: just 27% said they feel employees are completely trained on the AI tools provided to them.

The research, conducted in Q2 2024 by Dynata and based on 253 respondents, also noted that AI skills and quality data are significant barriers to AI success. Nearly two-thirds (64%) of all respondents noted that their organization doesn’t have AI experts on their team, which is preventing their employers from offering AI tools. Another 45% pointed to the quality of data within the organization as a top reason AI tools aren’t offered at work. A third reason that AI isn’t prevalent in some workplaces is that organizations don’t have the tech infrastructure in place to implement AI tools.

“Data is top of mind for employees too when it comes to AI: 33% of employers say their company would be ready to support AI if their company’s data was combed through for accuracy, and 32% say they need more training around data and AI before their company is ready,” the report reads.

September 2024

U.S. labor market continues downward slide

The U.S. Bureau of Labor Statistics (BLS) this week released its most recent employment data that shows the ratio of job openings per unemployed worker continues to steadily decline, indicating unemployment rates will continue to rise.

According to BLS Job Openings and Labor Turnover Summary (JOLTS) data, the number of job openings hit 7.7 million on the last day of July, while hires stood at 5.5 million and separations increased to 5.4 million. Separations include quits (3.3 million) and layoffs and discharges (1.8 million) for the same timeframe. The most recent numbers hint at more bad news for unemployment in the country, according to industry watchers.

“The labor market is no longer cooling down to its pre-pandemic temperature … it’s dropped below,” an Indeed Hiring Lab report on the BLS data stated. “The labor market is past moderation and trending toward deterioration.”

For IT professionals, the BLS data shows that jobs in high tech might grow slightly by 5,000 jobs in 2024, but that will not be enough growth to offset the number of unemployed IT workers—which Janco Associates estimates is about 145,000.

“According to the latest BLS data analyzed, there are now approximately 4.18 million jobs for IT professionals in the US. Layoffs at big tech companies continued to hurt overall IT hiring. Large high-tech firms continue to lay off to have better bottom lines. Included in that group of companies that have recently announced new layoffs are Intel, Microsoft, and Google,” said M. Victor Janulaitis, CEO of Janco, in a statement. “At the same time, BLS data shows that around 81,000 IT pros were hired but that 147,000 were looking for work in June. Our analysis predicts the same will be the case for July and August.”

September 2024

CompTIA unveils data science certification program

Technology pros seeking to validate their data science competencies can now prove their knowledge with CompTIA’s DataX certification program.

Part of CompTIA’s recently launched Xpert Series, the DataX program is based on input from data scientists working in private and public sectors and focuses on the skills critical to a data scientist’s success, such as: mathematics and statistics; modeling, analysis, and outcomes; operations and processes; machine learning; and specialized applications of data science. The program is designed for data scientists with five or more years of experience, and it identifies knowledge gaps as well as provides learning content to get candidates current on expert-level topics.

“Earning a CompTIA DataX certification is a reliable indicator of a professional’s commitment to excellence in the field of data science,” said Teresa Sears, senior vice president of product management, CompTIA, in a statement. “This program validates the advanced analytics skills that help organizations enhance efficiency, mitigate risks, and maximize the value of their data assets.”

August 2024

CompTIA partners to provide IT training and certifications across Africa

CompTIA is partnering with Gebeya Inc. to provide access to CompTIA’s library of IT, networking, cybersecurity and cloud computing courses. The collaboration will allow Africans interested in technology to access IT training and certification classes via CompTIA.

Gebeya, a Pan-African talent cloud technology provider, says its mission “is to close the digital skills gap and drive digital transformation across Africa.” Partnering with CompTIA will enable aspiring technology workers in Africa to bolster their skills. “Our strategic partnership with CompTIA allows us to integrate a comprehensive skilling module within the Gebeya Talent Cloud, enabling our customers and partners to offer unmatched access to world-class IT training and certifications to their talent communities,” said Amadou Daffe, Gebeya CEO, in a statement.

CompTIA offers vendor-neutral IT certifications that cover the fundamentals of several IT functions. The organization says its library of courses can help individuals stay current with today’s in-demand technology skills as well as enhance technical competency worldwide.

“We have a shared mission to close the digital skills gap in Africa,” said Benjamin Ndambuki, CompTIA’s territory development representative for Africa, in a statement. “With Gebeya’s extensive reach and local expertise and CompTIA’s globally recognized certifications, we are confident we can empower a new generation of African tech professionals to thrive in the digital economy.”

August 2024

U.S. job growth weaker than forecast, unemployment rate creeping upward  

New data released by the U.S. Bureau of Labor Statistics (BLS) shows earlier estimates of job growth were miscalculated. The agency reported this week that there were 818,000 fewer jobs added in the 12 months ending in March 2024 than previously reported. This information, coupled with reports from Indeed that the unemployment rate continues to slowly increase, is raising recession fears.

According to Indeed’s Hiring Lab, “on a three-month average basis, the unemployment rate has risen 0.55 percentage points since its low of 3.5% in January 2023.” The adjusted BLS numbers suggest weak hiring and a cooler market than previously projected, but Indeed says there are reasons for “cautious optimism” about the U.S. labor market. For instance, the amount of available job postings and growth in wages could continue to attract more workers to the labor force.

“In addition to a relative abundance of job opportunities, another factor that may be drawing workers back to the labor force in greater numbers is persistently strong wage growth, which has slowed from recent highs but remains on par with pre-pandemic levels,” Indeed reported.

August 2024

Talent gap threatens US semiconductor industry

The semiconductor industry could be facing a major labor shortage as industry growth has outpaced the availability of skilled workers in the US. A recent report by McKinsey & Company found that public and private investment in the semiconductor industry in the US will expand to more than $250 billion by 2032 and will bring more than 160,000 new job openings in engineering and technical support to the industry. This, coupled with the steep decline of the US domestic semiconductor manufacturing workforce – which has dropped 43% from its peak employment levels in 2000 – means the industry will struggle to fill those jobs. At the current rate, the shortage of engineers and technicians could reach as high as 146,000 workers by 2029, according to the report.

August 2024

CompTIA wants to help build high-tech careers

New career resources from CompTIA are designed to teach people about specific tech-related roles and empower them to tailor a career path that best aligns with their skills and experiences.

“Too many people don’t know what it means to work in tech, so they’re scared, or they think the jobs are boring or are too hard,” said Todd Thibodeaux, president and CEO of CompTIA, in a statement. “We want to educate people about the dynamic employment opportunities available in tech; encourage them to know they can thrive in these jobs; and empower them with the knowledge and skills to succeed.”

Among the new resources is CompTIA Career Explorer, which the nonprofit organization says will help professionals tailor a career path that aligns with their workstyles and lifestyles. With the tool, jobseekers can test drive “a day in the life of specific job roles and challenge themselves with real-time, true-to-life problem solving” related to the jobs.

CompTIA Career+ will provide users with an immersive, interactive video experience that “showcases a day in the life of in-demand job roles,” according to CompTIA. This resource will feature up to 30 job roles, representing about 90% of all tech occupations.

The organization announced the new resources at its CompTIA ChannelCon and Partner Summit conference. “We want people to associate CompTIA with the competencies and skills to work in technology,” Thibodeaux said.

August 2024

Where STEM jobs pay the most

A new study conducted by Germany-based biotechnology provider Cytena shows that California provides the highest average salaries in the U.S. for those working in science, technology, engineering, and math (STEM) professions.

Cytena analyzed salary data for more than 75 STEM jobs listed on company review website Glassdoor to determine which states in the U.S. paid the most for technology talent. California ranks first with an average salary of $124,937 across all the jobs in the study, which included positions ranging from medical professionals to mathematicians and data scientists to network and software engineers. Washington state placed a close second with the average annual salary falling just below $124,000, and New York landed in third place with an average annual salary of $114,437. Following the top three, Nevada, Maryland, Massachusetts, Idaho, Hawaii, Colorado, and Connecticut rounded out the top ten states in the U.S. that pay the highest salaries for STEM-related positions.

July 2024

SysAdmin Day 2024: Celebrate your systems administrators

Friday, July 26 marks the 25th annual System Administrator Appreciation Day. Always celebrated on the last Friday in July, SysAdmin Day recognizes IT professionals who spend their days ensuring organizations and the infrastructure supporting them run smoothly. Some may say it is a thankless job, which is why Ted Kekatos created the day to honor the men and women working to install and configure hardware and software, manage networks and technology tools, help end users, and monitor the performance of the entire environment.

Network and systems admins field complaint calls and solve incidents for end users, often without hearing how much they helped their colleagues. The unsung heroes of IT, sysadmins deserve this day of recognition — they might even deserve a gesture or gift to acknowledge all the long hours they work and how much they do behind the scenes.

July 2024

NetBrain launches network automation certification program

NetBrain Technologies debuted its Network Automation Certification Program, which will recognize engineers with advanced network automation skills. The program will enable network engineers to validate their skills and communicate the skillsets to others, according to NetBrain. Initial exams for the program will be offered October 3 following the NetBrain Live Conference in Boston.

NetBrain currently lists three network automation certifications on its website:

  • NetBrain Certified Automation Associate (NCAA): This certification demonstrates a mastery of the essentials of NetBrain Automation. Engineers with this certification can design, build, and implement automation that can be scaled networkwide to achieve an organization’s automation goals.
  • NetBrain Certified Automation Professional (NCAP): This certification validates network engineers as experts with proficiencies in network automation to enhance critical troubleshooting and diagnostic workflows across network operations, security, and IT infrastructures.
  • NetBrain Certified Automation Architect (NCAE): This certification distinguishes network engineers as network automation visionaries capable of shaping a corporate NetDevOps strategy from initial concept design and rollout through operation and enablement.

July 2024

Skillsoft develops genAI skills program with Microsoft

Skillsoft announced it collaborated with Microsoft to develop its AI Skill Accelerator program, which will help organizations upskill their workforce to effectively use Microsoft AI technologies such as Copilot and Azure OpenAI as well as generative AI technologies more broadly. The goal is to drive improved business productivity and innovation using genAI applications more effectively.

“This collaboration with Microsoft is the first of many AI learning experiences we will deliver to help our customers and their talent—from everyday end users to business leaders to AI developers—acquire the skills and tools they need to succeed in the age of AI,” said Ron Hovsepian, executive chair at Skillsoft, in a statement. According to Skillsoft’s annual IT Skills and Salary report that surveyed 5,700 tech professionals worldwide, 43% of respondents say their team’s skills in AI need improvement.

Skillsoft’s AI Skill Accelerator offers a blended learning experience, including on-demand courses, one-on-one and group coaching, live instructor-led training, and hands-on practice labs. According to Skillsoft, the program will enable customers to:

  • Assess the current state of AI-related technology and leadership skills across the workforce
  • Index skills to make data-driven decisions about where talent can drive strategic business outcomes with AI
  • Develop AI skills rapidly with emerging training methods powered by Microsoft’s Azure OpenAI
  • Reassess existing talent and skills gaps through post-training benchmarks

“Microsoft and Skillsoft have a long-standing relationship and share a common goal to enable AI transformation across every area of business,” said Jeana Jorgensen, corporate vice president of worldwide learning at Microsoft, in a statement. “This learning experience is designed to empower individuals and organizations to harness the full capabilities of generative AI, Microsoft Copilot, and Microsoft’s AI apps and services.”

July 2024

Tech industry adds jobs, IT unemployment increases

Data from IT employment trackers shows that the technology industry added more than 7,500 new workers in June, while at the same time the overall unemployment rate for IT pros increased.

According to CompTIA, the tech industry added some 7,540 new workers in June, which marks the biggest monthly increase so far this year. CompTIA’s analysis of U.S. Bureau of Labor Statistics (BLS) data also shows that the positive growth was offset by a loss of 22,000 tech occupations throughout the U.S. economy. “Despite pockets of growth, the recent data indicates a degree of downward pressure on tech employment,“ said Tim Herbert, chief research officer, CompTIA, in a statement. “A combination of factors, including AI FOMO, likely contributes to segments of employers taking a wait and see approach with tech hiring.”

Separately, Janco Associates reported that the overall unemployment rate for IT pros in June grew to 5.9%, which is higher than the 4.1% U.S. national unemployment rate. Janco Associates also estimated that 7,700 jobs were added to the IT job market in May 2024. “The number of unemployed IT Pros rose from 129,000 to 147,000.  There still is a skills mismatch as positions continue to go unfilled as the available IT Pros do not have the requisite training and experience required. The BLS data shows that around 78,000 IT pros were hired but that 147,000 are looking for work,” Janco Associates reported.

July 2024

CompTIA Network+ cert gets an update

CompTIA updated its Network+ certification to include more extensive coverage of modern network environments, factors related to physical network installations, and know-how to better secure and harden networks.

Software-defined networking (SDN) and SD-WAN are covered in the updated Network+ exam, or N10-009. According to CompTIA, “the program introduces infrastructure as code (IaC), which is considered a transformative approach that leverages code for improved provisioning and support for computing infrastructure.”
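To illustrate the IaC idea itself (a generic sketch, not CompTIA exam material), the pattern is to declare the desired state of infrastructure as data and let a tool reconcile the actual environment toward it; the resource names and provisioning stubs below are hypothetical.

```python
# Generic illustration of the infrastructure-as-code pattern:
# infrastructure is declared as data, and a reconciler converges the
# actual environment toward it. Resource names here are hypothetical,
# and print() stands in for real provisioning calls.

desired_state = {
    "vm-web-01": {"type": "vm", "cpus": 4, "memory_gb": 16},
    "vm-web-02": {"type": "vm", "cpus": 4, "memory_gb": 16},
    "lb-front":  {"type": "load_balancer", "port": 443},
}

actual_state = {
    "vm-web-01": {"type": "vm", "cpus": 2, "memory_gb": 8},  # drifted
    "vm-old-99": {"type": "vm", "cpus": 1, "memory_gb": 2},  # orphaned
}

def reconcile(desired: dict, actual: dict) -> None:
    """Plan the create/update/delete actions that converge actual toward desired."""
    for name, spec in desired.items():
        if name not in actual:
            print(f"CREATE {name}: {spec}")
        elif actual[name] != spec:
            print(f"UPDATE {name}: {actual[name]} -> {spec}")
    for name in actual:
        if name not in desired:
            print(f"DELETE {name}")

reconcile(desired_state, actual_state)
```

Because the declaration lives in version control, the same benefits of code review, diffing, and repeatability apply to the infrastructure itself.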

The updated Network+ certification program also now integrates zero-trust architecture and other forms of network fortification. Read more in the full story: CompTIA updates Network+ certification

June 2024

AWS adds two AI-focused certifications

Amazon Web Services (AWS) launched two new certifications in artificial intelligence for IT professionals looking to boost their skills and land AI-related jobs. The additional know-how will help practitioners secure jobs that require emerging AI skills, which could offer a 47% higher salary in IT, according to an AWS study.

AWS Certified AI Practitioner is a foundational program that validates knowledge of AI, machine learning (ML), and generative AI concepts and use cases, according to AWS. Candidates who are familiar with using AI/ML technologies on AWS and who complete the 120-minute, 85-question exam will be able to sharpen their skills with fundamental concepts as well as use cases for AI, ML, and genAI. The exam will cover topics such as prompt engineering, responsible AI, security and compliance for AI systems, and more.

AWS Certified Machine Learning Engineer—Associate is a 170-minute exam with 85 questions that validates technical ability to implement ML workloads in production and to operationalize them. Individuals with at least one year of experience using Amazon SageMaker and other ML engineering AWS services would be good candidates for this certification. The exam will cover topics such as data preparation for ML models, feature engineering, model training, security, and more.

Registration for both new AWS certifications opens August 13.

June 2024

Cisco unveils AI-focused certification

Cisco’s new AI certification aims to help prepare IT pros to design, provision and optimize networks and systems needed for demanding AI/ML workloads. Unveiled at its Cisco Live conference in Las Vegas, the Cisco Certified Design Expert (CCDE)-AI Infrastructure certification is a vendor-agnostic, expert-level certification. With it, tech professionals will be able to design network architectures optimized for AI workloads, and “they’ll be able to do this while incorporating the unique business requirements of AI, such as trade-offs for cost optimization and power, and the matching of computing power and cloud needs to measured carbon use,” wrote Par Merat, vice president of Cisco Learning and Certifications, in a blog post about the new cert.

According to Cisco, the new CCDE-AI Infrastructure certification addresses topics including designing for GPU optimization as well as building high-performance generative AI network fabrics. Those seeking this certification will also learn about sustainability and compliance of networks that support AI. The skills will be needed across organizations, according to the Cisco AI Readiness Index, which found that 90% of organizations are investing to try to overcome AI skills gaps. Read more here: Cisco debuts CCDE-AI Infrastructure certification

June 2024

U.S. cybersecurity talent demand outpaces supply

As businesses continue to seek cybersecurity talent, the current supply of skilled workers will not meet the demand in 2024, according to recent data from CyberSeek, a data analysis and aggregation tool powered by a collaboration among Lightcast, NICE, and CompTIA.

There are only enough available workers to fill 85% of the current cybersecurity jobs throughout the U.S. economy, according to CyberSeek data, and more than 225,000 workers are needed to close the cybersecurity skills gap. The data also shows that job postings for all tech occupations declined by 37% between May 2023 and April 2024.

“Although demand for cybersecurity jobs is beginning to normalize to pre-pandemic levels, the longstanding cyber talent gap persists,” said Will Markow, vice president of applied research at Lightcast, in a statement. “At the same time, new threats and technologies are causing cybersecurity skill requirements to evolve at a breakneck pace, forcing employers, educators, and individuals to proactively anticipate and prepare for an ever-changing cyber landscape.”

Positions in the highest demand include network engineers, systems administrators, cybersecurity engineers, cybersecurity analysts, security engineers, systems engineers, information systems security officers, network administrators, information security analysts, and software engineers, according to the CyberSeek data.

“Building a robust cybersecurity presence often requires changes in talent acquisition strategies and tactics,” said Hannah Johnson, senior vice president, tech talent programs, CompTIA, in a statement. “That can include upskilling less experienced cybersecurity professionals for more advanced roles, or hiring people who demonstrate subject matter expertise via professional certifications or other credentials.”

June 2024

Average salary for IT pros surpasses $100k

Recent employment data shows that the median salary for IT professionals is now $100,399, with total compensation (including bonuses and fringe benefits) reaching $103,692. Management consulting firm Janco Associates, Inc. reported that IT salaries have risen by 3.28% in the past 12 months, even while the unemployment rate for IT workers hits 5%. Executives continue to see the biggest paychecks with total compensation packages increasing by 7.48% and median compensation reaching $184,354.

“Salary compression” is another trend Janco Associates noted. This occurs when new hires are offered salaries at the higher end of the pay range for existing positions, often getting paid more than current employees in the same roles.

Midsized enterprise companies are seeing more attrition than their large enterprise counterparts, while salaries in midsized companies are also rising faster than they are in large enterprises. Salary levels in midsized enterprises increased 5.46% versus 2.56% in larger enterprises, according to Janco Associates.

May 2024

AI, IT operations among the most in-demand IT skills

New research and survey results from IDC show that a growing lack of in-demand IT skills could be negatively impacting businesses’ bottom lines.

The IDC report, Enterprise Resilience: IT Skilling Strategies, 2024, reveals the most in-demand skills at enterprise organizations right now. Among the 811 respondents, artificial intelligence tops the list, cited by 45% of respondents, followed closely by IT operations (44%) and cloud solutions architecture (36%). Other skills in demand right now include: API integration (33%), generative AI (32%), cloud data management/storage (32%), data analysis (30%), cybersecurity/data security (28%), IoT software development (28%), and IT service management (27%).

Nearly two-thirds (63%) of the IT leaders at North American organizations said the lack of these skills has delayed digital transformation initiatives, most by an average of three to 10 months. Survey respondents detailed the negative impacts of lacking skills in their IT organizations:

  • Missed revenue goals: 62%
  • Product delays: 61%
  • Quality problems: 59%
  • Declining customer satisfaction: 59%
  • Lost revenue: 57%

Considering these survey results, IDC predicts that by 2026, 90% of organizations worldwide will feel the pain of the IT skills crisis, potentially costing up to $5.5 trillion in delays, quality issues, and revenue loss. “Getting the right people with the right skills into the right roles has never been so difficult,” said Gina Smith, PhD, research director for IDC’s IT Skills for Digital Business practice, in a statement. “As IT skills shortages widen and the arrival of new technology accelerates, enterprises must find creative ways to hire, train, upskill, and reskill their employees. A culture of learning is the single best way to get there.”

May 2024

Organizations abandon IT projects due to skills gap

A lack of specific technology skills worries IT executives, who report they will not be able to adopt new technologies, maintain legacy systems, keep business opportunities, and retain clients if the skills gap persists.

In a recent survey by online professional training provider Pluralsight, 96% of technologists said their workload has increased due to the skills gap, and 78% also reported that they abandoned projects partway through because they didn’t have employees with the necessary IT skills to successfully finish. While most organizations (78%) said their skills gap has improved since last year, survey respondents reported that cybersecurity, cloud, and software development are the top three areas in which a skills gap exists. IT executives surveyed said they worry the skills gap in their organizations will make it difficult to:

  • Adopt new technology: 57%
  • Maintain legacy systems: 53%
  • Keep business opportunities: 44%
  • Retain clients: 33%

Pluralsight surveyed 1,400 executives and IT professionals across the U.S., U.K., and India to learn more about the technical skills gap and how organizations are addressing a lack of expertise in specific technology areas.

May 2024

Lack of skills stymies network automation efforts

Network automation continues to challenge IT leaders, and one factor is a lack of skills on staff.

When research firm Enterprise Management Associates surveyed 354 IT professionals about network automation, just 18% rated their network automation strategies as a complete success, and 54% said they have achieved partial success. The remaining 28% said they were uncertain of the level of success achieved or admitted failure with their network automation projects.

More than one-fourth (26.8%) of the respondents pointed to staffing issues such as skills gaps and staff churn as a business challenge. “The most challenging thing for me is the lack of network engineers who can contribute to automation,” said a network engineer at a midmarket business services company in the EMA report. “The community is small, and it’s hard to find people who can help you solve a problem.”

April 2024

CompTIA plans AI certification roadmap

IT certification and training group CompTIA is expanding its product and program roadmap to meet the growing demand for AI-related skill sets.

AI is becoming critical to existing job functions. At the same time, new roles are starting to land on employers’ radar. “Two entirely new job roles—prompt engineering and AI systems architects—are emerging. These positions align with the AI priorities of many organizations,” said Teresa Sears, vice president of product management at CompTIA.

Millions of IT professionals will need to acquire new AI skills to meet the needs of the job market, said Thomas Reilly, CompTIA’s chief product officer, in a statement. “We intend to create a range of certifications and training offerings spanning the complete career arc, from foundational knowledge for pre-career and early career learners to advanced skills for professionals with years of workforce experience.”

February 2024

IT job growth flattened in 2023

The number of new IT jobs created in calendar year 2023 flattened with just 700 positions added, which signals continued concerns about the economy and growing demand for skills focused on emerging technologies. For comparison, 2022 saw 267,000 jobs added, with industry watchers attributing the dramatic difference to tech layoffs and other cost-cutting measures.

According to Janco Associates, despite companies adding some 21,300 jobs in the fourth quarter of 2023, the overall increase for the entire calendar year still comes to just 700 new positions. 

“Based on our analysis, the IT job market and opportunities for IT professionals are poor at best. In the past 12 months, telecommunications lost 26,400 jobs, content providers lost 9,300 jobs, and other information services lost 10,300 jobs,” said M. Victor Janulaitis, CEO at Janco, in a statement. “Gainers in the same period were computer system designers gaining 32,300 jobs and hosting providers gaining 14,000.”

January 2024

Positive hiring plans for new year

Robert Half reports that the job market will remain resilient heading into 2024. According to the talent solutions provider’s recent survey, more than half of U.S. companies plan to increase hiring in the first half of 2024. While the data is not limited to the IT sector, the research shows 57% plan to add new permanent positions in the first six months of the year while another 39% anticipate hiring for vacant positions and 67% will hire contract workers as a staffing strategy.

Specific to the technology sector, 69% of the more than 1,850 hiring managers surveyed reported they would be adding new permanent roles for those professions. Still, challenges will persist into the new year, according to Robert Half, which reported 90% of hiring managers have difficulty finding skilled professionals and 58% said it takes longer to hire for open roles compared to a year ago.

December 2023

Cisco CCNA and AWS cloud networking rank among highest paying IT certifications

Cloud expertise and security know-how remain critical in building today’s networks, and these skills pay top dollar, according to Skillsoft’s annual ranking of the most valuable IT certifications. At number one on its list of the 20 top-paying IT certifications is Google Cloud Professional Cloud Architect, with an average annual salary of $200,960.

In addition to several cloud certifications, there are five security, networking, and system architect certifications on Skillsoft’s top 20 list:

  • ISACA Certified Information Security Manager (CISM): The average annual salary for those with CISM certification is $167,396, a slight increase over last year’s $162,347.
  • ISC2 Certified Information Systems Security Professional (CISSP): This certification consistently delivers an average annual salary of $156,699, according to Skillsoft.
  • ISACA Certified Information Systems Auditor (CISA): Professionals with a CISA certification earn an average annual salary of $154,500, an increase over last year’s $142,336.
  • AWS Certified Advanced Networking-Specialty: This certification commands an annual average salary of $153,031.
  • Cisco Certified Network Associate (CCNA): This certification commands an average annual salary of $128,651.

November 2023

https://www.networkworld.com/article/2093749/network-jobs-watch-hiring-skills-and-certification-trends.html (Careers, Data Center, Networking)
Data center vacancies hit historic lows despite record construction Thu, 06 Mar 2025 17:23:21 +0000

The supply of data center space in major markets increased by 34% year-over-year to 6,922.6 megawatts in 2024, according to research from CBRE. That rate of growth is considerably higher than the 26% increase seen in 2023, the commercial real estate service provider notes. Primary markets such as Northern Virginia, Atlanta, and San Francisco had a record 6,350 megawatts (MW) under construction at the end of 2024, which is more than double the 3,077.8 MW capacity being built at the end of 2023.

Yet even with those record construction rates, the overall vacancy rate in primary markets fell to a record-low 1.9% at the end of 2024, which means colocation providers have almost no free space to rent. Only a handful of facilities with 10 MW or more are slated for delivery in 2025 and not yet leased, a reflection of how scarce inventory really is.

Because of this, the cost of capacity is going up. The average monthly asking rate for a 250-to-500-kilowatt requirement across primary markets increased by 12.6% year-over-year to $184.06 per kilowatt (kW).   
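For a sense of what that rate means in practice, here is a quick back-of-the-envelope calculation; the 300 kW requirement is an arbitrary example within the quoted band, not a CBRE figure.

```python
# Rough monthly colocation cost at CBRE's reported average asking rate.
# The 300 kW requirement is an illustrative value in the 250-500 kW band.
rate_per_kw = 184.06      # dollars per kW per month
requirement_kw = 300

monthly_cost = rate_per_kw * requirement_kw
annual_cost = monthly_cost * 12
print(f"Monthly: ${monthly_cost:,.2f}")  # Monthly: $55,218.00
print(f"Annual:  ${annual_cost:,.2f}")   # Annual:  $662,616.00
```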

Atlanta was the top growing primary market in 2024, with net new capacity of 705.8 MW, easily beating perennial leader Northern Virginia’s 451.7 MW of new capacity. This was the first time any market surpassed Northern Virginia in annual net absorption. Significant growth is also taking place in Charlotte, Northern Louisiana, and Indiana thanks to tax incentives, available land and greater power availability.

Data center headwinds

The growth comes despite considerable headwinds facing data center operators, including higher construction costs, equipment pricing, and persistent shortages in critical materials like generators, chillers and transformers, CBRE stated.

There is a considerable pricing disparity between newly built data centers and legacy facilities, reflecting the premium placed on modern, energy-efficient infrastructure. Specifically, liquid/immersion cooling is preferred over air cooling for modern server requirements, CBRE found.

On the networking side of things, major telecom companies made substantial investments in fiber in the second half of 2024, reflecting the growing need for more network infrastructure and capacity to accommodate growing demand from AI and data providers.

There have also been many notable deals recently: AT&T’s multi-year, $1 billion agreement with Corning to provide next-generation fiber, cable and connectivity solutions; Comcast’s proposed acquisition of Nitel; Verizon’s agreement to acquire Frontier, the largest pure-play fiber internet provider in the U.S.; and T-Mobile’s entry into the fiber internet market via partnerships with fiber-optic providers.

In the quarter, Meta announced plans for a 25,000-mile undersea fiber cable that would connect the U.S. East and West coasts with global markets across the Atlantic, Indian and Pacific oceans. The project would mark the first privately owned and operated global fiber cable network.

Data center outlook

CBRE made a series of projections for the coming years:

  • Transitioning from coal-generated to renewable-energy power generation will continue to gain traction in 2025. On-site solar, wind, geothermal and nuclear generation are all being evaluated, with natural gas being an interim alternative to coal.
  • Power will remain the number one priority for selection of greenfield development sites.
  • Flood plains will be less of a concern for data center development sites, so long as they have ample power availability and raised construction to avoid flood damage.
  • Despite record construction activity, the data center market will struggle to keep pace with demand, leading to higher utilization rates in existing facilities and tighter vacancy rates.
  • Supply chain challenges will persist and keep large project timelines over three years.
  • Applications on cell phones, smart devices, laptops and desktops are processing, storing and computing large amounts of data, driving growth in network demand as well as demand for data center capacity.
https://www.networkworld.com/article/3840367/data-center-vacancies-hit-historic-lows-despite-record-construction.html (Data Center, Data Center Management)
Microsoft’s Veeam partnership signals data resiliency market shift Thu, 06 Mar 2025 15:11:22 +0000

Veeam recently announced a multi-faceted expansion of its partnership with Microsoft. The first facet is that Microsoft is making an equity investment in Veeam. Although no terms of the financial arrangement were given, this does follow a $2 billion round in late 2024 in which Veeam was valued at $15 billion. There have been rumors swirling regarding the Kirkland, Wash.-based company going public, but Veeam is nearing $2 billion in revenue and maintains healthy margins so there’s no urgency to join the capital markets.

Also, as part of the partnership, Veeam will integrate Microsoft AI services and machine learning (ML) capabilities into its data resilience platform, Veeam Data Cloud. The platform, hosted on Microsoft Azure, uses zero trust and isolated Azure Blob Storage to secure backups. It combines software, infrastructure, and storage into an all-in-one cloud service. This allows organizations to bring down costs and simplify data management.

Three Veeam Data Cloud offerings will benefit directly from the Microsoft integration: Data Cloud for Microsoft 365, a Microsoft 365 backup service with 23.5 million users; Data Cloud Vault, a cloud-based service that provides zero trust security and offsite backups in Azure; and the new Entra ID Solutions, designed for organizations operating in cloud environments to verify and protect user identities.

AI plays a key role in helping organizations detect suspicious activity early, preventing potential security breaches. It can identify weaknesses in backup systems, so organizations can address any vulnerabilities before they cause major problems. Also, AI automates manual work and speeds up the data recovery process, helping businesses restore lost or damaged data quickly.
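The kind of early-warning analysis described here can be as simple as flagging statistical outliers in backup telemetry. The sketch below is a generic illustration of that idea, not Veeam's actual detection logic, and the sample figures are invented.

```python
# Generic illustration of anomaly detection on backup telemetry: flag a
# nightly backup whose size deviates sharply from recent history, a common
# early signal of ransomware (encrypted data deduplicates and compresses
# poorly). Illustrative only; not Veeam's detection logic.
from statistics import mean, stdev

recent_backup_gb = [512, 520, 508, 515, 511, 518, 509]  # normal nights
tonight_gb = 934

mu, sigma = mean(recent_backup_gb), stdev(recent_backup_gb)
z_score = (tonight_gb - mu) / sigma

if abs(z_score) > 3:
    print(f"ALERT: backup size z-score {z_score:.1f}; investigate possible compromise")
else:
    print("Backup size within normal range")
```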

This partnership signals a shift in data resiliency and its importance. Historically, a company like Microsoft would build software and customers would use a product like Veeam to protect it and provide backup services when required. However, that’s a very reactive approach to data resiliency, and a lot of things must go right for the backup to be restored quickly enough to avoid impacting business operations. Also, many customers have assumed that with cloud services, the cloud providers take care of backing up data, but that’s not true. Every SaaS provider assumes a shared risk model where the customers must take steps to back up and protect their data.

With this partnership, Microsoft and Veeam are co-innovating to infuse AI into the Microsoft Cloud to protect against threats. Using AI, Veeam can predict where an attack might happen and protect against it before it occurs. This becomes increasingly important in the AI era as AI is fueled by data and if the data is not protected, the output from AI can be wrong and lead to bad decisions. This is a case of “fight fire with fire” in which companies need to use AI to protect the data that fuels AI.

In my conversations with IT and business leaders, I’ve seen a significant increase in interest in re-thinking data resilience. It’s always been important, but the Russia-Ukraine war put a magnifying glass on where data was stored and how fast it could be recovered. Since then, the growth of ransomware, the CrowdStrike outage, and other events have only added fuel to the data resiliency fire. In fact, it was one of the top topics of discussion at RSA 2024, and I expect that to also be the case at the event later this year.

This has also been reflected in IT budget allocation. While most companies have kept IT budgets flat or seen a moderate increase, I consistently see more money allocated to security, ransomware recovery, and data resilience. Historically, these have fallen under the domain of IT priorities, but those areas of spending are rapidly becoming a board-level issue.

I look at this partnership as a win-win-win. Microsoft gets a trusted partner in Veeam around which it can build a better data resiliency portfolio. For Veeam, protecting and recovering Microsoft workloads is what it does best. This partnership adds to the strong tailwinds the company currently has. Last year, it unseated Dell as the top share player in backup and recovery, it has a great partnership with Salesforce, and now Microsoft sees it as such a strong partner that it invested in the company.

The big winner, though, is customers. Microsoft software is used by almost every company, and being able to protect and recover data as needed brings a level of assurance to move forward with AI projects.

https://www.networkworld.com/article/3840288/microsofts-veeam-partnership-signals-data-resiliency-market-shift.html (Data and Information Security, Data Privacy)
AI driving a 165% rise in data center power demand by 2030 Wed, 05 Mar 2025 12:59:45 +0000

Global power demand from data centers will increase 50% by 2027 and by as much as 165% by the end of the decade compared with 2023, according to a recent report from Goldman Sachs Research.

The industry is shrugging off the potential of DeepSeek – which claimed large-scale AI modeling capabilities from low-end hardware – and is instead focused on efficiency, wrote James Schneider, a senior equity research analyst covering US telecom, digital infrastructure, and IT services, in the data center demand report. Still, several questions remain about DeepSeek’s training, infrastructure, and ability to scale, Schneider stated.

 “In the long run, if we see efficiency driving lower capex levels (from either hyperscalers or new investment plans from new players), this would mitigate the risk of long-term market oversupply we see in 2027 and beyond – which we think is an important consideration that could drive more durability and less cyclicality in the data center market,” Schneider stated.

On the demand side for data centers, large hyperscale cloud providers and other corporations are building increasingly bigger large language models (LLMs) that must be trained on massive compute clusters.

Meanwhile, hyperscale cloud companies, data center operators, and asset managers are deploying large amounts of capital to build new high-capacity data centers. But the balance of data center supply and demand is forecast by Goldman Sachs Research to tighten in the coming years, according to the report.

This means data center occupancy for this infrastructure is projected to increase from around 85% in 2023 to a potential peak of more than 95% in late 2026. That will likely be followed by a moderation starting in 2027, as more data centers come online and AI-driven demand growth slows, Schneider stated.

Goldman Sachs Research estimates the power usage by the global data center market to be around 55 gigawatts, which breaks down as 54% for cloud computing workloads, 32% for traditional line of business workloads and 14% for AI.

By 2027, that number jumps to 84 GW, with AI growing to 27% of the overall market, cloud dropping to 50%, and traditional workloads falling to 23%, Schneider stated.
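Applying those percentages to the totals gives a rough sense of the absolute shift; the sketch below is simple arithmetic on the figures quoted above, not additional Goldman Sachs data.

```python
# Workload mix applied to Goldman Sachs Research's estimated totals (GW).
for year, total_gw, mix in [
    (2023, 55, {"cloud": 0.54, "traditional": 0.32, "ai": 0.14}),
    (2027, 84, {"cloud": 0.50, "traditional": 0.23, "ai": 0.27}),
]:
    breakdown = {k: round(total_gw * share, 1) for k, share in mix.items()}
    print(year, breakdown)

# 2023 {'cloud': 29.7, 'traditional': 17.6, 'ai': 7.7}
# 2027 {'cloud': 42.0, 'traditional': 19.3, 'ai': 22.7}
# AI-driven demand roughly triples while the overall total grows about 53%.
```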

Goldman Sachs Research estimates that there will be around 122 GW of data center capacity online by the end of 2030, and the density of power use in data centers is likely to grow as well, from 162 kilowatts (kW) per square foot to 176 kW per square foot in 2027, thanks to AI, Schneider stated.

 “Data center supply — specifically the rate at which incremental supply is built — has been constrained over the past 18 months,” Schneider wrote. These constraints have arisen from the inability of utilities to expand transmission capacity because of permitting delays, supply chain bottlenecks, and infrastructure that is both costly and time-intensive to upgrade.

The result is that power demand from data centers will require additional utility investment, to the tune of about $720 billion in grid spending through 2030. Data center operators are then subject to the pace of public utilities, which move much more slowly than hyperscalers.

“These transmission projects can take several years to permit, and then several more to build, creating another potential bottleneck for data center growth if the regions are not proactive about this given the lead time,” Schneider wrote.

https://www.networkworld.com/article/3838986/ai-driving-a-165-rise-in-data-center-power-demand-by-2030.html (Data Center, Data Center Design, Energy Efficiency)
Top data storage certifications to sharpen your skills Wed, 05 Mar 2025 08:00:00 +0000

Enterprise data storage skills are in demand, and that makes storage certifications all the more valuable to professionals and to the organizations looking for people with those qualifications.

“As organizations grapple with exponential data growth and complex hybrid cloud environments, IT leaders and professionals who can effectively manage, optimize and secure data storage are indispensable,” says Gina Smith, research director at IDC and lead for its IT Skills for Digital Business practice. “From software-defined solutions to object storage and data protection strategies, the ability to navigate modern storage technologies can impact an organization’s competitiveness.”

No longer are storage skills a niche specialty, Smith says. “They are now a fundamental requirement for driving digital transformation and innovation in the enterprise.”

Both vendor-specific and general storage certifications are valuable, Smith says. “Their relative worth depends on a number of factors and will be different for different sectors, company sizes, regions and active job roles,” she says.

Vendor-specific certifications typically are more valuable for specialized roles that work with particular storage technologies, Smith says. “They are especially useful for companies that partner with or resell specific storage vendor products,” she says. “Vendor certifications support the creation of in-depth knowledge of particular storage systems and solutions.”

General storage certifications, by contrast, provide a broader understanding of storage concepts across different platforms, Smith notes. They typically are more valuable for roles that require versatility across multiple storage environments, she says, and they are beneficial for consultants or those in management positions overseeing diverse storage infrastructures.

Here are some of the leading data storage certifications, along with information on cost, duration of the exam, skills acquired, and other details.

Storage vendor certifications

HPE ASE – Storage Solutions

The HPE ASE – Storage Solutions certification validates that an individual can identify, recommend, and explain HPE Enterprise Storage Solutions architectures and technologies, and translate business requirements into storage solution designs that support applications and data across physical, virtual and cloud environments.

It also demonstrates the ability to design HPE Backup Solutions, including the right backup, recovery, and archive (BURA) strategies for customer scenarios. The certification validates that an individual can differentiate and apply enterprise storage architectures and technologies; identify enterprise storage and backup opportunities and then plan, design, and size the right HPE Storage Solution; and design, identify, and recommend HPE Storage Enterprise and Backup Solutions, including proof-of-concepts.

  • Organization: HPE
  • Skills acquired: Identify, recommend, and explain HPE Enterprise Storage Solutions architectures and technologies.
  • Price: $145-$260
  • Exam duration: 120 minutes
  • How to prepare: Typical candidates for this certification are IT professionals with at least one to three years of experience in storage technologies.

HPE has a number of other storage-focused certifications. Read more about HPE’s storage training options here.

NetApp Certified Data Administrator

A NetApp Certified Data Administrator gains skills in managing and administering NetApp storage systems, specifically with the ONTAP operating system.

The certification covers proficiency in configuring, maintaining, and optimizing storage environments for data protection, high availability, and efficient data management across multi-protocol environments, demonstrating a deep understanding of concepts such as data replication, snapshotting, and performance tuning.

  • Organization: NetApp
  • Skills acquired: Management and administration of NetApp storage systems, specifically with the ONTAP operating system. Proficiency in configuring, maintaining, and optimizing storage environments for data protection, high availability, and efficient data management across multi-protocol environments.
  • Price: $150
  • Exam duration: 1.5 hours
  • How to prepare: At least six to 12 months of field experience implementing and administering NetApp data storage solutions in multiprotocol environments. Knowledge of how to implement HA controller configurations, SyncMirror software for rapid data recovery, or ONTAP solutions with either single- or multi-node configurations.

NetApp has many other storage-focused certifications. Read more about NetApp’s training and certification options here.

Pure Storage Certified Data Storage Associate

The Pure Storage Certified Data Storage Associate highlights a candidate’s knowledge of a complete multi-vendor enterprise storage solution. This includes demonstrating networking, virtualization, container, storage, cloud, data protection, and host knowledge to operate industry-recognized storage technology and solutions. Courses are currently available at no charge through the Pure Academy with a login ID and password.

  • Organization: Pure Storage
  • Skills acquired: Knowledge of a multi-vendor enterprise storage solution, including networking, virtualization, container, storage, cloud, data protection, and host.
  • Price: No charge
  • Exam duration: 120 minutes
  • How to prepare: Minimum of six to 12 months of general IT and storage knowledge. The Pure Storage Certified Data Storage Associate Study Guide is a resource to help prepare for the exam. It includes recommended study resources and sample questions.

Pure has a wide range of storage-focused certifications. Read more about Pure’s training and certification options here.

SNIA Certified Storage Professional (SCSP)

The Certified Storage Professional certification from the Storage Networking Industry Association (SNIA) demonstrates a foundational understanding of storage networking concepts, including basic components, protocols, data protection methods, and best practices for managing storage systems across various vendor technologies, qualifying holders for entry-level storage administration roles.

The SNIA Certified Storage Professional certification is vendor-neutral, not tied to any specific storage solution. This allows it to be relevant across different storage environments. The exam primarily covers core storage networking concepts such as storage protocols, data protection techniques, and basic storage system management.

  • Organization: Storage Networking Industry Association
  • Skills acquired: Foundational understanding of storage networking concepts, including basic components, protocols, data protection methods, and best practices for managing storage systems.
  • Price: $220
  • Exam duration: 90 minutes
  • How to prepare: Study the fundamental concepts of storage networking, including components, protocols, data protection methods, capacity planning, and basic storage administration, primarily through the SNIA Storage Foundations course materials, practice exams, and recommended reading from the SNIA website.

Hitachi Vantara Qualified Professional-Ops Center Automation

Hitachi Vantara Qualified Professional-Ops Center Automation is designed mainly for Hitachi Vantara customers who use, administer, and operate Hitachi storage systems with Hitachi Ops Center. The test validates that the successful candidate can automate data center infrastructure management tasks using Hitachi Ops Center Automator. It covers the user interface, creation and customization of tasks, REST API usage, and the command-line interface, according to Hitachi Vantara.

  • Organization: Hitachi Vantara
  • Skills acquired: Automating data center infrastructure management tasks using Hitachi Ops Center Automator.
  • Price: $100
  • Exam duration: 60 minutes
  • How to prepare: Knowledge of all storage-related operations from an end-user perspective, including planning, allocating, and managing storage and architecting storage layouts.

Read more about Hitachi Vantara’s training and certification options here.

Certifications that bundle cloud, networking and storage skills

AWS Certified Solutions Architect – Professional

The AWS Certified Solutions Architect – Professional certification from leading cloud provider Amazon Web Services (AWS) helps individuals showcase advanced knowledge and skills in optimizing security, cost, and performance, and automating manual processes. The certification is a means for organizations to identify and develop talent with these skills for implementing cloud initiatives, according to AWS.

The ideal candidate has the ability to evaluate cloud application requirements, make architectural recommendations for deployment of applications on AWS, and provide expert guidance on architectural design across multiple applications and projects within a complex organization, AWS says. Certified individuals report increased credibility with technical colleagues and customers as a result of earning this certification, it says.

  • Organization: Amazon Web Services
  • Skills acquired: Helps individuals showcase skills in optimizing security, cost, and performance, and automating manual processes
  • Price: $300
  • Exam duration: 180 minutes
  • How to prepare: The recommended experience prior to taking the exam is two or more years of experience in using AWS services to design and implement cloud solutions

Cisco Certified Internetwork Expert (CCIE) Data Center

The Cisco CCIE Data Center certification enables individuals to demonstrate advanced skills to plan, design, deploy, operate, and optimize complex data center networks. They will gain comprehensive expertise in orchestrating data center infrastructure, focusing on seamless integration of networking, compute, and storage components.

Other skills gained include building scalable, low-latency, high-performance networks that are optimized to support artificial intelligence (AI) and machine learning (ML) workloads; and the ability to use automated solutions, programming concepts, and automated tools for programmable data center fabrics. The program includes two required exams.

  • Organization: Cisco
  • Skills acquired: Plan, design, deploy, operate, and optimize complex data center networks
  • Price: $400 for first exam, $1,600 for second
  • Exam duration: 120 minutes for first, eight hours for second
  • How to prepare: No formal prerequisites necessary

Microsoft Certified: Azure Administrator Associate

Candidates for the Microsoft Certified: Azure Administrator Associate certification should have subject matter expertise in implementing, managing, and monitoring an organization’s Azure environment, including virtual networks, storage, compute, security, and governance. They should be familiar with operating systems, networking, servers, and virtualization.

The program measures skills including management of Azure identities and governance, implementation and management of storage, deployment and management of Azure compute resources, implementation and management of virtual networking, and monitoring and maintaining of Azure resources.

  • Organization: Microsoft
  • Skills acquired: Management of Azure identities and governance, implementation and management of storage, deployment and management of Azure compute resources.
  • Price: $165
  • Exam duration: 100 minutes
  • How to prepare: Take training in implementing, managing, and monitoring an organization’s Azure environment, including virtual networks, storage, compute, security, and governance.

VMware Certified Data Center Virtualization

The VMware Certified Professional – Data Center Virtualization 2024 [v2] (VCP-DCV 2024) certification validates an individual’s knowledge and skills with VMware vSphere solutions, including virtual machines, networking, and storage. Job roles associated with this certification include virtualization administrators, system engineers, and consultants.

  • Organization: Broadcom
  • Skills acquired: How to use VMware vSphere solutions, including virtual machines, networking, and storage.
  • Price: $250
  • Exam duration: 130 minutes
  • How to prepare: Gain experience with VMware’s vSphere 7.x or vSphere 8.x platforms.
]]>
https://www.networkworld.com/article/3837739/top-data-storage-certifications-to-sharpen-your-skills.html 3837739Careers, Certifications, Enterprise Storage, Hybrid Cloud
10 Linux commands for testing connectivity and transfer rates Wed, 05 Mar 2025 03:52:33 +0000

There are quite a few tools that can help test your connectivity on the Linux command line. In this post, we’ll look at a series of commands that can help estimate your connection speed, test whether you can reach other systems, analyze connection delays, and determine whether particular services are available. This post provides intros and example output from these commands:

  • ping
  • traceroute
  • mtr
  • ncat
  • speedtest
  • fast
  • nethogs
  • ss
  • iftop
  • ethtool

ping

The ping command is the simplest and most often used command for doing basic connectivity testing. It sends out packets called “echo requests” — packets that request a response. The command looks for the responses and displays them along with how long each response took and then reports what percentage of the requests were answered.

Response times will largely depend on how many routers the requests need to cross and whether your network is congested. Pinging a local system might look like this. Note the small number of milliseconds required for each response and the 0% packet loss.

$ ping 192.168.0.11
PING 192.168.0.11 (192.168.0.11) 56(84) bytes of data.
64 bytes from 192.168.0.11: icmp_seq=1 ttl=64 time=4.36 ms
64 bytes from 192.168.0.11: icmp_seq=2 ttl=64 time=5.86 ms
64 bytes from 192.168.0.11: icmp_seq=3 ttl=64 time=2.87 ms
^C
--- 192.168.0.11 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 2.867/4.361/5.859/1.221 ms

On Linux systems, the pings will continue until you press Ctrl+C to stop them. Some systems, including Windows, issue four pings and then stop on their own. A remote system will take considerably longer to respond. Zero packet loss is always a good sign and, even when you’re pinging a remote system, is generally what you should expect to see unless there is a problem.
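
If you’d prefer that ping stop on its own, the -c option sends a fixed number of echo requests and then exits with summary statistics. For example, to send four requests (mirroring the Windows default):

$ ping -c 4 192.168.0.11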

The ping command provides an easy way to check network connectivity on a home network. Send requests to a publicly accessible system, and you should see 0% packet loss. If you’re experiencing problems, a ping command is likely to show significant packet loss.

$ ping 180.65.0.22
PING 180.65.0.22 (180.65.0.22) 56(84) bytes of data.
64 bytes from 180.65.0.22: icmp_seq=1 ttl=46 time=362 ms
64 bytes from 180.65.0.22: icmp_seq=2 ttl=46 time=305 ms
64 bytes from 180.65.0.22: icmp_seq=3 ttl=46 time=276 ms
64 bytes from 180.65.0.22: icmp_seq=4 ttl=46 time=257 ms
^C
--- 180.65.0.22 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 257.172/300.119/362.431/39.775 ms

traceroute

Traceroute is a much more complex command. It runs a series of checks to see how long each hop between routers takes and reports the results. If the overall check takes a long time, it might be that one or two of the hops are congested. If the reported results descend into a sequence of asterisks, the routers at those hops are not responding to the packet type being used (UDP by default on Linux systems).

The traceroute command uses a clever technique to time each hop. It sends successive packets with steadily increasing time-to-live (TTL) values; each router along the route decrements the TTL, and the router at which it reaches zero sends back an error message. This allows traceroute to report the duration of time between each hop.

Here’s an example of using traceroute to reach a local system (a single hop and a quick response):

$ traceroute 192.168.0.11
traceroute to 192.168.0.11 (192.168.0.11), 30 hops max, 60 byte packets
1 192.168.0.11 (192.168.0.11) 9.228 ms 12.797 ms 12.782 ms

This next traceroute command tries to reach a remote system, but is unable to report on each hop (those showing asterisks) because the routers at some hops don’t respond to the type of packet used. This is not unusual.

The default maximum number of hops for traceroute is 30. Notice that this setting is displayed in the first line of output. It can be changed, however, using the -m argument (e.g., traceroute -m 50 distant.org).

$ traceroute www.amazon.com
traceroute to www.amazon.com (99.84.218.165), 30 hops max, 60 byte packets
1 router (192.168.0.1) 1.586 ms 3.842 ms 4.074 ms
2 10.226.32.1 (10.226.32.1) 27.342 ms 28.485 ms 29.529 ms
3 10.17.1.25 (10.17.1.25) 30.769 ms 31.584 ms 32.379 ms
4 10.17.0.221 (10.17.0.221) 33.126 ms 34.390 ms 35.284 ms
5 10.17.0.226 (10.17.0.226) 37.000 ms 38.837 ms 40.808 ms
6 204.111.0.145 (204.111.0.145) 44.083 ms 42.671 ms 42.582 ms
7 99.82.178.164 (99.82.178.164) 44.254 ms 30.422 ms 31.666 ms
8 * * *
9 * * *
10 * * *
11 52.93.40.225 (52.93.40.225) 41.548 ms 52.93.40.223 (52.93.40.223) 41.808 ms 52.93.40.225 (52.93.40.225) 43.326 ms
12 * * *
13 * * *
14 * * *
15 * * *
16 * * *
17 server-99-84-218-165.iad79.r.cloudfront.net (99.84.218.165) 44.862 ms 44.746 ms 44.713 ms

mtr

The mtr (my traceroute) command combines the functionality of the ping and traceroute commands. In the example below, the mtr command is evaluating the connectivity between the local system and the default router. Notice that it reports on the percentage of packets lost and the number sent.

fedora (192.168.0.19) -> 192.168.0.1 (192.168.0.1)                                                       2025-02-21T14:16:27-0500
Keys: Help Display mode Restart statistics Order of fields quit
                                  Packets               Pings
 Host                     Loss%   Snt    Last   Avg   Best   Wrst  StDev
 1. _gateway               0.0%    13     3.3   3.5    3.0    7.1    1.1

The fields reported by the mtr command include:

  • Loss %: percentage of packets lost
  • Snt: number of packets sent
  • Last: latency of the last packet sent
  • Avg: average latency of packets sent
  • Best: fastest response
  • Wrst: slowest response
  • StDev: standard deviation

Keep in mind that network latency is affected by several factors, including distance, bandwidth and network congestion levels.
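
mtr also has a non-interactive report mode that is handy for capturing results in scripts. A minimal example, reusing the gateway address from above (the -r flag requests a report, and -c sets the number of pings):

$ mtr -r -c 10 192.168.0.1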

ncat

The ncat command is a feature-rich network utility for reading and writing data across networks from the command line, but in the form shown below, it simply determines whether you can connect to a particular service. It was originally written for the nmap (network mapper) project.

The -z (zero-I/O) option tells the command to probe a particular port on a remote system without sending any data, so we can determine whether the related service is available without actually having to make use of the connection.

$ nc -z -v 192.168.0.11 22
Ncat: Version 7.80 ( https://nmap.org/ncat )
Ncat: Connected to 192.168.0.11:22.
Ncat: 0 bytes sent, 0 bytes received in 0.02 seconds.

The command above tells us that ssh is responding on the specified system, but it doesn’t try to log in or run a remote command. Checking port 80 on the same system shows the opposite: no web server is running.

$ nc -z -v 192.168.0.11 80
Ncat: Version 7.80 ( https://nmap.org/ncat )
Ncat: Connection refused.

We get a predictably different response when we check on a popular website:

$ ncat -z -v 205.251.242.103 80
Ncat: Version 7.93 ( https://nmap.org/ncat )
Ncat: Connected to 205.251.242.103:80.
Ncat: 0 bytes sent, 0 bytes received in 0.10 seconds.

NOTE: The ncat command can be invoked as either nc or ncat.
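
When probing hosts that may be firewalled, it helps to cap how long ncat waits for a connection. The -w option sets a connect timeout; the two-second value and target port below are illustrative:

$ nc -z -v -w 2 192.168.0.11 443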

speedtest

The speedtest tool tests the speed of your connectivity with your Internet provider. Note that it is not at all uncommon for upload speeds to be considerably slower than download speeds. Internet providers understand that most people download considerably more data than they upload. The speedtest tool will highlight any differences. In the test below, the download speed is nearly nine times the upload speed.

$ speedtest

Speedtest by Ookla

Server: Winchester Wireless - Winchester, VA (id = 21859)
ISP: Shentel Communications
Latency: 25.86 ms (0.96 ms jitter)
Download: 10.34 Mbps (data used: 10.7 MB)
Upload: 1.00 Mbps (data used: 1.1 MB)
Packet Loss: 0.0%
Result URL: https://www.speedtest.net/result/c/bb2e002a-d686-4f9c-8f36-f93fbcc9b752

Command results will differ somewhat from one test to the next.
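
If you want to log results over time, recent versions of Ookla’s CLI can emit machine-readable output; whether this flag is available depends on the version installed:

$ speedtest -f json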

You can also use speedtest through a browser by going to https://www.speedtest.net/. Click GO and you will see a moving graphical display of your speed measurements.

Related: How to use speedtest: 2-Minute Linux Tips

fast

You can also install a tool called fast that checks your download speed a number of times and then reports the average speed. It displays download speed only and uses the Netflix speed-testing service (Fast.com).

$ fast
    10.08 Mbps

The fast tool can be installed using these commands:

$ wget https://github.com/ddo/fast/releases/download/v0.0.4/fast_linux_amd64
$ sudo install fast_linux_amd64 /usr/local/bin/fast
$ which fast
/usr/local/bin/fast

nethogs

The nethogs command takes an entirely different approach from the commands explained above. It groups bandwidth usage by process to help you pinpoint particular processes that might be causing a slowdown in your network traffic. In other words, it helps you identify the “net hogs,” so it is aptly named.
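
The command requires root privilege, and you can point it at a specific interface; a typical invocation (the interface name here is illustrative) looks like this:

$ sudo nethogs wlp1s0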

NetHogs version 0.8.6
    PID  USER  PROGRAM                    DEV      SENT     RECEIVED
 127832  nemo  /usr/lib/firefox/firefox   enp0s2   11.120   432.207 KB/sec
 413216  shs   sshd: shs@pts/1            enp0s2    0.246     0.059 KB/sec
    696  root  /usr/sbin/NetworkManager   enp0s2    0.000     0.000 KB/sec
      ?  root  unknown TCP                           0.000     0.000 KB/sec

  TOTAL                                              0.246   432.266 KB/sec

In the output shown, the process using the bulk of the bandwidth is quite obvious.

Related: Who’s hogging the network? Bandwidth usage on a Linux system

ss

The ss command (which stands for “socket statistics”) is a potent tool for inspecting and displaying detailed information about network sockets on a Linux system. When no options are used, it shows established sockets that are not listening. With the -a option, ss will list all sockets, so expect a lot of output. Adding -t restricts the display to TCP sockets; with both -a and -t, you’ll see something like this:

$ ss -a -t
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.1:ipp 0.0.0.0:*
LISTEN 0 128 0.0.0.0:ssh 0.0.0.0:*
LISTEN 0 4096 127.0.0.54:domain 0.0.0.0:*
LISTEN 0 5 127.0.0.1:dey-sapi 0.0.0.0:*
LISTEN 0 4096 0.0.0.0:hostmon 0.0.0.0:*
LISTEN 0 4096 127.0.0.53%lo:domain 0.0.0.0:*
LISTEN 0 5 127.0.0.1:44321 0.0.0.0:*
TIME-WAIT 0 0 192.168.0.19:42928 109.61.91.195:https
ESTAB 0 64 192.168.0.19:ssh 192.168.0.8:62656
LISTEN 0 5 [::1]:dey-sapi [::]:*
LISTEN 0 128 [::]:ssh [::]:*
LISTEN 0 4096 [::1]:ipp [::]:*
LISTEN 0 4096 [::]:hostmon [::]:*
LISTEN 0 5 [::1]:44321 [::]:*
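
To see only the listening TCP sockets along with the processes that own them, a common combination is -t (TCP), -l (listening), -n (numeric addresses) and -p (process info, which requires root):

$ sudo ss -tlnp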

iftop

The iftop command is a real-time network monitoring tool that displays network connections and their current bandwidth usage. Like nethogs, it can help you identify the connections using the most bandwidth. Here’s a sample of its output. Note that sudo privilege is required.

$ sudo iftop
[sudo] password for shs:
interface: wlp1s0
IP address is: 192.168.0.19
MAC address is: ec:0e:c4:24:7d:bf
12.5Kb 25.0Kb 37.5Kb 50.0Kb 62.5Kb
-------------------------+-------------------------+-------------------------+-------------------------+------------------
239.255.255.250 => 192.168.0.2 0b 0b 0b
<= 0b 12.2Kb 6.12Kb
239.255.255.250 => _gateway 0b 0b 0b
<= 0b 5.62Kb 2.81Kb
fedora => 192.168.0.8 2.19Kb 1.98Kb 2.26Kb
<= 184b 184b 244b
fedora => ns.shentel.net 0b 66b 31b
<= 0b 108b 70b
fedora => 216.72.190.35.bc.googleusercontent.com 0b 0b 39b
<= 0b 0b 21b


-----------------------------------------------------------------------------------------------------------------------------
TX: cum: 5.92KB peak: 4.47Kb rates: 4.47Kb 3.02Kb 2.96Kb
RX: 23.3KB 61.6Kb 392b 550b 11.7Kb
TOTAL: 29.2KB 64.0Kb 4.85Kb 3.56Kb 14.6Kb
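
If the machine has more than one network interface, the -i option selects which one to watch; the interface name below is illustrative:

$ sudo iftop -i wlp1s0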

ethtool

The ethtool command provides a way to examine and control network interface controller (NIC) parameters on Linux systems. Sysadmins can use it to extract details for network interfaces, modify settings to optimize performance, and troubleshoot issues effectively. It’s an essential command for managing and optimizing network interfaces and provides extremely detailed output like that shown below. Note that “fixed” doesn’t mean “repaired”; it means that those settings cannot be changed.

$ ethtool --show-features wlp1s0 | column
Features for wlp1s0: tx-ipxip6-segmentation: off [fixed]
rx-checksumming: off [fixed] tx-udp_tnl-segmentation: off [fixed]
tx-checksumming: off tx-udp_tnl-csum-segmentation: off [fixed]
tx-checksum-ipv4: off [fixed] tx-gso-partial: off [fixed]
tx-checksum-ip-generic: off [fixed] tx-tunnel-remcsum-segmentation: off [fixed]
tx-checksum-ipv6: off [fixed] tx-sctp-segmentation: off [fixed]
tx-checksum-fcoe-crc: off [fixed] tx-esp-segmentation: off [fixed]
tx-checksum-sctp: off [fixed] tx-udp-segmentation: off [fixed]
scatter-gather: off tx-gso-list: off [fixed]
tx-scatter-gather: off [fixed] fcoe-mtu: off [fixed]
tx-scatter-gather-fraglist: off [fixed] tx-nocache-copy: off
tcp-segmentation-offload: off loopback: off [fixed]
tx-tcp-segmentation: off [fixed] rx-fcs: off [fixed]
tx-tcp-ecn-segmentation: off [fixed] rx-all: off [fixed]
tx-tcp-mangleid-segmentation: off [fixed] tx-vlan-stag-hw-insert: off [fixed]
tx-tcp6-segmentation: off [fixed] rx-vlan-stag-hw-parse: off [fixed]
generic-segmentation-offload: off [requested on] rx-vlan-stag-filter: off [fixed]
generic-receive-offload: on l2-fwd-offload: off [fixed]
large-receive-offload: off [fixed] hw-tc-offload: off [fixed]
rx-vlan-offload: off [fixed] esp-hw-offload: off [fixed]
tx-vlan-offload: off [fixed] esp-tx-csum-hw-offload: off [fixed]
ntuple-filters: off [fixed] rx-udp_tunnel-port-offload: off [fixed]
receive-hashing: off [fixed] tls-hw-tx-offload: off [fixed]
highdma: off [fixed] tls-hw-rx-offload: off [fixed]
rx-vlan-filter: off [fixed] rx-gro-hw: off [fixed]
vlan-challenged: off [fixed] tls-hw-record: off [fixed]
tx-lockless: off [fixed] rx-gro-list: off
netns-local: on [fixed] macsec-hw-offload: off [fixed]
tx-gso-robust: off [fixed] rx-udp-gro-forwarding: off
tx-fcoe-segmentation: off [fixed] hsr-tag-ins-offload: off [fixed]
tx-gre-segmentation: off [fixed] hsr-tag-rm-offload: off [fixed]
tx-gre-csum-segmentation: off [fixed] hsr-fwd-offload: off [fixed]
tx-ipxip4-segmentation: off [fixed] hsr-dup-offload: off [fixed]
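
Settings not marked “fixed” can be changed with the -K option. As a sketch only (the interface name is illustrative, and toggling offloads can affect performance), this would disable generic receive offload:

$ sudo ethtool -K wlp1s0 gro off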

Wrap-up

Many tools are available for testing connectivity and connection speeds on Linux systems. Those mentioned in this post are only some of them, but they represent a range of tools that are both easy to use and informative.

]]>
https://www.networkworld.com/article/969808/linux-commands-for-testing-connectivity-and-transfer-rates.html 969808Linux
SolarWinds buys Squadcast to speed incident response Tue, 04 Mar 2025 21:10:05 +0000

Observability provider SolarWinds announced this week that it has signed an agreement to acquire San Francisco-based Squadcast and its incident response technology for an undisclosed amount. The acquisition will let SolarWinds provide customers with intelligent incident response that speeds mean time to resolution (MTTR), the company says.

“With the industry battles to operationally manage and control hybrid ecosystems and the massive influx of alerts, IT professionals need a more powerful solution to cut through the noise,” said Cullen Childress, SolarWinds chief product officer, in a statement. “The addition of intelligent incident response from Squadcast to the SolarWinds Platform further accelerate MTTR, allowing practitioners to not only accelerate time to detection of incidents but to remediate those incidents in an accelerated manner, maximizing their operational resilience.”

Squadcast reports that users of its incident response technology see benefits such as a 68% reduction in average MTTR and savings of some 1,000 work hours and $500,000 in costs. SolarWinds says the Squadcast technology complements its observability and service management offerings and brings expertise in the incident response management domain.

“Squadcast is excited to join forces with SolarWinds to help customers make their worlds more reliable,” said Amiya Adwitiya, Squadcast Founder and CEO. “By optimizing incident response with AI, customers reduce noise, enhance efficiency, and resolve incidents faster—so they can focus on what truly matters.”

Citing customers such as Redis and Charter, SolarWinds’ Childress detailed how the technologies will converge to help customers achieve better results.

“We aim to realize a new vision of operational resilience. Combining SolarWinds Observability and service management with Squadcast’s incident response management solution, we will enable enterprises to proactively manage service health, reduce MTTR, and drive exceptional user experiences,” Childress wrote in a blog about the acquisition news. “The integration of Squadcast’s incident response management solutions is more than just an enhancement—it redefines reliability and incident response.”

Squadcast customers shared their experiences with the technology. “Since implementing Squadcast, we’ve reduced incoming alerts from tens of thousands to hundreds, thanks to flexible deduplication. It has a direct impact on reducing alert fatigue and increasing awareness,” said Avner Yaacov, Senior Manager at Redis, in a statement.

According to SolarWinds, the SaaS-based offering will complement the company’s current portfolio, which serves the needs of IT organizations at businesses of all sizes, from SMBs to large enterprises. The new enhancements to the SolarWinds portfolio are available now.

SolarWinds will share more details around its acquired capabilities at the free SolarWinds Day: The Era of Operational Resilience virtual event. The event will also feature a roundtable discussion with Torsten Volk, principal analyst, Application Modernization at Enterprise Strategy Group. 

“In today’s fragmented landscape, most IT environments lack not only a unified, end-to-end view of a system’s health and performance but also the ability to efficiently remediate issues and provide meaningful business insights,” according to a statement. “Despite investments in monitoring tools and platforms, organizations are struggling to maintain operational resilience. This tool sprawl has created a critical visibility gap, with 52% of organizations still lacking full-stack observability. Addressing this critical challenge is essential for organizations to diagnose and remediate issues proactively before they impact the business.”

]]>
https://www.networkworld.com/article/3837926/solarwinds-buys-squadcast-to-speed-incident-response.html 3837926Network Management Software, Network Monitoring