Lenovo introduces compact, liquid-cooled AI edge server

News
Mar 07, 2025 • 3 mins

The entry-level AI server is designed to do AI inferencing in space-constrained environments, and it has been engineered to reduce air flow requirements and power consumption.


Lenovo has announced the ThinkEdge SE100, an entry-level AI inferencing server that’s designed to make edge AI affordable for enterprises as well as small and medium-sized businesses.

AI systems are not normally small and compact; they are typically large servers decked out with memory, GPUs, and CPUs. But the SE100 is built for inferencing, the less compute-intensive portion of AI processing, Lenovo stated. Full GPUs are considered overkill for inferencing, and multiple startups are making small PC cards with dedicated inferencing chips instead of the more power-hungry CPUs and GPUs.

[Related: What is an AI server?]

This design brings AI to the data rather than the other way around. Instead of sending the data to the cloud or data center to be processed, edge computing uses devices located at the data source, reducing latency and the amount of data being sent up to the cloud for processing, Lenovo stated. 

Rolled out at the Mobile World Congress show, the SE100 is part of Lenovo’s family of new ThinkSystem V4 servers, with the V4 systems handling on-premises training and the SE100 placed at the edge for hybrid cloud deployments. Like the V4, the SE100 comes with Intel Xeon 6 processors and the company’s Neptune liquid-cooling technology.

But it is also compact. Lenovo says the SE100 is 85% smaller than a standard 1U server. Its power draw is designed to stay under 140W, even in a GPU-equipped configuration, according to Lenovo.

The ThinkEdge SE100 is designed for constrained spaces, and because it uses liquid cooling instead of fans, it can go into public places without being exceptionally noisy. The company said the server has been specifically engineered to reduce airflow requirements while also lowering fan speed and power consumption, keeping parts cooler to preserve system health and extend its lifespan.

[Related: Networking terms and definitions]

“Lenovo is committed to bringing AI-powered innovation to everyone with continued innovation that simplifies deployment and speeds the time to results,” said Scott Tease, vice president of Lenovo infrastructure solutions group, products, in a statement. “The Lenovo ThinkEdge SE100 is a high-performance, low-latency platform for inferencing. Its compact and cost-effective design is easily tailored to diverse business needs across a broad range of industries. This unique, purpose-driven system adapts to any environment, seamlessly scaling from a base device, to a GPU-optimized system that enables easy-to-deploy, low-cost inferencing at the Edge.”