Powerful GPU-based servers

Kerno AI Servers

GPUs are specialized processors designed to execute many operations simultaneously, making them highly efficient for workloads that can be broken into parallel tasks. Originally built for rendering images and video, GPUs are now widely used for high-performance computing tasks such as machine learning, data science, and scientific simulation.
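The contrast between serial CPU execution and parallel GPU execution is easiest to see with a single large operation run on both devices. The sketch below is illustrative only: it assumes PyTorch is installed and a CUDA-capable GPU is present, and the matrix size is arbitrary.

```python
import time
import torch

n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

# CPU baseline: the multiply runs on a handful of cores.
t0 = time.time()
a @ b
cpu_s = time.time() - t0

# GPU: the same multiply is spread across thousands of cores in parallel.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()   # wait for the host-to-device copies to finish
    t0 = time.time()
    a_gpu @ b_gpu
    torch.cuda.synchronize()   # wait for the multiply to finish before timing
    gpu_s = time.time() - t0
    print(f"CPU: {cpu_s:.3f} s, GPU: {gpu_s:.3f} s")
```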

Kerno AI servers integrate one or more GPUs alongside traditional CPUs to accelerate specific tasks. Kerno GPU servers are equipped with powerful GPUs from NVIDIA and AMD, connected over dedicated high-speed interconnects (e.g., NVLink, PCIe) so that they can process data in parallel with the CPU. The servers combine high-performance components with large memory capacity and are available in air-cooled, liquid-cooled, and immersion-cooled variants.

Kerno AI servers are transformative tools across a variety of industries, allowing tasks that require significant computational power to be completed faster and more efficiently. Whether for AI, scientific research, or graphics rendering, the parallel processing power of GPUs enables innovations that were previously impossible or prohibitively time-consuming and expensive.

Parallelism and Speed
  • Kerno AI servers are optimized for the massively parallel processing required by AI and machine learning workloads
  • They also handle graphics and visual-data workloads and can run 3D rendering applications
  • Support for the latest high-performance GPU technology from NVIDIA and AMD
Scalable & Efficient
  • Combine GPUs across multiple nodes to form large clusters and scale to demanding workloads
  • High-speed interconnects such as NVLink and PCIe (a topology-check sketch follows this list)
  • Efficient cooling technologies
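On an NVIDIA-equipped node, the interconnect layout between GPUs can be inspected with the standard nvidia-smi tool. The snippet below is a minimal sketch assuming the NVIDIA driver and nvidia-smi are installed on the server.

```python
import subprocess

# Print the GPU-to-GPU interconnect matrix reported by the NVIDIA driver.
# "NV#" entries indicate NVLink connections; PIX/PXB/PHB/SYS entries
# indicate PCIe or system-level paths between the devices.
topo = subprocess.run(
    ["nvidia-smi", "topo", "-m"],
    capture_output=True,
    text=True,
    check=True,
)
print(topo.stdout)
```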
Boost your AI & Machine Learning
  • Run machine learning, deep learning, and inference workloads with ease (a minimal inference example follows this list)
  • Build and run AI applications such as LLMs and digital twins on Kerno servers
  • Perform AI-based video analytics and drive AI-based innovation in the government, healthcare, education, and energy sectors using Kerno servers
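As a simple starting point, the sketch below enumerates the GPUs visible on a multi-GPU server and runs a small inference pass on each. It assumes PyTorch with CUDA support is installed; the model and input shapes are placeholders standing in for any trained network.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for any trained network.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))
model.eval()

# Run a small batch through the model on each GPU installed in the server.
for i in range(torch.cuda.device_count()):
    device = torch.device(f"cuda:{i}")
    name = torch.cuda.get_device_name(i)
    m = model.to(device)
    x = torch.randn(32, 1024, device=device)
    with torch.no_grad():
        y = m(x)
    print(f"GPU {i} ({name}): output shape {tuple(y.shape)}")
```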

We offer purpose-built servers designed to solve a wide range of enterprise problems.

Technical Characteristics of the Product:

Form Factor
  • 2U, 4U, and 5U
Server Type
  • Dual-socket (2 x CPU)
Computing Power
CPU
  • Intel Xeon Scalable processors, 4th and 5th Generation; AMD EPYC processors, Genoa and Turin series
CPU Power Limit
  • 1U – up to 350 W / 400 W | 2U – up to 350 W / 400 W
GPU Support
  • 2U server – supports up to 8 x dual-slot Gen5 GPUs (PCIe only)
  • 4U server – supports up to 10 x dual-slot Gen5 GPUs (PCIe and SXM)
  • 5U server – supports up to 10 x dual-slot Gen5 GPUs (PCIe and SXM)
Memory
  • Intel – up to 32 DDR5 DIMMs (8 channels per CPU)
  • AMD – up to 24 DDR5 DIMMs (12 channels per CPU)
  • Supports RDIMM
  • Up to 8 TB RAM (32 x 256 GB)
Local Data Storage
1U
  • Up to 12 x 2.5" SFF drives
  • Up to 4 x 3.5" LFF drives
2U
  • Up to 24 x 2.5" SFF drives
  • Up to 12 x 3.5" LFF drives
  • Some configurations support 4 additional rear 2.5" SFF drives