Communication Network
The communication infrastructure of the HPC PERUN system is designed to ensure high-throughput, reliable, and low-latency data transfer between computing nodes, storage systems, and user environments.
The network architecture is divided into two interconnected segments — the InfiniBand computing network and the data and management network — which together form the backbone of the entire supercomputing ecosystem.

1. InfiniBand Network - High-Performance Computing Architecture
The core of the computational environment is built on a non-blocking Fat-Tree InfiniBand network architecture, based on 64-port NVIDIA Quantum-2 InfiniBand NDR switches with 400 Gb/s OSFP ports.
This topology provides full bisection bandwidth between all computing nodes, so any pair of nodes can communicate at full link speed regardless of where they sit in the fabric.
The Fat-Tree design minimizes latency and eliminates communication bottlenecks between nodes, enabling efficient scaling of computational performance for parallel workloads.
Thanks to this configuration, the system achieves stable and balanced data flow even under maximum load, which is essential for HPC applications requiring high bandwidth and minimal latency.
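To make the latency and bandwidth claims concrete, the following minimal sketch measures round-trip latency and effective transfer rate between two nodes with an MPI ping-pong. It assumes mpi4py and an MPI runtime configured to route traffic over the InfiniBand fabric (for example via UCX); the message size, repetition count, and launch command are illustrative choices, not PERUN-specific settings.

```python
# latency_bw.py - minimal MPI ping-pong sketch.
# Assumes mpi4py on top of an MPI library that uses the InfiniBand
# fabric; run with at least two ranks placed on different nodes.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

size = 1 << 20                       # 1 MiB message (illustrative)
buf = np.zeros(size, dtype=np.uint8)
reps = 100

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    rtt = elapsed / reps
    # Each round trip moves `size` bytes in each direction.
    gbps = (2 * size * 8) / rtt / 1e9
    print(f"avg round trip: {rtt * 1e6:.1f} us, "
          f"effective rate: {gbps:.2f} Gb/s")
```

A typical launch would resemble `mpirun -np 2 --map-by node python latency_bw.py`, though the exact scheduler and launcher invocation is site-specific.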
2. Universal Data Network
The second part of the communication infrastructure is the universal data network, which provides connectivity for user services, system management, and access to storage systems.
It consists of several devices, including Cisco Nexus 9364D-GX2A and Cisco Nexus 9348GC-FX3 switches.
These devices form a highly available and flexible network layer supporting the latest data-transfer standards, redundancy, and automated management features.
The data network ensures a stable environment for resource management, user access, and the integration of HPC infrastructure with the university’s and research partners’ external systems.
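As an illustration of the user-access side, the short sketch below probes TCP service endpoints reachable over the data network. The hostnames and ports are purely hypothetical placeholders, not actual PERUN addresses; only the standard library is used.

```python
# reachability.py - minimal sketch probing TCP endpoints over the
# data network; hostnames and ports below are illustrative
# assumptions, not real PERUN service addresses.
import socket

ENDPOINTS = [
    ("login.perun.example.edu", 22),    # hypothetical login node (SSH)
    ("storage.perun.example.edu", 443), # hypothetical storage gateway
]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} reachable")
    except OSError as exc:
        print(f"{host}:{port} unreachable ({exc})")
```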
Together, these two networks underpin the HPC PERUN system, enabling high-performance parallel computing, reliable data transmission, and efficient communication across all components of the supercomputer.