We used this node to build our understanding of how to implement and operate an HPC cluster.
For our competition cluster, we received hardware sponsorship from XENON. Our hardware architecture consists of four XENON NITRO GN29A Duo 2124GQ-NART+ nodes, each based on SuperMicro's robust AS-2124GQ-NART+ platform.
Across its four nodes, the cluster comprises the following hardware:
- CPU: 8 × AMD EPYC™ 7313
- Memory: 4 × 512GB RAM
- Operating system drives: 2 × 240GB M5400 Pro
- Storage: 4 × 1.9TB PM9A3
- GPU: 16 × NVIDIA A100 40GB SXM4
- Power: dual redundant 3000W Titanium PSUs per node
High-Speed Network: Mellanox SX6025 InfiniBand switch (56Gb/s per port, 170ns latency, 4Tb/s total bandwidth) provides extremely fast inter-node communication, ideal for intensive computational tasks requiring minimal latency and maximum throughput.
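The quoted aggregate figure can be sanity-checked with a quick calculation. Note this is a sketch: the 36-port count comes from Mellanox's SX6025 spec sheet, not from the figures above, and the aggregate counts both directions of each full-duplex link.

```python
# Back-of-envelope check of the SX6025 aggregate bandwidth figure.
# Assumption (not stated above): the SX6025 is a 36-port FDR switch.
ports = 36
per_port_gbps = 56      # FDR InfiniBand, per direction
directions = 2          # full duplex: count both directions per port

total_gbps = ports * per_port_gbps * directions
print(total_gbps)       # 4032 Gb/s, i.e. roughly the quoted 4 Tb/s
```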
Management & WAN Access: Dell PowerConnect 5548 managed switch with Layer 3 capabilities (48 × 1Gb/s ports) handles management traffic, internet access, and provides efficient routing for administrative and lower-priority network tasks.
This hardware setup is optimised for computational power and performance efficiency, and it operates well within a 10 kW power budget.
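A rough estimate supports the power-budget claim. This is a sketch, not a measurement: the TDP values are the vendors' published figures for the A100 SXM4 40GB and EPYC 7313 (they do not appear above), and the 15% overhead for memory, storage, fans, and PSU losses is an assumed allowance.

```python
# Rough power-budget check against the 10 kW limit.
# Assumptions: vendor TDPs (400 W per A100 SXM4, 155 W per EPYC 7313)
# and a guessed 15% overhead for memory, storage, fans, and PSU loss.
gpu_tdp_w = 400
cpu_tdp_w = 155
gpus, cpus = 16, 8

compute_w = gpus * gpu_tdp_w + cpus * cpu_tdp_w  # 7640 W of compute
total_w = compute_w * 1.15                       # add 15% overhead
print(round(total_w))                            # ~8786 W, under 10 kW
```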
Email us at u7681327@anu.edu.au