TU Berlin

Compute Servers: Survey of the Hardware

Technical data

The compute servers can be used as a unified system, but the individual parts differ in the age of the hardware, the available hardware resources, and the computing power.

The following tables give an overview of the clusters.

When using the batch system, users have various options for choosing the appropriate hardware for their jobs.

Cluster 1
Number of nodes: 17
Processors (per node): 2x Opteron 252 (2.6 GHz)
Memory (per node): 4 GB
Network (internal): SDR InfiniBand + Gigabit Ethernet
Bandwidth: ~800 MB/s (MPI)
Network (external): Gigabit Ethernet
Local disk (/scratch): ~40 GB
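
The bandwidth values listed for each cluster refer to MPI-level point-to-point throughput over the internal InfiniBand fabric. As a rough illustration only (not necessarily the benchmark used to obtain the published numbers), a minimal ping-pong measurement between two MPI ranks could look like the following sketch; it assumes an MPI implementation and its compiler wrapper (e.g. mpicc) are available on the nodes.

/* pingpong.c - minimal MPI ping-pong bandwidth sketch (illustrative only). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int msg_bytes = 4 * 1024 * 1024;  /* 4 MB message */
    const int iterations = 100;
    int rank, size;
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    buf = malloc(msg_bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < iterations; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        /* Each iteration moves the message twice (there and back). */
        double mb = 2.0 * iterations * msg_bytes / (1024.0 * 1024.0);
        printf("Bandwidth: %.1f MB/s\n", mb / (t1 - t0));
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and started with two ranks placed on different nodes, such a program reports a figure comparable to the values in these tables; the exact result depends on message size, MPI settings, and the interconnect generation (SDR, DDR, or QDR).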

Cluster 2
Number of nodes: 8
Processors (per node): 2x DualCore-Opteron 2218 (2.6 GHz)
Memory (per node): 16 GB (1 node with 32 GB)
Network (internal): SDR InfiniBand + Gigabit Ethernet
Bandwidth: ~1500 MB/s (MPI)
Network (external): Gigabit Ethernet
Local disk (/scratch): ~50 GB

Cluster 3
Number of nodes: 42
Processors (per node): 2x DualCore-Opteron 2222 (3.0 GHz)
Memory (per node): 16 GB (8 nodes with 32 GB)
Network (internal): DDR InfiniBand + Gigabit Ethernet
Bandwidth: ~1500 MB/s (MPI)
Network (external): Gigabit Ethernet
Local disk (/scratch): ~100 GB

Cluster 4
Number of nodes: 5 (used exclusively)
Processors (per node): 2x DualCore-Opteron 2222 (3.0 GHz)
Memory (per node): 32 GB
Network (internal): DDR InfiniBand + Gigabit Ethernet
Bandwidth: ~1500 MB/s (MPI)
Network (external): Gigabit Ethernet
Local disk (/scratch): ~140 GB

Cluster 5
Number of nodes: 27
Processors (per node): 2x Opteron 248 (2.2 GHz)
Memory (per node): 2, 4, or 8 GB
Network (internal): SDR InfiniBand + Gigabit Ethernet
Bandwidth: ~700 MB/s (MPI)
Network (external): Gigabit Ethernet
Local disk (/scratch): ~40 GB

Cluster 6
Number of nodes: 9
Processors (per node): 4x QuadCore-Opteron 8380 (2.5 GHz) or 4x SixCore-Opteron 8435 (2.6 GHz)
Memory (per node): 64 GB or 128 GB
Network (internal): DDR InfiniBand + Gigabit Ethernet
Bandwidth: ~700 MB/s (MPI)
Network (external): Gigabit Ethernet
Local disk (/scratch): ~220 GB

Cluster 7
Number of nodes: 36
Processors (per node): 2x QuadCore-Xeon X5550 (2.67 GHz)
Memory (per node): 24 GB or 48 GB
Network (internal): QDR InfiniBand + Gigabit Ethernet
Bandwidth: ~2500 MB/s (MPI)
Network (external): Gigabit Ethernet
Local disk (/scratch): ~220 GB

Cluster 8 (GPU cluster)
Number of nodes: 16
Processors (per node): 2x QuadCore-Xeon X5550 (2.67 GHz)
GPUs: 14 nodes with 2x nVidia Tesla C1060 each, 2 nodes with 4x nVidia GTX 295 each
Memory (per node): 24 GB
Network (internal): QDR InfiniBand + Gigabit Ethernet
Bandwidth: ~2500 MB/s (MPI)
Network (external): Gigabit Ethernet
Local disk (/scratch): ~220 GB
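
To see which of the GPUs listed above are actually visible on a given node, the CUDA runtime API can be queried. The following is a minimal sketch, assuming the CUDA toolkit is installed on the GPU nodes; compile it with nvcc.

/* devquery.cu - minimal CUDA device enumeration sketch (illustrative only). */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }
    printf("%d CUDA device(s) found\n", count);

    for (int i = 0; i < count; i++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* On this cluster one would expect names like
         * "Tesla C1060" or "GeForce GTX 295". */
        printf("Device %d: %s, %.0f MB global memory, compute capability %d.%d\n",
               i, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0),
               prop.major, prop.minor);
    }
    return 0;
}

On the Tesla C1060 nodes this should report two devices, and on the GTX 295 nodes four, matching the figures in the table above.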

Cluster 9 (Cell cluster)
Number of nodes: 8
Processors (per node): 2x IBM Cell/B.E. (3.2 GHz)
Network (internal): DDR InfiniBand + Gigabit Ethernet
Bandwidth: ~1500 MB/s (MPI)
Network (external): Gigabit Ethernet
Local disk: -

Cluster 10
Number of nodes: 40
Processors (per node): 2x QuadCore-Xeon X5550 (2.67 GHz)
Memory (per node): 24 GB or 48 GB
Network (internal): QDR InfiniBand + Gigabit Ethernet
Bandwidth: ~2500 MB/s (MPI)
Network (external): Gigabit Ethernet
Local disk (/scratch): ~260 GB

Cluster 11
Number of nodes: 35
Processors (per node): 2x HexaCore-Xeon X5650 (2.67 GHz)
GPUs (node291-node293 only): 2x nVidia Tesla S2050 each
Memory (per node): 48 GB (GPU nodes) or 24 GB
Network (internal): QDR InfiniBand + Gigabit Ethernet
Bandwidth: ~2500 MB/s (MPI)
Network (external): Gigabit Ethernet
Local disk (/scratch): ~260 GB
