User Documentation

h1. Systems

{info}
Here is a detailed description of the HPC systems available under the LinkSCEEM-2 & Cy-Tera projects.
{info}

h2. Biblioteca Alexandrina, Egypt

h3. BA, Sun Microsystems cluster

Technical description

| Peak/Sustained Performance | 11.8 TFlops / 9.1 TFlops (LINPACK) |
| Number of Nodes | 130 eight-core compute nodes |
| Processors/node | 2 quad-core sockets per node, each is Intel Quad Xeon E5440 @ 2.83GHz |
| Memory/node | 8 GB memory per node \\
Total memory 1.05 TBytes (132 * 8GB) |
| Disk storage | 36 TB shared scratch (Lustre) |
| Other storage | Tape library for backup (StorageTek SL48 Tape Library) \\
w. backup software: Veritas NetBackup (by Symantec) |
| Node-node interconnect | DDR Infiniband @ 10 Gbps network for MPI \\
DDR Infiniband @ 10 Gbps network for I/O to the global Lustre filesystems |
| accelerators | N/A |
| pre\- and post processing nodes | 6 management nodes, incl. two batch nodes for job submission w. 64GB RAM |
| OS info | OS, Compute Node: RedHat Enterprise Linux 5 (RHEL5) \\
OS, Front End & Service Nodes: RedHat Enterprise Linux 5 (RHEL5) |
| Further information | http://www.bibalex.org/ISIS/Frontend/Projects/ProjectDetails.aspx?th=a7Pg5AcpjauIQ1/Xoqw2GA==&id=m8fC7jXMTFprEy98pIPBFw== |
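
The headline figures above can be roughly cross-checked from the per-node specification. The short Python sketch below is illustrative only: it assumes 4 double-precision floating-point operations per core per cycle for this processor generation, and takes the 132-node count in the memory total directly from the table.

{code:python}
# Rough cross-check of the headline figures above (assumptions noted inline).
nodes = 130                  # compute nodes, from the table
cores_per_node = 2 * 4       # 2 quad-core sockets per node
clock_ghz = 2.83             # Intel Xeon E5440
flops_per_cycle = 4          # assumed: 4 double-precision FLOPs per core per cycle

peak_tflops = nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0
print(f"Peak: ~{peak_tflops:.1f} TFlop/s")         # ~11.8 TFlop/s, matching the quoted peak

total_memory_tb = 132 * 8 / 1000.0                 # 132 * 8 GB, as quoted in the table
print(f"Total memory: ~{total_memory_tb:.2f} TB")  # ~1.06 TB, quoted as 1.05 TBytes
{code}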


h2. Cyprus Institute, Cyprus

h3. CyI/Cy-Tera, IBM Hybrid CPU/GPU cluster

Technical description

| Peak/Sustained Performance | \~35 TFlop/s |
| Number of Nodes | 98 twelve-core compute nodes (unaccelerated) \\
18 twelve-core compute nodes, each with dual NVidia M2070 GPUs |
| Processors/node | 2 hexa-core sockets per node, each is Intel Westmere X5650 |
| Memory/node | 48 GB memory per node \\
Total memory 4.7 TBytes |
| Disk storage | 350 TB GPFS raw |
| Other storage | N/A |
| Node-node interconnect | 4x QDR Infiniband network for MPI \\
4x QDR Infiniband network for I/O to the global GPFS filesystem |
| accelerators | GPUs available |
| pre\- and post processing nodes | N/A |
| OS info | OS, Compute Node: RHEL \\
OS, Front End & Service Nodes: RHEL |
| Further information | Machine availability tentatively scheduled for the end of March 2012 |
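
As a rough plausibility check of the \~35 TFlop/s figure, the CPU and GPU contributions can be estimated from the node counts above. The Python sketch below is an estimate only: the 2.66 GHz X5650 clock, the 4 double-precision FLOPs per core per cycle, and the ~515 GFlop/s double-precision peak per M2070 are assumptions, not values from the table.

{code:python}
# Rough estimate of the Cy-Tera peak from the node counts above.
cpu_nodes = 98 + 18          # all nodes carry the same dual Westmere X5650 CPUs
cores_per_node = 2 * 6       # 2 hexa-core sockets per node
clock_ghz = 2.66             # assumed X5650 clock (not stated in the table)
flops_per_cycle = 4          # assumed double-precision FLOPs per core per cycle
cpu_tflops = cpu_nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0

gpus = 18 * 2                # 18 accelerated nodes, dual NVidia M2070 each
gpu_tflops = gpus * 0.515    # assumed ~515 GFlop/s double precision per M2070

# ~14.8 + ~18.5 = ~33 TFlop/s, in the same ballpark as the quoted ~35 TFlop/s
print(f"CPU ~{cpu_tflops:.1f} + GPU ~{gpu_tflops:.1f} = ~{cpu_tflops + gpu_tflops:.1f} TFlop/s")
{code}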

h3. CyI/euclid, IBM Hybrid CPU/GPU Training Cluster

Technical description

| Peak/Sustained Performance | \~0.5 TFlop/s |
| Number of Nodes | 6 eight-core compute nodes (4 * x3550, 8 * x3650M2) |
| Processors/node | 2 quad-core sockets per node, each is Intel Quad Xeon E5440 @ 2.83GHz |
| Memory/node | 16 GB memory per node \\
Total memory .096 TBytes |
| Disk storage | 4 TB shared scratch (Lustre) - not backed up |
| Other storage | N/A |
| Node-node interconnect | Infiniband network for MPI \\
Infiniband network for I/O to the global Lustre filesystems |
| accelerators | GPUs available on all nodes |
| pre\- and post processing nodes | N/A |
| OS info | OS, Compute Node: CentOS release 6.1 \\
OS, Front End & Service Nodes: CentOS release 6.1 |
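
The \~0.5 TFlop/s figure is consistent with six nodes of E5440 processors under the same (assumed) 4 FLOPs per core per cycle used above; the GPU contribution is not included. A short Python sketch:

{code:python}
# Rough cross-check of the euclid CPU peak (GPU contribution not included).
nodes = 6                    # compute nodes, from the table
cores_per_node = 2 * 4       # 2 quad-core sockets per node
clock_ghz = 2.83             # Intel Xeon E5440
flops_per_cycle = 4          # assumed double-precision FLOPs per core per cycle

peak_tflops = nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0
print(f"CPU peak: ~{peak_tflops:.2f} TFlop/s")     # ~0.54 TFlop/s, quoted as ~0.5 TFlop/s

total_memory_tb = 6 * 16 / 1000.0                  # 6 nodes * 16 GB each
print(f"Total memory: {total_memory_tb:.3f} TB")   # 0.096 TBytes, as quoted
{code}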