
{toc}

h1. Systems

{info}
This page gives a detailed description of the HPC systems available under the LinkSCEEM-2 and Cy-Tera projects.
{info}

h2. Bibliotheca Alexandrina, Egypt

h3. BA, Sun Microsystems cluster

Technical description

| Peak/Sustained Performance | 11.8 TFlops / 9.1 TFlops (LINPACK) - see the sketch below this table |
| Number of Nodes | 130 eight-core compute nodes |
| Processors/node | 2 quad-core sockets per node, each an Intel Xeon E5440 @ 2.83 GHz |
| Memory/node | 8 GB memory per node \\
Total memory 1.05 TBytes (132 * 8 GB) |
| Disk storage | 36 TB shared scratch (Lustre) |
| Other storage | Tape library for backup (StorageTek SL48 Tape Library) \\
Backup software: Veritas NetBackup (Symantec) |
| Node-node interconnect | Ethernet & 4x SDR InfiniBand network for MPI \\
4x SDR InfiniBand network for I/O to the global Lustre filesystems |
| Accelerators | N/A |
| Pre\- and post-processing nodes | 6 management nodes, including two batch nodes for job submission with 64 GB RAM |
| OS info | Compute nodes: Red Hat Enterprise Linux 5 (RHEL5) \\
Front-end & service nodes: Red Hat Enterprise Linux 5 (RHEL5) |
| Further information | [http://www.bibalex.org/ISIS/Frontend/Projects/ProjectDetails.aspx|http://www.bibalex.org/ISIS/Frontend/Projects/ProjectDetails.aspx?th=a7Pg5AcpjauIQ1/Xoqw2GA==&id=m8fC7jXMTFprEy98pIPBFw==] |
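
The quoted peak figure is consistent with the node count, cores per node and clock rate in the table above. The minimal sketch below reruns that arithmetic; the value of 4 double-precision flops per core per cycle (typical for SSE-era Intel Xeons) is an assumption on our part and is not stated in the table.

{code:language=python}
# Rough sanity check of the BA cluster's quoted 11.8 TFlops peak.
# flops_per_cycle = 4 is an assumption (SSE: 2 adds + 2 multiplies per cycle),
# not a figure taken from the table above.
nodes = 130
cores_per_node = 8        # 2 quad-core Xeon E5440 sockets per node
clock_ghz = 2.83
flops_per_cycle = 4       # assumed

peak_tflops = nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0
print(f"Estimated peak: {peak_tflops:.1f} TFlop/s")   # ~11.8 TFlop/s

# Total memory as counted in the table (132 x 8 GB)
total_memory_tb = 132 * 8 / 1000.0
print(f"Total memory: {total_memory_tb:.2f} TB")      # ~1.06 TB, quoted as 1.05 TBytes
{code}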

h2. Cyprus Institute, Cyprus

h3. CyI/Cy-Tera, IBM Hybrid CPU/GPU cluster

Technical description

| Peak/Sustained Performance | \~35 TFlop/s (see the sketch below this table) |
| Number of Nodes | 98 twelve-core compute nodes (unaccelerated) \\
18 twelve-core compute nodes, each with dual NVIDIA M2070 GPUs |
| Processors/node | 2 hexa-core sockets per node, each an Intel Xeon X5650 (Westmere) |
| Memory/node | 48 GB memory per node \\
Total memory 4.7 TBytes |
| Disk storage | 350 TB GPFS (raw) |
| Other storage | N/A |
| Node-node interconnect | 4x QDR InfiniBand network for MPI \\
4x QDR InfiniBand network for I/O to the global GPFS filesystem |
| Accelerators | GPUs available on 18 nodes (dual NVIDIA M2070 per node) |
| Pre\- and post-processing nodes | N/A |
| OS info | Compute nodes: RHEL \\
Front-end & service nodes: RHEL |
| Further information | Machine availability is tentatively scheduled for the end of March 2012 |
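
For the Cy-Tera figure, both the CPU sockets and the GPUs contribute. The sketch below is a rough estimate only: the 2.66 GHz clock of the Xeon X5650, the 4 double-precision flops per core per cycle, and the \~515 GFlop/s double-precision peak of an NVIDIA M2070 are assumptions on our part, not values taken from the table.

{code:language=python}
# Rough sanity check of Cy-Tera's quoted ~35 TFlop/s peak.
# Clock rate, flops/cycle and the M2070 DP peak are assumptions, not table values.
cpu_nodes = 98 + 18               # all 116 nodes contribute CPU flops
cores_per_node = 12
clock_ghz = 2.66                  # assumed for the Xeon X5650
flops_per_cycle = 4               # assumed (SSE: 2 adds + 2 multiplies per cycle)

gpu_nodes = 18
gpus_per_node = 2
gpu_dp_gflops = 515               # assumed M2070 double-precision peak

cpu_tflops = cpu_nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0
gpu_tflops = gpu_nodes * gpus_per_node * gpu_dp_gflops / 1000.0
print(f"CPU: {cpu_tflops:.1f} TF  GPU: {gpu_tflops:.1f} TF  total: {cpu_tflops + gpu_tflops:.1f} TF")
# -> roughly 14.8 + 18.5 = 33.3 TFlop/s, in line with the ~35 TFlop/s quoted above
{code}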

h3. CyI/euclid, IBM Hybrid CPU/GPU Training Cluster

Technical description

| Peak/Sustained Performance | \~0.5 TFlop/s |
| Number of Nodes | 6 eight-core compute nodes (4 * x3550, 8 * x3650M2) |
| Processors/node | 2 quad-core sockets per node, each an Intel Xeon E5440 @ 2.83 GHz |
| Memory/node | 16 GB memory per node \\
Total memory 0.096 TBytes (6 * 16 GB) |
| Disk storage | 4 TB shared scratch (Lustre), not backed up |
| Other storage | N/A |
| Node-node interconnect | InfiniBand network for MPI \\
InfiniBand network for I/O to the global Lustre filesystems |
| Accelerators | GPUs available on all nodes |
| Pre\- and post-processing nodes | N/A |
| OS info | Compute nodes: CentOS release 6.1 \\
Front-end & service nodes: CentOS release 6.1 |