
Systems

Here is a detailed description of the HPC systems available under the LinkSCEEM-2 and Cy-Tera projects.

Bibliotheca Alexandrina, Egypt

BA, Sun Microsystems cluster

Technical description

Peak/Sustained Performance: 11.8 TFlops / 9.1 TFlops (LINPACK)
Number of Nodes: 130 eight-core compute nodes
Processors/node: 2 quad-core sockets per node, each an Intel Xeon E5440 @ 2.83 GHz
Memory/node: 8 GB per node; total memory 1.05 TBytes (132 * 8 GB)
Disk storage: 36 TB shared scratch (Lustre)
Other storage: Tape library for backup (StorageTek SL48 Tape Library) with backup software Veritas NetBackup (by Symantec)
Node-node interconnect: DDR InfiniBand @ 10 Gbps for MPI; DDR InfiniBand @ 10 Gbps for I/O to the global Lustre filesystems
Accelerators: N/A
Pre- and post-processing nodes: 6 management nodes, including two batch nodes for job submission with 64 GB RAM
OS info: Compute nodes: Red Hat Enterprise Linux 5 (RHEL5); front-end & service nodes: Red Hat Enterprise Linux 5 (RHEL5)
Further information: http://www.bibalex.org/ISIS/Frontend/Projects/ProjectDetails.aspx?th=a7Pg5AcpjauIQ1/Xoqw2GA==&id=m8fC7jXMTFprEy98pIPBFw==
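
The DDR InfiniBand fabric above is dedicated to MPI traffic, so typical workloads on this cluster are MPI programs submitted through the batch nodes. Below is a minimal MPI sanity-check sketch in C; it assumes only a standard MPI installation with an mpicc-style compiler wrapper, since the exact software stack and batch system on BA are not described on this page.

{code}
/* hello_mpi.c -- minimal MPI sanity check (illustrative sketch only;
 * assumes a standard MPI stack with an mpicc-style compiler wrapper). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char node[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks */
    MPI_Get_processor_name(node, &len);      /* compute node we landed on */

    printf("Rank %d of %d running on %s\n", rank, size, node);

    MPI_Finalize();
    return 0;
}
{code}

Compile with mpicc hello_mpi.c -o hello_mpi and launch with the site's MPI launcher (e.g. mpirun -np 8 ./hello_mpi) to confirm that all eight cores of a node are reachable.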

Cyprus Institute, Cyprus

CyI/Cy-Tera, IBM Hybrid CPU/GPU cluster

Technical description

Peak/Sustained Performance: ~35 TFlop/s
Number of Nodes: 98 twelve-core compute nodes (unaccelerated) and 18 twelve-core compute nodes, each with dual NVIDIA M2070 GPUs
Processors/node: 2 hexa-core sockets per node, each an Intel Westmere
Memory/node: 48 GB per node; total memory 4.7 TBytes
Disk storage: 350 TB GPFS (raw)
Other storage: N/A
Node-node interconnect: 4x QDR InfiniBand for MPI; 4x QDR InfiniBand for I/O to the global GPFS filesystem
Accelerators: GPUs available
Pre- and post-processing nodes: N/A
OS info: Compute nodes: RHEL; front-end & service nodes: RHEL
Further information: Machine availability tentatively scheduled for 7 March 2012
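
The 18 accelerated nodes each carry two NVIDIA M2070 GPUs. As a hedged illustration of the kind of code those GPUs target, here is a minimal CUDA C vector-addition sketch; it assumes a CUDA toolkit with nvcc is installed on the machine, which the table above does not confirm.

{code}
/* vecadd.cu -- minimal CUDA C vector addition (illustrative sketch;
 * assumes an installed CUDA toolkit: nvcc vecadd.cu -o vecadd). */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void vecadd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   /* one thread per element */
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    /* host buffers */
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    /* device buffers on one of the M2070s */
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    vecadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %.1f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
{code}

The block size of 256 threads is an arbitrary but common choice; any value up to the device limit works.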

CyI/euclid, IBM Hybrid CPU/GPU cluster

Technical description

Peak/Sustained Performance: ~1 TFlop/s
Number of Nodes: 12 eight-core compute nodes (4 * x3550, 8 * x3650 M2)
Processors/node: 2 quad-core sockets per node, each an Intel Xeon E5440 @ 2.83 GHz
Memory/node: 16 GB per node; total memory 0.192 TBytes
Disk storage: 10 TB shared scratch (Lustre), not backed up
Other storage: N/A
Node-node interconnect: 4x QDR InfiniBand for MPI; 4x QDR InfiniBand for I/O to the global Lustre filesystems
Accelerators: GPUs available on 8 nodes under the gpuq queue
Pre- and post-processing nodes: N/A
OS info: Compute nodes: CentOS release 5.3; front-end & service nodes: CentOS release 5.3
Further information: Machine planned to be decommissioned by 10 February 2012
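
Because only 8 of euclid's 12 nodes carry GPUs, a job may land on a node without accelerators. The short CUDA runtime sketch below reports what the current node exposes; it assumes the CUDA toolkit is available on the GPU nodes, which the table above does not spell out.

{code}
/* gpucheck.cu -- report the CUDA devices visible on the current node
 * (illustrative sketch; useful because only 8 of the 12 nodes have GPUs). */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);

    if (err != cudaSuccess || count == 0) {
        printf("No CUDA devices visible on this node\n");
        return 1;
    }

    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, %.1f GB global memory\n",
               i, prop.name, prop.totalGlobalMem / 1e9);
    }
    return 0;
}
{code}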

NARSS, Egypt

NARSS, Blue Gene/L

Technical description

Peak/Sustained Performance: 5.73 TFlops per rack
Number of Nodes: 1024 dual-processor compute nodes
Processors/node: 2 sockets per node, each a PowerPC 440 @ 700 MHz
Memory/node: 512 MB SDRAM-DDR per node; total memory 0.5 TBytes (1024 * 0.5 GB)
Disk storage: 36 TB shared scratch (GPFS)
Other storage: Tape library for backup (StorageTek SL48 Tape Library) with backup software Veritas NetBackup (by Symantec)
Node-node interconnect: 3D toroidal network for peer-to-peer communication
Accelerators: N/A
Graphical pre- and post-processing nodes: N/A
OS info: Compute nodes: lightweight kernel; I/O nodes: embedded Linux; front-end & service nodes: SuSE SLES 9 Linux
Further information: http://www.narss.sci.eg/Capabilities.aspx#Capability45
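
The 3D toroidal interconnect maps naturally onto MPI's Cartesian topology routines, which let an application arrange its ranks as a periodic 3D grid. The sketch below uses only standard MPI calls; the grid dimensions are whatever MPI_Dims_create factors out of the rank count, not the machine's actual partition shape.

{code}
/* torus.c -- arrange MPI ranks on a periodic 3D grid, mirroring the
 * 3D torus interconnect (illustrative sketch using standard MPI only). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int dims[3] = {0, 0, 0};      /* let MPI choose a balanced 3D factorization */
    int periods[3] = {1, 1, 1};   /* wrap around in every dimension: a torus */
    MPI_Dims_create(size, 3, dims);

    MPI_Comm torus;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &torus);

    int rank, coords[3], left, right;
    MPI_Comm_rank(torus, &rank);                 /* rank may be reordered */
    MPI_Cart_coords(torus, rank, 3, coords);
    MPI_Cart_shift(torus, 0, 1, &left, &right);  /* neighbours along dimension 0 */

    printf("Rank %d at (%d,%d,%d); dim-0 neighbours %d and %d\n",
           rank, coords[0], coords[1], coords[2], left, right);

    MPI_Comm_free(&torus);
    MPI_Finalize();
    return 0;
}
{code}

Repeating MPI_Cart_shift for dimensions 1 and 2 yields the full six-neighbour stencil of the torus.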