HPC Village

HPC Village from Openwall is an opportunity for HPC (High Performance Computing) professionals and hobbyists alike to program for a heterogeneous (hybrid) HPC platform. Participants are provided with remote access (via the SSH protocol) to a server with multi-core CPUs and HPC accelerator cards of different kinds - Intel MIC (Xeon Phi), AMD GPU, NVIDIA GPU - as well as with pre-installed and configured drivers and development tools (SDKs).

Within one machine, we provide access to the four types of computing devices mentioned above, including OpenCL support for all of them, as well as support for development tools and usage models specific to some of them (OpenMP on the CPUs, OpenMP offload from CPU to MIC, CUDA on the NVIDIA GPUs). Although it is uncommon to use more than two types of computing devices within one node in real-world HPC setups, such a configuration is convenient for getting acquainted with the different technologies, for trying them out and comparing them on specific tasks, and for developing portable software (including debugging and optimization).
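
To give a concrete taste of two of these usage models, here is a minimal sketch in C, assuming the Intel compiler (icc) that comes with such a setup; the file name, build line, and exact clauses are illustrative rather than verified on this machine:

/* offload_demo.c - a minimal sketch of OpenMP on the host CPUs vs. OpenMP
   offload to the Xeon Phi via Intel's LEO pragmas; illustrative only.
   Build (typical, not verified here):
   icc -std=c99 -qopenmp offload_demo.c -o offload_demo */
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

int main(void)
{
    float *a = malloc(N * sizeof(float));
    float *b = malloc(N * sizeof(float));
    float *c = malloc(N * sizeof(float));
    int i;

    if (!a || !b || !c)
        return 1;

    for (i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 2.0f;
    }

    /* Plain OpenMP: runs on the host CPUs (up to 32 logical CPUs here) */
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        c[i] = a[i] * b[i];

    /* Intel LEO offload: the same loop runs on the Xeon Phi (up to 240
       logical CPUs), with the arrays copied over the PCIe bus */
    #pragma offload target(mic) in(a : length(N)) in(b : length(N)) out(c : length(N))
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        c[i] = a[i] * b[i];

    printf("c[%d] = %f\n", N - 1, c[N - 1]);
    return 0;
}

Note that the loop body is identical in both cases; only the offload pragma and its data clauses differ, which is much of the appeal of directive-based offload.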

Hardware

The current hardware configuration is as follows:

  • Supermicro GPU SuperWorkstation 7047GR-TPRF workstation/server platform with MCP-290-00059-0B rackmount rail set
    • 4U chassis
    • Two 1620W PSUs (normally both are active and share the load)
    • Dual socket 2011 motherboard with IPMI, 16 memory sockets, four PCIe 3.0 x16 slots for full-length dual-width PCIe cards and a fifth slot for a shorter card
    • A full set of cooling fans, including those pulling hot air out of passively-cooled accelerator cards
  • Two 8-core Intel Xeon E5-2670 CPUs
    • Sandy Bridge-EP, support AVX and AES-NI
    • A total of 16 CPU cores seen as 32 logical CPUs (two hardware threads per core), at a clock rate of at least 2.6 GHz
    • Turbo boost to up to 3.0 GHz with all cores in use or 3.3 GHz with few cores in use
  • 128 GB DDR3-1600 ECC RAM
    • 8x 16 GB DDR3-1600 ECC Registered modules on 8 channels (4 channels per CPU)
    • Theoretical bandwidth 102.4 GB/s, actual measured bandwidth ~85 GB/s (cumulative from 32 threads; see the bandwidth sketch at the end of this section)
  • Intel Xeon Phi 5110P coprocessor module
    • Intel Many Integrated Core (MIC) architecture, Knights Corner
    • 60 cores (x86-ish with 512-bit SIMD units) seen as 240 logical CPUs (four hardware threads per core), 1053 MHz, 8 GB GDDR5 ECC RAM on a 512-bit bus, 320 GB/s
    • Peak performance of about 2 TFLOPS single-precision, 1 TFLOPS double-precision
  • NVIDIA GTX 1080 gaming graphics card (short form factor, manufactured by Gigabyte)
    • NVIDIA Pascal architecture
    • One GP104 GPU with 2560 SPs typically at 1607 MHz to 1771 MHz, 8 GB GDDR5X RAM on a 256-bit bus, 320 GB/s
    • Peak performance of over 8 TFLOPS single-precision
  • NVIDIA GTX Titan X gaming graphics card (reference design, manufactured by Gigabyte)
    • NVIDIA Maxwell architecture
    • One GM200 GPU with 3072 SPs at 1000 MHz to 1177 MHz, 12 GB GDDR5 RAM on a 384-bit bus, 336 GB/s
    • Peak performance of over 6 TFLOPS single-precision
  • NVIDIA GTX TITAN gaming graphics card (Zotac GeForce GTX TITAN AMP! Edition)
    • NVIDIA Kepler architecture
    • One GK110 GPU with 2688 SPs at 902 MHz to 1045 MHz in single-precision mode, 6 GB GDDR5 RAM on a 384-bit bus, 317.2 GB/s
    • Peak performance of over 5 TFLOPS single-precision, from 1.3 to 1.5 TFLOPS double-precision in the corresponding mode
    • This is a budget replacement for the TESLA K20X GPU card intended for workstations and servers (which would cost at least 3 times more and would run considerably slower at single-precision and integer code, but would offer ECC RAM)
  • AMD Radeon RX Vega 64 gaming graphics card (reference design, manufactured by MSI with slight overclocking)
    • AMD GCN 5th gen architecture
    • One Vega10 XT GPU with 4096 SPs typically at 1401 or 1576 MHz, 8 GB HBM2 RAM on a “2048-bit” bus, 483.8 GB/s
    • Peak performance of over 10 TFLOPS single-precision

Total peak performance is over 31 TFLOPS single-precision: roughly 2 TFLOPS from the Xeon Phi, 8 from the GTX 1080, 6 from the Titan X, 5 from the GTX TITAN, and 10 from the Vega 64, plus well under 1 TFLOPS from the two CPUs.
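
As a rough illustration of where the ~85 GB/s measured memory bandwidth figure comes from, here is a minimal STREAM-style triad sketch in C; it is not the exact benchmark used, and the build line is merely typical:

/* triad.c - a rough STREAM-style memory bandwidth check, illustrative only.
   Build (typical): gcc -O2 -fopenmp -std=c99 triad.c -o triad */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (64 * 1024 * 1024)    /* 64M doubles = 512 MB per array */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    long i;

    if (!a || !b || !c)
        return 1;

    /* Initialize in parallel so pages land near the threads that use them */
    #pragma omp parallel for
    for (i = 0; i < N; i++) {
        b[i] = 1.0;
        c[i] = 2.0;
    }

    double t = omp_get_wtime();
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];    /* 2 reads + 1 write per element */
    t = omp_get_wtime() - t;

    /* Count the three arrays' worth of traffic, as STREAM does */
    printf("%.1f GB/s\n", 3.0 * N * sizeof(double) / t / 1e9);
    return 0;
}

Run it as, e.g., OMP_NUM_THREADS=32 ./triad to spread the loop across all 32 logical CPUs.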

Pictures

Here's what the server looks like:

2019 upgrade (added Vega 64 and GTX 1080, removed HD 7990 and HD 6770 Green Edition):

super2019-uncovered1.jpg super2019-uncovered2.jpg

2015 upgrade (added GTX Titan X, as well as HD 6770 Green Edition into the short slot):

super2015-uncovered1.jpg super2015-uncovered2.jpg super2015-covered.jpg

2013:

Software

The operating system is Scientific Linux 6.10 (with several devtoolsets installed, providing a variety of newer GCC versions), since this is a common free option for running Intel MPSS, which is needed to access the Xeon Phi card (the card, in turn, runs its own copy of Linux that comes with Intel MPSS). We also have CUDA 10.1 with its driver version 418.39, and AMD AMDGPU-PRO 18.50.

Here's what this looks like via OpenCL:

[solar@super ~]$ clinfo | fgrep Name: | tail -n +4
  Platform Name:				 AMD Accelerated Parallel Processing
  Name:						 gfx900
  Platform Name:				 Intel(R) OpenCL
  Name:						        Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
  Name:						 Intel(R) Many Integrated Core Acceleration Card
  Platform Name:				 NVIDIA CUDA
  Name:						 GeForce GTX 1080
  Name:						 GeForce GTX TITAN X
  Name:						 GeForce GTX TITAN
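
clinfo is the convenient tool for this, but the same list can be produced from one's own code; here is a minimal OpenCL host-side sketch in C (the file name and build line are illustrative, and error checking is omitted for brevity):

/* ocl_list.c - a minimal sketch of enumerating OpenCL platforms and devices.
   Build (typical): gcc ocl_list.c -o ocl_list -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_device_id devices[8];
    cl_uint nplat, ndev, p, d;
    char name[256];

    clGetPlatformIDs(8, platforms, &nplat);
    for (p = 0; p < nplat; p++) {
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);
        printf("Platform: %s\n", name);

        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &ndev);
        for (d = 0; d < ndev; d++) {
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            printf("  Device: %s\n", name);
        }
    }
    return 0;
}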

Who is eligible

Remote access will be provided, free of charge, to Open Source software developers. Access is provided for getting acquainted with the technologies and/or for Open Source software development. In the organizers' sole discretion, access may be denied or restricted (in particular, if the system is used for other than its intended purpose and/or if one's use of the system substantially inconveniences other users). The information contained in this announcement does not formally constitute an offer to provide any service to the general public.

How to apply

To apply for an HPC Village account, please e-mail hpc-village-admin at openwall.com with the following information:

  • Names of and URLs to Open Source project(s) that you represent, and a way for us to confirm that you're in fact involved with those projects
  • Desired login name (must be non-misleading to other users)
  • Your SSH public key, preferably from a keypair generated according to our conventions

We intend to reply to all HPC Village account request e-mails.

Credits

The HPC Village project is provided by Openwall (idea, most computer hardware parts, software configuration, system administration) and DataForce (assembly and hosting of servers, Internet connectivity). The NVIDIA GTX 1080 and AMD Vega 64 purchases were sponsored by a grant from the Zcash Foundation. The NVIDIA GTX Titan X purchase was sponsored by Sagitta HPC, a subsidiary of Stricture Group LLC. The AMD Radeon HD 7990 (available in this machine until January 2019, when it was replaced with the Vega 64) was team john-users' prize in Hash Runner 2013, organized by Positive Technologies.

Please note that Openwall is not affiliated with any of the related services listed below.

Free access to multi-CPU servers (including some non-x86) for Open Source development:

Write, compile, and run code in most programming languages on remote systems, and use Sage, R, Octave, Python, Cython, GAP, Macaulay2, Singular, and much more, via a free or paid service (with support from the University of Washington, the National Science Foundation, and Google):

  • CoCalc (formerly SageMathCloud)

Time-limited free access to an HPC machine, with intent to promote this vendor's computer hardware sales:

Free access for academic researchers worldwide to a 384-node cluster with Intel Xeon CPUs and Altera Stratix V FPGAs (two CPUs and one FPGA per node), running Windows Server 2012:

OSUOSL hosting and OSS services, as well as access to ARM and POWER machines:

GRID5000 - A large-scale testbed for distributed computing, used by CS researchers in HPC, Clouds, Big Data, Networking, AI:
