Thursday, March 29, 2012

Rocks+ 6, MPI, OFED and SGE test drive

StackIQ released Rocks+ 6, and it contains some very interesting rolls.
Rocks+ is free for up to 16 nodes.

rocks+ 6.0.1 Complete Stack

Click the link below to register and download Rocks+ as a complete bootable Big Infrastructure stack (free for up to 16 nodes). The stack includes the following modules (“Rolls”) for Rocks:

Rocks+ 6.0.1 ISO

  • Rocks Base
  • Rocks Core
  • Cassandra (beta)
  • CentOS 6.2
  • CUDA
  • Ganglia
  • Grid Engine
  • Hadoop
  • HPC
  • Kernel
  • MongoDB (beta)
  • OFED
  • Web Server

6.0.1 Modules “a la carte”:

-All “Rocks+” Rolls require the “Rocks+ Core Roll” to be installed
-Rocks+ requires a license file for systems larger than 16 nodes

After you register, you will receive an email with instructions on how to download the ISO.

I did a test drive with SGE, OFED, and the Intel compilers.
Rocks+ also includes a version of environment modules.
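
For reference, checking the environment-modules setup on the frontend looks roughly like this (a minimal sketch; the module names intel/2011_sp1 and openmpi_ib follow the naming used later in this post and may differ on your install):

    # list the modulefiles the frontend provides
    module avail

    # load the Intel compiler and the IB-flavored OpenMPI before compiling anything
    module load intel/2011_sp1
    module load openmpi_ib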

Observations:
  • Rocks+ 6 installation is almost the same as installing Rocks
  • it includes CentOS 6.2
  • it includes OFED and CUDA rolls
  • it includes a Hadoop roll
  • free for up to 16 nodes
  • some open-source rolls are hosted on GitHub:
    • roll-base
    • roll-web-server
    • roll-sge
    • roll-os
    • roll-hpc
    • roll-ganglia
    • roll-kernel
    • these rolls seem to be forks of the open-source Rocks rolls (?)
    • the MPI stack only has OpenMPI; it does not include MPICH2 or MPICH1
It turns out the MPI stack still needs to be compiled against the Intel compilers and IB (I could be wrong).
I downloaded the mpi roll source from the Triton GitHub, i.e., grabbed a snapshot (see the sketch below).
The Triton repositories also contain ofed, intel, envmodules, hadoop, moab, myrinet_mx, myri10gbe, and other rolls.
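
Grabbing the snapshot looked roughly like this (a sketch; the GitHub organization name below is a placeholder, so check the actual Triton GitHub page for the real URL):

    # clone the mpi roll source; the URL is a placeholder, adjust it to the
    # actual Triton GitHub location
    git clone git://github.com/<triton-org>/mpi.git
    cd mpi
    # alternatively, download a tarball snapshot of the default branch from the
    # GitHub web page instead of cloning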

The README contains very important info:
  • This roll source supports building specified flavors of MPI with different compilers and for different network fabrics.  
  • By default, it builds mpich, mpich2, mvapich2, and openmpi using the gnu compilers for ethernet.  
  • To build for a different configuration, use the ROLLMPI, ROLLCOMPILER and ROLLNETWORK make variables, e.g.,
  • make ROLLMPI='mpich2 openmpi' ROLLCOMPILER=intel ROLLNETWORK=mx
  • The build process currently supports one or more of the values "intel" and  "gnu" for the ROLLCOMPILER variable, defaulting to "gnu".  
  • It uses any ROLLNETWORK variable value(s) to load appropriate openmpi modules, assuming that there are modules named openmpi_$(ROLLNETWORK) available (e.g., openmpi_ib, openmpi_mx, etc.).
  • set up the env module intel/2011_sp1
  • The ROLLMPI, ROLLCOMPILER, and ROLLNETWORK variable values are incorporated into the names of the produced roll and rpms, e.g., 
  • make ROLLMPI=openmpi ROLLCOMPILER=intel ROLLNETWORK=ib produces a roll with a name that begins "mpi_intel_ib_openmpi"; it contains and installs similarly-named rpms 
  • e.g. mpi_intel_ib_openmpi-6.0.1-0.x86_64.disk1.iso
  • now on the frontend (need to set up and load the module intel/2011_sp1):
    • rocks add roll <path>/mpi_intel_ib_openmpi-6.0.1-0.x86_64.disk1.iso
    • rocks enable roll mpi_intel_ib_openmpi
    • cd /export/rocks/install
    • rocks create distro
    • rocks run roll mpi_intel_ib_openmpi | bash
    • (this installs the intel_ib_openmpi ... rpm)
    • init 6
  • reinstall all compute nodes (a sketch of the full build-and-install sequence follows below)
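
Putting the pieces together, the whole sequence looked roughly like this (a sketch rather than verbatim shell history; the roll and ISO names match the ROLLMPI=openmpi ROLLCOMPILER=intel ROLLNETWORK=ib build above, and the compute-node reinstall assumes the standard Rocks host commands):

    # build the roll on the frontend with the Intel compiler module loaded
    module load intel/2011_sp1
    cd mpi                    # the roll source checked out earlier
    make ROLLMPI=openmpi ROLLCOMPILER=intel ROLLNETWORK=ib
    # this produces mpi_intel_ib_openmpi-6.0.1-0.x86_64.disk1.iso

    # add it to the distribution and install it on the frontend (steps above),
    # then flag the compute nodes for reinstall and reboot them
    rocks set host boot compute action=install
    rocks run host compute "reboot"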
Observations:
  • mpich(1) is broken and cannot be built, so one needs to use ROLLMPI="mpich2 mvapich2 openmpi" or just ROLLMPI=openmpi
  • the openmpi Makefile comes with the option --with-tm=/opt/torque, which needed to be replaced by --with-sge=/opt/gridengine in my case
  • one needs to set up env modules for gnu and intel so one can build for eth or ib with either gnu or intel (see the sketch after this list)
  • not sure there is any difference between the Rocks+ and Rocks build processes
  • when one builds the various rolls' ISOs on the frontend, the build also installs the RPMs onto the frontend
  • one should try to build these ISOs in a development appliance instead
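
The flavor combinations I ended up trying looked roughly like this, together with the Grid Engine tweak (a sketch; the openmpi Makefile path inside the roll source may differ, and whether a given flavor builds depends on which compiler modules you have set up):

    # swap the Torque option for SGE in the openmpi Makefile before building
    # (adjust the path to wherever the Makefile lives in the roll source)
    sed -i 's|--with-tm=/opt/torque|--with-sge=/opt/gridengine|' openmpi/Makefile

    # default: ethernet build with the GNU compilers
    make ROLLMPI=openmpi

    # InfiniBand build with the Intel compilers, after loading the matching module
    module load intel/2011_sp1
    make ROLLMPI='mpich2 openmpi' ROLLCOMPILER=intel ROLLNETWORK=ib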





1 comment:

  1. Hello, 老曹. I have seen many of your replies on Google Groups, and I would like to ask whether a Rocks HPC cluster can be set up with dual management (head) nodes, so as to avoid a single point of failure on that node. Your advice would be much appreciated. Thank you!
