Thursday, January 19, 2012

rocks-openmpi and gridengine in 5.4.3

It is well known that the rocks-openmpi package that ships with the hpc roll in Rocks Clusters (currently 5.4.3) was not compiled with SGE support (--with-sge).
e.g.
  • /opt/openmpi/bin/ompi_info | grep gridengine
  • returns nothing
(This bug was fixed by the Rocks developers; the new RPM is here.)



  • As an experiment, I downloaded the latest Rocks source under /export/rocks/install
    • cd rocks-5.4.3/src/roll/hpc
    • make roll
    • this generates hpc-5.4.3-0.x86_64.disk1.iso
    • mount -o loop `pwd`/hpc-5.4.3-0.x86_64.disk1.iso /mnt
    • cd /mnt/hpc/5.4.3/x86_64/RedHat/RPMS
    • rpm -e rocks-openmpi
    • rpm -ivh rocks-openmpi-1.4.3-1.x86_64.rpm
    • /opt/openmpi/bin/ompi_info | grep gridengine
      • MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.4.3)
    • for the new distro
      • cd /export/rocks/install
      • rocks disable roll hpc
      • rocks remove roll hpc
      • rocks add roll /export/rocks/install/rocks-5.4.3/src/roll/hpc/hpc-5.4.3-0.x86_64.disk1.iso
      • rocks enable roll hpc
      • rocks create distro
         
    • re-install compute-0-0 (a quick SGE integration test is sketched below)
      • insert-ethers --remove compute-0-0
      • insert-ethers
      • pick "Compute"
      • PXE boot compute-0-0
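Once the node is back up, tight integration can be confirmed end to end by submitting a small job through SGE and letting mpirun pull its host list from the gridengine ras component. A minimal sketch (the orte parallel environment name comes from the Rocks SGE roll; adjust it to whatever your site defines):

  #!/bin/bash
  # test-sge-ompi.sh -- run hostname across the slots SGE hands us;
  # no -machinefile is needed once gridengine support is compiled in
  #$ -cwd
  #$ -pe orte 8
  /opt/openmpi/bin/mpirun -np $NSLOTS hostname

Submit it with qsub test-sge-ompi.sh; the output should list one hostname per allocated slot, spread across the compute nodes.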

Now there is Open MPI 1.4.4 to work with.

In fact, CentOS ships openmpi-1.4-gcc-x86_64, which is compiled with SGE support.
The binaries are located at /usr/lib64/openmpi/1.4-gcc/bin.
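The same quick check applies to the CentOS build (path per the stock openmpi-1.4-gcc package):

  /usr/lib64/openmpi/1.4-gcc/bin/ompi_info | grep gridengine
  # a "MCA ras: gridengine (...)" line here confirms SGE support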

For openmpi-1.4.4

  • download openmpi-1.4.4.tar.bz2
  • edit version.mk and change 1.4.3 to 1.4.4
  • under the hpc directory, run make roll
  • follow the steps above to replace the old hpc roll with the new roll, then re-install the compute nodes
  • on the frontend, just install the new rocks-openmpi-1.4.4-1.x86_64.rpm
  • download it from this link
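The version bump itself is small. A sketch of the rebuild, assuming the roll keeps the openmpi tarball and its version.mk under src/openmpi (the usual Rocks roll layout):

  cd /export/rocks/install/rocks-5.4.3/src/roll/hpc
  # drop the new tarball next to the old one
  cp /path/to/openmpi-1.4.4.tar.bz2 src/openmpi/
  # point the roll build at the new version
  sed -i 's/1\.4\.3/1.4.4/' src/openmpi/version.mk
  make roll   # regenerates hpc-5.4.3-0.x86_64.disk1.iso, now carrying 1.4.4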







Wednesday, January 18, 2012

Oracle Big Data Appliance and Cloudera Manager

On Jan 10, 2012, Oracle announced the availability of the Oracle Big Data Appliance, which includes Cloudera's Distribution Including Apache Hadoop and Cloudera Manager:
  • Oracle Big Data Appliance is an engineered system of hardware and software that incorporates Cloudera’s Distribution Including Apache Hadoop with Cloudera Manager, Hadoop loader plus an open source distribution of R.
  • Together with Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud and Oracle Exalytics In-Memory Machine, Oracle Big Data Appliance with the Oracle Big Data Connectors software delivers everything customers require to acquire, organize and to analyze Big Data within the context of all their enterprise data.
  • Cloudera’s Distribution Including Apache Hadoop (CDH) is an enterprise-ready, 100% open source distribution of Apache Hadoop. Drawing from the innovations of a diverse open source community, CDH is the most reliable, secure and widely deployed commercial distribution of Apache Hadoop for the enterprise.
  • The integrated Oracle and Cloudera architecture has been fully tested and validated by Oracle, who will also collaborate with Cloudera to provide support for Oracle Big Data Appliance.
  • Together, Oracle and Cloudera deliver the full power of Hadoop on an easy-to-deploy, easy-to-use platform.





Saturday, January 14, 2012

Intel E5-2600 tech info

There are many posts about the Intel E5-2600, based on a presentation from the Intel Inside Data Center conference held in China, Nov 10-13, 2011:
http://intel-ipdc.leadinfo.com.cn/index.php?act=schedule
All of the presentations can be downloaded; the most interesting one is the presentation below.

[Slide images from the presentation; the surviving captions cover: the conference and presentation titles; the diversity of server workloads and Server IA mapping; the Xeon server product line; the Xeon E5-2600; performance up to 80% higher than the Xeon 5600; the Intel-VT extensions; and the AVX instructions.]

Friday, January 6, 2012

change mirrored rpool into rpool and dpool

There have been discussions about changing a mirrored rpool into two zpools, rpool and dpool.
The ZFS troubleshooting guide lists some methods based on snapshots.

This blog presents another, simpler method based on Live Upgrade (tested on s10u10).
Problem: two HDDs, c0t0d0 and c0t1d0, form a mirrored rpool; the user wants to split the rpool into two zpools, rpool and dpool.
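Before touching anything, it is worth confirming the starting layout (illustrative output; device names as in the problem statement):

  zpool status rpool
  #   pool: rpool
  #  state: ONLINE
  # config:
  #         NAME            STATE
  #         rpool           ONLINE
  #           mirror-0      ONLINE
  #             c0t0d0s0    ONLINE
  #             c0t1d0s0    ONLINE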
  1. zpool split rpool rpool2 c0t1d0s0
  2. zpool destroy rpool2
  3. partition c0t1d0 into two slices, c0t1d0s0 and c0t1d0s1
  4. zpool create -f rpool2 c0t1d0s0
  5. lucreate -c c0t0d0s0 -n c0t1d0s0 -p rpool2
  6. luactivate c0t1d0s0
  7. init 6
  8. you now have two BEs, c0t0d0s0 and c0t1d0s0, and the new root pool is rpool2
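After the reboot, confirm the new BE really is the active one before destroying anything:

  lustatus              # c0t1d0s0 should show as the active BE
  zfs list -r rpool2    # the root datasets should now live in rpool2
  df -h /               # / should be mounted from rpool2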
If everything is OK:
  1. ludelete -f c0t0d0s0
  2. zpool destroy -f rpool
  3. partition c0t0d0 into two slices, c0t0d0s0 and c0t0d0s1 (or copy the VTOC over from c0t1d0 with prtvtoc and fmthard)
  4. zpool create -f rpool c0t0d0s0
  5. lucreate -n c0t0d0s0 -p rpool
  6. luactivate c0t0d0s0
  7. init 6
If everything is OK:
  1. ludelete -f c0t1d0s0
  2. zpool destroy rpool2
  3. zpool attach -f rpool c0t0d0s0 c0t1d0s0
  4. zpool create -f dpool mirror c0t0d0s1 c0t1d0s1
Now you have two mirrored pools, rpool and dpool.
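Two closing checks are worth a sketch here (installgrub applies to x86; use installboot on SPARC):

  zpool status rpool dpool    # each pool should show a healthy two-way mirror
  # the newly attached half of the rpool mirror needs boot blocks to be bootable
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0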