Tianhe-1A:天河一号
- Top500 (2nd)
- NUDT YH MPP: 7168 nodes, each with 2x Xeon X5670 (6C, 2.93 GHz) and one NVIDIA M2050
- Total cores = 7168 x (2x6 + 14) = 186368 (agrees with the Top500 list)
- Rpeak = 7168 x (2x6x2.93x4 + 14x32x1.15) = 4701061 GFlops (Top500: 4701000); see the sketch below
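A minimal Python sketch of the core-count and Rpeak arithmetic above; the per-core factors (4 flops/cycle for the Westmere Xeons, 14 SMs x 32 CUDA cores at 1.15 GHz for the Fermi M2050) are taken from the formula in the bullet above:

```python
# Tianhe-1A Top500 entry, recomputed from the per-node formula above.
NODES = 7168

cpu_cores = 2 * 6   # two hex-core Xeon X5670 per node
gpu_cores = 14      # one M2050 counted as 14 cores (its 14 SMs)

print(NODES * (cpu_cores + gpu_cores))          # 186368 total cores

cpu_gflops = cpu_cores * 2.93 * 4               # 4 flops/cycle/core
gpu_gflops = 14 * 32 * 1.15                     # 448 CUDA cores at 1.15 GHz

print(round(NODES * (cpu_gflops + gpu_gflops))) # 4701061 vs 4701000 listed
```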
- Top100 (1st)
- NUDT YH MPP: 7168 nodes as above, plus 2048 FT-1000 (8C, 1 GHz; it was listed as hex-core, a misprint)
- Total cores = 7168 x (2x6 + 14) + 2048x8 = 202752 (agrees with the Top100 list)
- Rpeak = 4701000, meaning the FT-1000 is not included in Rpeak or Linpack
- It seems the FT-1000 was included in the Top100 entry to highlight China's own chip
- Running Linpack in a mixed-CPU environment is not easy, so it makes sense not to include the FT-1000 in the Linpack run; the core count is checked in the sketch below
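A one-step check of the Top100 core count, adding the FT-1000 cores to the Top500 total:

```python
# Top100 core count: the Top500 total plus 2048 eight-core FT-1000 chips.
top500_cores = 7168 * (2 * 6 + 14)   # 186368
ft1000_cores = 2048 * 8              # 16384

print(top500_cores + ft1000_cores)   # 202752, the Top100 total
# Rpeak stays at ~4701000 GFlops: the FT-1000s are excluded from both
# Rpeak and the Linpack run.
```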
- Top500 (4th)
- Dawning TC3600 Blade (Nebulae): 4640 blades, each with 2x Xeon X5650 (6C, 2.66 GHz) and one NVIDIA C2050
- Total cores = 4640 x (2x6 + 14) = 120640 (agrees with the Top500 list)
- Rpeak = 4640 x (2x6x2.66x4 + 14x32x1.15) = 2982963 GFlops (Top500: 2984300)
- Top100 (4th)
- Dawning TC3600 Blade: listed as 2560 blades, each with 2x Xeon X5650 (6C, 2.66 GHz) and one NVIDIA C2050, but to match the reported total core count the blade count must be 2016
- Total cores = 2016 x (2x6 + 14) = 52416 (agrees with the Top100 list)
- Rpeak = 2016 x (2x6x2.66x4 + 14x32x1.15) = 1296046 GFlops (Top100: 1296320)
- So between the Top100 and Top500 reports the Nebulae system more than doubled in size; see the comparison sketch below
- Yet on the Top500 site the Nebulae system did not change between 06/2010 and 11/2011
- One wonders why Nebulae needed to report a smaller system to the China HPC Top100
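For comparison, a short sketch of both reported Nebulae configurations using the same per-blade formula; it shows the Top500 entry is about 2.3x the Top100 entry in blades, cores, and Rpeak:

```python
# Nebulae as reported to Top500 (4640 blades) vs. China Top100 (2016
# blades), using the per-blade formula from the bullets above.
blade_gflops = 2 * 6 * 2.66 * 4 + 14 * 32 * 1.15   # ~642.88 per blade

for name, blades in [("Top500", 4640), ("Top100", 2016)]:
    cores = blades * (2 * 6 + 14)
    print(name, cores, round(blades * blade_gflops))
# Top500 120640 2982963
# Top100 52416 1296046

print(round(4640 / 2016, 2))   # 2.3: the Top500 entry is over twice as big
```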
Sunway BlueLight:神威蓝光
- Top500 (14th)
- Sunway BlueLight MPP: ShenWei SW1600, 16 cores, 975 MHz
- Total cores = 8575 x 16 = 137200 (agrees with the Top500 list)
- Rpeak = 8575 x (16 x 0.975 x 8) = 1070160 GFlops (agrees with the Top500 list)
- Top100 (2nd)
- Sunway BlueLight MPP: ShenWei SW1600, 16 cores, 975 MHz
- Total cores = 8575 x 16 = 137200 (agrees with the Top100 list)
- Rpeak = 8575 x (16 x 0.975 x 8) = 1070160 GFlops (agrees with the Top100 list); both entries are verified in the sketch below
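Since the Top500 and Top100 entries are identical, one sketch verifies both; the factor of 8 flops/cycle/core is whatever the Rpeak formula above implies for the SW1600:

```python
# Sunway BlueLight: 8575 SW1600 chips, 16 cores each at 0.975 GHz.
# The 8 flops/cycle/core factor is taken from the Rpeak formula above.
CHIPS = 8575

total_cores = CHIPS * 16
rpeak_gflops = CHIPS * 16 * 0.975 * 8

print(total_cores)          # 137200, matching both lists
print(round(rpeak_gflops))  # 1070160 GFlops, matching both lists
```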
- The SW1600 is a 16-core CPU that uses a 5x5 crossbar to connect four 4-core groups and an I/O subsystem with PCI-E, an on-chip GbE NIC, and a management channel
- Each 4-core group has its own DDR3-1333 memory channel/bank
- The PCI-E interface supports 8 lanes at 5 Gbps
- Each system board has two CPUs; it is not known how the two CPUs are connected
- A 1U system holds 4 system boards; I ASSUME each system board has its own QDR connection
- Sunway uses 324-port and 256-port IB switches
- The 324-port switch is a 3-layer Clos network built from 36-port chips in a 3x9 arrangement (see the port-count sketch after this list)
- The 256-port switch is a two-layer fat tree; I could not understand the network diagram
- Due to the high CPU density per 1U, the system uses water cooling
- Overall a very impressive system, at only 9 racks
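On the interconnect side, a small port-count sketch of how a 324-port director could be built from 36-port chips; the 18-leaf/9-spine folded-Clos layout is my assumption, chosen to be consistent with the "3x9" note above (27 chips total):

```python
# Port-count check for a 324-port IB director built from 36-port switch
# chips. ASSUMED layout (not confirmed by the source): a folded Clos with
# 18 leaf chips and 9 spine chips -- 27 chips, i.e. a "3x9" arrangement.

CHIP_PORTS = 36          # ports per switch chip
LEAVES, SPINES = 18, 9   # hypothetical chip counts

external = LEAVES * (CHIP_PORTS // 2)   # each leaf: 18 external ports
uplinks = LEAVES * (CHIP_PORTS // 2)    # each leaf: 18 uplinks to spines
spine_ports = SPINES * CHIP_PORTS       # 9 x 36 = 324 downlink ports

assert uplinks == spine_ports           # fully provisioned fat tree
print(external)                         # 324 external ports
```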
- Godson
- ShenWei