---+ Available Resources

%TOC%

---++ SPRACE Cluster

The SPRACE cluster is a Tier-2 (T2) site of the CMS collaboration in the [[http://lcg.web.cern.ch/lcg/][Worldwide LHC Computing Grid (WLCG)]] infrastructure.

---+++ General Summary

---++++ Production Time History

| *Date* | *# Nodes* | *# Cores* | *HEP-SPEC06* | *TFlops (theoretical)* | *Storage (TB Raw)* | *Storage (!TiB)* |
| Mar/2004 | 22 | 44 | 113 | 0.233 | 4 | 3.4 |
| Jun/2005 | 54 | 108 | 485 | 1.001 | 12.4 | 10.6 |
| Sep/2006 | 86 | 172 | 1,475 | 2.025 | 12.4 | 10.6 |
| Aug/2010 | 80 | 320 | 3,255 | 3.02 | 144 | 102.6 |
| Mar/2012 | 80 | 320 | 3,255 | 3.02 | 504 | 378.6 |
| Jun/2012 | 144 | 1088 | 13,698 | 10.085 | 1,044 | 787.0 |
| *Aug/2019* | *128* | *2688* | *29,700* | *25.353* | *3,364* | *3,060.9* |
| Apr/2023 | 144 | TBD | TBD | TBD | TBD | TBD |

---++++ Upgrade Time History

| *Date* | *Financial Support* | *# Nodes* | *# Cores* | *HEP-SPEC06* | *TFlops (theoretical)* | *Storage (TB Raw)* | *Storage (!TiB)* |
| Feb/2004 | FAPESP phase I | 22 | 44 | 113 | 0.233 | 4 | 3.4 |
| Jun/2005 | FAPESP phase II | 32 | 64 | 372 | 0.768 | 8.4 | 7.2 |
| Sep/2006 | FAPESP phase III | 32 | 128 | 990 | 1.024 | 0 | 0 |
| Aug/2010 | FAPESP phase IV | 16 | 128 | 1,893 | 1.228 | 144 | 102.6 |
| Mar/2012 | CNPq Universal | 0 | 0 | 0 | 0 | 360 | 327.0 |
| Jun/2012 | FAPESP phase V | 64 | 768 | 10,443 | 7.065 | 540 | 491 |
| Oct/2015 | FAPESP (extra fund) | 0 | 0 | 0 | 0 | 180 | 163.7 |
| Mar/2016 | FAPESP phase VI | 16 | 256 | 5,060 | 4.9 | 0 | 0 |
| May/2016 | Huawei Partnership | 0 | 0 | 0 | 0 | 384 | 349.4 |
| May/2017 | FAPESP (extra fund) | 0 | 0 | 0 | 0 | 252 | 229.3 |
| Oct/2017 | FAPESP phase VII | 32 | 1280 | 12,497 | 12.16 | 1008 | 917 |
| Aug/2019 | RENAFAE fund | 0 | 0 | 0 | 0 | 640 | 582.3 |
| Apr/2023 | FAPESP phase VIII | 16 | 1024 | 18,567 | - | 2000 | 1818 |

---+++ SPRACE Current Status

The SPRACE current status, after the FAPESP thematic phase VII upgrade and the RENAFAE-funded storage upgrade, both installed by the end of August 2019, is:

   * 29.7 kHS06 of processing resources
   * 2.3 PB of disk space
      * 1.9 PB dedicated to CMS production
      * 0.4 PB of group space

---++++ Worker Nodes Summary

SPRACE has 144 worker nodes corresponding to 2688 computing cores. These servers were purchased at different times (phases) as the project, started in 2004, evolved. The phase I equipment was decommissioned in August 2010 because its 32-bit architecture was no longer useful for CMS production. The equipment acquired in phase V was installed at the end of May 2012, adding 64 worker nodes (768 cores) and significantly enhancing the computing power and storage capacity of the cluster. Phase II and III equipment was decommissioned by the end of 2015.

| *Phase* | *Vendor* | *Model* | *Processor* | *Cores per node* | *RAM* | *# of nodes* | *Total HS06* | *TFlops (theoretical)* |
| IV | SGI | Altix XE 340 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 16 | 1,893 | 1.228 |
| V | SGI | Steel Head XE C2112 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 48 GB | 64 | 10,443 | 7.065 |
| VI | SGI | Rackable C2112-4GP3-R-G | 2 x Intel Xeon E5-2630 v3 @ 2.3 GHz | 16 | 64 GB | 16 | 5,060 | 4.9 |
| VII | SGI | SGI C2112-4GP2 | 2 x Intel Xeon E5-2630 v4 @ 2.2 GHz | 20 | 128 GB | 32 | 12,497 | 12.16 |
| VIII | Supermicro | AS-2124BR-HTR | 2 x AMD EPYC 7343 @ 3.2 GHz | 32 | 256 GB | 16 | 18,567 | - |
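The *TFlops (theoretical)* figures in the tables above are peak values obtained by multiplying node count, cores per node, clock rate and floating-point operations per clock cycle. A minimal sketch of that arithmetic in Python (the FLOPs-per-cycle value depends on the microarchitecture and is an assumption here, not a documented figure):

<verbatim>
# Minimal sketch of how a theoretical peak TFlops figure is derived.
# flops_per_cycle is an assumption (double-precision FLOPs per core per
# clock cycle for the given microarchitecture), not a measured value.

def peak_tflops(nodes, cores_per_node, clock_ghz, flops_per_cycle):
    """Theoretical peak in TFlops: nodes x cores x clock x FLOPs/cycle."""
    return nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0

# Phase IV: 16 nodes, 2 x quad-core E5620 @ 2.4 GHz, assuming 4 FLOPs/cycle (SSE)
print(peak_tflops(16, 8, 2.4, 4))  # -> 1.2288, matching the 1.228 TFlops above
</verbatim>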
---++++ Storage Summary

SPRACE has a *dCache*-based storage system with *2.3 !PiB of effective disk space*, distributed over five Supermicro servers (360 TB raw), four SGI Summit + Infinite 2245 servers (540 TB raw), one SGI Modular !InfiniteStorage (432 TB raw), one Huawei !OceanStor (1024 TB raw), two Dell !PowerEdge R730 + MD1280 servers (1008 TB raw), and one Supermicro !SuperServer SSG-640SP-E1 + JBOD (2000 TB raw). The phase IV storage was decommissioned in January 2019.

| *Phase* | *Vendor* | *Model* | *Processor* | *Cores* | *RAM* | *# of servers* | *Storage (TB Raw)* | *Storage (!TiB)* |
| Univ | Supermicro | MBD-X8DTI-F | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 5 | 360 | 327 |
| V | SGI | Summit + Infinite 2245 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 64 GB | 4 | 540 | 491 |
| VI | SGI | Modular !InfiniteStorage | 2 x Intel Xeon E5-2630 v2 @ 2.60 GHz | 12 | 64 GB | 1 | 432 | 393 |
| VI | Huawei | !OceanStor | 2 x Intel Xeon E5-2695 v3 @ 2.30 GHz | 28 | 64 GB | 1 | 1024 | 931 |
| VII | Dell | !PowerEdge R730 + MD1280 | 2 x Intel Xeon E5-2620 v4 @ 2.10 GHz | 16 | 128 GB | 2 | 1008 | 917 |
| VIII | Supermicro | Storage !SuperServer SSG-640SP-E1 + JBOD CSE-947S | 2 x Intel Xeon Gold 6326 @ 2.90 GHz | 16 | 256 GB | 1 | 2000 | 1818 |
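The two capacity columns use different units: vendors quote raw capacity in decimal terabytes (10^12 bytes), while the *Storage (!TiB)* column converts it to binary tebibytes (2^40 bytes). A small sketch of the conversion:

<verbatim>
# Minimal sketch: convert vendor terabytes (10^12 bytes) to tebibytes
# (2^40 bytes), the unit used in the storage tables above.

def tb_to_tib(tb_raw):
    """Decimal terabytes -> binary tebibytes."""
    return tb_raw * 1e12 / 2**40

print(round(tb_to_tib(1024)))  # -> 931, matching the OceanStor row
print(round(tb_to_tib(540)))   # -> 491, matching the phase V row
</verbatim>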
---++++ Head Nodes Summary

SPRACE has 5 head nodes: one for local user access (access), one for the Open Science Grid compute element middleware (osg-ce), one for the Open Science Grid storage element middleware (osg-se), and two for general tasks (spserv01 and spserv02).

| *Phase* | *Service* | *Vendor* | *Model* | *Processor* | *Cores* | *RAM* | *Disk Space* |
| V | access | SGI | Summit C2108 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 64 GB | 6.0 TB (RAID-5, 4 x 2 TB; 7200 RPM) |
| V | osg-ce | SGI | Summit C2108 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 64 GB | 1.5 TB (RAID-5, 4 x 500 GB; 7200 RPM) |
| V | osg-se | SGI | Summit C2108 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 64 GB | 1.5 TB (RAID-5, 4 x 500 GB; 7200 RPM) |
| IV | spserv01 | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 1.0 TB (RAID-5, 3 x 500 GB; 7200 RPM) |
| IV | spserv02 | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 3.0 TB (RAID-5, 4 x 1 TB; 7200 RPM) |

---++++ Network Equipment Summary

SPRACE is connected to the Internet by a 10 + 10 Gbps link provided by [[http://www.ansp.br][ANSP]]. All head nodes and storage servers have 10 Gbps connections; all worker nodes have a 1 Gbps connection to top-of-rack switches with 10 Gbps uplinks to the core switch.

| *Phase* | *Service* | *Vendor* | *Model* | *# of Devices* | *Description* |
| II | top of rack | Dlink | DGS 1224T | 1 | 24 x 1 Gbps ports |
| II | top of rack | 3Com | 2824 | 1 | 24 x 1 Gbps ports |
| III | top of rack | 3Com | 3834 | 2 | 24 x 1 Gbps ports |
| IV | top of rack | SMC | !TigerStack II 8848M | 1 | 48 x 1 Gbps + 2 x 10 Gbps ports |
| Finep | top of rack | Cisco | Nexus 5010 | 1 | 20 x 10 Gbps ports |
| IV | core | Cisco | Catalyst 6506E | 1 | |
| V | top of rack | LG-Ericsson | 4550G | 2 | 48 x 1 Gbps + 1 x 10 Gbps ports |
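With 1 Gbps per worker node and 10 Gbps uplinks, the worst-case uplink oversubscription of a top-of-rack switch follows directly from the port counts. A small illustrative sketch, assuming (for the example only, not documented cabling) that all 48 x 1 Gbps ports of the SMC !TigerStack face worker nodes and both 10 Gbps ports serve as uplinks:

<verbatim>
# Minimal sketch: worst-case uplink oversubscription of a top-of-rack switch.
# The port usage is an assumption for illustration, not the documented cabling.

def oversubscription(node_ports, node_gbps, uplink_ports, uplink_gbps):
    """Ratio of aggregate node-facing bandwidth to uplink bandwidth."""
    return (node_ports * node_gbps) / (uplink_ports * uplink_gbps)

# SMC TigerStack II 8848M: 48 x 1 Gbps down, 2 x 10 Gbps up
print(oversubscription(48, 1, 2, 10))  # -> 2.4, i.e. 2.4:1 worst case
</verbatim>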
---+++ Decommissioned Hardware

The machines bought in phase I of the project were decommissioned because they were based on a 32-bit architecture. The machines bought in phase II were also decommissioned because their hardware warranty had expired.

*Processing hardware*

| *Phase* | *Vendor* | *Model* | *Processor* | *Cores* | *RAM* | *# of nodes* |
| I | Itautec | Infoserver 1252 | 2 x Intel Xeon DP 2.4 GHz | 2 | 1 GB | 24 |
| II | Itautec | Infoserver LX210 | 2 x Intel Xeon EM64T @ 3.0 GHz | 2 | 2 GB | 32 |
| III | Itautec | Infoserver LX211 | 2 x Intel Xeon Dual-Core 5130 @ 2.0 GHz | 4 | 4 GB | 32 |

*Storage hardware*

| *Phase* | *Vendor* | *Model* | *Processor* | *Cores* | *RAM* | *# of nodes* |
| I | Dell | !PowerEdge 2650 | 2 x Intel Xeon 2.4 GHz | 2 | 2 GB | 1 |
| II | Dell | !PowerEdge 1850 | 2 x Intel Xeon 3.0 GHz | 2 | 2 GB | 1 |
| IV | Sun | !SunFire X4540 | 2 x AMD Opteron Quad-Core 2384 @ 2.7 GHz | 8 | 64 GB | 3 |

| *Phase* | *Vendor* | *Model* | *Raw Disk Space* | *# of units* |
| I | Dell | !PowerVault 220S | 2 TB | 2 |
| II | Dell | !PowerVault 220S | 4 TB | 2 |
| IV | Sun | !SunFire X4540 | 48 TB | 3 |

*Head Nodes hardware*

| *Phase* | *Service* | *Vendor* | *Model* | *Processor* | *Cores* | *RAM* | *Disk Space* |
| I | old admin | Itautec | Infoserver 1251 | 2 x Intel Xeon DP 2.4 GHz | 2 | 1 GB | 288 GB (4 x 72 GB; SCSI 10K RPM) |
| IV | old access | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 3.0 TB (RAID-5, 4 x 1 TB; 7200 RPM) |
| IV | old osg-ce | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 1.0 TB (RAID-5, 3 x 500 GB; 7200 RPM) |
| IV | old osg-se | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 1.0 TB (RAID-5, 3 x 500 GB; 7200 RPM) |

*Network Equipment*

| *Phase* | *Service* | *Vendor* | *Model* | *# of Devices* | *Description* |
| I | top of rack | Dlink | DGS 1024T | 2 | 24 x 1 Gbps ports |
| Donation | core | Cisco | 3750 | 1 | 20 x 1 Gbps ports |

---++ WLCG pledges

---+++ U.S. CMS Facilities

   * [[https://twiki.cern.ch/twiki/bin/view/CMSPublic/USCMSTier2Deployment#U_S_CMS_Tier_2_Facilities_Deploy][U.S. CMS Tier-2 Facilities Deployment Status]]

---+++ CMS T2 Associations and Allocations

   * [[https://twiki.cern.ch/twiki/bin/view/CMS/CMST2AssociationsAllocations][PAG/POG Allocations at Tier-2 Sites]]

---+++ WLCG pledges for 2013

According to the [[https://espace.cern.ch/WLCG-document-repository/Pledges/2013-2014/WLCGResources-2013-2014_19OCT2012.pdf][WLCG pledges for 2013]], a nominal T2 site provides:

   * 10.6 kHS06 of processing resources
   * 787 TB of disk space

---+++ WLCG pledges for 2012

According to the [[https://espace.cern.ch/WLCG-document-repository/Pledges/2012-2013/WLCGResources-2012-2013_12OCT2011_T2.pdf][WLCG pledges for 2012]], a nominal T2 site provides:

   * 9.5 kHS06 of processing resources
   * 787 TB of disk space

According to the [[https://cms-docdb.cern.ch/cgi-bin/DocDB/RetrieveFile?docid=5935&version=5&filename=T1_T2_SpaceCredit_2012%20V3.pdf][Service Credit in 2012]], a nominal T2 site provides:

   * 10.9 kHS06 of processing resources
   * 810 TB of disk space
   * 30 TB of stage-out space
   * 250 TB of group space (125 TB per group)
   * 200 TB of central space
   * 170 TB of local space
   * 160 TB of user space (~40 users of 4 TB each)

<!--
The SPRACE status in March 2012, after the CNPq Universal upgrade, was:

   * 3.25 kHS06 of processing resources
   * 378 TB of disk space
   * 20 TB of stage-out space
   * 100 TB of group space (125 TB per group)
   * 100 TB of central space
   * 80 TB of local space
   * 78 TB of user space (~20 users of 4 TB each)

| *Phase* | *Processor* | *Number of* | *Number of* | *SI2K* | *SI2K* | *SI2006* | *SI2006* |
| *#* | *Specification* | *Nodes (WN)* | *Cores (WN)* | *per core* | *Total* | *per core* | *Total* |
| I | Intel Xeon DP 2.4 GHz | 24 (22) | 50 (44) | 900 | 45,000 | 5.3 | 265 |
| II | Intel Xeon EM64T 3.0 GHz | 33 (32) | 66 (64) | 1,350 | 89,100 | 7.9 | 521 |
| III | Intel Xeon Dual-Core 2.0 GHz | 32 (29) | 128 (116) | 2,100 | 268,800 | 12.3 | 1,574 |
| *Total of WN* | ** | *83* | *224* | ** | *369,600* | ** | *2,165* |
| *Total* | ** | *89* | *244* | ** | *402,900* | ** | *2,360* |

   * [[http://www.spec.org/cpu/results/cint2000.html][SPECInt 2000]]
      * [[http://www.spec.org/cpu/results/res2003q2/cpu2000-20030407-02040.html][Phase I]]
      * [[http://www.spec.org/cpu/results/res2005q2/cpu2000-20050610-04199.html][Phase II]]
      * [[http://www.spec.org/cpu/results/res2006q3/cpu2000-20060626-06253.html][Phase III]]
   * [[http://www.spec.org/cpu2006/results/cpu2006.html][SPECInt 2006]]
      * Phase I and II: conversion factor [[http://www.spec.org/cpu/results/res2007q1/cpu2000-20070119-08332.html][2000]]/[[http://www.spec.org/cpu2006/results/res2007q1/cpu2006-20070119-00221.html][2006]] = 170
      * [[http://www.spec.org/cpu2006/results/res2007q1/cpu2006-20070119-00221.html][Phase III]]
   * The unit of computing power kSI2K corresponds to one Intel Xeon 2.8 GHz processor: http://www1.jinr.ru/Pepan/2005-v36/v-36-1/pdf/v-36-1_02.pdf
   * Gigaflops: Xeon processors execute 2 floating-point operations per clock cycle, so a 2.4 GHz Xeon can execute up to 4.8 billion floating-point operations per second: 2 operations/cycle x 2.4 x 10^9 cycles/s = 4.8 GFlops.
   * Compare with the US CMS Tier-2 site capacity: http://t2.unl.edu/uscms/current-us-cms-tier-2-site-capacity-1-24-07/
   * See also the LHC Computing Grid Tier-2 Centres: http://lcg.web.cern.ch/lcg/C-RRB/Tier-2/

CMS Tier-1

   * CMS Technical Design Report CERN-LHCC-2005-023 (CMS TDR), 20/June/2005

| *Tier-1* | *2007* | *2008* | *2009* | *2010* |
| CPU (MSI2K) | 1.3 | 2.5 | 3.5 | 6.8 |
| Disk (PB) | 0.3 | 1.2 | 1.7 | 2.6 |
| Tape (PB) | 0.6 | 2.8 | 4.9 | 7.0 |
| WAN (Gbps) | 3.6 | 7.2 | 10.7 | 16.1 |

   * *Computing*
      * WLCG: http://lcg.web.cern.ch/LCG/
         * Management: http://lcg.web.cern.ch/LCG/proj_structure.htm
         * Resources: http://lcg.web.cern.ch/LCG/resources.htm *(see tables)*
      * CMS Computing: http://cms.cern.ch/iCMS/jsp/page.jsp?mode=cms&action=url&urlkey=CMS_COMPUTING
         * Organization: http://lucas-nice.web.cern.ch/lucas-nice/cpt/2008-02-07Offline-Computing-Organigram.pdf
-->