HPC Cluster Resources

As of Thu Nov 21 04:35:01 EST 2024


Pioneer Cluster

PARTITION  FEATURES            NODELIST             CPUS/node  GPUs/node  MEMORY/node  Nodes idle/total  CPUs idle/total  GPUs idle/total
batch*     dodeca96gb          compt[221-289]       24         -          95000        52/69             1326/1656        -
batch*     icosa192gb          compt[291-326]       40         -          191000       0/36              293/1440         -
batch*     icosa192gb,rds      compt[327-336]       40         -          191000       2/10              137/400          -
batch*     icosa256gb,ganglia  compt[337-350]       40         -          257000       0/14              56/560           -
batch*     icosa256gb          compt[351-399]       40         -          257000       7/49              495/1960         -
gpu        dgx                 dgxt001              256        8          1031331      0/1               246/256          -
gpu        gpup100             gput[031-044]        20         2          191000       2/14              97/280           19/28
gpu        gpu2080             gput[045-049,052]    20         2          128550       6/6               120/120          4/16
gpu        gpu2080,rds         gput[050-051]        20         2          128550       0/2               0/40             4/16
gpu        gpu4v100            gput[053-056]        24         4          191000       0/4               44/96            14/16
gpu        gpu2v100            gput[057-059,062]    24         2          183000+      0/4               65/96            11/12
gpu        gpu2v100,rds        gput[060-061]        24         2          191000       0/2               20/48            11/12
gpu        gpul40s             gput[063-071]        48         4          251000       0/9               62/432           20/36
--The (*) following a partition name denotes the default partition.
--A dash (-) marks a value not reported for that row.
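
The partition summaries above resemble Slurm sinfo output. Below is a minimal sketch, assuming the cluster runs Slurm and using the "batch" and "gpu" partition names shown above, of how a comparable snapshot could be pulled and parsed; the format string and field order are illustrative, not the exact command used to generate this page.

    # A minimal sketch, assuming the cluster runs Slurm.  Field specifiers:
    # %P=partition, %f=features, %N=nodelist, %c=CPUs per node,
    # %G=generic resources (GPUs), %m=memory per node (MB),
    # %D=node count, %C=CPUs as allocated/idle/other/total.
    import subprocess

    FORMAT = "%P|%f|%N|%c|%G|%m|%D|%C"

    out = subprocess.run(
        ["sinfo", "-p", "batch,gpu", "-o", FORMAT],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines()[1:]:        # first line is the header row
        part, feat, nodes, cpus, gres, mem, nnodes, cpu_states = line.split("|")
        alloc, idle, other, total = cpu_states.split("/")
        print(f"{part:10} {feat:20} {nodes:20} idle CPUs {idle}/{total}")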

SMP node details

NODELIST  MEMORY    FREE_MEM   CPUS(A/I/O/T)  CPU_LOAD
smpt06    765000    683940     0/24/0/24      0.00
smpt07    765000    474008     0/24/0/24      0.00
smpt08    1159000   718414     40/0/0/40      1.01
smpt09    1159000   1042426    40/0/0/40      18.79

Memory is listed in MB -- for conversions, remember 1GB = 1024MB
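
For example, the 765000 MB on smpt06/smpt07 is roughly 747 GB and the 1159000 MB on smpt08/smpt09 is roughly 1132 GB; a quick sketch of the conversion:

    # MB-to-GB conversion for the per-node MEMORY values above (1 GB = 1024 MB).
    for node, mb in [("smpt06", 765000), ("smpt07", 765000),
                     ("smpt08", 1159000), ("smpt09", 1159000)]:
        print(f"{node}: {mb} MB = {mb / 1024:.1f} GB")   # e.g. 765000 MB = 747.1 GB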


AISC Resource (limited access)

PARTITION  FEATURES            NODELIST             CPUS/node  GPUs/node  MEMORY/node  Nodes idle/total  CPUs idle/total  GPUs idle/total
aisc       (null)              aisct[01-04]         256        8          1031682      0/4               630/1024         15/32
--Memory is listed in MB -- for conversions, remember 1GB = 1024MB
--The NODELIST column indicates an inclusive range: not all values in the range may be available.
--Abbreviations (apply in all tables): A=allocated, I=idle, O=out, T=total
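
As a worked example of the A/I/O/T notation, smpt09 in the SMP table above reports CPUS(A/I/O/T) of 40/0/0/40, i.e. all 40 CPUs allocated and none idle or out:

    # Unpack one CPUS(A/I/O/T) field (value taken from the smpt09 row above).
    allocated, idle, out, total = map(int, "40/0/0/40".split("/"))
    assert allocated + idle + out == total
    print(f"{allocated}/{total} CPUs allocated, {idle} idle, {out} out")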