I would like to create a list that compares the free HPC resources (CPUs, storage, GPUs, etc.) available to researchers at different sites.
Please respond with the resources available at your location. Alternatively, if you know of a compiled list that already exists, please share it with me.
At Penn State, faculty, staff, and students have access to our Roar Collab system. This comes with the following for free:
Storage
Home 16 GB (VAST)
Work 131 GB (VAST)
Scratch 1 million inodes, 30 days before deletion (GPFS)
That’s a wonderful idea! If you can share the list when you get one, that would benefit a lot of sites!
At Tufts, we have no limit on computing resources.
Storage:
This is a great idea, and I second Delilah’s request to share a list with the community.
At Georgia Tech, we have a free tier of HPC open to every faculty member.
Storage
Home 10 GB (per user)
Scratch 15 TB (per user), 60 days before deletion (Lustre)
Project 1 TB (per PI, shared by group), with backup (Lustre)
Compute
Monthly allocation of credits (per PI, shared by group) equivalent to 10,000 CPU hours on standard hardware (8 GB/CPU); the same credits buy fewer hours on GPUs and other more expensive architectures, all of which are available to free-tier accounts (see the sketch after this list)
Unlimited use of a free backfill queue, offering low-priority access to compute (8-hour walltime, with preemption possible after 1 hour)
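To make that concrete, here is a rough back-of-the-envelope sketch in Python of how a group might track usage against an allocation like that. The GPU multiplier below is just a placeholder for illustration, not our actual charge rate, which you would need to look up in the documentation:

```python
# Rough credit-consumption estimator for a monthly allocation equivalent to
# 10,000 CPU hours on standard hardware. The GPU multiplier is a placeholder
# assumption for illustration only, NOT the actual charge rate.

MONTHLY_CPU_HOUR_EQUIVALENT = 10_000  # from the free-tier description above
ASSUMED_GPU_MULTIPLIER = 20           # hypothetical: 1 GPU-hour "costs" 20 CPU-hour credits

def cpu_hour_equivalents(wall_hours: float, cpus: int = 0, gpus: int = 0) -> float:
    """Estimate how many CPU-hour-equivalent credits a job consumes."""
    return wall_hours * (cpus + gpus * ASSUMED_GPU_MULTIPLIER)

if __name__ == "__main__":
    # Example: a 12-hour job on 64 CPU cores
    cpu_job = cpu_hour_equivalents(wall_hours=12, cpus=64)
    # Example: a 12-hour job on 4 GPUs plus 16 supporting CPU cores
    gpu_job = cpu_hour_equivalents(wall_hours=12, cpus=16, gpus=4)
    for label, cost in [("CPU job", cpu_job), ("GPU job", gpu_job)]:
        pct = 100 * cost / MONTHLY_CPU_HOUR_EQUIVALENT
        print(f"{label}: ~{cost:,.0f} credits ({pct:.0f}% of the monthly allocation)")
```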
Also, specifically on the storage front, there was a discussion on the CaRCC email list last year that resulted in this very helpful spreadsheet: Quota-Info - Google Sheets
Any faculty member can request a research HPC project free of charge.
Graduate students, postdocs, and other collaborators are added to a project in a self-service portal by the faculty member.
Any faculty member can request a temporary HPC project for a specific course and section.
Undergraduates and non-thesis master’s students who want access and are not part of a research group can request access through the NC State University Libraries.
Directors of Core Facilities (such as NC State’s Genomics Sequencing Lab) or other University Group Leaders can also request HPC projects.
Computing resources available to all accounts
Because of the dynamic nature of the cluster, the types and amounts of computing resources available are always in flux. There are currently:
On the order of 500 compute nodes with well over 10,000 cores.
The majority of the nodes are connected with InfiniBand.
Several nodes have one or more attached GPUs of various models.
Over 300 nodes have more than 128 GB of memory, and there are a few 512 and 1024 GB nodes.
Various queues are available with varying priority, time limit, and core limit.
Higher priority is given to jobs with greater parallelism, i.e., MPI jobs.
Higher priority is given to shorter jobs. Job limits for various queues range from 10 minutes to 2 weeks.
The number of simultaneous cores or nodes available for a single job is always changing. In general, the largest MPI jobs currently running on the cluster range between 128 and 256 cores.
By default, all jobs are scheduled on nodes of the same type (homogeneous). For users whose applications have minimal communication or dependence on a particular architecture, we have a heterogeneous specialty queue that allows jobs to use over 1,000 cores.
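If it helps to picture the policy, here is a toy Python scoring function illustrating the idea that shorter and more parallel jobs rise in priority. The weights are invented for illustration and are not our scheduler's actual configuration:

```python
# Purely illustrative: a toy priority score reflecting the stated policy that
# more-parallel and shorter jobs get higher priority. The weights and formula
# here are invented; they are not the real scheduler configuration.

def toy_priority(cores_requested: int, walltime_hours: float) -> float:
    """Higher score = scheduled sooner (hypothetical weighting)."""
    parallelism_bonus = cores_requested                 # reward MPI-scale jobs
    brevity_bonus = 100.0 / max(walltime_hours, 0.1)    # reward short jobs
    return parallelism_bonus + brevity_bonus

# A 256-core, 4-hour MPI job vs. a 1-core, 2-week (336 h) serial job:
print(toy_priority(256, 4))   # ~281
print(toy_priority(1, 336))   # ~1.3
```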
UCF ARCC houses high-performance computing (HPC) resources that are subsidized by the Office of Research for use in research by faculty (and their students) across campus. It has two main advanced computing clusters, namely Stokes and Newton.
ARCC resources:
– Stokes Cluster: About 5000 compute cores (Intel Xeon 64-bit processors) with 100Gbit InfiniBand interconnect
– Newton GPU Cluster: Around 40 GPUs, a combination of V100 and H100 nodes
Allocations: Each faculty group has a specific number of dedicated processor hours (DPH) allocated to it per month. A DPH is an hour of computation on any single core of the system, so a 10-hour job fully occupying 3 16-core nodes consumes 10 × 3 × 16 = 480 DPH. By default, users receive 80,000 DPH per month, but faculty can contribute to a buy-in program to have their resources increased substantially.
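For anyone who wants to script the accounting, here is a minimal Python sketch of that DPH arithmetic (the function name is made up; only the formula, the worked example, and the 80,000 DPH default come from the description above):

```python
# DPH (dedicated processor hours) consumed = wall-clock hours x nodes x cores per node,
# following the definition above (one DPH = one hour on one core).

def dph(wall_hours: float, nodes: int, cores_per_node: int) -> float:
    return wall_hours * nodes * cores_per_node

# The example from the post: a 10-hour job on 3 nodes with 16 cores each.
used = dph(wall_hours=10, nodes=3, cores_per_node=16)
print(used)               # 480.0 DPH
print(80_000 - used)      # DPH remaining out of the default monthly 80,000
```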