High performance computing
High performance computing (HPC) is the use of computer clusters with especially powerful processors and large memory and storage space to tackle problems that would be difficult or impossible on a single PC.
Exploit parallel computing - HPC
Parallel computing - breaking problems into smaller parts and processing them simultaneously to tackle otherwise infeasible work - is now essential in many research areas, from climate modelling and structural mechanics to materials science and drug discovery. Research that could take years on a single PC may be completed in just hours.
High performance computing (HPC) is a parallel computing approach that uses clusters (networks) of specialist computers with powerful processors, large memory and storage space. Research IT's Platforms team provides our HPC solution, called Barkla, for appropriate research at Liverpool. Where a problem requires efficient communication between its parts, programs must be written specifically to exploit the cluster (as established HPC applications are); otherwise little or no speed increase may be seen. Alternatively, substantial speed-ups are possible by running batches (job arrays) of independent tasks across the cluster, perhaps to explore a problem space.
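As a concrete (if simplified) picture of the first route, the Python sketch below uses mpi4py - an assumption for illustration; the MPI stack, Python environment, and launch command available on Barkla are described in our technical documents - to split a calculation across cooperating processes and combine the partial results by explicit communication:

    from mpi4py import MPI  # MPI bindings for Python (assumed to be available on the cluster)

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID within the parallel job
    size = comm.Get_size()   # total number of cooperating processes

    # Each rank computes its own slice of the work...
    local_sum = sum(x * x for x in range(rank, 1_000_000, size))

    # ...and the partial results are combined by explicit message passing.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"Combined result from {size} ranks: {total}")

Run under an MPI launcher (for example mpirun, or the scheduler's equivalent), the ranks can be spread across many nodes and cores; a program without such coordination would run no faster on the cluster than on a single machine. The second route, batches of independent tasks, is sketched further down the page.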
Large batches of independent jobs (~1000 or more), each of which runs acceptably on an ordinary PC, are more appropriate for our high-throughput computing (HTC) platform, HTCondor. For further details on Barkla and other platforms at Liverpool, including user guides, working procedures, and substantial technical information, please see our technical documents site (intranet or VPN).
Our team
Research IT's Platforms team provides and develops Liverpool's research computing platforms, including the Barkla cluster and our cloud platforms.
The team studies emerging HPC technologies, consults with other HPC facilities, and works closely with the research community to ensure that the platforms meet researchers' needs, providing training and workshops to help users get the best from them. The team also advises research groups on funding applications and on specialist platforms for their work; indeed, many of Barkla's nodes have been specified, and added over time, for specific groups here. Furthermore, the team manages and approves researchers' applications for access to national and world-leading HPC supercomputers.
Have questions for our team? Considering working with us? Contact us at hpc-support@liverpool.ac.uk or join our mailing list for the latest news and events.
The Barkla HPC cluster
Liverpool's Barkla platform is a Linux HPC cluster consisting of 67 specialist computers (called nodes). Recent significant upgrades have boosted the cluster's performance and capabilities several-fold, opening the door to new research. The cluster comprises:
- 58 compute nodes,* each with 168 cores (two AMD EPYC 9634 CPUs), 1.5 TB RAM (9 GB/core), and 3.84 TB local NVMe storage.
- 2 visualisation nodes - as compute nodes, but each with two NVIDIA Ada Lovelace L4 GPUs, for remote-desktop GUI work on pre- or post-processing of data, or for debugging lightweight GPU apps.
- 4 general purpose GPU nodes - as compute nodes but each with two NVIDIA L40S GPUs.
- 3 deep-learning-focused GPU nodes, each with 96 cores (two Intel Xeon Platinum 8468 CPUs), 2048 GB RAM (21 GB/core), four NVIDIA H100 SXM GPUs, and 7.68 TB local NVMe storage.
- network storage includes 2 PB for short- and medium-term work (NFS with backup), and 2 PB of Lustre parallel storage for all tasks, including I/O-intensive work. Nodes connect via a fast 200 Gb/s dual Intel Omni-Path interconnect.
(*) While available to all, a growing number of the compute and GPU nodes are funded by particular research groups, whose jobs have priority on them.
Quick considerations
- Large memory nodes support jobs with memory requirements beyond those of the standard compute nodes. Certain problems, e.g. deep learning, often work best on a massively parallel GPU node.
- While Barkla has petabytes of shared network storage (see above), disk usage still needs consideration when running hundreds of jobs. To ensure availability for everyone, job runtime is usually limited to three days.
- Job executables must be built for a Linux host (not Windows), but Barkla's range of pre-installed research apps and tools allows your research to start immediately.
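To give a concrete (hypothetical) picture of the independent batch jobs mentioned above, one task in a job array might look like the Python sketch below. It assumes a SLURM-style scheduler that passes each task its index in the SLURM_ARRAY_TASK_ID environment variable, plus hypothetical inputs/ and results/ directories; check our technical documents for the scheduler and conventions actually used on Barkla.

    import os
    import sys
    from pathlib import Path

    # Index supplied by the scheduler to each task in the array (assumed variable name).
    task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", "0"))

    # Hypothetical layout: one input file per task and one output file per task,
    # so hundreds of copies can run side by side without clashing.
    inputs = sorted(Path("inputs").glob("*.dat"))
    if task_id >= len(inputs):
        sys.exit(f"No input file for task {task_id}")

    infile = inputs[task_id]
    result = sum(float(line) for line in infile.read_text().splitlines() if line.strip())

    Path("results").mkdir(exist_ok=True)
    (Path("results") / f"{infile.stem}.out").write_text(f"{result}\n")

Because each task is independent, no special parallel programming is needed: the speed-up comes simply from running many such tasks at once.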
Get started
Barkla is freely available to researchers registered for MWS. After considering the above advice on suitable HPC jobs, please first register via our self-service portal: Select Request > Accounts > Application to access high performance/throughput computing facilities.
In your application please briefly describe your project and detail:
- What software is needed?
- If using source code, what languages are used (e.g. C/C++, Python, Fortran, MATLAB, R)?
- Are additional libraries, packages, or other apps needed?
- Are these free of charge or commercial packages, and are there any licensing restrictions?
- How long will jobs run for, and how much memory and disk space are needed? (Estimates are fine if unknown today.)
- Is special hardware needed, e.g. GPUs?
Beyond Liverpool, researchers may benefit from opportunities at regional/specialist (Tier-2, e.g. Bede) or national (Tier-1, e.g. ARCHER2) HPC facilities. For truly exceptional research, please consider a peer-reviewed application for free access to a world-leading Tier-0 facility, e.g. through PRACE (~100,000 cores; typically 1,000 cores per job). This competitive route gives access to some of the largest supercomputers in the world. In all cases, please speak to our team for further advice and support.
Share your experience
If your research has benefitted from our services, please email us at hpc-support@liverpool.ac.uk to let us know. We'd love to see anything from articles and presentations to theses and conference proceedings. This helps us tailor our services and secure funding for future facilities and projects.