Access

Who can apply?

Staff and postgraduate research students who are already registered for the MWS service may apply to use the University's advanced research computing services.

How do I apply?

If you require access to the University's Advanced Research Computing facilities, you will need to make a request via our CSD Service Desk: select Make a request > Accounts > Application to access high performance/throughput computing facilities.

Applying to use High Throughput Computing (Condor) facilities

Provide a brief description of your project in the online application, and keep Computing Services informed of any published research which has benefited from the use of the Condor service (including conference proceedings and dissertations) as this will help to secure future support for Condor at the University.

It will also be of assistance if you can briefly answer at least some of the questions below in your application (even if you do not have definite responses, it is worth thinking about these points as they will influence how well particular problems are suited to using Condor):

  • What application software is needed? Are you planning to use software that you (or a colleague) have written? If so, which language is it written in (e.g. C/C++, FORTRAN, MATLAB, R)? If third-party software is needed, can it be obtained free of charge or is it a commercial package? Are there any licensing restrictions associated with it?
  • How long will the individual jobs run for? Condor works extremely well with jobs running for around 15-30 minutes, but longer jobs are less efficient. It is generally not possible to run jobs for longer than about 10-12 hours unless the job can restart from where it left off (see the section on checkpointing).
  • Will the jobs require a lot of memory? Since Condor jobs run on commodity PCs, memory is quite restricted. Around 1 GB is the absolute maximum that any job can use, although in practice this may drop to around 500 MB. If more memory is needed, the application may be better suited to the High Performance Computing Service.
  • How much disk space will be needed by individual jobs? Condor jobs should aim not to use more than a few GB of disk space on the Condor pool PCs as the availability of large amounts of storage cannot be guaranteed.
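The points above (runtime, memory and disk requirements) map directly onto the resource requests in a job description. As a rough illustration only — the service may provide its own templates, and all names and paths here are placeholders — a minimal HTCondor submit file might look like this:

```
# Hypothetical submit file (job.sub) - executable and file names are placeholders
universe       = vanilla
executable     = my_analysis              # program you or a colleague have written
arguments      = input_$(Process).dat
request_memory = 500M                     # stay within the ~500 MB practical limit
request_disk   = 2G                       # keep within a few GB on pool PCs
output         = run_$(Process).out
error          = run_$(Process).err
log            = run.log
queue 100                                 # 100 independent jobs, each 15-30 minutes
```

Such a file would typically be submitted with `condor_submit job.sub`; splitting work into many short, independent jobs like this is what makes a problem well suited to Condor.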

Applying to use High Performance Computing (HPC) facilities

Provide a brief description of your project in the online application, and keep Computing Services informed of any published research which has benefited from the use of the HPC service (including conference proceedings and dissertations) as this will help to secure future support for HPC at the University.

It will also be of assistance if you can briefly answer at least some of the questions below in your application (even if you do not have definite responses, it is worth thinking about these points as they will influence how well particular problems are suited to using our HPC system):

  • What application software is needed? Are you planning to use software that you (or a colleague) have written? If so, which language is it written in (e.g. C/C++, FORTRAN, MATLAB, R)? If third-party software is needed, can it be obtained free of charge or is it a commercial package? Are there any licensing restrictions associated with it?
  • How long will the individual jobs run for? The default maximum time limit on jobs is three days. This limit is a compromise: it allows a reasonable number of jobs to run through this shared facility, reduces the chance of jobs finishing prematurely because of a hardware fault, and leaves a reasonable window in which system maintenance can be scheduled.
  • Will the jobs require a lot of memory? The new Barkla nodes provide 9500 MB per core (about 380 GB per node), and two of the nodes have 1.1 TB each, equivalent to over 27 GB per core.
  • How much disk space will be needed by individual jobs? The supporting file systems have terabytes of storage, but if you need to run several hundred jobs and each of these generates around a terabyte of data, then storage space will need to be managed carefully. Also, moving terabytes of data from the cluster to more secure storage can take a long time. 
  • Is special hardware (like a GPU) needed? If you are doing Deep Learning, your application may work best on a node with a GPU. Similarly, we have remote visualisation nodes with GPUs for tasks such as pre- and post-processing. 
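To show how the answers above translate into a job request, here is a sketch of a batch script, assuming the cluster uses a SLURM-style scheduler (the source does not name the scheduler, and the job name, partition, module and program names are all placeholders):

```
#!/bin/bash
# Hypothetical SLURM batch script - all names below are placeholders
#SBATCH --job-name=my_sim
#SBATCH --ntasks=40               # one full node (9500 MB per core, ~380 GB per node)
#SBATCH --mem-per-cpu=9500M
#SBATCH --time=3-00:00:00         # default maximum: three days
# For GPU work (e.g. deep learning), a GPU could be requested instead:
##SBATCH --gres=gpu:1

module load my_application        # placeholder module name
srun ./my_simulation input.dat
```

A script like this would typically be submitted with `sbatch`; thinking through runtime, memory, disk and hardware needs before applying makes it much easier to fill in these values sensibly.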

Applying to use other national or international facilities

University of Liverpool researchers may be able to apply for time on the national ARCHER service as well as on one of the EPSRC Tier-2 systems.

EPSRC Tier-2 system information

EPSRC Tier-2 access information

Finally, if you believe you are conducting world-class research that needs top-end HPC access (typically thousands of cores per job), then please consider applying for free access to one of the PRACE systems. This is a competitive, peer-reviewed process for access to some of the largest supercomputers in the world.