Computing guide

Introduction

EPP computing resources include the following:

  • Linux desktop machines
  • Linux user interface machines
  • Network file stores
  • Batch job service
  • Grid computing service
  • Backup service
  • Printing service
  • and of course friendly guidance from colleagues

Support

EPP computing support is provided by Peter, Rob, and Alex. Please e-mail Peter in the first instance or drop by our offices on B-floor.

Accounts

User accounts may be requested by filling out this form. This account is valid on any machine in the cluster. You will be assigned to a group corresponding to one of our recognised experiments; currently these are ATLAS, HyperK, LAr, SNO+, and T2K. You will have a home area on the corresponding disk and access to that experiment's software.

Changing passwords

If you wish to change your password, type the following command on any cluster machine. Follow the prompts: you will be asked for your current password once and your new password twice.

$ passwd

Desktop machines

HEP desktop machines run CentOS7 (with some CentOS8 Stream installations) and provide a general-purpose Linux desktop for both physics analysis and administrative work.

Interactive machines

There are currently three interactive machines, which may be accessed from anywhere on the internet. They run CentOS7 and are well specified, with many cores and large memory. Grid User Interface (UI) software is available on these machines and they are mainly used for code development, analysis, and submitting jobs to the batch system and the Grid. The machines are:

  • lapw.lancs.ac.uk   (CentOS7)
  • lapx.lancs.ac.uk   (CentOS7)
  • lapy.lancs.ac.uk   (CentOS7)

Personal laptops

Personal laptops may be connected to the network but must be registered with the ISS PASS system. Access to HEP resources will then be via ssh to any Desktop or Interactive machine.
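
For example, once your laptop is registered, you can log in to one of the interactive machines (replace username with your cluster account):

# Log in to an interactive machine from a registered laptop
$ ssh username@lapx.lancs.ac.uk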

Learning

There are copious Linux command-line tutorials on the web; this one is better than most, so YouTube is a good place to start.

Speaking of which, here is a good tutorial on the Vim editor, a sound investment for a life in computing. And here is the wider playlist.

Batch system

Batch jobs are managed by the HTCondor system. The command to submit a job is condor_submit, and you can check the status of your jobs with condor_q.

Jobs may be submitted from any cluster machine:

$ condor_submit foobar.jdl

Here are some useful commands:

# This is a simple JDL script. Note the executable must have the executable bit set (chmod +x hw.sh)
# request_memory is in MB, so this requests 1 GB; please keep it as low as possible to allow more jobs to run simultaneously

$ cat foobar.jdl
executable     = hw.sh
universe       = vanilla
arguments      = foobar
output         = std-$(CLUSTER).$(Process).out
error          = std-$(CLUSTER).$(Process).err
log            = std-$(CLUSTER).$(Process).log
should_transfer_files = YES
transfer_input_files = file1,file2
transfer_output_files = file3,file4
request_memory = 1000
concurrency_limits = myusername:200
queue 3
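
# The JDL above refers to an executable hw.sh. As an illustration only, a minimal
# script of that name might look like this (the real contents are whatever your job needs to run):
$ cat hw.sh
#!/bin/bash
# Print the argument passed in from the JDL and the name of the batch node the job ran on
echo "Hello world: argument was $1"
hostname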

# 3 jobs will be submitted
$ condor_submit foobar.jdl
Submitting job(s)...
3 job(s) submitted to cluster 2644700.

# View your submitted jobs with:
$ condor_q
-- Submitter: lapx.lancs.ac.uk : <148.88.40.18:9618?sock=23906_a336_3> : lapx.lancs.ac.uk
 ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
2644700.0   love            5/14 13:43   0+00:00:57 R  0   0.0  hw.sh foobar      
2644700.1   love            5/14 13:43   0+00:00:19 R  0   0.0  hw.sh foobar      
2644700.2   love            5/14 13:43   0+00:00:19 R  0   0.0  hw.sh foobar      

# A word on 'concurrency_limits'. This restricts the number of your jobs that run
# concurrently and is useful to avoid overloading the fileservers.
# Each 'myusername' resource has 1000 arbitrary units and
# each job claims X of those units,
# resulting in 1000/X = N concurrent jobs.
# A good description here:
# http://www.iac.es/sieinvens/siepedia/pmwiki.php?n=HOWTOs.CondorSubmitFile#howto_limit
# e.g.
# 'myusername' is your own username; 200 units per job restricts you to 1000/200 = 5 concurrent jobs
concurrency_limits = myusername:200

# Other commands:
$ condor_status -avail           # list machines available to run jobs
$ condor_status -submitters      # summarise jobs by submitter
$ condor_status -run             # show machines currently running jobs
$ condor_status -state -total    # totals of machines by state

$ condor_q -analyze              # explain why jobs are not running
$ condor_q -better-analyze       # more detailed analysis
$ condor_q -global               # show the queues of all schedulers

$ condor_rm -constraint <expr>   # remove jobs matching a ClassAd expression
$ condor_rm -all                 # remove all of your jobs

# Good documentation on these commands is available:
$ man condor_q
$ man condor_submit
$ man condor_rm
$ man condor_history
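
# condor_history is useful for inspecting jobs that have already left the queue,
# e.g. (using the same placeholder username as above):
$ condor_history myusername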

Also note that we use the vanilla universe; see the HTCondor quick-start guide: http://research.cs.wisc.edu/htcondor/quick-start.html

NEW: from 2023 we will no longer mount /home directories on the batch nodes. Please see this page for details on how to transfer files with your batch job. The batch machines are all CentOS7 machines, but you may set the OS version in your JDL if different batch nodes are available:

requirements = OpSysMajorVer == 7

Grid computing

Grid users should apply for an X.509 certificate here:

ATLAS users can request membership of the ATLAS Virtual Organisation here:

Remember to import the root certificates of the various Certification Authorities into your browser.

Grid commandline tools

On the CentOS7 machines (lapw, lapx, lapy) you can access the grid command-line tools by sourcing the setup script from /cvmfs as follows.

$ . /cvmfs/grid.cern.ch/umd-c7ui-latest/etc/profile.d/setup-c7-ui-example.sh
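
Once the UI environment is set up, most grid commands require a valid VOMS proxy. As an illustration (substitute your own experiment's Virtual Organisation for atlas):

# Create a VOMS proxy for your Virtual Organisation
$ voms-proxy-init -voms atlas
# Inspect the proxy, including how long it has left
$ voms-proxy-info -all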

Filestore

The disks relevant to the normal user are divided into three sets, namely:

  • Home areas Your account will be on one of the home areas with a path of the form /home/<experiment>/<account>, for example /home/aleph/ajf.
  • Experiment areas Each experiment has a disk which is mainly used for experiment-related software, for example /aleph. Also see the CVMFS section below.
  • Data disk(s) The following disk areas are designed for storing large datasets and are mounted on all cluster machines. Please note these have no backup. The /luna area is provided by ISS as ‘tiered storage’, which has no capacity limit (to first order!)
/data           # this has replaced previous mounts
/luna           # only use for archive, not a posix filesystem
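
To see how full these areas are, the standard tools work as usual, for example:

# Check free space on the shared data area
$ df -h /data
# Check how much space your own files occupy in your home area
$ du -sh ~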

CVMFS

CVMFS is a read-only filesystem managed at CERN. The filesystem is mounted on the interactive and batch nodes, providing access to several repositories:

$ cvmfs_config probe
Probing /cvmfs/atlas.cern.ch... OK
Probing /cvmfs/atlas-condb.cern.ch... OK
Probing /cvmfs/fermilab.opensciencegrid.org... OK
Probing /cvmfs/uboone.opensciencegrid.org... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/atlas-nightlies.cern.ch... OK

The main purpose of /cvmfs is to provide a full installation of all experiment software. For example, in ATLAS you can check which release versions are available:

[love@lapx ~]$ ls /cvmfs/atlas.cern.ch/repo/sw/software/

ATLAS client tools

ATLAS client tools can be set up from cvmfs. It is recommended to add the following lines to your ~/.bash_profile:

export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'

Then you can run the setup command:

$ setupATLAS

You may then use the rucio data management commands:

$ lsetup rucio --quiet
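
After that the usual rucio commands are available, for example (you will also need a valid grid proxy; the dataset name below is purely illustrative):

# Check which rucio account you are mapped to
$ rucio whoami
# List the files in a dataset (replace with a real dataset name)
$ rucio list-files mc16_13TeV:mc16_13TeV.SOME.EXAMPLE.DATASET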

Backup

It is important to realise that only the /home areas are backed up, so if you put critical, irreplaceable files on the data disks or anywhere else, you do so at your own risk. The backup runs every night, so we can generally help you recover accidentally deleted files, or old versions of files, up to 3 months in the past. After this period backups are rotated and we will be unable to help.

Email

The university provides webmail and MS-Exchange email through ISS. Details for Thunderbird are here.

  • The Exchange email server name: exchange.lancs.ac.uk
  • The SMTP server address – on campus this is: smtp.lancs.ac.uk

Maillists

To view maillists and membership details, and to subscribe, go to the maillist main page: https://lists.lancaster.ac.uk

Relevant lists are:

  • physics-epp-staff
  • physics-epp-mphys
  • physics-epp-phd
  • physics-epp-atlas
  • physics-epp-neutrinos
  • physics-epp-t2k

Printing

ISS-managed printers are available on B-floor in B22 and also at the east end of the corridor. Instructions for setting up these printers on Windows, MacOS, and Linux are available on the ISS web pages.

Video and phone conferencing

We have a dedicated video/phone conference room in B07; however, most experiments now use their own systems. Please refer to your experiment’s documentation.

Web Browsing

Firefox is available on the Desktops.

Email Clients

Please see the ISS email page for further information.

Editing

A variety of text editors are available on the Linux systems: for example vim, nedit, gedit, pico, and even emacs for the brave.

Compiling

Fortran, C, and C++ programs are compiled and linked using the GCC toolchain (gfortran, gcc, and g++ respectively).
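
For example, to build a simple C++ program (the file and program names are illustrative):

# Compile and link a C++ source file with warnings and optimisation enabled
$ g++ -Wall -O2 -o myanalysis myanalysis.cxx
# The Fortran and C equivalents
$ gfortran -O2 -o mysim mysim.f90
$ gcc -Wall -O2 -o mytool mytool.c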

Text processing

Use pdflatex.
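
A typical invocation, assuming a document called thesis.tex:

# Run pdflatex, then bibtex if you use it, then pdflatex twice more so references resolve
$ pdflatex thesis.tex
$ bibtex thesis
$ pdflatex thesis.tex
$ pdflatex thesis.tex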

External access to the cluster

Secure shell terminal access (ssh)

To log into the cluster from outside you will need to use secure shell: either the ssh command on a Unix machine or the PuTTY program on a Windows machine.
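
For example, from a Unix machine (the username and file names are illustrative):

# Log in with X11 forwarding so graphical programs display on your local screen
$ ssh -X username@lapw.lancs.ac.uk
# Copy a file from the cluster back to your local machine
$ scp username@lapw.lancs.ac.uk:results.root .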

AFS

AFS has now been deprecated.

Web pages

Each user is free to create a directory ~/public_html/ in which to place web pages. These will appear to the outside world as https://hep.lancs.ac.uk/~ajf/ (where ajf is your username, for example).
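
A minimal sketch of setting this up, assuming the web server follows the usual convention of requiring world-readable files (check with the admins if in doubt):

# Create the web directory and a test page
$ mkdir -p ~/public_html
$ chmod 755 ~/public_html
$ echo '<html><body>Hello from my cluster account</body></html>' > ~/public_html/index.html
$ chmod 644 ~/public_html/index.html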