Official Grid5000 documentation: https://www.grid5000.fr/mediawiki/index.php/Grid5000:Home
== Login, job submission, deployment of image ==
- Select sites and clusters for experiments, using information on the Grid5000 network and the Status page
- Access is provided via the access nodes access.SITE.grid5000.fr, marked here as accessible from everywhere via ssh with the keyboard-interactive authentication method. Once you are on one of the sites, you can ssh directly to the frontend node of any other site:
access_$ ssh frontend.SITE2
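For example, a complete path from an outside workstation (USER is a placeholder; rennes and nancy are two of the Grid5000 sites):
workstation$ ssh USER@access.rennes.grid5000.fr
access_$ ssh frontend.nancy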
- There is no Internet access from the computing nodes (external IPs must be registered on the proxy); therefore, download/update your code on the access nodes. Several revision control clients are available.
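For example, to fetch your sources on the access node (the repository URL is hypothetical):
access_$ git clone https://scm.example.org/myapp.git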
- Each site has a separate NFS; therefore, to run an application on several sites at once, you need to copy it (with scp, sftp, or rsync) between the access or frontend nodes.
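For example, to push a source tree from the current frontend to another site (the paths are illustrative):
frontend_$ rsync -av ~/myapp/ frontend.SITE2:myapp/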
- Jobs are run from the frontend nodes, using OAR, a PBS-like batch system. Basic commands:
- oarstat - queue status
- oarsub - job submission
- oardel - job removal
frontend_$ oarsub -I -t deploy -l [/cluster=N/]nodes=N,walltime=HH[:MM[:SS]] [-p 'PROPERTY="VALUE"']
frontend_$ oarsub BATCH_FILE -t allow_classic_ssh -l [/cluster=N/]nodes=N,walltime=HH[:MM[:SS]] [-p 'PROPERTY="VALUE"']
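For example, to reserve 4 deployable nodes for two hours interactively and inspect the reservation (for a deploy job the interactive shell opens on the frontend; cluster properties are optional and site-specific):
frontend_$ oarsub -I -t deploy -l nodes=4,walltime=02:00:00
frontend_$ cat $OAR_NODEFILE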
- The image to deploy can be created and loaded with the help of Kadeploy, a SystemImager-like deployment system. Creating an image is described here
frontend_$ kadeploy3 -a PATH_TO_PRIVATE_IMAGE_DESC -f $OAR_FILE_NODES
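A registered reference environment can also be deployed by name; a sketch, assuming the lenny-x64-base environment is registered on the site (kaenv3 -l should list the available environments) and using -k to copy your ssh key to the deployed system:
frontend_$ kadeploy3 -e lenny-x64-base -f $OAR_FILE_NODES -k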
== Compiling and running MPI applications ==
- Compilation should be done on one of the reserved nodes (e.g. ssh `head -n 1 $OAR_NODEFILE`)
- Running MPI applications is described here
- mpirun/mpiexec should be run from one of the reserved nodes (e.g. ssh `head -n 1 $OAR_NODEFILE`)
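A minimal sketch of the full cycle with a hypothetical hello.c in the NFS home directory (the nodefile is copied home first, since $OAR_NODEFILE may not be set in a plain ssh session on the node):
frontend_$ cp $OAR_NODEFILE ~/nodes
frontend_$ ssh `head -n 1 ~/nodes`
node_$ mpicc -o hello hello.c
node_$ mpirun -machinefile ~/nodes ./hello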
== Setting up new deploy image ==
... launch image ...
Edit /etc/apt/sources.list to point at Debian sid:
deb http://ftp.fr.debian.org/debian/ sid main contrib non-free
apt-get update
apt-get dist-upgrade
apt-get install autoconf automake ctags