<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://hcl.ucd.ie/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Xalid</id>
		<title>HCL - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://hcl.ucd.ie/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Xalid"/>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php/Special:Contributions/Xalid"/>
		<updated>2026-04-08T14:30:23Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.1</generator>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=801</id>
		<title>BlueGene/P</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=801"/>
				<updated>2013-03-22T18:01:18Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Some members of HCL have access to the Shaheen BlueGene/P at King Abdullah University of Science and Technology ([http://www2.hpc.kaust.edu.sa/documentation/shaheen/]). In addition, some members have access to the BlueGene/P at West University of Timisoara, Romania ([http://hpc.uvt.ro/infrastructure/bluegenep/]). &lt;br /&gt;
&lt;br /&gt;
===== Fupermod on Shaheen BlueGene/P  =====&lt;br /&gt;
&lt;br /&gt;
To compile fupermod on the BG/P, run the following commands to load the required libraries: &lt;br /&gt;
&lt;br /&gt;
#module load bluegene &lt;br /&gt;
#module load essl &lt;br /&gt;
#module load gsl&lt;br /&gt;
&lt;br /&gt;
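The three module loads above can also be collected into a tiny helper script; a minimal sketch (the module names are taken from the text, and load_libs.sh is a hypothetical file name):

```shell
# Write the three module-load commands from the text into a helper
# script (load_libs.sh is a hypothetical name for illustration).
printf 'module load %s\n' bluegene essl gsl > load_libs.sh
cat load_libs.sh
```

Sourcing such a script at the start of a job keeps the library setup in one place.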
Then the configure command can be executed as follows: &lt;br /&gt;
&lt;br /&gt;
/fupermod/configure --with-gsl-dir=/opt/share/math_libraries/gsl/ppc64/IBM --with-blas=essl CFLAGS=&amp;quot;-O3 -qarch=450 -qtune=450&amp;quot; --with-essl-dir=/opt/share/ibmmath/essl/4.4/&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=800</id>
		<title>BlueGene/P</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=800"/>
				<updated>2013-03-22T18:00:16Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Some members of HCL have access to the Shaheen BlueGene/P at King Abdullah University of Science and Technology ([http://www2.hpc.kaust.edu.sa/documentation/shaheen/]). In addition, some members have access to the BlueGene/P at West University of Timisoara, Romania ([http://hpc.uvt.ro/infrastructure/bluegenep/]). &lt;br /&gt;
&lt;br /&gt;
===== Fupermod on Shaheen BlueGene/P  =====&lt;br /&gt;
&lt;br /&gt;
To compile fupermod on the BG/P, run the following commands to load the required libraries: &lt;br /&gt;
&lt;br /&gt;
#module load bluegene &lt;br /&gt;
#module load essl &lt;br /&gt;
#module load gsl&lt;br /&gt;
&lt;br /&gt;
Then the configure command can be executed as follows: &lt;br /&gt;
&lt;br /&gt;
/fupermod/configure --with-gsl-dir=/opt/share/math_libraries/gsl/ppc64/IBM --with-blas=essl CFLAGS=&amp;quot;-O3 -qarch=450 -qtune=450&amp;quot; --with-essl-dir=/opt/share/ibmmath/essl/4.4/ &lt;br /&gt;
&lt;br /&gt;
On the BG/P, autotools did not pick up LD_LIBRARY_PATH, so the following hard-coded path was added to configure.ac: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;lt;!-- rss --&amp;amp;gt;&lt;br /&gt;
&lt;br /&gt;
 if test &amp;quot;$with_essl_dir&amp;quot; != &amp;quot;&amp;quot;; then&lt;br /&gt;
   CPPFLAGS=&amp;quot;$CPPFLAGS -I$with_essl_dir/include&amp;quot;&lt;br /&gt;
   LDFLAGS=&amp;quot;$LDFLAGS -L$with_essl_dir/lib '''-L/opt/ibmcmp/xlf/bg/11.1/lib'''&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
--&amp;amp;gt;&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=799</id>
		<title>BlueGene/P</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=799"/>
				<updated>2013-03-22T17:59:29Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Some members of HCL have access to the Shaheen BlueGene/P at King Abdullah University of Science and Technology ([http://www2.hpc.kaust.edu.sa/documentation/shaheen/]). In addition, some members have access to the BlueGene/P at West University of Timisoara, Romania ([http://hpc.uvt.ro/infrastructure/bluegenep/]). &lt;br /&gt;
&lt;br /&gt;
===== Fupermod on Shaheen BlueGene/P  =====&lt;br /&gt;
&lt;br /&gt;
To compile fupermod on the BG/P, run the following commands to load the required libraries: &lt;br /&gt;
&lt;br /&gt;
#module load bluegene &lt;br /&gt;
#module load essl &lt;br /&gt;
#module load gsl&lt;br /&gt;
&lt;br /&gt;
Then the configure command can be executed as follows: &lt;br /&gt;
&lt;br /&gt;
/fupermod/configure --with-gsl-dir=/opt/share/math_libraries/gsl/ppc64/IBM --with-blas=essl CFLAGS=&amp;quot;-O3 -qarch=450 -qtune=450&amp;quot; --with-essl-dir=/opt/share/ibmmath/essl/4.4/ &lt;br /&gt;
&lt;br /&gt;
On the BG/P, autotools did not pick up LD_LIBRARY_PATH, so the following hard-coded path was added to configure.ac: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
 if test &amp;quot;$with_essl_dir&amp;quot; != &amp;quot;&amp;quot;; then&lt;br /&gt;
   CPPFLAGS=&amp;quot;$CPPFLAGS -I$with_essl_dir/include&amp;quot;&lt;br /&gt;
   LDFLAGS=&amp;quot;$LDFLAGS -L$with_essl_dir/lib '''-L/opt/ibmcmp/xlf/bg/11.1/lib'''&amp;quot;&lt;br /&gt;
 fi&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--&amp;amp;gt;&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=798</id>
		<title>BlueGene/P</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=798"/>
				<updated>2013-03-22T17:57:36Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Some members of HCL have access to the Shaheen BlueGene/P at King Abdullah University of Science and Technology ([http://www2.hpc.kaust.edu.sa/documentation/shaheen/]). In addition, some members have access to the BlueGene/P at West University of Timisoara, Romania ([http://hpc.uvt.ro/infrastructure/bluegenep/]). &lt;br /&gt;
&lt;br /&gt;
===== Fupermod on Shaheen BlueGene/P  =====&lt;br /&gt;
&lt;br /&gt;
To compile fupermod on the BG/P, run the following commands to load the required libraries: &lt;br /&gt;
&lt;br /&gt;
#module load bluegene &lt;br /&gt;
#module load essl &lt;br /&gt;
#module load gsl&lt;br /&gt;
&lt;br /&gt;
Then the configure command can be executed as follows: &lt;br /&gt;
&lt;br /&gt;
/fupermod/configure --with-gsl-dir=/opt/share/math_libraries/gsl/ppc64/IBM --with-blas=essl CFLAGS=&amp;quot;-O3 -qarch=450 -qtune=450&amp;quot; --with-essl-dir=/opt/share/ibmmath/essl/4.4/ &lt;br /&gt;
&lt;br /&gt;
On the BG/P, autotools did not pick up LD_LIBRARY_PATH, so the following hard-coded path was added to configure.ac: &lt;br /&gt;
&lt;br /&gt;
 if test &amp;quot;$with_essl_dir&amp;quot; != &amp;quot;&amp;quot;; then&lt;br /&gt;
   CPPFLAGS=&amp;quot;$CPPFLAGS -I$with_essl_dir/include&amp;quot;&lt;br /&gt;
   LDFLAGS=&amp;quot;$LDFLAGS -L$with_essl_dir/lib '''-L/opt/ibmcmp/xlf/bg/11.1/lib'''&amp;quot;&lt;br /&gt;
 fi&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=797</id>
		<title>BlueGene/P</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=797"/>
				<updated>2013-03-22T17:56:33Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Some members of HCL have access to the Shaheen BlueGene/P at King Abdullah University of Science and Technology ([http://www2.hpc.kaust.edu.sa/documentation/shaheen/]). In addition, some members have access to the BlueGene/P at West University of Timisoara, Romania ([http://hpc.uvt.ro/infrastructure/bluegenep/]). &lt;br /&gt;
&lt;br /&gt;
====== Fupermod on Shaheen BlueGene/P ======&lt;br /&gt;
&lt;br /&gt;
To compile fupermod on the BG/P, run the following commands to load the required libraries:&lt;br /&gt;
&lt;br /&gt;
#module load bluegene&lt;br /&gt;
#module load essl&lt;br /&gt;
#module load gsl&lt;br /&gt;
&lt;br /&gt;
Then the configure command can be executed as follows:&lt;br /&gt;
&lt;br /&gt;
/fupermod/configure --with-gsl-dir=/opt/share/math_libraries/gsl/ppc64/IBM --with-blas=essl CFLAGS=&amp;quot;-O3 -qarch=450 -qtune=450&amp;quot; --with-essl-dir=/opt/share/ibmmath/essl/4.4/&lt;br /&gt;
&lt;br /&gt;
On the BG/P, autotools did not pick up LD_LIBRARY_PATH, so the following hard-coded path was added to configure.ac:&lt;br /&gt;
&lt;br /&gt;
 if test &amp;quot;$with_essl_dir&amp;quot; != &amp;quot;&amp;quot;; then&lt;br /&gt;
   CPPFLAGS=&amp;quot;$CPPFLAGS -I$with_essl_dir/include&amp;quot;&lt;br /&gt;
   LDFLAGS=&amp;quot;$LDFLAGS -L$with_essl_dir/lib '''-L/opt/ibmcmp/xlf/bg/11.1/lib'''&amp;quot;&lt;br /&gt;
 fi&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=796</id>
		<title>BlueGene/P</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=796"/>
				<updated>2013-03-19T15:02:10Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Some members of HCL have access to the Shaheen BlueGene/P at King Abdullah University of Science and Technology ([http://www2.hpc.kaust.edu.sa/documentation/shaheen/]). In addition, some members have access to the BlueGene/P at West University of Timisoara, Romania ([http://hpc.uvt.ro/infrastructure/bluegenep/]). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=795</id>
		<title>BlueGene/P</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=BlueGene/P&amp;diff=795"/>
				<updated>2013-03-19T15:00:15Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: Created page with &amp;quot;Some members of HCL have access to Shaheen BlueGene/P at King Abdullah University of Science and Technology ([http://www2.hpc.kaust.edu.sa/documentation/shaheen/]) . In addition,…&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Some members of HCL have access to the Shaheen BlueGene/P at King Abdullah University of Science and Technology ([http://www2.hpc.kaust.edu.sa/documentation/shaheen/]). In addition, some members have access&lt;br /&gt;
to the BlueGene/P at West University of Timisoara, Romania ([http://hpc.uvt.ro/infrastructure/bluegenep/]).&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=Main_Page&amp;diff=794</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=Main_Page&amp;diff=794"/>
				<updated>2013-03-19T14:48:39Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: /* Hardware */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This site is set up for sharing ideas, findings, and experience in heterogeneous computing. Please log in to create new pages or edit existing ones. Read how to format wiki pages [[Help:Editing|here]].&lt;br /&gt;
&lt;br /&gt;
== HCL software for heterogeneous computing ==&lt;br /&gt;
* Extensions for [[MPI]]: [http://hcl.ucd.ie/project/mpC mpC] [http://hcl.ucd.ie/project/HeteroMPI HeteroMPI] [http://hcl.ucd.ie/project/libELC libELC]&lt;br /&gt;
* Extensions for [http://en.wikipedia.org/wiki/GridRPC GridRPC]: [http://hcl.ucd.ie/project/SmartGridSolve SmartGridSolve] [http://hcl.ucd.ie/project/NI-Connect NI-Connect]&lt;br /&gt;
* Computation benchmarking, modeling, dynamic load balancing: [http://hcl.ucd.ie/project/fupermod FuPerMod] [http://hcl.ucd.ie/project/pmm PMM]&lt;br /&gt;
* Communication benchmarking, modeling, optimization: [http://hcl.ucd.ie/project/cpm CPM] [http://hcl.ucd.ie/project/mpiblib MPIBlib]&lt;br /&gt;
&lt;br /&gt;
== Heterogeneous mathematical software ==&lt;br /&gt;
* [http://hcl.ucd.ie/project/HeteroScaLAPACK HeteroScaLAPACK]&lt;br /&gt;
* [http://hcl.ucd.ie/project/Hydropad Hydropad]&lt;br /&gt;
&lt;br /&gt;
== Operating systems == &lt;br /&gt;
* [[Linux]]&lt;br /&gt;
* [[Windows]]&lt;br /&gt;
&lt;br /&gt;
== Development tools ==&lt;br /&gt;
* [[C/C++]], [[Python]], [[UML]], [[FORTRAN]]&lt;br /&gt;
* [[Autotools]]&lt;br /&gt;
* [[GDB]], [[OProfile]], [[Valgrind]]&lt;br /&gt;
* [[Doxygen]]&lt;br /&gt;
* [[ChangeLog]], [[Subversion]]&lt;br /&gt;
* [[Eclipse]]&lt;br /&gt;
* [[Bash Scripts]]&lt;br /&gt;
&lt;br /&gt;
== [[Libraries]] ==&lt;br /&gt;
* [[GNU C Library]]&lt;br /&gt;
* [[MPI]]&lt;br /&gt;
* [[STL]], [[Boost]]&lt;br /&gt;
* [[GSL]]&lt;br /&gt;
* [[BLAS LAPACK ScaLAPACK]]&lt;br /&gt;
* [[NLOPT]]&lt;br /&gt;
* [[BitTorrent (B. Cohen's version)]]&lt;br /&gt;
* [[CUDA SDK]]&lt;br /&gt;
&lt;br /&gt;
== Data processing ==&lt;br /&gt;
* [[gnuplot]], [[pgfplot]], [[matplotlib]]&lt;br /&gt;
* [[Graphviz]]&lt;br /&gt;
* [[Octave]], [[R]]&lt;br /&gt;
* [[G3DViewer]]&lt;br /&gt;
&lt;br /&gt;
== Paper &amp;amp; Presentation Tools ==&lt;br /&gt;
* [[Dia]], [[PGF/Tikz]], [[pgfplot]]&lt;br /&gt;
* [[LaTeX]], [[Beamer]]&lt;br /&gt;
* [[BibTeX]], [[JabRef]]&lt;br /&gt;
&lt;br /&gt;
== Hardware ==&lt;br /&gt;
* [[HCL cluster]]&lt;br /&gt;
* [[Other UCD Resources]]&lt;br /&gt;
* [[UTK multicores + GPU]]&lt;br /&gt;
* [[Grid5000]]&lt;br /&gt;
* [[BlueGene/P]]&lt;br /&gt;
* [[Desktop Backup]]&lt;br /&gt;
&lt;br /&gt;
[[SSH|How to connect to cluster via SSH]]&lt;br /&gt;
&lt;br /&gt;
[[hwloc|How to find information about the hardware]]&lt;br /&gt;
&lt;br /&gt;
== Mathematics ==&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Confidence_interval Confidence interval (Statistics)], [http://en.wikipedia.org/wiki/Student's_t-distribution Student's t-distribution] (implemented in [[GSL]])&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Linear_regression Linear regression] (implemented in [[GSL]])&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Binomial_tree#Binomial_tree Binomial tree] (use [[Graphviz]] to visualize trees)&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Spline_interpolation Spline interpolation], [http://en.wikipedia.org/wiki/B-spline Spline approximation] (implemented in [[GSL]])&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=Grid5000&amp;diff=775</id>
		<title>Grid5000</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=Grid5000&amp;diff=775"/>
				<updated>2012-08-24T15:41:54Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: /* Login, job submission, deployment of image */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;https://www.grid5000.fr/mediawiki/index.php/Grid5000:Home &lt;br /&gt;
&lt;br /&gt;
[https://www.grid5000.fr/mediawiki/index.php/Grid5000:UserCharter USAGE POLICY] - Very important: after booking nodes (oarsub ...), run the command &amp;lt;source lang=&amp;quot;&amp;quot;&amp;gt;outofchart&amp;lt;/source&amp;gt; This checks that you have not booked too many resources, which would get you in trouble with the Grid5000 admins.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Login, job submission, deployment of image  ==&lt;br /&gt;
&lt;br /&gt;
*Select sites and clusters for experiments, using information on the [https://www.grid5000.fr/mediawiki/index.php/Grid5000:Network#Grid.275000_Sites Grid5000 network] and the [https://www.grid5000.fr/mediawiki/index.php/Status Status page] &lt;br /&gt;
*Access is provided via access nodes '''access.SITE.grid5000.fr''' marked [https://www.grid5000.fr/mediawiki/index.php/External_access here] as ''accessible from '''everywhere''' via ssh with '''keyboard-interactive''' authentication method''. As soon as you are on one of the sites, you can ssh directly to the frontend node of any other site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
access_$ ssh frontend.SITE2&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*There is no Internet access from the computing nodes (external IPs must be registered on the proxy), so download/update your files on the access nodes. Several revision control clients are available. &lt;br /&gt;
*Each site has a separate NFS, so to run an application on several sites at once you need to copy it ('''scp, sftp, rsync''') between access or frontend nodes. &lt;br /&gt;
*Jobs are run from the frontend nodes, using a [http://en.wikipedia.org/wiki/OpenPBS PBS]-like system, [https://www.grid5000.fr/mediawiki/index.php/Cluster_experiment-OAR2 OAR]. Basic commands: &lt;br /&gt;
**'''oarstat''' - queue status &lt;br /&gt;
**'''oarsub''' - job submission &lt;br /&gt;
**'''oardel''' - job removal&lt;br /&gt;
&lt;br /&gt;
Interactive job on deployed images: &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 frontend_$ oarsub -I -t deploy -l [/cluster=N/]nodes=N,walltime=HH[:MM[:SS]] [-p 'PROPERTY=&amp;quot;VALUE&amp;quot;']&lt;br /&gt;
&amp;lt;/source&amp;gt; Batch job on installed images: &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 frontend_$ oarsub BATCH_FILE -t allow_classic_ssh -l [/cluster=N/]nodes=N,walltime=HH[:MM[:SS]] [-p 'PROPERTY=&amp;quot;VALUE&amp;quot;']&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
Specifying cluster name to reserve: &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
oarsub -r 'YYYY-MM-dd HH:mm:ss' -l nodes=2,walltime=1 -p &amp;quot;cluster='Genepi'&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt; If the resources are available, two nodes from the cluster &amp;quot;Genepi&amp;quot; will be reserved at the specified time.&lt;br /&gt;
&lt;br /&gt;
*The image to deploy can be created and loaded with the help of a [http://wiki.systemimager.org/index.php/Main_Page Systemimager]-like system, [https://www.grid5000.fr/mediawiki/index.php/Deploy_environment-OAR2 Kadeploy]. Creating an environment is [https://www.grid5000.fr/mediawiki/index.php/Deploy_environment-OAR2#Tune_an_environment_to_build_another_one:_customize_authentification_parameters described here].&lt;br /&gt;
&lt;br /&gt;
Loading: &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
frontend_$ kadeploy3 -a PATH_TO_PRIVATE_IMAGE_DESC -f $OAR_FILE_NODES  &lt;br /&gt;
&amp;lt;/source&amp;gt; A Linux distribution lenny-x64-nfs-2.1 with mc, subversion, autotools, doxygen, MPICH2, GSL, Boost, R, gnuplot, graphviz, X11, evince is available at Orsay /home/nancy/alastovetsky/grid5000.&lt;br /&gt;
&lt;br /&gt;
== Compiling and running MPI applications  ==&lt;br /&gt;
&lt;br /&gt;
*Compilation should be done on one of the reserved nodes (e.g. ssh `head -n 1 $OAR_NODEFILE`) &lt;br /&gt;
*Running MPI applications is described [https://www.grid5000.fr/mediawiki/index.php/Run_MPI_On_Grid%275000 here] &lt;br /&gt;
**mpirun/mpiexec should be run from one of the reserved nodes (e.g. ssh `head -n 1 $OAR_NODEFILE`)&lt;br /&gt;
&lt;br /&gt;
== Setting up new deploy image  ==&lt;br /&gt;
&lt;br /&gt;
List available images &lt;br /&gt;
&lt;br /&gt;
 kaenv3 -l&lt;br /&gt;
&lt;br /&gt;
Then book a node and launch: &lt;br /&gt;
&lt;br /&gt;
 oarsub -I -t deploy -l nodes=1,walltime=12&lt;br /&gt;
 kadeploy3 -e squeeze-x64-big -f $OAR_FILE_NODES -k&lt;br /&gt;
 ssh root@`head -n 1 $OAR_NODEFILE`&lt;br /&gt;
&lt;br /&gt;
default password: grid5000 &lt;br /&gt;
&lt;br /&gt;
edit /etc/apt/sources.list &lt;br /&gt;
&lt;br /&gt;
 apt-get update&lt;br /&gt;
 apt-get upgrade&lt;br /&gt;
&lt;br /&gt;
 apt-get install libtool autoconf automake mc colorgcc ctags libboost-serialization-dev libboost-graph-dev &lt;br /&gt;
            libatlas-base-dev gfortran vim gdb valgrind screen subversion iperf bc gsl-bin libgsl0-dev&lt;br /&gt;
&lt;br /&gt;
Possibly also install (for using extrae): &lt;br /&gt;
&lt;br /&gt;
 libxml2-dev binutils-dev libunwind7-dev&lt;br /&gt;
&lt;br /&gt;
Compiled from sources by us: &lt;br /&gt;
&lt;br /&gt;
*&amp;lt;strike&amp;gt;gsl-1.14 (download: ftp://ftp.gnu.org/gnu/gsl/)&amp;amp;nbsp;&amp;lt;/strike&amp;gt; ''Now with squeeze it is in repository.''&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;strike&amp;gt;./configure &amp;amp;amp;&amp;amp;amp; make &amp;amp;amp;&amp;amp;amp; make install&amp;lt;/strike&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*mpich2 (download: http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads)&lt;br /&gt;
&lt;br /&gt;
 ./configure --enable-shared --enable-sharedlibs=gcc --with-pm=mpd&lt;br /&gt;
 make &amp;amp;amp;&amp;amp;amp; make install&lt;br /&gt;
&lt;br /&gt;
Mpich2 installed to: &lt;br /&gt;
&lt;br /&gt;
 Installing MPE2 include files to /usr/local/include&lt;br /&gt;
 Installing MPE2 libraries to /usr/local/lib&lt;br /&gt;
 Installing MPE2 utility programs to /usr/local/bin&lt;br /&gt;
 Installing MPE2 configuration files to /usr/local/etc&lt;br /&gt;
 Installing MPE2 system utility programs to /usr/local/sbin&lt;br /&gt;
 Installing MPE2 man to /usr/local/share/man&lt;br /&gt;
 Installing MPE2 html to /usr/local/share/doc/&lt;br /&gt;
 Installed MPE2 in /usr/local&lt;br /&gt;
&lt;br /&gt;
*hwloc (and lstopo) (download: http://www.open-mpi.org/software/hwloc/v1.2/)&lt;br /&gt;
&lt;br /&gt;
Compile from sources. To get XML support, install libxml2-dev and pkg-config: &lt;br /&gt;
&lt;br /&gt;
 apt-get install libxml2-dev pkg-config&lt;br /&gt;
 tar -xzvf hwloc-1.1.1.tar.gz&lt;br /&gt;
 cd hwloc-1.1.1&lt;br /&gt;
 ./configure &amp;amp;amp;&amp;amp;amp; make &amp;amp;amp;&amp;amp;amp; make install&lt;br /&gt;
&lt;br /&gt;
Change root password. &lt;br /&gt;
&lt;br /&gt;
rm sources from root dir. &lt;br /&gt;
&lt;br /&gt;
Edit the &amp;quot;message of the day&amp;quot; &lt;br /&gt;
&lt;br /&gt;
 vi /etc/motd.tail&lt;br /&gt;
&lt;br /&gt;
 echo 90 &amp;amp;gt; /proc/sys/vm/overcommit_ratio&lt;br /&gt;
 echo 2 &amp;amp;gt; /proc/sys/vm/overcommit_memory&lt;br /&gt;
 date &amp;amp;gt;&amp;amp;gt; release&lt;br /&gt;
&lt;br /&gt;
Cleanup &lt;br /&gt;
&lt;br /&gt;
 apt-get clean&lt;br /&gt;
 rm /etc/udev/rules.d/*-persistent-net.rules&lt;br /&gt;
&lt;br /&gt;
Make image &lt;br /&gt;
&lt;br /&gt;
 ssh root@'''node''' tgz-g5k &amp;amp;gt; $HOME/grid5000/'''imagename'''.tgz&lt;br /&gt;
&lt;br /&gt;
Make an appropriate .env file. &lt;br /&gt;
&lt;br /&gt;
 kaenv3 -p lenny-x64-nfs -u deploy &amp;amp;gt; lenny-x64-custom-2.3.env&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== GotoBLAS2  ==&lt;br /&gt;
&lt;br /&gt;
When compiling GotoBLAS on a node without direct Internet access, you get this error: &amp;lt;source lang=&amp;quot;&amp;quot;&amp;gt;wget http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
--2011-05-19 03:11:03--  http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
Resolving www.netlib.org... 160.36.58.108&lt;br /&gt;
Connecting to www.netlib.org|160.36.58.108|:80... failed: Connection timed out.&lt;br /&gt;
Retrying.&lt;br /&gt;
&lt;br /&gt;
--2011-05-19 03:14:13--  (try: 2)  http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
Connecting to www.netlib.org|160.36.58.108|:80... failed: Connection timed out.&lt;br /&gt;
Retrying.&lt;br /&gt;
...&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Fix by downloading http://www.netlib.org/lapack/lapack-3.1.1.tgz to the GotoBLAS2 source directory and editing this line in the Makefile: &lt;br /&gt;
&lt;br /&gt;
 184c184&lt;br /&gt;
 &amp;amp;lt; 	-wget http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
 ---&lt;br /&gt;
 &amp;amp;gt; #	-wget http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
&lt;br /&gt;
GotoBLAS needs to be compiled individually for each unique machine, i.e. each cluster. Add the following to .bashrc: &lt;br /&gt;
&lt;br /&gt;
 export CLUSTER=`hostname |sed 's/\([a-z]*\).*/\1/'`&lt;br /&gt;
 LD_LIBRARY_PATH=$HOME/lib/$CLUSTER:$HOME/lib:/usr/local/lib:$LD_LIBRARY_PATH&lt;br /&gt;
 export LIBRARY_PATH=$HOME/lib/$CLUSTER:$HOME/lib:/usr/local/lib:$LIBRARY_PATH&lt;br /&gt;
&lt;br /&gt;
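The sed expression in the CLUSTER export above simply keeps the leading letters of the hostname; a minimal sketch (the hostnames below are illustrative):

```shell
# Derive the cluster name from a node hostname by keeping only the
# leading lowercase letters, as the CLUSTER export above does.
cluster_of() {
  echo "$1" | sed 's/\([a-z]*\).*/\1/'
}

cluster_of genepi-12.grenoble.grid5000.fr   # prints: genepi
```

This is why one library directory per cluster ($HOME/lib/$CLUSTER) is enough: all nodes of a cluster share the same hostname prefix.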
Run the following script once on each cluster: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;#! /bin/bash&lt;br /&gt;
echo &amp;quot;Compiling gotoblas for cluster: $CLUSTER&amp;quot;&lt;br /&gt;
cd $HOME/src&lt;br /&gt;
if [ ! -d &amp;quot;$CLUSTER&amp;quot; ]; then&lt;br /&gt;
        mkdir $CLUSTER&lt;br /&gt;
fi&lt;br /&gt;
cd $CLUSTER&lt;br /&gt;
tar -xzf ../Goto*.tar.gz&lt;br /&gt;
cd Goto*&lt;br /&gt;
make &amp;amp;&amp;gt; m.log&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if [ ! -d &amp;quot;$HOME/lib/$CLUSTER&amp;quot; ]; then&lt;br /&gt;
        mkdir $HOME/lib/$CLUSTER&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
cp libgoto2.so $HOME/lib/$CLUSTER&lt;br /&gt;
&lt;br /&gt;
echo results&lt;br /&gt;
ls -d $HOME/src/$CLUSTER&lt;br /&gt;
ls $HOME/src/$CLUSTER&lt;br /&gt;
&lt;br /&gt;
ls -d $HOME/lib/$CLUSTER&lt;br /&gt;
ls $HOME/lib/$CLUSTER&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: for newer processors this may fail. If it is a NEHALEM processor, try: &lt;br /&gt;
&lt;br /&gt;
 make clean&lt;br /&gt;
 make TARGET=NEHALEM&lt;br /&gt;
&lt;br /&gt;
== Paging and the OOM-Killer  ==&lt;br /&gt;
&lt;br /&gt;
When doing exhaustion of available memory experiments, problems can occur with over-commit. See [[HCL cluster#Paging_and_the_OOM-Killer]] for more detail. &lt;br /&gt;
&lt;br /&gt;
== Example of experiment setup across several sites  ==&lt;br /&gt;
&lt;br /&gt;
Sources of all the files mentioned below are available at [[Grid5000:sources]]. &lt;br /&gt;
&lt;br /&gt;
Pick one head node as the main head node (I use grenoble, but any will do). Set up the sources: &lt;br /&gt;
&lt;br /&gt;
 cd dave/fupermod-1.1.0&lt;br /&gt;
 make clean&lt;br /&gt;
 ./configure --with-cblas=goto --prefix=/usr/local/&lt;br /&gt;
&lt;br /&gt;
Reserve 2 nodes from each cluster on a 3-cluster site: &lt;br /&gt;
&lt;br /&gt;
 oarsub -r &amp;quot;2011-07-25 11:01:01&amp;quot; -t deploy  -l cluster=3/nodes=2,walltime=11:59:00&lt;br /&gt;
&lt;br /&gt;
Automate with: &lt;br /&gt;
&lt;br /&gt;
 for a in 2 3 4; do for i in `cat sites.$a`; do echo $a $i; ssh $i oarsub -r &amp;quot;2011-07-25 11:01:01&amp;quot; -t deploy -l cluster=$a/nodes=2,walltime=11:59:00; done; done&lt;br /&gt;
&lt;br /&gt;
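The loop above only varies the cluster count per site; as a sketch, the reservation command it submits can be built by a small helper (the function name and parameters are illustrative, and the command is echoed rather than submitted):

```shell
# Build the oarsub reservation string used in the loop above
# (dry run: the command is printed, not executed).
oar_cmd() {
  local date="$1" clusters="$2" nodes="$3"
  echo "oarsub -r '$date' -t deploy -l cluster=$clusters/nodes=$nodes,walltime=11:59:00"
}

oar_cmd '2011-07-25 11:01:01' 3 2
```

Echoing the command first makes it easy to review the reservation before looping over sites with ssh.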
Then on each site, where xxx is the site name: &lt;br /&gt;
&lt;br /&gt;
 kadeploy3 -a $HOME/grid5000/lenny-dave.env  -f $OAR_NODE_FILE --output-ok-nodes deployed.xxx&lt;br /&gt;
&lt;br /&gt;
Gather deployed files to a head node: &lt;br /&gt;
&lt;br /&gt;
 for i in `cat ~/sites `; do echo $i; scp $i:deployed* .&amp;amp;nbsp;; done&lt;br /&gt;
 cat deployed.* &amp;amp;gt; deployed.all&lt;br /&gt;
&lt;br /&gt;
Copy cluster-specific libs to each deployed node's /usr/local/lib dir with the script: &lt;br /&gt;
&lt;br /&gt;
 copy_local_libs.sh deployed.all&lt;br /&gt;
&lt;br /&gt;
Copy the source files to the root dir of each deployed node. Then make install on each (note: ssh -f does this in parallel). &lt;br /&gt;
&lt;br /&gt;
 for i in `cat ~/deployed.all`; do echo $i; rsync -aP ~/dave/fupermod-1.1.0 root@$i:&amp;amp;nbsp;; done&lt;br /&gt;
 for i in `cat ~/deployed.all`; do echo $i; ssh -f root@$i &amp;quot;cd fupermod-1.1.0&amp;amp;nbsp;; make all install&amp;quot;&amp;amp;nbsp;; done&lt;br /&gt;
&lt;br /&gt;
ssh to the first node &lt;br /&gt;
&lt;br /&gt;
 ssh `head -n1 deployed.all`&lt;br /&gt;
 n=$(cat deployed.all |wc -l)&lt;br /&gt;
 mpdboot --totalnum=$n --file=$HOME/deployed.all&lt;br /&gt;
 mpdtrace&lt;br /&gt;
&lt;br /&gt;
 cd dave/data/&lt;br /&gt;
 mpirun -n $n /usr/local/bin/partitioner -l /usr/local/lib/libmxm_col.so -a0 -D10000 -o N=100&lt;br /&gt;
&lt;br /&gt;
Cleanup after: &lt;br /&gt;
&lt;br /&gt;
 for i in `cat ~/sites `; do echo $i; ssh $i rm deployed.*&amp;amp;nbsp;; done&lt;br /&gt;
&lt;br /&gt;
== Check network speed  ==&lt;br /&gt;
&lt;br /&gt;
 apt-get install iperf&lt;br /&gt;
&lt;br /&gt;
== Choose which network interface to use  ==&lt;br /&gt;
&lt;br /&gt;
 mpirun --mca btl self,openib ...&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 mpirun --mca btl self,tcp ...&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=Grid5000&amp;diff=774</id>
		<title>Grid5000</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=Grid5000&amp;diff=774"/>
				<updated>2012-08-24T15:41:03Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: /* Login, job submission, deployment of image */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;https://www.grid5000.fr/mediawiki/index.php/Grid5000:Home &lt;br /&gt;
&lt;br /&gt;
[https://www.grid5000.fr/mediawiki/index.php/Grid5000:UserCharter USAGE POLICY] - Very important: after booking nodes (oarsub ...), run the command &amp;lt;source lang=&amp;quot;&amp;quot;&amp;gt;outofchart&amp;lt;/source&amp;gt; This checks that you have not booked too many resources, which would get you in trouble with the Grid5000 admins.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Login, job submission, deployment of image  ==&lt;br /&gt;
&lt;br /&gt;
*Select sites and clusters for experiments, using information on the [https://www.grid5000.fr/mediawiki/index.php/Grid5000:Network#Grid.275000_Sites Grid5000 network] and the [https://www.grid5000.fr/mediawiki/index.php/Status Status page] &lt;br /&gt;
*Access is provided via access nodes '''access.SITE.grid5000.fr''' marked [https://www.grid5000.fr/mediawiki/index.php/External_access here] as ''accessible from '''everywhere''' via ssh with '''keyboard-interactive''' authentication method''. As soon as you are on one of the sites, you can ssh directly to the frontend node of any other site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
access_$ ssh frontend.SITE2&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*There is no Internet access from the computing nodes (external IPs must be registered on the proxy), so download/update your files on the access nodes. Several revision control clients are available. &lt;br /&gt;
*Each site has a separate NFS, so to run an application on several sites at once you need to copy it ('''scp, sftp, rsync''') between access or frontend nodes. &lt;br /&gt;
*Jobs are run from the frontend nodes, using a [http://en.wikipedia.org/wiki/OpenPBS PBS]-like system, [https://www.grid5000.fr/mediawiki/index.php/Cluster_experiment-OAR2 OAR]. Basic commands: &lt;br /&gt;
**'''oarstat''' - queue status &lt;br /&gt;
**'''oarsub''' - job submission &lt;br /&gt;
**'''oardel''' - job removal&lt;br /&gt;
&lt;br /&gt;
Interactive job on deployed images: &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 frontend_$ oarsub -I -t deploy -l [/cluster=N/]nodes=N,walltime=HH[:MM[:SS]] [-p 'PROPERTY=&amp;quot;VALUE&amp;quot;']&lt;br /&gt;
&amp;lt;/source&amp;gt; Batch job on installed images: &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 frontend_$ oarsub BATCH_FILE -t allow_classic_ssh -l [/cluster=N/]nodes=N,walltime=HH[:MM[:SS]] [-p 'PROPERTY=&amp;quot;VALUE&amp;quot;']&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
Specifying cluster name to reserve: &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
oarsub -r '2012-08-24 19:30:00' -l nodes=2,walltime=1 -p &amp;quot;cluster='Genepi'&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt; If the resources are available, two nodes from the cluster &amp;quot;Genepi&amp;quot; will be reserved at the specified time.&lt;br /&gt;
&lt;br /&gt;
*The image to deploy can be created and loaded with the help of a [http://wiki.systemimager.org/index.php/Main_Page Systemimager]-like system [https://www.grid5000.fr/mediawiki/index.php/Deploy_environment-OAR2 Kadeploy]. Creating: [https://www.grid5000.fr/mediawiki/index.php/Deploy_environment-OAR2#Tune_an_environment_to_build_another_one:_customize_authentification_parameters described here]&lt;br /&gt;
&lt;br /&gt;
Loading: &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
frontend_$ kadeploy3 -a PATH_TO_PRIVATE_IMAGE_DESC -f $OAR_FILE_NODES  &lt;br /&gt;
&amp;lt;/source&amp;gt; A Linux distribution lenny-x64-nfs-2.1 with mc, subversion, autotools, doxygen, MPICH2, GSL, Boost, R, gnuplot, graphviz, X11, evince is available at Orsay under /home/nancy/alastovetsky/grid5000.&lt;br /&gt;
&lt;br /&gt;
== Compiling and running MPI applications  ==&lt;br /&gt;
&lt;br /&gt;
*Compilation should be done on one of the reserved nodes (e.g. ssh `head -n 1 $OAR_NODEFILE`) &lt;br /&gt;
*Running MPI applications is described [https://www.grid5000.fr/mediawiki/index.php/Run_MPI_On_Grid%275000 here] &lt;br /&gt;
**mpirun/mpiexec should be run from one of the reserved nodes (e.g. ssh `head -n 1 $OAR_NODEFILE`)&lt;br /&gt;
&lt;br /&gt;
== Setting up new deploy image  ==&lt;br /&gt;
&lt;br /&gt;
List available images &lt;br /&gt;
&lt;br /&gt;
 kaenv3 -l&lt;br /&gt;
&lt;br /&gt;
Then book a node and launch: &lt;br /&gt;
&lt;br /&gt;
 oarsub -I -t deploy -l nodes=1,walltime=12&lt;br /&gt;
 kadeploy3 -e squeeze-x64-big -f $OAR_FILE_NODES -k&lt;br /&gt;
 ssh root@`head -n 1 $OAR_NODEFILE`&lt;br /&gt;
&lt;br /&gt;
default password: grid5000 &lt;br /&gt;
&lt;br /&gt;
edit /etc/apt/sources.list &lt;br /&gt;
&lt;br /&gt;
 apt-get update&lt;br /&gt;
 apt-get upgrade&lt;br /&gt;
&lt;br /&gt;
 apt-get install libtool autoconf automake mc colorgcc ctags libboost-serialization-dev libboost-graph-dev &lt;br /&gt;
            libatlas-base-dev gfortran vim gdb valgrind screen subversion iperf bc gsl-bin libgsl0-dev&lt;br /&gt;
&lt;br /&gt;
Possibly also install (for using extrae): &lt;br /&gt;
&lt;br /&gt;
 libxml2-dev binutils-dev libunwind7-dev&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; Compiled from sources by us: &lt;br /&gt;
&lt;br /&gt;
*&amp;lt;strike&amp;gt;gsl-1.14 (download: ftp://ftp.gnu.org/gnu/gsl/)&amp;amp;nbsp;&amp;lt;/strike&amp;gt; ''Now with squeeze it is in repository.''&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;strike&amp;gt;./configure &amp;amp;amp;&amp;amp;amp; make &amp;amp;amp;&amp;amp;amp; make install&amp;lt;/strike&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*mpich2 (download: http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads)&lt;br /&gt;
&lt;br /&gt;
 ./configure --enable-shared --enable-sharedlibs=gcc --with-pm=mpd&lt;br /&gt;
 make &amp;amp;amp;&amp;amp;amp; make install&lt;br /&gt;
&lt;br /&gt;
MPICH2 installed to: &lt;br /&gt;
&lt;br /&gt;
 Installing MPE2 include files to /usr/local/include&lt;br /&gt;
 Installing MPE2 libraries to /usr/local/lib&lt;br /&gt;
 Installing MPE2 utility programs to /usr/local/bin&lt;br /&gt;
 Installing MPE2 configuration files to /usr/local/etc&lt;br /&gt;
 Installing MPE2 system utility programs to /usr/local/sbin&lt;br /&gt;
 Installing MPE2 man to /usr/local/share/man&lt;br /&gt;
 Installing MPE2 html to /usr/local/share/doc/&lt;br /&gt;
 Installed MPE2 in /usr/local&lt;br /&gt;
&lt;br /&gt;
*hwloc (and lstopo) (download: http://www.open-mpi.org/software/hwloc/v1.2/)&lt;br /&gt;
&lt;br /&gt;
Compile from sources. For XML support, install libxml2-dev and pkg-config: &lt;br /&gt;
&lt;br /&gt;
 apt-get install libxml2-dev pkg-config&lt;br /&gt;
 tar -xzvf hwloc-1.1.1.tar.gz&lt;br /&gt;
 cd hwloc-1.1.1&lt;br /&gt;
 ./configure &amp;amp;amp;&amp;amp;amp; make &amp;amp;amp;&amp;amp;amp; make install&lt;br /&gt;
&lt;br /&gt;
Change root password. &lt;br /&gt;
&lt;br /&gt;
Remove the sources from the root directory. &lt;br /&gt;
&lt;br /&gt;
Edit the &amp;quot;message of the day&amp;quot; &lt;br /&gt;
&lt;br /&gt;
 vi /etc/motd.tail&lt;br /&gt;
&lt;br /&gt;
 echo 90 &amp;amp;gt; /proc/sys/vm/overcommit_ratio&lt;br /&gt;
 echo 2 &amp;amp;gt; /proc/sys/vm/overcommit_memory&lt;br /&gt;
 date &amp;amp;gt;&amp;amp;gt; release&lt;br /&gt;
&lt;br /&gt;
Cleanup &lt;br /&gt;
&lt;br /&gt;
 apt-get clean&lt;br /&gt;
 rm /etc/udev/rules.d/*-persistent-net.rules&lt;br /&gt;
&lt;br /&gt;
Make image &lt;br /&gt;
&lt;br /&gt;
 ssh root@'''node''' tgz-g5k &amp;amp;gt; $HOME/grid5000/'''imagename'''.tgz&lt;br /&gt;
&lt;br /&gt;
Make an appropriate .env file: &lt;br /&gt;
&lt;br /&gt;
 kaenv3 -p lenny-x64-nfs -u deploy &amp;amp;gt; lenny-x64-custom-2.3.env&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== GotoBLAS2  ==&lt;br /&gt;
&lt;br /&gt;
When compiling GotoBLAS on a node without direct Internet access, you get this error: &amp;lt;source lang=&amp;quot;&amp;quot;&amp;gt;wget http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
--2011-05-19 03:11:03--  http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
Resolving www.netlib.org... 160.36.58.108&lt;br /&gt;
Connecting to www.netlib.org|160.36.58.108|:80... failed: Connection timed out.&lt;br /&gt;
Retrying.&lt;br /&gt;
&lt;br /&gt;
--2011-05-19 03:14:13--  (try: 2)  http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
Connecting to www.netlib.org|160.36.58.108|:80... failed: Connection timed out.&lt;br /&gt;
Retrying.&lt;br /&gt;
...&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Fix by downloading http://www.netlib.org/lapack/lapack-3.1.1.tgz to the GotoBLAS2 source directory and commenting out this line in the Makefile: &lt;br /&gt;
&lt;br /&gt;
 184c184&lt;br /&gt;
 &amp;amp;lt; 	-wget http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
 ---&lt;br /&gt;
 &amp;amp;gt; #	-wget http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; GotoBLAS needs to be compiled individually for each unique machine type, i.e. each cluster. Add the following to .bashrc: &lt;br /&gt;
&lt;br /&gt;
 export CLUSTER=`hostname |sed 's/\([a-z]*\).*/\1/'`&lt;br /&gt;
 LD_LIBRARY_PATH=$HOME/lib/$CLUSTER:$HOME/lib:/usr/local/lib:$LD_LIBRARY_PATH&lt;br /&gt;
 export LIBRARY_PATH=$HOME/lib/$CLUSTER:$HOME/lib:/usr/local/lib:$LIBRARY_PATH&lt;br /&gt;
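The sed expression above simply keeps the leading letters of the hostname, which on Grid5000 is the cluster name. A minimal sketch to check it (the node name below is a hypothetical example, used in place of `hostname`):

```shell
# The .bashrc line derives CLUSTER from `hostname`; here we feed the
# same sed expression a hypothetical Grid5000-style node name instead.
host="genepi-3.grenoble.grid5000.fr"
CLUSTER=$(echo "$host" | sed 's/\([a-z]*\).*/\1/')
echo "$CLUSTER"   # prints: genepi
```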
&lt;br /&gt;
Run the following script once on each cluster: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;#! /bin/bash&lt;br /&gt;
echo &amp;quot;Compiling gotoblas for cluster: $CLUSTER&amp;quot;&lt;br /&gt;
cd $HOME/src&lt;br /&gt;
if [ ! -d &amp;quot;$CLUSTER&amp;quot; ]; then&lt;br /&gt;
        mkdir $CLUSTER&lt;br /&gt;
fi&lt;br /&gt;
cd $CLUSTER&lt;br /&gt;
tar -xzf ../Goto*.tar.gz&lt;br /&gt;
cd Goto*&lt;br /&gt;
make &amp;amp;&amp;gt; m.log&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if [ ! -d &amp;quot;$HOME/lib/$CLUSTER&amp;quot; ]; then&lt;br /&gt;
        mkdir $HOME/lib/$CLUSTER&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
cp libgoto2.so $HOME/lib/$CLUSTER&lt;br /&gt;
&lt;br /&gt;
echo results&lt;br /&gt;
ls -d $HOME/src/$CLUSTER&lt;br /&gt;
ls $HOME/src/$CLUSTER&lt;br /&gt;
&lt;br /&gt;
ls -d $HOME/lib/$CLUSTER&lt;br /&gt;
ls $HOME/lib/$CLUSTER&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: on newer processors this may fail. If it is a Nehalem processor, try: &lt;br /&gt;
&lt;br /&gt;
 make clean&lt;br /&gt;
 make TARGET=NEHALEM&lt;br /&gt;
&lt;br /&gt;
== Paging and the OOM-Killer  ==&lt;br /&gt;
&lt;br /&gt;
When running experiments that exhaust available memory, problems can occur with memory over-commit. See [[HCL cluster#Paging_and_the_OOM-Killer]] for more detail. &lt;br /&gt;
&lt;br /&gt;
== Example of experiment setup across several sites  ==&lt;br /&gt;
&lt;br /&gt;
Sources of all files mentioned below are available at: [[Grid5000:sources]]. &lt;br /&gt;
&lt;br /&gt;
Pick one head node as the main head node (I use grenoble, but any will do). Set up the sources: &lt;br /&gt;
&lt;br /&gt;
 cd dave/fupermod-1.1.0&lt;br /&gt;
 make clean&lt;br /&gt;
 ./configure --with-cblas=goto --prefix=/usr/local/&lt;br /&gt;
&lt;br /&gt;
Reserve 2 nodes from each cluster on a 3-cluster site: &lt;br /&gt;
&lt;br /&gt;
 oarsub -r &amp;quot;2011-07-25 11:01:01&amp;quot; -t deploy  -l cluster=3/nodes=2,walltime=11:59:00&lt;br /&gt;
&lt;br /&gt;
Automate with: &lt;br /&gt;
&lt;br /&gt;
 for a in 2 3 4; do for i in `cat sites.$a`; do echo $a $i; ssh $i oarsub -r &amp;quot;2011-07-25 11:01:01&amp;quot; -t deploy -l cluster=$a/nodes=2,walltime=11:59:00; done; done&lt;br /&gt;
&lt;br /&gt;
Then on each site, where xxx is the site name: &lt;br /&gt;
&lt;br /&gt;
 kadeploy3 -a $HOME/grid5000/lenny-dave.env  -f $OAR_NODE_FILE --output-ok-nodes deployed.xxx&lt;br /&gt;
&lt;br /&gt;
Gather the deployed.* files on the head node: &lt;br /&gt;
&lt;br /&gt;
 for i in `cat ~/sites `; do echo $i; scp $i:deployed* .&amp;amp;nbsp;; done&lt;br /&gt;
 cat deployed.* &amp;amp;gt; deployed.all&lt;br /&gt;
&lt;br /&gt;
Copy cluster-specific libraries to each deployed node's /usr/local/lib directory with the script: &lt;br /&gt;
&lt;br /&gt;
 copy_local_libs.sh deployed.all&lt;br /&gt;
&lt;br /&gt;
Copy the source files to the root directory of each deployed node, then make install on each (note: ssh -f does this in parallel): &lt;br /&gt;
&lt;br /&gt;
 for i in `cat ~/deployed.all`; do echo $i; rsync -aP ~/dave/fupermod-1.1.0 root@$i:&amp;amp;nbsp;; done&lt;br /&gt;
 for i in `cat ~/deployed.all`; do echo $i; ssh -f root@$i &amp;quot;cd fupermod-1.1.0&amp;amp;nbsp;; make all install&amp;quot;&amp;amp;nbsp;; done&lt;br /&gt;
&lt;br /&gt;
SSH to the first node: &lt;br /&gt;
&lt;br /&gt;
 ssh `head -n1 deployed.all`&lt;br /&gt;
 n=$(cat deployed.all |wc -l)&lt;br /&gt;
 mpdboot --totalnum=$n --file=$HOME/deployed.all&lt;br /&gt;
 mpdtrace&lt;br /&gt;
&lt;br /&gt;
 cd dave/data/&lt;br /&gt;
 mpirun -n $n /usr/local/bin/partitioner -l /usr/local/lib/libmxm_col.so -a0 -D10000 -o N=100&lt;br /&gt;
&lt;br /&gt;
Cleanup after: &lt;br /&gt;
&lt;br /&gt;
 for i in `cat ~/sites `; do echo $i; ssh $i rm deployed.*&amp;amp;nbsp;; done&lt;br /&gt;
&lt;br /&gt;
== Check network speed  ==&lt;br /&gt;
&lt;br /&gt;
 apt-get install iperf&lt;br /&gt;
&lt;br /&gt;
== Choose which network interface to use  ==&lt;br /&gt;
&lt;br /&gt;
 mpirun --mca btl self,openib ...&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 mpirun --mca btl self,tcp ...&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=Grid5000&amp;diff=773</id>
		<title>Grid5000</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=Grid5000&amp;diff=773"/>
				<updated>2012-08-24T15:38:28Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: /* Login, job submission, deployment of image */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;https://www.grid5000.fr/mediawiki/index.php/Grid5000:Home &lt;br /&gt;
&lt;br /&gt;
[https://www.grid5000.fr/mediawiki/index.php/Grid5000:UserCharter USAGE POLICY]&amp;amp;nbsp; - Very important: after booking nodes (oarsub ...), run the command:&amp;amp;nbsp;&amp;lt;source lang=&amp;quot;&amp;quot;&amp;gt;outofchart&amp;lt;/source&amp;gt;&amp;amp;nbsp;This checks that you have not booked too many resources, which would get you in trouble with the Grid5000 admins.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Login, job submission, deployment of image  ==&lt;br /&gt;
&lt;br /&gt;
*Select sites and clusters for experiments, using information on the [https://www.grid5000.fr/mediawiki/index.php/Grid5000:Network#Grid.275000_Sites Grid5000 network] and the [https://www.grid5000.fr/mediawiki/index.php/Status Status page] &lt;br /&gt;
*Access is provided via access nodes '''access.SITE.grid5000.fr''' marked [https://www.grid5000.fr/mediawiki/index.php/External_access here] as ''accessible from '''everywhere''' via ssh with the '''keyboard-interactive''' authentication method''. As soon as you are on one of the sites, you can ssh directly to the frontend node of any other site:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
access_$ ssh frontend.SITE2&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*There is no Internet access from the computing nodes (external IPs must be registered on the proxy), so download/update your files on the access nodes. Several revision control clients are available. &lt;br /&gt;
*Each site has a separate NFS, so to run an application on several sites at once you need to copy it with '''scp''', '''sftp''', or '''rsync''' between access or frontend nodes. &lt;br /&gt;
*Jobs are run from the frontend nodes, using a [http://en.wikipedia.org/wiki/OpenPBS PBS]-like system [https://www.grid5000.fr/mediawiki/index.php/Cluster_experiment-OAR2 OAR]. Basic commands: &lt;br /&gt;
**'''oarstat''' - queue status &lt;br /&gt;
**'''oarsub''' - job submission &lt;br /&gt;
**'''oardel''' - job removal&lt;br /&gt;
&lt;br /&gt;
Interactive job on deployed images: &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 frontend_$ oarsub -I -t deploy -l [/cluster=N/]nodes=N,walltime=HH[:MM[:SS]] [-p 'PROPERTY=&amp;quot;VALUE&amp;quot;']&lt;br /&gt;
&amp;lt;/source&amp;gt; Batch job on installed images: &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 frontend_$ oarsub BATCH_FILE -t allow_classic_ssh -l [/cluster=N/]nodes=N,walltime=HH[:MM[:SS]] [-p 'PROPERTY=&amp;quot;VALUE&amp;quot;']&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
Specifying cluster name to reserve: &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
oarsub -r '2012-08-24 19:30:00' -l nodes=16,walltime=12 -p &amp;quot;cluster='Genepi'&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The image to deploy can be created and loaded with the help of a [http://wiki.systemimager.org/index.php/Main_Page Systemimager]-like system [https://www.grid5000.fr/mediawiki/index.php/Deploy_environment-OAR2 Kadeploy]. Creating: [https://www.grid5000.fr/mediawiki/index.php/Deploy_environment-OAR2#Tune_an_environment_to_build_another_one:_customize_authentification_parameters described here]&lt;br /&gt;
&lt;br /&gt;
Loading: &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
frontend_$ kadeploy3 -a PATH_TO_PRIVATE_IMAGE_DESC -f $OAR_FILE_NODES  &lt;br /&gt;
&amp;lt;/source&amp;gt; A Linux distribution lenny-x64-nfs-2.1 with mc, subversion, autotools, doxygen, MPICH2, GSL, Boost, R, gnuplot, graphviz, X11, evince is available at Orsay under /home/nancy/alastovetsky/grid5000.&lt;br /&gt;
&lt;br /&gt;
== Compiling and running MPI applications  ==&lt;br /&gt;
&lt;br /&gt;
*Compilation should be done on one of the reserved nodes (e.g. ssh `head -n 1 $OAR_NODEFILE`) &lt;br /&gt;
*Running MPI applications is described [https://www.grid5000.fr/mediawiki/index.php/Run_MPI_On_Grid%275000 here] &lt;br /&gt;
**mpirun/mpiexec should be run from one of the reserved nodes (e.g. ssh `head -n 1 $OAR_NODEFILE`)&lt;br /&gt;
&lt;br /&gt;
== Setting up new deploy image  ==&lt;br /&gt;
&lt;br /&gt;
List available images &lt;br /&gt;
&lt;br /&gt;
 kaenv3 -l&lt;br /&gt;
&lt;br /&gt;
Then book a node and launch: &lt;br /&gt;
&lt;br /&gt;
 oarsub -I -t deploy -l nodes=1,walltime=12&lt;br /&gt;
 kadeploy3 -e squeeze-x64-big -f $OAR_FILE_NODES -k&lt;br /&gt;
 ssh root@`head -n 1 $OAR_NODEFILE`&lt;br /&gt;
&lt;br /&gt;
default password: grid5000 &lt;br /&gt;
&lt;br /&gt;
edit /etc/apt/sources.list &lt;br /&gt;
&lt;br /&gt;
 apt-get update&lt;br /&gt;
 apt-get upgrade&lt;br /&gt;
&lt;br /&gt;
 apt-get install libtool autoconf automake mc colorgcc ctags libboost-serialization-dev libboost-graph-dev &lt;br /&gt;
            libatlas-base-dev gfortran vim gdb valgrind screen subversion iperf bc gsl-bin libgsl0-dev&lt;br /&gt;
&lt;br /&gt;
Possibly also install (for using extrae): &lt;br /&gt;
&lt;br /&gt;
 libxml2-dev binutils-dev libunwind7-dev&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; Compiled from sources by us: &lt;br /&gt;
&lt;br /&gt;
*&amp;lt;strike&amp;gt;gsl-1.14 (download: ftp://ftp.gnu.org/gnu/gsl/)&amp;amp;nbsp;&amp;lt;/strike&amp;gt; ''Now with squeeze it is in repository.''&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;strike&amp;gt;./configure &amp;amp;amp;&amp;amp;amp; make &amp;amp;amp;&amp;amp;amp; make install&amp;lt;/strike&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*mpich2 (download: http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads)&lt;br /&gt;
&lt;br /&gt;
 ./configure --enable-shared --enable-sharedlibs=gcc --with-pm=mpd&lt;br /&gt;
 make &amp;amp;amp;&amp;amp;amp; make install&lt;br /&gt;
&lt;br /&gt;
MPICH2 installed to: &lt;br /&gt;
&lt;br /&gt;
 Installing MPE2 include files to /usr/local/include&lt;br /&gt;
 Installing MPE2 libraries to /usr/local/lib&lt;br /&gt;
 Installing MPE2 utility programs to /usr/local/bin&lt;br /&gt;
 Installing MPE2 configuration files to /usr/local/etc&lt;br /&gt;
 Installing MPE2 system utility programs to /usr/local/sbin&lt;br /&gt;
 Installing MPE2 man to /usr/local/share/man&lt;br /&gt;
 Installing MPE2 html to /usr/local/share/doc/&lt;br /&gt;
 Installed MPE2 in /usr/local&lt;br /&gt;
&lt;br /&gt;
*hwloc (and lstopo) (download: http://www.open-mpi.org/software/hwloc/v1.2/)&lt;br /&gt;
&lt;br /&gt;
Compile from sources. For XML support, install libxml2-dev and pkg-config: &lt;br /&gt;
&lt;br /&gt;
 apt-get install libxml2-dev pkg-config&lt;br /&gt;
 tar -xzvf hwloc-1.1.1.tar.gz&lt;br /&gt;
 cd hwloc-1.1.1&lt;br /&gt;
 ./configure &amp;amp;amp;&amp;amp;amp; make &amp;amp;amp;&amp;amp;amp; make install&lt;br /&gt;
&lt;br /&gt;
Change root password. &lt;br /&gt;
&lt;br /&gt;
Remove the sources from the root directory. &lt;br /&gt;
&lt;br /&gt;
Edit the &amp;quot;message of the day&amp;quot; &lt;br /&gt;
&lt;br /&gt;
 vi /etc/motd.tail&lt;br /&gt;
&lt;br /&gt;
 echo 90 &amp;amp;gt; /proc/sys/vm/overcommit_ratio&lt;br /&gt;
 echo 2 &amp;amp;gt; /proc/sys/vm/overcommit_memory&lt;br /&gt;
 date &amp;amp;gt;&amp;amp;gt; release&lt;br /&gt;
&lt;br /&gt;
Cleanup &lt;br /&gt;
&lt;br /&gt;
 apt-get clean&lt;br /&gt;
 rm /etc/udev/rules.d/*-persistent-net.rules&lt;br /&gt;
&lt;br /&gt;
Make image &lt;br /&gt;
&lt;br /&gt;
 ssh root@'''node''' tgz-g5k &amp;amp;gt; $HOME/grid5000/'''imagename'''.tgz&lt;br /&gt;
&lt;br /&gt;
Make an appropriate .env file: &lt;br /&gt;
&lt;br /&gt;
 kaenv3 -p lenny-x64-nfs -u deploy &amp;amp;gt; lenny-x64-custom-2.3.env&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== GotoBLAS2  ==&lt;br /&gt;
&lt;br /&gt;
When compiling GotoBLAS on a node without direct Internet access, you get this error: &amp;lt;source lang=&amp;quot;&amp;quot;&amp;gt;wget http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
--2011-05-19 03:11:03--  http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
Resolving www.netlib.org... 160.36.58.108&lt;br /&gt;
Connecting to www.netlib.org|160.36.58.108|:80... failed: Connection timed out.&lt;br /&gt;
Retrying.&lt;br /&gt;
&lt;br /&gt;
--2011-05-19 03:14:13--  (try: 2)  http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
Connecting to www.netlib.org|160.36.58.108|:80... failed: Connection timed out.&lt;br /&gt;
Retrying.&lt;br /&gt;
...&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Fix by downloading http://www.netlib.org/lapack/lapack-3.1.1.tgz to the GotoBLAS2 source directory and commenting out this line in the Makefile: &lt;br /&gt;
&lt;br /&gt;
 184c184&lt;br /&gt;
 &amp;amp;lt; 	-wget http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
 ---&lt;br /&gt;
 &amp;amp;gt; #	-wget http://www.netlib.org/lapack/lapack-3.1.1.tgz&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; GotoBLAS needs to be compiled individually for each unique machine type, i.e. each cluster. Add the following to .bashrc: &lt;br /&gt;
&lt;br /&gt;
 export CLUSTER=`hostname |sed 's/\([a-z]*\).*/\1/'`&lt;br /&gt;
 LD_LIBRARY_PATH=$HOME/lib/$CLUSTER:$HOME/lib:/usr/local/lib:$LD_LIBRARY_PATH&lt;br /&gt;
 export LIBRARY_PATH=$HOME/lib/$CLUSTER:$HOME/lib:/usr/local/lib:$LIBRARY_PATH&lt;br /&gt;
&lt;br /&gt;
Run the following script once on each cluster: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;#! /bin/bash&lt;br /&gt;
echo &amp;quot;Compiling gotoblas for cluster: $CLUSTER&amp;quot;&lt;br /&gt;
cd $HOME/src&lt;br /&gt;
if [ ! -d &amp;quot;$CLUSTER&amp;quot; ]; then&lt;br /&gt;
        mkdir $CLUSTER&lt;br /&gt;
fi&lt;br /&gt;
cd $CLUSTER&lt;br /&gt;
tar -xzf ../Goto*.tar.gz&lt;br /&gt;
cd Goto*&lt;br /&gt;
make &amp;amp;&amp;gt; m.log&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if [ ! -d &amp;quot;$HOME/lib/$CLUSTER&amp;quot; ]; then&lt;br /&gt;
        mkdir $HOME/lib/$CLUSTER&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
cp libgoto2.so $HOME/lib/$CLUSTER&lt;br /&gt;
&lt;br /&gt;
echo results&lt;br /&gt;
ls -d $HOME/src/$CLUSTER&lt;br /&gt;
ls $HOME/src/$CLUSTER&lt;br /&gt;
&lt;br /&gt;
ls -d $HOME/lib/$CLUSTER&lt;br /&gt;
ls $HOME/lib/$CLUSTER&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: on newer processors this may fail. If it is a Nehalem processor, try: &lt;br /&gt;
&lt;br /&gt;
 make clean&lt;br /&gt;
 make TARGET=NEHALEM&lt;br /&gt;
&lt;br /&gt;
== Paging and the OOM-Killer  ==&lt;br /&gt;
&lt;br /&gt;
When running experiments that exhaust available memory, problems can occur with memory over-commit. See [[HCL cluster#Paging_and_the_OOM-Killer]] for more detail. &lt;br /&gt;
&lt;br /&gt;
== Example of experiment setup across several sites  ==&lt;br /&gt;
&lt;br /&gt;
Sources of all files mentioned below are available at: [[Grid5000:sources]]. &lt;br /&gt;
&lt;br /&gt;
Pick one head node as the main head node (I use grenoble, but any will do). Set up the sources: &lt;br /&gt;
&lt;br /&gt;
 cd dave/fupermod-1.1.0&lt;br /&gt;
 make clean&lt;br /&gt;
 ./configure --with-cblas=goto --prefix=/usr/local/&lt;br /&gt;
&lt;br /&gt;
Reserve 2 nodes from each cluster on a 3-cluster site: &lt;br /&gt;
&lt;br /&gt;
 oarsub -r &amp;quot;2011-07-25 11:01:01&amp;quot; -t deploy  -l cluster=3/nodes=2,walltime=11:59:00&lt;br /&gt;
&lt;br /&gt;
Automate with: &lt;br /&gt;
&lt;br /&gt;
 for a in 2 3 4; do for i in `cat sites.$a`; do echo $a $i; ssh $i oarsub -r &amp;quot;2011-07-25 11:01:01&amp;quot; -t deploy -l cluster=$a/nodes=2,walltime=11:59:00; done; done&lt;br /&gt;
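The one-liner above is hard to read; here is an equivalent dry-run sketch that only prints the oarsub commands it would run over ssh. The sites.N file names and their contents (one frontend hostname per line) are hypothetical:

```shell
# Dry-run version of the per-site reservation loop: print each
# command instead of running it over ssh.
mkdir -p /tmp/g5k-demo
cd /tmp/g5k-demo
printf 'lille\nrennes\n' > sites.2   # hypothetical frontends per cluster count
printf 'grenoble\n' > sites.3
for a in 2 3; do
  for i in $(cat sites.$a); do
    echo "ssh $i oarsub -r '2011-07-25 11:01:01' -t deploy -l cluster=$a/nodes=2,walltime=11:59:00"
  done
done
```

Replacing `echo` with the real `ssh $i oarsub ...` call gives the one-liner above.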
&lt;br /&gt;
Then on each site, where xxx is the site name: &lt;br /&gt;
&lt;br /&gt;
 kadeploy3 -a $HOME/grid5000/lenny-dave.env  -f $OAR_NODE_FILE --output-ok-nodes deployed.xxx&lt;br /&gt;
&lt;br /&gt;
Gather the deployed.* files on the head node: &lt;br /&gt;
&lt;br /&gt;
 for i in `cat ~/sites `; do echo $i; scp $i:deployed* .&amp;amp;nbsp;; done&lt;br /&gt;
 cat deployed.* &amp;amp;gt; deployed.all&lt;br /&gt;
&lt;br /&gt;
Copy cluster-specific libraries to each deployed node's /usr/local/lib directory with the script: &lt;br /&gt;
&lt;br /&gt;
 copy_local_libs.sh deployed.all&lt;br /&gt;
&lt;br /&gt;
Copy the source files to the root directory of each deployed node, then make install on each (note: ssh -f does this in parallel): &lt;br /&gt;
&lt;br /&gt;
 for i in `cat ~/deployed.all`; do echo $i; rsync -aP ~/dave/fupermod-1.1.0 root@$i:&amp;amp;nbsp;; done&lt;br /&gt;
 for i in `cat ~/deployed.all`; do echo $i; ssh -f root@$i &amp;quot;cd fupermod-1.1.0&amp;amp;nbsp;; make all install&amp;quot;&amp;amp;nbsp;; done&lt;br /&gt;
&lt;br /&gt;
SSH to the first node: &lt;br /&gt;
&lt;br /&gt;
 ssh `head -n1 deployed.all`&lt;br /&gt;
 n=$(cat deployed.all |wc -l)&lt;br /&gt;
 mpdboot --totalnum=$n --file=$HOME/deployed.all&lt;br /&gt;
 mpdtrace&lt;br /&gt;
&lt;br /&gt;
 cd dave/data/&lt;br /&gt;
 mpirun -n $n /usr/local/bin/partitioner -l /usr/local/lib/libmxm_col.so -a0 -D10000 -o N=100&lt;br /&gt;
&lt;br /&gt;
Cleanup after: &lt;br /&gt;
&lt;br /&gt;
 for i in `cat ~/sites `; do echo $i; ssh $i rm deployed.*&amp;amp;nbsp;; done&lt;br /&gt;
&lt;br /&gt;
== Check network speed  ==&lt;br /&gt;
&lt;br /&gt;
 apt-get install iperf&lt;br /&gt;
&lt;br /&gt;
== Choose which network interface to use  ==&lt;br /&gt;
&lt;br /&gt;
 mpirun --mca btl self,openib ...&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 mpirun --mca btl self,tcp ...&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=OpenMPI&amp;diff=770</id>
		<title>OpenMPI</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=OpenMPI&amp;diff=770"/>
				<updated>2012-08-22T10:45:46Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: /* MCA parameter files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;http://www.open-mpi.org/faq/&lt;br /&gt;
&lt;br /&gt;
== MCA parameter files ==&lt;br /&gt;
If you want to permanently use some MCA parameter settings, you can create a file $HOME/.openmpi/mca-params.conf, e.g.:&lt;br /&gt;
&lt;br /&gt;
 cat $HOME/.openmpi/mca-params.conf&lt;br /&gt;
 btl_tcp_if_exclude = lo,eth1&lt;br /&gt;
&lt;br /&gt;
== Handling SSH key issues ==&lt;br /&gt;
&lt;br /&gt;
This trick avoids the confirmation prompt when SSH asks whether a host should be added to known_hosts:&lt;br /&gt;
&lt;br /&gt;
    ssh -q -o StrictHostKeyChecking=no &lt;br /&gt;
&lt;br /&gt;
With OpenMPI it can be used as: &lt;br /&gt;
&lt;br /&gt;
    mpirun --mca plm_rsh_agent &amp;quot;ssh -q -o StrictHostKeyChecking=no&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Running applications on Multiprocessors/Multicores ==&lt;br /&gt;
Processes can be bound to specific sockets and cores on the nodes by choosing the right mpirun options.&lt;br /&gt;
* [http://www.open-mpi.org/doc/v1.4/man1/mpirun.1.php#sect9 Process binding]&lt;br /&gt;
* [http://www.open-mpi.org/doc/v1.4/man1/mpirun.1.php#sect10 Rankfile]&lt;br /&gt;
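For example, a rankfile maps each MPI rank to a node, socket, and core. The host names below are hypothetical, and the exact slot syntax depends on the Open MPI version (see the rankfile manual section linked above):

```
rank 0=hcl01 slot=0:0
rank 1=hcl01 slot=0:1
rank 2=hcl02 slot=1:0-1
```

It is then passed to mpirun with something like `mpirun -np 3 -rf myrankfile ./app`.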
&lt;br /&gt;
== PERUSE ==&lt;br /&gt;
[[Media:current_peruse_spec.pdf|PERUSE Specification]]&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=SSH&amp;diff=769</id>
		<title>SSH</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=SSH&amp;diff=769"/>
				<updated>2012-08-22T10:43:29Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: /* The best way is saying no &amp;quot;YES&amp;quot; */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Passwordless SSH ==&lt;br /&gt;
To set up passwordless SSH, there are three main things to do:&lt;br /&gt;
* generate a pair of public/private keys on your local computer&lt;br /&gt;
* copy the public key from the source computer to the target computer's authorized_keys file&lt;br /&gt;
* check the permissions. &lt;br /&gt;
&lt;br /&gt;
You can repeat that transitively for &amp;quot;A-&amp;gt;B-&amp;gt;C&amp;quot;. You can use the initial pair of keys everywhere.&lt;br /&gt;
&lt;br /&gt;
See here for details:&lt;br /&gt;
&lt;br /&gt;
http://www.stearns.org/doc/ssh-techniques.current.html&lt;br /&gt;
&lt;br /&gt;
== Automatically saying &amp;quot;yes&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
This expect script automates typing &amp;quot;yes&amp;quot; when SSH asks if a host should be added to known_hosts: &lt;br /&gt;
 &lt;br /&gt;
 #!/usr/bin/expect -f&lt;br /&gt;
 set arg1 [lindex $argv 0]&lt;br /&gt;
 set timeout 2&lt;br /&gt;
 spawn ssh  $arg1&lt;br /&gt;
 expect &amp;quot;yes/no&amp;quot;  {&lt;br /&gt;
 send &amp;quot;yes\n&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
 send &amp;quot;exit\n&amp;quot;&lt;br /&gt;
 send &amp;quot;\r&amp;quot;          &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can call it from a bash script to iterate over all nodes:&lt;br /&gt;
&lt;br /&gt;
 for i in `uniq hostfile` ; do&lt;br /&gt;
 ./say-yes.exp $i&lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
== Better than automatically saying &amp;quot;yes&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
Remark: It turns out there is a more elegant way to do this: the ''ssh-keyscan'' tool, e.g.&lt;br /&gt;
&lt;br /&gt;
 ssh-keyscan -H `uniq hostfile` &amp;gt;&amp;gt; ~/.ssh/known_hosts&lt;br /&gt;
&lt;br /&gt;
== Making a cascade of SSH connections easy ==&lt;br /&gt;
Here is a very convenient way to access any machine directly instead of making a cascade of SSH calls. If you cannot directly access e.g. the machine &amp;quot;heterogeneous&amp;quot;, but you can log into &amp;quot;csserver&amp;quot; and from there into &amp;quot;heterogeneous&amp;quot;, you can put this into your .ssh/config file:&lt;br /&gt;
 Host csserver&lt;br /&gt;
   User kdichev&lt;br /&gt;
   Hostname csserver.ucd.ie&lt;br /&gt;
 Host heterogeneous&lt;br /&gt;
   User kiril&lt;br /&gt;
   Hostname heterogeneous.ucd.ie&lt;br /&gt;
   ProxyCommand ssh -qax csserver nc %h %p&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the installation of the new PBS system, you cannot directly log into a hclXX node. Instead, do&lt;br /&gt;
 ssh heterogeneous and use &amp;quot;qsub&amp;quot;; see [[HCL_cluster#Access_and_Security]]&lt;br /&gt;
&lt;br /&gt;
== The best way is saying no &amp;quot;YES&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
This trick avoids a confirmation message asking &amp;quot;yes&amp;quot; when asked by SSH if a host should be added to known_hosts:&lt;br /&gt;
&lt;br /&gt;
    ssh -q -o StrictHostKeyChecking=no &lt;br /&gt;
&lt;br /&gt;
So with OpenMPI it can be used as &lt;br /&gt;
&lt;br /&gt;
    mpirun --mca plm_rsh_agent &amp;quot;ssh -q -o StrictHostKeyChecking=no&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== X11 forwarding ==&lt;br /&gt;
&amp;lt;code lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh -X hostname&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
or add the following line to your .ssh/ssh_config file&lt;br /&gt;
 ForwardX11 yes&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=SSH&amp;diff=768</id>
		<title>SSH</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=SSH&amp;diff=768"/>
				<updated>2012-08-22T10:43:04Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: /* The best way is saying no &amp;quot;YES&amp;quot; */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Passwordless SSH ==&lt;br /&gt;
To set up passwordless SSH, there are three main things to do:&lt;br /&gt;
* generate a pair of public/private keys on your local computer&lt;br /&gt;
* copy the public key from the source computer to the target computer's authorized_keys file&lt;br /&gt;
* check the permissions. &lt;br /&gt;
&lt;br /&gt;
You can repeat that transitively for &amp;quot;A-&amp;gt;B-&amp;gt;C&amp;quot;. You can use the initial pair of keys everywhere.&lt;br /&gt;
&lt;br /&gt;
See here for details:&lt;br /&gt;
&lt;br /&gt;
http://www.stearns.org/doc/ssh-techniques.current.html&lt;br /&gt;
&lt;br /&gt;
== Automatically saying &amp;quot;yes&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
This expect script automates typing &amp;quot;yes&amp;quot; when SSH asks if a host should be added to known_hosts: &lt;br /&gt;
 &lt;br /&gt;
 #!/usr/bin/expect -f&lt;br /&gt;
 set arg1 [lindex $argv 0]&lt;br /&gt;
 set timeout 2&lt;br /&gt;
 spawn ssh  $arg1&lt;br /&gt;
 expect &amp;quot;yes/no&amp;quot;  {&lt;br /&gt;
 send &amp;quot;yes\n&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
 send &amp;quot;exit\n&amp;quot;&lt;br /&gt;
 send &amp;quot;\r&amp;quot;          &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can call it from a bash script to iterate over all nodes:&lt;br /&gt;
&lt;br /&gt;
 for i in `uniq hostfile` ; do&lt;br /&gt;
 ./say-yes.exp $i&lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
== Better than automatically saying &amp;quot;yes&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
Remark: It turns out there is a more elegant way to do this: the ''ssh-keyscan'' tool, e.g.&lt;br /&gt;
&lt;br /&gt;
 ssh-keyscan -H `uniq hostfile` &amp;gt;&amp;gt; ~/.ssh/known_hosts&lt;br /&gt;
&lt;br /&gt;
== Making a cascade of SSH connections easy ==&lt;br /&gt;
Here is a very convenient way to access any machine directly instead of making a cascade of SSH calls. If you cannot directly access e.g. the machine &amp;quot;heterogeneous&amp;quot;, but you can log into &amp;quot;csserver&amp;quot; and from there into &amp;quot;heterogeneous&amp;quot;, you can put this into your .ssh/config file:&lt;br /&gt;
 Host csserver&lt;br /&gt;
   User kdichev&lt;br /&gt;
   Hostname csserver.ucd.ie&lt;br /&gt;
 Host heterogeneous&lt;br /&gt;
   User kiril&lt;br /&gt;
   Hostname heterogeneous.ucd.ie&lt;br /&gt;
   ProxyCommand ssh -qax csserver nc %h %p&lt;br /&gt;
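&lt;br /&gt;
On OpenSSH 7.3 and newer, the ''ProxyJump'' option achieves the same without nc (a sketch using the same host names as above):&lt;br /&gt;
 Host heterogeneous&lt;br /&gt;
   User kiril&lt;br /&gt;
   Hostname heterogeneous.ucd.ie&lt;br /&gt;
   ProxyJump csserver&lt;br /&gt;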
&lt;br /&gt;
&lt;br /&gt;
Since the installation of the new PBS system, you cannot log into a hclXX node directly. Instead, run&lt;br /&gt;
 ssh heterogeneous&lt;br /&gt;
and use &amp;quot;qsub&amp;quot; (see [[HCL_cluster#Access_and_Security]]).&lt;br /&gt;
&lt;br /&gt;
== The best way: not saying &amp;quot;yes&amp;quot; at all ==&lt;br /&gt;
&lt;br /&gt;
This ssh option suppresses the prompt that asks whether a host should be added to known_hosts:&lt;br /&gt;
&lt;br /&gt;
    ssh -q -o StrictHostKeyChecking=no &lt;br /&gt;
&lt;br /&gt;
With Open MPI it can be used as follows:&lt;br /&gt;
&lt;br /&gt;
    mpirun --mca plm_rsh_agent &amp;quot;ssh -q -o StrictHostKeyChecking=no&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== X11 forwarding ==&lt;br /&gt;
&amp;lt;code lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh -X hostname&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
or add the following line to your ~/.ssh/config file&lt;br /&gt;
 ForwardX11 yes&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	<entry>
		<id>https://hcl.ucd.ie/wiki/index.php?title=SSH&amp;diff=767</id>
		<title>SSH</title>
		<link rel="alternate" type="text/html" href="https://hcl.ucd.ie/wiki/index.php?title=SSH&amp;diff=767"/>
				<updated>2012-08-22T10:41:01Z</updated>
		
		<summary type="html">&lt;p&gt;Xalid: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Passwordless SSH ==&lt;br /&gt;
To set up passwordless SSH, there are three main things to do:&lt;br /&gt;
* generate a pair of public/private keys on your local computer&lt;br /&gt;
* copy the public key from the source computer to the target computer's authorized_keys file&lt;br /&gt;
* check the permissions. &lt;br /&gt;
&lt;br /&gt;
You can repeat that transitively for &amp;quot;A-&amp;gt;B-&amp;gt;C&amp;quot;. You can use the initial pair of keys everywhere.&lt;br /&gt;
&lt;br /&gt;
See here for details:&lt;br /&gt;
&lt;br /&gt;
http://www.stearns.org/doc/ssh-techniques.current.html&lt;br /&gt;
&lt;br /&gt;
== Automatically saying &amp;quot;yes&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
This expect script automates typing &amp;quot;yes&amp;quot; when SSH asks whether a host should be added to known_hosts:&lt;br /&gt;
 &lt;br /&gt;
 #!/usr/bin/expect -f&lt;br /&gt;
 set arg1 [lindex $argv 0]&lt;br /&gt;
 set timeout 2&lt;br /&gt;
 spawn ssh  $arg1&lt;br /&gt;
 expect &amp;quot;yes/no&amp;quot; { send &amp;quot;yes\r&amp;quot; }&lt;br /&gt;
 send &amp;quot;exit\r&amp;quot;&lt;br /&gt;
 expect eof&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can call it from a bash script to iterate over all nodes:&lt;br /&gt;
&lt;br /&gt;
 for i in `uniq hostfile` ; do&lt;br /&gt;
 ./say-yes.exp $i&lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
== Better than automatically saying &amp;quot;yes&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
Remark: it turns out there is a more elegant way to do this task: the tool ''ssh-keyscan''.&lt;br /&gt;
&lt;br /&gt;
== Making a cascade of SSH connections easy ==&lt;br /&gt;
Here is a convenient way to set up direct access to any machine instead of making a cascade of SSH calls. If you cannot directly access e.g. the machine &amp;quot;heterogeneous&amp;quot;, but you can log into &amp;quot;csserver&amp;quot; and from there into &amp;quot;heterogeneous&amp;quot;, you can put this into your ~/.ssh/config file:&lt;br /&gt;
 Host csserver&lt;br /&gt;
   User kdichev&lt;br /&gt;
   Hostname csserver.ucd.ie&lt;br /&gt;
 Host heterogeneous&lt;br /&gt;
   User kiril&lt;br /&gt;
   Hostname heterogeneous.ucd.ie&lt;br /&gt;
   ProxyCommand ssh -qax csserver nc %h %p&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the installation of the new PBS system, you cannot log into a hclXX node directly. Instead, run&lt;br /&gt;
 ssh heterogeneous&lt;br /&gt;
and use &amp;quot;qsub&amp;quot; (see [[HCL_cluster#Access_and_Security]]).&lt;br /&gt;
&lt;br /&gt;
== The best way: not saying &amp;quot;yes&amp;quot; at all ==&lt;br /&gt;
&lt;br /&gt;
This ssh option avoids typing &amp;quot;yes&amp;quot; when SSH asks whether a host should be added to known_hosts:&lt;br /&gt;
&lt;br /&gt;
    ssh -q -o StrictHostKeyChecking=no &lt;br /&gt;
&lt;br /&gt;
With Open MPI it can be used as follows:&lt;br /&gt;
&lt;br /&gt;
    mpirun --mca plm_rsh_agent &amp;quot;ssh -q -o StrictHostKeyChecking=no&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== X11 forwarding ==&lt;br /&gt;
&amp;lt;code lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ssh -X hostname&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
or add the following line to your ~/.ssh/config file&lt;br /&gt;
 ForwardX11 yes&lt;/div&gt;</summary>
		<author><name>Xalid</name></author>	</entry>

	</feed>