HCL cluster/heterogeneous.ucd.ie install log
- Basic installation of Debian Squeeze
See also HCL_cluster/hcl_node_install_configuration_log
Networking
Interfaces
- Edit /etc/network/interfaces. Note that at some point eth1 should be configured by DHCP; it is on the UCD LAN and must be registered correctly (update the MAC address with Services). eth0 is the internal network.
auto lo eth0 eth1 eth2
iface lo inet loopback
iface eth0 inet static
address 192.168.20.254
netmask 255.255.255.0
iface eth1 inet static
address 192.168.21.254
netmask 255.255.255.0
iface eth2 inet dhcp
- Install non-free Linux firmware for the network interface (eth0). This will allow Gigabit operation on eth0 with the tg3 hardware (I think). Edit /etc/apt/sources.list to include the lines:
deb http://ftp.ie.debian.org/debian/ squeeze main contrib non-free
deb-src http://ftp.ie.debian.org/debian/ squeeze main contrib non-free
- Install firmware-linux:
apt-get update && apt-get install firmware-linux
DNS / BIND
We will run our own DNS server for the cluster. Because dhclient is running on eth2, we need to stop it from overwriting resolv.conf. Add the following to /etc/dhcp/dhclient.conf:
supersede domain-name "heterogeneous.ucd.ie ucd.ie";
prepend domain-name-servers 127.0.0.1;
After running dhclient, resolv.conf should read:
domain heterogeneous.ucd.ie
search heterogeneous.ucd.ie ucd.ie
nameserver 127.0.0.1
nameserver 137.43.116.19
nameserver 137.43.116.17
nameserver 137.43.105.22
Now install bind9 (apt-get install bind9). Edit /etc/bind/named.conf.local and set the domain zones for the cluster (forward and reverse). We have two subnets for which reverse lookups must be specified: 192.168.20 and 192.168.21.
//
// Do any local configuration here
//
// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/etc/bind/zones.rfc1918";
include "/etc/bind/rndc.key";
controls {
inet 127.0.0.1 allow { localhost; } keys { "rndc-key"; };
};
zone "heterogeneous.ucd.ie" {
type master;
file "db.heterogeneous.ucd.ie";
};
zone "21.168.192.in-addr.arpa" {
type master;
file "db.192.168.21";
};
zone "20.168.192.in-addr.arpa" {
type master;
file "db.192.168.20";
};
Also edit the options file /etc/bind/named.conf.options. Note the subnet we define in the allow sections, 192.168.20.0/23; it permits access from 192.168.20.* and 192.168.21.* addresses.
options {
directory "/var/cache/bind";
// If there is a firewall between you and nameservers you want
// to talk to, you may need to fix the firewall to allow multiple
// ports to talk. See http://www.kb.cert.org/vuls/id/800113
// If your ISP provided one or more IP addresses for stable
// nameservers, you probably want to use them as forwarders.
// Uncomment the following block, and insert the addresses replacing
// the all-0's placeholder.
forwarders {
137.43.116.19;
137.43.116.17;
137.43.105.22;
};
recursion yes;
version "REFUSED";
allow-recursion {
127.0.0.1;
192.168.20.0/23;
};
allow-query {
127.0.0.1;
192.168.20.0/23;
};
auth-nxdomain no; # conform to RFC1035
listen-on-v6 { any; };
};
Now work on the zone files specified above: /var/cache/bind/db.heterogeneous.ucd.ie and the reverse maps /var/cache/bind/db.192.168.20 & /var/cache/bind/db.192.168.21. Populate them with all nodes of the cluster.
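For reference, a minimal sketch of what the forward zone file could contain (the serial, timers and node addresses shown here are illustrative placeholders, not the cluster's actual records):
$TTL 86400
@               IN  SOA heterogeneous.ucd.ie. root.heterogeneous.ucd.ie. (
                        2010102601 ; serial
                        3600       ; refresh
                        900        ; retry
                        604800     ; expire
                        86400 )    ; negative-cache TTL
                IN  NS  heterogeneous.ucd.ie.
heterogeneous   IN  A   192.168.20.254
hcl01           IN  A   192.168.20.1
hcl02           IN  A   192.168.20.2
; ... one A record per node, with matching PTR records in the db.192.168.20 and db.192.168.21 files
After editing, named-checkzone heterogeneous.ucd.ie /var/cache/bind/db.heterogeneous.ucd.ie validates the file, and rndc reload (or /etc/init.d/bind9 restart) loads the changes.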
IP Tables
- Set up iptables. We want to implement NAT between the internal network (eth0) and the external one (eth1). Add a script named 00iptables to the /etc/network/if-up.d directory. All scripts in this directory are executed after the network interfaces are brought up, so this configuration will persist:
#!/bin/sh
PATH=/usr/sbin:/sbin:/bin:/usr/bin
IF_INT=eth0
IF_EXT=eth1
#
# delete all existing rules.
#
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -X
# Always accept loopback traffic
iptables -A INPUT -i lo -j ACCEPT
# Allow established connections, and those not coming from the outside
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state NEW ! -i $IF_EXT -j ACCEPT
iptables -A FORWARD -i $IF_EXT -o $IF_INT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow outgoing connections from the LAN side.
iptables -A FORWARD -i $IF_INT -o $IF_EXT -j ACCEPT
# Masquerade.
iptables -t nat -A POSTROUTING -o $IF_EXT -j MASQUERADE
# Don't forward from the outside to the inside.
iptables -A FORWARD -i $IF_EXT -o $IF_EXT -j REJECT
# Enable routing.
echo 1 > /proc/sys/net/ipv4/ip_forward
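One detail that is easy to miss: scripts in /etc/network/if-up.d are run by run-parts, so the file must be executable (and its name must not contain a dot). Assuming the script was saved as described above:
chmod +x /etc/network/if-up.d/00iptables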
NFS
Install:
apt-get install nfs-kernel-server nfs-common portmap
Add to /etc/exports
/home 192.168.20.0/255.255.254.0(rw,sync,no_root_squash,no_subtree_check)
Re-export all shares (no full server restart needed) with
exportfs -a
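On the client side (not covered in this log), a sketch of the corresponding /etc/fstab entry on a node, assuming the nodes mount the exported /home over the internal interface:
192.168.20.254:/home  /home  nfs  rw,hard,intr  0  0
Running mount -a (or rebooting) will then mount the share.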
Clonezilla
Firstly, Clonezilla is probably going to pollute a lot of your server configuration when it sets itself up. Be prepared to lose your iptables configuration, NFS exports (if any) and DHCP settings. Possibly more.
- follow the guide to installing Clonezilla here. Essentially:
- add repository key
wget -q http://drbl.sourceforge.net/GPG-KEY-DRBL -O- | apt-key add -
- add the line to /etc/apt/sources.list:
deb http://drbl.sourceforge.net/drbl-core drbl stable
- run:
apt-get update && apt-get install drbl && /opt/drbl/sbin/drbl4imp
- accept default options to drbl4imp.
- After Clonezilla has installed, edit /etc/dhcp3/dhcpd.conf, adding entries for the test nodes hcl07 and hcl03. Also ensure these nodes have been removed from the existing heterogeneous.ucd.ie DHCP configuration so that they are only served by one machine.
default-lease-time 300;
max-lease-time 300;
option subnet-mask 255.255.255.0;
option domain-name-servers 137.43.116.19,137.43.116.17,137.43.105.22;
option domain-name "ucd.ie";
ddns-update-style none; # brett had ad-hoc ...?
server-name drbl;
filename = "pxelinux.0";
subnet 192.168.21.0 netmask 255.255.255.0 {
option subnet-mask 255.255.255.0;
option routers 192.168.21.1;
next-server 192.168.21.254;
pool {
# allow members of "DRBL-Client";
range 192.168.21.200 192.168.21.212;
}
host hcl03 {
option host-name "hcl03.ucd.ie";
hardware ethernet 00:14:22:0A:22:6C;
fixed-address 192.168.21.5;
}
host hcl03_eth1 {
option host-name "hcl03_eth1.ucd.ie";
hardware ethernet 00:14:22:0A:22:6D;
fixed-address 192.168.21.105;
}
host hcl07 {
option host-name "hcl07.ucd.ie";
hardware ethernet 00:14:22:0A:20:E2;
fixed-address 192.168.21.9;
}
host hcl07_eth1 {
option host-name "hcl07_eth1.ucd.ie";
hardware ethernet 00:14:22:0A:20:E3;
fixed-address 192.168.21.109;
}
default-lease-time 21600;
max-lease-time 43200;
}
NTP Daemon
Nodes in the cluster should use the Network Time Protocol to set their clocks; heterogeneous.ucd.ie should provide that service. Install ntpd with the following command:
apt-get install ntp
Configure the daemon with the following line in /etc/ntp.conf
:
restrict 192.168.20.0 mask 255.255.254.0 nomodify notrap
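On the node side (an assumption; node configuration is not covered in this log), /etc/ntp.conf on each hclXX node would point at the head node:
server 192.168.20.254 iburst
Running ntpq -p on a node should then list heterogeneous as a peer.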
Install DHCP
Install the DHCP server package with apt-get install dhcp3-server. When you install Clonezilla it will probably pollute your DHCP server setup, so make ... TODO
Install NIS
Copy users from passwd, group and shadow from /etc on hcl01.
Install nis. Set domain to
heterogeneous.ucd.ie
Edit /etc/defaultdomain so that it contains:
heterogeneous.ucd.ie
Edit /etc/default/nis so that it contains:
# Are we a NIS server and if so what kind (values: false, slave, master)
NISSERVER=master
Edit /etc/ypserv.securenets so that it contains:
# allow connections from local
255.0.0.0 127.0.0.0
# allow connections from heterogeneous subnets .20 and .21
255.255.254.0 192.168.20.0
The NIS host is also a client of itself, so do the client set up as follows:
Edit /etc/hosts and ensure the NIS master is listed:
192.168.20.254 heterogeneous.ucd.ie heterogeneous
Edit /etc/yp.conf and ensure that it contains:
domain heterogeneous.ucd.ie server localhost
Edit /etc/passwd, adding a line to the end that reads: +::::::. Edit /etc/group, adding a line that reads +::: at the end.
The NIS Makefile will not pull user IDs and group IDs lower than a certain value; we must set this to 500 in /var/yp/Makefile:
MINUID=500
MINGID=500
Start the ypbind and yppasswd services. Then initialise the NIS database:
/usr/lib/yp/ypinit -m
Accept defaults at prompts.
Now start other NIS services
service nis start
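As a quick sanity check (assuming the copied accounts have UIDs of 500 or above):
ypwhich
ypcat passwd
ypwhich should print the NIS server name and ypcat passwd should list the imported accounts.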
Installing Ganglia Frontend
Install the packages gmetad and ganglia-webfrontend.
Configure the front end by appending the following to /etc/apache2/apache2.conf:
Include /etc/ganglia-webfrontend/apache.conf
Configure gmetad by adding the following lines to /etc/ganglia/gmetad.conf:
data_source "HCL Cluster" 192.168.20.1 192.168.20.16 data_source "HCL Service" localhost:8648
This means that the gmetad collector connects to hcl01 and hcl16 on the .20 subnet to gather data for the frontend to use.
After all packages are configured execute:
service apache2 restart
service gmetad restart
Pointing your browser here should display the monitoring page for HCL Cluster. gmond must also be installed and configured on the cluster nodes.
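For reference, a minimal sketch of the relevant parts of /etc/ganglia/gmond.conf on the nodes (an assumption; the multicast channels shown are gmond's stock defaults, and the cluster name must match the data_source label above):
cluster {
  name = "HCL Cluster"
}
udp_send_channel {
  mcast_join = 239.2.11.71
  port = 8649
}
udp_recv_channel {
  mcast_join = 239.2.11.71
  port = 8649
}
tcp_accept_channel {
  port = 8649
}
gmetad polls hcl01 and hcl16 on port 8649 (the tcp_accept_channel), so that port must be reachable from heterogeneous.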
Hardware Monitoring & Backup
Disk Monitoring
Install smartmontools as per here. Briefly:
apt-get install smartmontools
Edit /etc/default/smartmontools so that it contains:
# List of devices you want to explicitly enable S.M.A.R.T. for
# Not needed (and not recommended) if the device is monitored by smartd
enable_smart="/dev/sda"
# uncomment to start smartd on system startup
start_smartd=yes
# uncomment to pass additional options to smartd on startup
smartd_opts="--interval=1800"
Open /etc/smartd.conf and edit the first line that begins with DEVICESCAN (all lines after the first instance of DEVICESCAN are ignored). Have it read something like:
DEVICESCAN -d removable -n standby -m root -m robert_higgins@iol.ie -M exec /usr/share/smartmontools/smartd-runner
Then start the service /etc/init.d/smartmontools start
Note: consider installing this on all nodes, as it would be useful to have prior notice of any failing disks.
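To verify that SMART is working on the drive (assuming the disk is /dev/sda, as configured above):
smartctl -H /dev/sda
A healthy drive reports "PASSED".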
Torque - PBS
Torque is now in the Debian repository, but we are using a newer version and compiling from source. Note, however, that apt-get installs libtorque2 as a dependency of Open MPI, which leads to two versions of the library: one in /usr/lib and one in /usr/local/lib. If you manually set LD_LIBRARY_PATH, make sure that /usr/local/lib comes first.
Download Torque from here. Extract the archive and configure:
./configure --prefix=/usr/local
Run make && make install. Torque installs files under /usr/local and /var/spool/torque. Edit the Torque config file /var/spool/torque/torque.cfg, adding the line:
SERVERHOST=192.168.20.254
Add the compute nodes of the cluster to the file /var/spool/torque/server_priv/nodes
hcl01
hcl02
hcl03
hcl04
hcl05
hcl06
hcl07
hcl08
hcl09
hcl10
hcl11
hcl12
hcl13
hcl14
hcl15
hcl16
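If the nodes have more than one processor core, the standard nodes-file syntax allows a processor count per node; an illustrative (assumed) example:
hcl01 np=2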
By default, /usr/local/lib may not be in the list of directories searched for dynamically linked libraries. If not, add that path to the end of /etc/ld.so.conf.d/local.conf and run ldconfig to update the cache.
Now return to the extracted source distribution, find the contrib/init.d folder and copy debian.pbs_server to /etc/init.d/pbs_server. Edit the copied file so that the header reads:
#!/bin/sh
### BEGIN INIT INFO
# Provides: pbs_server
# Required-Start: $local_fs $named
# Should-Start:
# Required-Stop:
This will ensure that important services are started before the PBS/Torque daemon. Then run:
update-rc.d pbs_server defaults
pbs_server -t create
qterm
pbs_server
Packages for nodes
make packages
scp torque-package-mom-linux-i686.sh hclxx:
scp torque-package-clients-linux-i686.sh hclxx:
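On each node, the self-extracting packages are then installed and the MOM daemon started; a minimal sketch (run as root on the node, assuming the default /usr/local prefix):
./torque-package-mom-linux-i686.sh --install
./torque-package-clients-linux-i686.sh --install
ldconfig
pbs_mom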
Queues
We will configure 4 queues, with varying priority and preemption policies.
- The first is the normal queue. This will be where most jobs, which should not be interrupted, will be placed. It has the highest priority.
- The second queue is lowpri; it is for running jobs which may run for extended periods but are interruptible. Should a job A be placed on the normal queue while a lowpri job B is running, B will be sent a kill signal so it may shut down cleanly, and it will then be requeued on the system so that it can resume running after A has finished.
- The third queue is for running service jobs such as the home-directory backup. It is named service. It has lower priority than the above queues, and jobs running on this queue are preemptable.
- The final queue is the volunteer queue. This has the lowest priority and is for executing volunteer computing jobs during otherwise idle periods. These jobs will also be preemptable.
Queue Setup
Enter the queue manager program qmgr as the root user on heterogeneous. Send it the following commands:
Allow all users to see all queued jobs:
set server query_other_jobs = True
Create the default normal queue
create queue normal
set queue normal queue_type = Execution
set queue normal Priority = 10000
set queue normal enabled = True
set queue normal started = True
Create lowpri queue
create queue lowpri
set queue lowpri queue_type = Execution
set queue lowpri Priority = 1000
set queue lowpri enabled = True
set queue lowpri started = True
Create service queue
create queue service
set queue service queue_type = Execution
set queue service Priority = 100
set queue service enabled = True
set queue service started = True
Create volunteer queue
create queue volunteer
set queue volunteer queue_type = Execution
set queue volunteer Priority = 0
set queue volunteer enabled = True
set queue volunteer started = True
Set some server settings
set server default_queue = normal
set server resources_default.nodes = 1
set server resources_default.walltime = 12:00:00
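To verify the setup (an illustrative example; the job script is arbitrary):
qstat -q
echo "sleep 60" | qsub -q lowpri
qstat -a
qstat -q lists the four queues and their states; the submitted job should appear in qstat -a and will run once a scheduler (pbs_sched or Maui, below) is started.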
Misc
To let everyone see every job, run in qmgr:
set server query_other_jobs = True
Maui Scheduler
Download from here, unpack and run
./configure --with-pbs --with-spooldir=/var/spool/maui
Edit /var/spool/maui/maui.cfg
SERVERHOST heterogeneous
ADMIN1 root
RMCFG[HETEROGENEOUS] TYPE=PBS
AMCFG[bank] TYPE=NONE
RMPOLLINTERVAL 00:00:01
QUEUETIMEWEIGHT 0
PREEMPTIONPOLICY REQUEUE
DEFERTIME 0
DEFERCOUNT 86400
NODEALLOCATIONPOLICY FIRSTAVAILABLE
QOSWEIGHT 1
QOSCFG[normal] QFLAGS=PREEMPTOR
QOSCFG[lowpri] QFLAGS=PREEMPTEE:PREEMPTOR QTWEIGHT=0
QOSCFG[service] QFLAGS=PREEMPTEE:PREEMPTOR QTWEIGHT=0
QOSCFG[volunteer] QFLAGS=PREEMPTEE:PREEMPTOR QTWEIGHT=0
CREDWEIGHT 1
CLASSWEIGHT 1
CLASSCFG[normal] QDEF=normal PRIORITY=10000
CLASSCFG[lowpri] QDEF=lowpri PRIORITY=1000
CLASSCFG[service] QDEF=service PRIORITY=100
CLASSCFG[volunteer] QDEF=volunteer PRIORITY=1
Edit /etc/profile so that the Maui bin and sbin directories are on the PATH:
if [ "`id -u`" -eq 0 ]; then
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/maui/bin:/usr/local/maui/sbin"
else
PATH="/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/local/maui/bin"
fi
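Finally, a sketch of starting the scheduler and checking that it can talk to Torque (assuming Maui was built and installed under its default /usr/local/maui prefix with make && make install):
/usr/local/maui/sbin/maui
showq
checknode hcl01
showq should list jobs from the Torque queues and checknode should report the state of a node.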