Monday, September 21, 2009

NFS kernel-server

# apt-get install nfs-kernel-server

If you, like me, use a firewall between the networks, you need to (or at least should, since it makes life easier) configure NFS to use pre-defined ports, instead of letting the portmapper assign them dynamically.

Edit /etc/default/nfs-kernel-server (only showing what I have changed; I chose to beef up RPCNFSDCOUNT from 8 to 32 as I have 40 machines mounting the same export):
RPCNFSDCOUNT=32
RPCMOUNTDOPTS="--port 4002"

Edit /etc/default/nfs-common:
STATDOPTS="--port 4000 --outgoing-port 4001"

Restart both nfs-kernel-server and nfs-common. Note that all clients need to use the same port setup.
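On Debian of this vintage that means the init scripts:

# /etc/init.d/nfs-common restart
# /etc/init.d/nfs-kernel-server restart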

Open the following ports in the firewall:
TCP ports 111, 2049, 4000 & 4002.
UDP ports 111, 794, 2049, 4000 & 4002.
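With iptables, the corresponding rules could look something like this rough sketch (the source network 10.12.12.0/24 is only an example, use the network your clients sit on):

# iptables -A INPUT -s 10.12.12.0/24 -p tcp -m multiport --dports 111,2049,4000,4002 -j ACCEPT
# iptables -A INPUT -s 10.12.12.0/24 -p udp -m multiport --dports 111,794,2049,4000,4002 -j ACCEPT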

Check the NFS server from a client with rpcinfo; mountd and status should now show up on the fixed ports:

# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    391002    2   tcp    705  sgi_fam
    100000    2   udp    111  portmapper
    100024    1   udp   4000  status
    100024    1   tcp   4000  status
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100021    1   udp  58452  nlockmgr
    100021    3   udp  58452  nlockmgr
    100021    4   udp  58452  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100021    1   tcp  38677  nlockmgr
    100021    3   tcp  38677  nlockmgr
    100021    4   tcp  38677  nlockmgr
    100005    1   udp   4002  mountd
    100005    1   tcp   4002  mountd
    100005    2   udp   4002  mountd
    100005    2   tcp   4002  mountd
    100005    3   udp   4002  mountd
    100005    3   tcp   4002  mountd

Now just set up some exports in /etc/exports, run exportfs and mount them from the clients. Don't forget to add the shares to the clients' fstab so they are mounted at boot! A minimal example is sketched below.
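Purely as an example (the export path, mount point and addresses are made up, substitute your own):

On the server, in /etc/exports:
/srv/data 10.12.12.0/24(rw,sync,no_subtree_check)

# exportfs -ra

On a client, in /etc/fstab:
10.12.12.163:/srv/data /mnt/data nfs rw,hard,intr 0 0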

Bonding on Debian

# apt-get install ifenslave-2.6

Then add the following to /etc/network/interfaces:
auto bond0
iface bond0 inet static
    address 10.12.12.163
    netmask 255.255.255.0
    network 10.12.12.0
    gateway 10.12.12.254
    slaves eth0 eth1
    bond_mode active-backup
    bond_miimon 100
    bond_downdelay 200
    bond_updelay 200

Now just bring the interface up!
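For example, ifup brings the bond up and the proc file tells you which slave is currently active:

# ifup bond0
# cat /proc/net/bonding/bond0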


NOTE:
For Etch, you need to add the following lines to /etc/modprobe.d/arch/i386:

alias bond0 bonding
options bonding mode=1 miimon=100 downdelay=200 updelay=200

Don't forget to run update-modules & bring up the interface!

Wednesday, September 9, 2009

mdadm + lvm2

1) Create partitions on the disks, for example as sketched below.
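For example with fdisk (disk names assumed to match step 2; set the partition type to fd, Linux raid autodetect):

# fdisk /dev/sdb
# fdisk /dev/sdc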

2) Create the md0 device with the desired RAID level and the member disks
# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

3) Create the physical volume
# pvcreate /dev/md0

4) Create the volume group
# vgcreate myvolume /dev/md0

5) Display the volume group and note the number of free PEs
# vgdisplay myvolume
Free PE / Size 119234 / 465.76 GB

6) Create the logical volume using all the free PEs
# lvcreate -l 119234 myvolume -n myraidname

7) Create the filesystem on the RAID volume
# mkfs.ext3 /dev/myvolume/myraidname
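If you want to use it right away, mount it somewhere (the mount point is just an example):

# mkdir /mnt/myraid
# mount /dev/myvolume/myraidname /mnt/myraid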

8) Add the RAID device to mdadm.conf, so it's recognized the next time you boot
# mdadm -Es | grep md0 >> /etc/mdadm.conf


From http://en.wikipedia.org/wiki/Mdadm

View the status of a multi-disk array.
# mdadm --detail /dev/md0

View the status of all multi-disk arrays.
# cat /proc/mdstat