As previously mentioned, we are setting up shared Xen hosting, and I promised to publish detailed setup steps. So here they are, but be warned: I'm just noting down what I'm doing, and it might be confusing...
I’m given a rescue system (based on Debian sarge) and two blank 300GB disks. So first, I partition the disks:
Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         974     7823623+  fd  Linux raid autodetect
Same partition table for the second hard disk, /dev/sdb. This can be done using
sfdisk -d /dev/sda | sfdisk /dev/sdb
For now I've just set up the partition for the host; the guests will come later. Now I set up the software RAID for the partition:
mdadm --create /dev/md0 -n 2 -l 1 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0
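The mirror starts its initial sync in the background; we can keep working and check the progress now and then:
cat /proc/mdstat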
Now we can install Debian. After mounting the fresh filesystem on /mnt and getting an up-to-date debootstrap into the rescue system, I ran
debootstrap --arch amd64 etch /mnt/ http://some/debian/mirror
This went pretty fast, thanks to a local mirror. I want to enter the system, so I set up a chroot and enter it:
mount -t proc none /mnt/proc
mount -o bind /dev /mnt/dev
mount -o bind /sys /mnt/sys
chroot /mnt
First step: set the root password and fix /etc/apt/sources.list and /etc/fstab. The latter now looks like this:
proc /proc proc defaults 0 0
/dev/md0 / ext3 defaults 0 2
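The sources.list is just a single line pointing at the mirror (same placeholder URL as above):
deb http://some/debian/mirror etch main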
For now, I make sure it runs without Xen. So I install a kernel and grub:
apt-get install linux-image-2.6-amd64 grub mdadm
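# grub-install consults /etc/mtab to figure out which device / is on,
# so we fake one inside the chroot: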
echo "/dev/md0 / ext3 rw,data=ordered 0 0" > /etc/mtab
grub-install --no-floppy /dev/md0
I also have to install grub in the boot sectors of both disks, so at the prompt of the grub shell (grub --no-floppy), I enter:
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
Running update-grub generates /boot/grub/menu.lst, where I lower the timeout and set root=/dev/md0 as the kernel option.
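Concretely, the relevant lines in menu.lst look roughly like this; note that the kopt line is a comment that update-grub evaluates, so update-grub has to be run again after changing it:
timeout		2
# kopt=root=/dev/md0 ro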
The system should theoretically boot now, but without a configured network there is not much we can do with it. So I fill in /etc/network/interfaces, and we are ready to go. I exit the chroot, unmount everything and reboot...
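For reference, a minimal static configuration in /etc/network/interfaces looks roughly like this (with made-up example addresses; the real ones come from the hoster):
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1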
Bummer, it did not work. The machine answers pings again after a minute, which means that the system has started somewhat, but ssh does not work. Did I install an ssh server? Let's activate the rescue system and reboot again, using the hoster's web frontend...
And indeed, I should have run apt-get install ssh inside the chroot. Ok, second try...
And there we are, a freshly installed Debian etch system. Before we do anything else, I put the /etc/ directory under subversion control, to be able to trace my steps later:
apt-get install subversion
mkdir /srv/svn
svnadmin create /srv/svn/serverama
svn mkdir -m 'create prof-etc' file:///srv/svn/serverama/prof-etc/
cd /etc/
svn co file:///srv/svn/serverama/prof-etc/ .
svn add *
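# shadow contains the password hashes and mtab is volatile, so undo their scheduled add: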
svn revert shadow mtab
svn ci -m 'initial configuration checkin'
By now, I’m tired of writing down every step here, so I’ll be a bit more brief :-). I install some packages I like, and then try to install the Xen hypervisor:
apt-get install xen-hypervisor-3.0.3-1-amd64 linux-image-2.6-xen-amd64 xen-utils-3.0.3-1 bridge-utils sysfsutils xen-tools
reboot
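After the reboot, a quick sanity check that we are really running under the hypervisor; xm list should show Domain-0:
xm list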
That was easy, and now we are running under the Xen hypervisor. Great! We haven't yet configured the remaining 2×294GB of the server, though. Because it is not yet clear whether people will want RAID1 or not, I create a bunch of variously sized partitions. Later I can put some of them into a RAID1 (and combine them using LVM), and some not (or put them into a RAID0). The partition table on both disks now looks like this:
# fdisk -l /dev/sda
Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         974     7823623+  fd  Linux raid autodetect
/dev/sda2             975       38913   304745017+   5  Extended
/dev/sda5             975        3528    20514973+  83  Linux
/dev/sda6            3529       13741    82035891   83  Linux
/dev/sda7           13742       23954    82035891   83  Linux
/dev/sda8           23955       26387    19543041   fd  Linux raid autodetect
/dev/sda9           26388       36722    83015856   fd  Linux raid autodetect
It is not yet clear what will happen to partitions 5 through 7, but 8 and 9 can already be used for RAID. As in the beginning, I join each pair into a RAID1 array, out of which I create an LVM volume group:
mdadm --create /dev/md8 -n 2 -l 1 /dev/sda8 /dev/sdb8
mdadm --create /dev/md9 -n 2 -l 1 /dev/sda9 /dev/sdb9
pvcreate /dev/md8
pvcreate /dev/md9
vgcreate vg-raid1 /dev/md8 /dev/md9
To check if RAID and LVM work nicely together, I reboot. And it works just fine.
Time to create the first Xen instance. We are lazy, so we use xen-tools. Here we go:
mdadm --create /dev/md7 -n 2 -l 1 /dev/sda7 /dev/sdb7
vgextend vg-raid1 /dev/md7
xen-create-image --verbose --hostname bender --ip 192.168.0.1 --size 20000Mb --swap 4000Mb --memory 512Mb
lvcreate -L 30000 -n bender-data1 vg-raid1
lvcreate -L 30000 -n bender-data2 vg-raid1
# fine-tune /etc/xen/bender.cfg, e.g. adding these data volumes and adjusting memory (see the sketch below)
xm create /etc/xen/bender.cfg
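For reference, after the fine-tuning the interesting parts of /etc/xen/bender.cfg might look roughly like this; this is a sketch, and the exact kernel version and the names of the xen-tools-created volumes may differ:
kernel  = '/boot/vmlinuz-2.6.18-4-xen-amd64'    # whatever version is installed
ramdisk = '/boot/initrd.img-2.6.18-4-xen-amd64'
memory  = 512
name    = 'bender'
vif     = [ 'ip=192.168.0.1' ]
disk    = [ 'phy:vg-raid1/bender-disk,sda1,w',
            'phy:vg-raid1/bender-swap,sda2,w',
            'phy:vg-raid1/bender-data1,sda3,w',
            'phy:vg-raid1/bender-data2,sda4,w' ]
root    = '/dev/sda1 ro'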
That was sufficiently easy. By symlinking the config to /etc/xen/auto, the guest domain starts automatically at boot.
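Concretely, that is a single symlink:
ln -s /etc/xen/bender.cfg /etc/xen/auto/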
Now we want the xen-shell, which allows the users to manage their machines themselves. We get it from backports.org, configure sudo as described in its documentation, and create the user "bender". Very nice tool!
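A rough sketch of these steps (assuming the backports.org entry is already in sources.list; the exact shell path and sudo rule are the ones described in the xen-shell documentation, so double-check there):
apt-get -t etch-backports install xen-shell
adduser --shell /usr/bin/xen-login-shell bender
# then, via visudo, allow bender to run xm, roughly:
#   bender ALL = NOPASSWD: /usr/sbin/xm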
This completes the setup description for now. The first host is running and manageable by its users. The rest is probably just fine-tuning of the Dom0 (setting up mail to get error messages, hardening, some monitoring). Questions, comments?
Have something to say? You can post a comment by sending an e-Mail to me at <mail@joachim-breitner.de>, and I will include it here.