Connecting an XServe RAID to a (virtual) Linux host

The (now discontinued) Apple XServe RAID is a Fibre Channel connected array for those looking for cheap and fast storage without too much hardware redundancy. Its use as local storage on an XServe is excellent, but when you want to share your data across the world over NFS or CIFS you can run into problems, as Mac OS X 10.5 Server is not yet very mature. I therefore decided to move the storage off to a PC and offer it as an NFS export to my entire network. This would keep my absolute file paths consistent across all connected systems and would be a lot cheaper than buying a dedicated NAS appliance from any vendor out there.

In my setup I would buy a server, install Xen (as part of RHEL5) and dedicate one guest to exporting an ext3 volume over NFS. The volume would reside on the XServe RAID, which would connect to the server by means of a Fibre Channel HBA.

First of all it was tempting to figure out whether the Fibre Channel PCI-X card could be reused in a PC. I ordered an HP DL160 with a PCI-X riser kit and gave it a try. Upon installing Red Hat Enterprise Linux 5.2 the RAID array showed up immediately as a valid drive to install RHEL on, so the card was working. Success! After some research I found out that the card is in fact an LSI 7202XP PCI-X card that is still very well supported. You can find drivers, manuals and firmware here. I updated the firmware to version 1.02.23 (the Apple-supplied firmware was from 2005). You can also update the BIOS on the card so it will be bootable as well, but since this wasn't my goal I skipped that step.
(I then configured my RHEL setup to use persistent device names; this helps prevent the /dev/sdX device names from changing between boots, but you might not need/want to do this. The instructions are listed in Chapter 7 of part III in the virtualization manual from RH, which can be found here.)
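For reference, this is a minimal sketch of the idea behind persistent naming (the rule file name is hypothetical and the exact udev syntax in the manual may differ): read the unique SCSI id of the array, then pin a persistent device name to it in a udev rule.

/sbin/scsi_id -g -u -s /block/sdc
# prints the unique id of the LUN; copy it into the RESULT match below
# /etc/udev/rules.d/60-xserveraid.rules (hypothetical file name):
KERNEL=="sd*", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -s /block/%k", RESULT=="<id from scsi_id>", NAME="xserveraid%n"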
The next challenge is to deal with partitioning. This website came in handy to explain some basics around Linux partitioning to me. Apparently using any drive larger than 2 TB as a boot volume would be difficult, as booting requires an MS-DOS based partitioning scheme (a.k.a. disk label) that cannot address more than 2 TB. The newer GPT scheme handles volumes over 2 TB quite well and I was not surprised to find that the XServe RAID was already set up as a GPT disk. In my case the XServe RAID could be found under /dev/sdc; the last letter can change depending on how many drives are in your system (in my case 2 x HD and 1 XServe RAID). If you list the partition table (for example with parted /dev/sdc print) you'll see that there are in fact 2 partitions already on the drive. /dev/sdc2 held the data partition formatted as HFS+ and /dev/sdc1 held… an EFI partition! As it was only 200 megabytes and it was hard to determine whether it was required for booting up the XServe RAID, I decided to let it sit around and just reformat the 2nd partition with the following command:
mkfs.ext3 -m0 /dev/sdc2
This formats the data partition as an ext3 volume; the -m0 sets the percentage of blocks reserved for the super-user to zero. Since we do not boot from this volume there's no point in wasting that space. If you want to see if it works you can mount the volume like this:
mount -t ext3 /dev/sdc2 /mnt/data
And there you go, your XServe RAID as an ext3 volume on RH Linux!
If this all works it's time to hand our ext3 volume to Xen. This is something you *can* do in the GUI of the Virtual Machine Manager: add a new disk, point it at your partition and don't forget to select in the somewhat oddly formed pop-up menu that you want to use it as a virtual disk.
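If you'd rather skip the GUI, that step boils down to a single disk line in the guest's configuration file under /etc/xen. A sketch using the partition from this article; the 'phy:' prefix hands the raw partition to the guest, where it appears as /dev/xvdb (the device name used in the fstab line further down):

disk = [ 'phy:/dev/sdc2,xvdb,w' ]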
Now because I had no clue how to use umount at the time (you'll have to unmount the volume in Dom0 before Xen can take it over; umount /mnt/data would have done it), I rebooted the whole setup and launched the guest. The first thing you'll notice is that there is no volume to be seen. That's because you'll have to configure it manually in the guest's /etc/fstab before it'll work. This is the line I created for my XServe RAID:
/dev/xvdb               /mnt/data               ext3    defaults        0 0
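One thing fstab won't do for you: inside the guest the mount point has to exist first. Create it and you can even mount straight away instead of waiting for a reboot. A quick sketch:

mkdir -p /mnt/data
mount -a
df -h /mnt/data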
Reboot your guest and you can finally use your XServe RAID inside Xen! You can now use it as an NFS server or for near-infinite logging, have fun!
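And to close the loop on the original goal: serving the volume over NFS from the guest only takes an entry in /etc/exports and a running NFS service. A quick sketch; the network range is an assumption, so adjust it to your LAN:

echo '/mnt/data 192.168.0.0/24(rw,sync)' >> /etc/exports
exportfs -ra
service nfs start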

Xen, RHEL, VLANs, NICs, bridges and virtual interfaces

Based on what I've read this must have been a pickle for a lot more people than just me. If you're looking for a virtualization solution but are not keen on shelling out thousands of dollars to VMware, you're going to explore Xen. It looks nice, comes cheap and should be very flexible… Yeah right! I got myself a RHEL5 license, which comes with Xen 3 built-in, and got started.

First of all there's this idea that Xen actually supports VLANs. While this might be true in a way, you'll have a *very* hard time getting it to work. As Mike Neir points out, VLAN interfaces that are created on the Xen host are torn down when Xen is launched. Because they cannot be bridged correctly by Xen you end up with a fictional bridge that won't function. Note that he did get it to work as described on this wiki, but the script is still unstable on shutdown. Christopher also made a script that is supposed to do the trick; it is explained in more detail by a Red Hat employee in this PDF.
I eventually gave up and suggest you do the same for now. VMware is much more mature in this area but comes with a hefty price tag. Still, if you do want to use Xen and have multiple guests that you want to connect to different networks, there are other solutions at hand. In the end I settled for a setup where I bridge every Xen guest to one or more NICs, which seems to work very well for now. This is how I did it.
First of all it helps to understand how Xen works. Basically Xen tears down the existing network interfaces (eth0, eth1 and so on) and binds them to itself (the management domain, also referred to as Dom0) in the form of virtual interfaces, which are in turn bridged to the interfaces in the guests. This is all pretty well documented here so I won't explain it again.
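To make that concrete, this is roughly what a default single-NIC Dom0 looks like once the stock network-bridge script has run (output sketched from memory, so details such as the bridge id will differ):

brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.feffffffffff       no              peth0
                                                        vif0.0

Your physical eth0 has been renamed to peth0 and attached to the xenbr0 bridge, and vif0.0 is Dom0's own virtual interface on that same bridge.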
What we do need to know is how to set all this up. It took me a long while to figure it out, but here are some basic guidelines.
1) Do not even attempt to set up a new guest with the correct interfaces during installation. Xen does its magic through a 'default' interface (the only option you can select when setting up a guest anyway) that bridges to the primary NIC of the Xen host. Make sure you have an install source ready and run through the installation process (if it asks to use DHCP on the interface during installation just say yes; it will continue and find your networked installation source anyway). Afterwards it's much easier to change the NIC, as I'll explain below.
2) When you've finished your installation and have a running Xen host it's time to prepare to add the correct NIC to your guest. However, we can't do that before we make some adjustments to the Xen networking setup. First of all make sure that your NIC doesn't have an IP address set; I got several IP conflicts because the ARP tables on your LAN will not release the address quickly enough, thus disabling your interface. You can set the IP address (and the correct gateway and so on) inside the guest, which is what you want anyway. I set my NIC to inactive on boot. Now here comes the tricky part. In the official Red Hat manual (get it here) there's a good explanation from page 111 onwards. Basically you divert Xen from using its default script (called network-bridge) and instruct it to call that script multiple times, once for every NIC we want to use in Xen. This is done with the example script (called network-xen-multi-bridge) in the manual. Feel free to add 2 more NICs to the example script; my version of RHEL did not allow me to use more than 4 NICs in total. The relevant lines would look something like this:
$script start vifnum=3 bridge=xenbr3 netdev=eth3
$script start vifnum=2 bridge=xenbr2 netdev=eth1
$script start vifnum=1 bridge=xenbr1 netdev=dev29392
$script start vifnum=0 bridge=xenbr0 netdev=eth0 
vifnum is the number of the internal interface, bridge is the value you'll use in the configuration file of your guest and netdev is the name of the physical device in your PC (I added a NIC later, hence the weird dev29392 name).
3) With this information at hand you can do as the manual says: copy network-bridge to network-bridge.xen, create the network-xen-multi-bridge script (a minimal sketch follows below) and edit xend-config.sxp. This is where the manual makes an ambiguous note about editing the configuration. Just make sure (network-script network-xen-multi-bridge) is printed somewhere in the file and that other lines starting with (network-script… are commented out. Do make sure that you chmod +x the network-xen-multi-bridge script, otherwise xend can't execute it (I ran into this on my SELinux-enabled setup)!
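For completeness, here is a minimal sketch of what the whole network-xen-multi-bridge script can look like, assembled from the $script lines above and the structure of the manual's example (treat the exact script path as an assumption for your system):

#!/bin/sh
# network-xen-multi-bridge: run the stock bridge script once per NIC.
# Sketch based on the Red Hat manual's example; adjust paths and NICs.
set -e
OP=$1
script=/etc/xen/scripts/network-bridge.xen
case "$OP" in
    start|stop|status)
        $script "$OP" vifnum=3 bridge=xenbr3 netdev=eth3
        $script "$OP" vifnum=2 bridge=xenbr2 netdev=eth1
        $script "$OP" vifnum=1 bridge=xenbr1 netdev=dev29392
        $script "$OP" vifnum=0 bridge=xenbr0 netdev=eth0
        ;;
    *)
        echo "Usage: $0 {start|stop|status}" >&2
        exit 1
        ;;
esac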
4) Now that we've instructed Xen to create 4 bridges we can start handing them out to our guests. Edit your guest configuration file (located in /etc/xen) and find this line:
vif = [ "mac=00:16:3e:1b:b2:a4,bridge=xenbr3" ]
You can simply replace the text after bridge= with the bridge value that belongs to the NIC you want to use, as I did here. In my example eth3 (bridged as xenbr3) will be used.
5) If you want to use 2 NICs you’re in for a treat. If you do as the manual says you should edit your config file so it looks like this:
vif = [ "mac=00:16:3e:5a:52:be,bridge=xenbr1,script=vif-bridge","mac=00:16:3e:79:a5:18,bridge=xenbr2,script=vif-bridge" ]
I don't know if you need to add the script=vif-bridge part; it should be called automatically for every bridge you create, but you can never be thorough enough. What you will find out soon enough is that the 2nd NIC won't show up in your guest. This puzzled me, but you'll just need to add it to the guest's /etc/modprobe.conf manually. Add the line 'alias eth1 xennet' and you should see it as a new hardware device in the network configuration of your guest.
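For reference, /etc/modprobe.conf in a paravirtualized RHEL5 guest with two NICs would then contain something like this (xennet being the paravirtual network driver):

alias eth0 xennet
alias eth1 xennet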
Hope this helps and a big thanks to the people out there who took the time to write down their stories so I could compile this report.