Category Archives: The New Server

Choosing a Hypervisor

So it came to a point where I needed to choose a Hypervisor.

The choices (among the free options) appeared to be VMware’s offerings, either vSphere 4.1 or vSphere 5.0, or a free hypervisor such as Xen or VirtualBox.

VMware’s offerings are touted as Type-1 hypervisors: the hypervisor itself runs on the bare metal. KVM and VirtualBox, by contrast, are Type-2 hypervisors, relying on an underlying OS kernel.

This is where I began to hit VMware’s limits on the ‘free’ aspects of ESXi.

My server has 8 cores, 48GB of memory and a 2.7TB disk array.

ESXi v5.0 (or whatever they mean to call it now; they deliberately blur the feature set of their free product with their paid-for product) has a limitation of 32GB of RAM; anything above that is unusable to the OS.

Well, OK, I’m not going to waste 16GB of memory for the sake of being on a Type-1 hypervisor.

ESXi v4.1 doesn’t have the memory limitation, although it appears to have fewer features (I was really interested in PVLANs, for instance, but it appears those are paid-for features as well!). Nevertheless, I tried it out, and what do you know: it has a limitation of 750GB for local storage!

A further irksome issue I had with VMware in testing was setting up its networking. I essentially wanted to implement multiple subnets, with a single interface bridged to one of the physical interfaces on the host server. Should be simple enough, no? Well, think again: when I tried to do this, VMware insisted that I assign an IP address to the interface. Why? It’s a bridged interface; it doesn’t need an IP address, and it doesn’t necessarily need to run IPv4 at all. I might want to run IPX/SPX or IPv6 instead. In any case, the bridge does not need to be assigned any Layer-3 address.

I googled this networking faux pas and found other people asking the same question (why does it need an IP address?), with people giving ill-informed answers along the lines of “because it’s the bridge interface”. Nonsense; these people don’t know their networking.
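
For comparison, an ordinary Linux bridge works perfectly well with no Layer-3 address at all. A minimal sketch, assuming the bridge-utils package and a physical interface named eth0 (both just illustrative here):

    brctl addbr br0        # create the bridge; no IP address involved
    brctl addif br0 eth0   # enslave the physical NIC to the bridge
    ip link set eth0 up
    ip link set br0 up     # frames are now forwarded purely at Layer 2

Guests attached to br0 can then run IPv4, IPv6, IPX/SPX or anything else on top; the bridge itself only needs an address if the host wants to participate on that subnet.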

So now I was left with either going with something like XenServer or perhaps going with the hypervisor I actually use on my laptop (VirtualBox). I thought there would be an obvious downside to VirtualBox: “it’s clearly designed for desktop virtualisation”, I thought. I had a brief look at Xen, however, and started to get confused. So XenServer is Citrix, right? Citrix sell XenServer, but then there is XenSource? How does that fit in? Wikipedia says that Red Hat / CentOS 6 don’t support dom0; what is dom0…

Well, too many questions, not enough answers, no good documentation.

So I opted for VirtualBox. It turns out it has a pretty good VBoxHeadless mode, which allows me to reach all my VMs’ consoles through a VRDP session (essentially Microsoft’s Remote Desktop Protocol, RDP). I intended nearly all my VMs to be Linux-based and primarily controlled via SSH, so this is fine for me.
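
As a rough illustration, enabling the remote console and booting a VM headless takes only a couple of commands. A sketch, assuming the Oracle Extension Pack is installed for VRDP support and a VM already registered under the hypothetical name “mailserver”:

    VBoxManage modifyvm mailserver --vrde on --vrdeport 3390   # enable the VRDP server for this VM
    VBoxManage startvm mailserver --type headless              # boot it with no local GUI

Any standard RDP client pointed at the host on port 3390 then lands on the VM’s console, which is handy on the rare occasions SSH isn’t enough (boot problems, installers and so on).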

There is also a companion project called phpVirtualBox, which provides a near-identical GUI to the native VirtualBox interface via a web browser.

It should also be noted that, surprisingly, VirtualBox is probably more dynamically controllable via the command line than it is via either of the GUI interfaces, and is very well suited to a roll-your-own VM hypervisor setup.
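
To give a flavour of that, here is a sketch of building a VM entirely from the command line with VBoxManage; the VM name, disk size and the eth0 bridge interface are just placeholders:

    VBoxManage createvm --name testvm --ostype Debian_64 --register
    VBoxManage modifyvm testvm --memory 2048 --cpus 2 --nic1 bridged --bridgeadapter1 eth0
    VBoxManage createhd --filename testvm.vdi --size 20480   # 20GB disk image
    VBoxManage storagectl testvm --name "SATA" --add sata
    VBoxManage storageattach testvm --storagectl "SATA" --port 0 --device 0 --type hdd --medium testvm.vdi

The VM can then be started headless as shown above, which makes it straightforward to script the whole lifecycle of a guest without ever touching a GUI.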

The New Server

For some years now I have been running a VPS to host my email, my www.coochey.net website and the placeholder for the www.netsecspec.co.uk website; it has also served me as a Linux launchpad and a mini test/lab platform. I started off on the lowest tier and over time upgraded to a tier-4 VPS. It was, however, still incredibly limited. For one, the VPS had no swap space available to it, and the physical memory assigned to it was 512MB. This meant that running a LAMP server at all generally required a light-footprint SQL server, and that, in itself, has its limitations.

So, in the past few weeks, I decided to change the way I was operating.

Now, there has been a lot of talk recently about ‘The Cloud’, in particular Amazon’s Elastic Compute Cloud (EC2) offering, and there are also the more general hosting solutions out there from many small providers. I’m sure these suit some people, but as I work in managing and building IT infrastructures, I decided to purchase an actual server and co-locate it with a hosting provider. This gives me the flexibility to run what I like, how I like.

I looked online and found a supplier (RackServers, also known as Evolution Computers) with a large range of reasonably priced SuperMicro servers. I went ahead and ordered a 1U server from them suitable for running a virtual environment. The rough specification was dual quad-core CPUs, 48GB of RAM and 4 SAS drives.

The plan was to have the server delivered to me, where I would do the initial configuration, and then have it couriered to the co-location provider for installation into the rack; all they would need to do was connect the power and network connections and it would be up and running.

So, the server arrived:

Looks a bit big for a 1U server; perhaps they messed up the shipment?

In actual fact, it was just well packed, which I appreciated from the supplier, and I could re-use the packaging when I needed to courier it back out.

Yet another box, within a box.

Eventually, within all the packaging, there it is:

The first thing to do on taking delivery of a server is to take the lid off and have a look under the bonnet. Supposedly this is to check for any components that may have come loose in shipping and to see if any capacitors are bulging, but for me it was just to marvel at the technology I’d bought so that I could justify it to myself. After all, I will probably rarely see this server once it is co-located.

At the top are the processor and memory banks, covered by a Perspex shroud to ensure air flows over them from the fan array on the right. On the bottom left is the SAS RAID controller with battery backup (not visible), connecting to the 4 SAS hard drives off to the right of the picture. You can also see a bank of 6 SATA interfaces, one of which connects the DVD/CD-ROM drive.

Overall technical specifications:

  • 2 x Intel Xeon E5620 2.4GHz quad-core processors.
  • 6 x 8GB DDR3-1333 ECC Registered DIMMs (space to upgrade to a total of 12).
  • 1 x LSI Logic 9260-4i SAS RAID controller (512MB cache), with battery backup.
  • 4 x 1TB SAS2 Seagate Constellation ES drives (64MB cache).

Processors and Memory with the Shroud Removed:

Everything looking good, it’s now time to apply some power, boot up and start off with some initial configuration.

As I live in Spain and am only in the UK currently for business purposes, I have generally been working from a laptop, so in order to set up the system I had to purchase a keyboard. I eventually chose one of the mini-keyboards that you can get from Maplin stores. It was less than £20, and while I know you can get a keyboard for about £6, I quite liked this one. I would never use it as an everyday keyboard, but something that easily fits into your laptop bag when you need to visit a data centre can be very useful indeed (and I don’t much like looking like a dork carrying a full-size keyboard around either!). It is perhaps too small, and it takes a bit of getting used to finding certain keys, but it did the job.

In my next post I will go through the reasoning behind why I decided not to go down the VMware ESXi / vSphere Hypervisor route and eventually chose a headless VirtualBox setup.

New Site, new projects…

Well, you may have noticed the old site went away, and I’ve put this WordPress site in instead.

Not good, you might think, what with all the WordPress vulnerabilities about! Well, it’s actually part of a much bigger project, and rather than leave a 404 not-found page on the site I thought I would at least get something up and running here.

Here is not where here used to be. All the sites associated with me are now being transferred to a co-location provider in the UK.

If you’re looking for my CV then it is still here.

If you’re looking for my Phone Number it’s +44 7584 634135.