For some years now I have been running a VPS to host my email, my www.coochey.net website and the placeholder for the www.netsecspec.co.uk website; it has also served as my Linux launchpad and a mini test/lab platform. I started off on the lowest tier and over time upgraded to a tier 4 VPS. It was, however, still incredibly limited. For one, the VPS had no swap space available to it and the physical memory assigned to it was 512MB. This meant that running a LAMP server at all generally required a light-footprint SQL server, and that, in itself, has its limitations.
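Incidentally, a quick way to confirm exactly what a VPS gives you is to read /proc/meminfo from inside the guest. Here is a minimal Python sketch that reports total RAM and swap; the field names are standard on Linux, and on that VPS it would have shown the 512MB of RAM and 0MB of swap I mentioned above:

```python
#!/usr/bin/env python3
"""Minimal sketch: report total RAM and swap from /proc/meminfo (Linux only)."""

def meminfo_kb(field):
    """Return the value of a /proc/meminfo field in kB, or None if it is absent."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])  # second column is the size in kB
    return None

ram_kb = meminfo_kb("MemTotal")
swap_kb = meminfo_kb("SwapTotal")
print(f"RAM:  {ram_kb / 1024:.0f} MB")
print(f"Swap: {swap_kb / 1024:.0f} MB")  # 0 MB on a VPS with no swap allocated
```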
So, in the past few weeks, I decided to change the way I was operating.
Now, there has been a lot of talk recently about ‘The Cloud’, in particular Amazon’s Elastic Compute Cloud (EC2) offering, and there are also more general hosting solutions out there from many smaller providers. I’m sure these options suit some people, but as I work in managing and building IT infrastructure I decided to purchase an actual server and co-locate it with a hosting provider. This gives me the flexibility to run what I like, how I like.
I looked online and found a supplier (RackServers, also known as Evolution Computers) with a large range of reasonably priced SuperMicro servers. I went ahead and ordered a 1U server from them suitable for running a virtual environment. The rough specification was dual quad-core CPUs, 48GB of RAM and four SAS drives.
The plan was to have the server delivered to me, where I would do the initial configuration, and then have it couriered to the co-location provider for installation into the rack; all they would need to do was connect power and network and it would be up and running.
So, the server arrived:
Looks a bit big for a 1U server, perhaps they messed up the shipment?
In actual fact, it was just well packed, which I appreciated from the supplier, and I could re-use the packaging when I needed to courier it back out.
Yet another box, within a box.
Eventually, beneath all the packaging, there it is:
The first thing to do on taking delivery of a server is to take off the hood and have a look under the bonnet. Supposedly this is to check for any components that may have come loose in shipping and to see if any capacitors are bulging, but for me it was just to marvel at the technology I’d bought so that I could justify it to myself. After all, I will probably rarely see this server once it is co-located.
At the top are the processor and memory banks, covered by a perspex shroud to ensure air flows over them from the fan array on the right. At the bottom left is the SAS RAID controller with battery backup (not visible), connecting to the four SAS hard drives off to the right of the picture. You can also see a bank of six SATA interfaces, which connect the DVD/CD-ROM drive.
Overall technical specifications:
- 2 x Intel Xeon E5620 2.4GHz quad-core processors.
- 6 x 8GB DDR3-1333 ECC Registered DIMMs (with space to upgrade to a total of 12).
- 1 x LSI Logic 9260-4i SAS RAID controller (512MB cache) with battery backup.
- 4 x 1TB SAS2 Seagate Constellation ES drives (64MB cache).
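How much usable space those four 1TB drives actually provide depends entirely on the RAID level configured on the controller. I haven’t covered the array layout here, so the levels below are purely illustrative, but the arithmetic is simple enough to sketch in Python:

```python
# Rough usable-capacity arithmetic for 4 x 1TB drives under common RAID levels.
# These levels are illustrative only -- the actual array layout isn't decided here.
drives, size_tb = 4, 1

layouts = {
    "RAID 0 (stripe, no redundancy)": drives * size_tb,
    "RAID 10 (mirrored stripes)":     drives * size_tb // 2,
    "RAID 5 (single parity)":         (drives - 1) * size_tb,
    "RAID 6 (double parity)":         (drives - 2) * size_tb,
}

for name, usable in layouts.items():
    print(f"{name}: ~{usable} TB usable")
```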
Processors and Memory with the Shroud Removed:
With everything looking good, it’s now time to apply some power, boot up and start on some initial configuration.
As I live in Spain and am only in the UK for business purposes, I have generally been working from a laptop, so in order to set up the system I had to buy a keyboard. I eventually chose one of the mini-keyboards you can get from Maplin stores. They’re less than £20, and while I know you can get a keyboard for about £6, I quite liked this one. I would never use it as an everyday keyboard, but having something that easily fits into a laptop bag when you need to visit a data centre can be very useful indeed (and I don’t like looking like a dork carrying a full-size keyboard around either!). It is perhaps a little too small, and it takes a bit of getting used to finding certain keys, but it did the job.
In my next post I will go through the reasoning behind why I decided not to go the VMware ESXi / VMware vSphere Hypervisor route and eventually chose a headless VirtualBox setup.
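As a small taste of what ‘headless’ means in practice: VirtualBox VMs can be driven entirely from the VBoxManage command line, with no GUI attached to the host at all. The sketch below assumes VBoxManage is on the PATH and uses a made-up VM name, so treat it as illustration rather than my actual setup; the reasoning behind the choice is for the next post.

```python
import subprocess

VM_NAME = "web01"  # hypothetical VM name, purely for illustration

# Start the VM without any GUI attached -- the essence of a "headless" setup.
subprocess.run(["VBoxManage", "startvm", VM_NAME, "--type", "headless"], check=True)

# Confirm it is running.
subprocess.run(["VBoxManage", "list", "runningvms"], check=True)
```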