MOAR SERVERS! – My newest little server

So this server business turns out to be quite interesting…

First of all: sorry for not posting in ages. No excuses or anything, I’ve just been a lazy bastard with more and more hobbies (now including beer brewing).

Back in my university days, the people messing about with servers all the time puzzled me. I really didn’t understand the appeal of setting up machines, maintenance, tweaking, networking and so on. Honestly, I didn’t bother, because people at the university, or friends with an interest in this, hosted whatever I needed for me. Why do it yourself when somebody has already done it for you, right?

Well, investing in my HP ProLiant MicroServer, which you can read more about here: , seems to have been the beginning of a vicious circle for me. This summer I built myself another machine, a VMware ESXi server.

So why did I do this? Well, mostly because I’m not the most experienced person in this area. I ended up throwing so much stuff onto the relatively weak MicroServer that I broke things, and I developed a server-geek fetish for good uptime, so it was time to kick it up a notch.

The hardware

Again I wanted something small and silent, and good looks don’t hurt (who knows, one day I might get a girlfriend and then WAF – wife acceptance factor – might be a thing I have to keep in mind).

I figured I’d be able to get a pretty good machine with a mini-ITX setup or potentially a Shuttle barebone. I browsed the web for what I actually had to keep in mind when setting up an ESXi machine; apparently an Intel NIC was more or less a must, so I needed a PCIe slot on the motherboard, as support for onboard NICs was, to say the least, flaky in ESXi.

I found a lot of Shuttle machines that others had good experience with for the same purpose, but as I am a big Lian Li fanboy when it comes to computer cases, and since I found a pretty little thing that supported everything I needed, I went for a Lian Li PC-Q16, the smallest case in their assortment.

I also figured I should have a CPU and motherboard that support VT-x and preferably VT-d, which narrowed it down to a motherboard with Intel’s Z87 chipset. The issue was that most manufacturers did not implement VT-d in the BIOS for this socket, especially on mini-ITX, as it’s not really the obvious choice for a workstation or server. At last I found out that the “right” BIOS versions on the ASRock Z87E-ITX supported VT-d.

And by pure luck, the local computer shop Digital Impuls is one of the few stores in Norway that sell ASRock products (which is weird, as their motherboards are relatively cheap and very good), so I put that board on my list. The next thing to cover was the CPU; for this I went for an Intel i5 4670 (not the K version with unlocked multipliers for overclocking, as it doesn’t support VT-d). I threw in 16GB of DDR3 RAM, the most I can get on a regular mini-ITX motherboard, a 2TB WD Red drive for VM storage and a dual-port Intel PRO/1000 NIC.

I also threw in a “dumb” 8-port Netgear switch to get everything around my desk connected.

hardware goodies

A rather poor picture of a pile of hardware



The software (or why I chose ESXi)

Well, this part is pretty silly, but I basically went for ESXi because several people had told me it’s really simple to use. There is also a free version of ESXi for machines with one physical CPU and less than 32GB of RAM.

The fact that it runs very nicely off a pendrive is also a bonus, as I can swap hard drives around as I like without bothering to re-install the hypervisor.

Assembly and a massive facepalm

So with all the bits and bobs in house, my fingers were itching to put this box together, and so it happened. It all went smoothly until, quite late in the process, I realized that there were no expansion card slots on the case (DOH!). I had checked and double-checked everything; all the hardware would work nicely together, but there was no space for the NIC in the box.

As the ASRock motherboard also features an onboard Intel NIC, I thought I might be in luck with ESXi supporting it, and well, I was almost right: by using the ESXi Image Builder I managed to get the drivers for the onboard NIC onto the ESXi installation image!
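For anyone curious, the Image Builder workflow is driven from VMware PowerCLI. A session looks roughly like this; the depot file names, the profile names and the driver package name are examples from memory, so substitute whatever bundles your NIC actually needs:

```powershell
# Load the stock ESXi offline bundle plus a depot containing the NIC driver
# (file names are examples - use the bundles you actually downloaded)
Add-EsxSoftwareDepot .\ESXi-5.5.0-offline-bundle.zip
Add-EsxSoftwareDepot .\intel-nic-driver-offline-bundle.zip

# Clone the standard profile so we can modify it (build number is an example)
New-EsxImageProfile -CloneProfile "ESXi-5.5.0-1331820-standard" `
    -Name "ESXi-custom-nic" -Vendor "homelab"

# Add the driver VIB to the cloned profile (package name is an assumption)
Add-EsxSoftwarePackage -ImageProfile "ESXi-custom-nic" -SoftwarePackage "net-e1000e"

# Export the result as a bootable ISO to write to the install pendrive
Export-EsxImageProfile -ImageProfile "ESXi-custom-nic" `
    -ExportToIso -FilePath .\ESXi-custom-nic.iso
```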

The rest of the assembly was pretty smooth sailing. The ASRock motherboard features onboard WiFi as well as a slot for mSATA drives that I will most likely put to use in the future.

And the end result was this (notice the nicely fitted NIC, but with no expansion slot opening on the case):

All the big boxes compacted into one small one


The issue with the NIC was solved at a later point with a really long session with the Dremel; seriously, 1mm aluminium is not fun to cut with these little things.

Installation and setup

So with the box nicely placed on my desk, the custom ESXi image on a spare pendrive, and a tiny pendrive installed in the back to run ESXi off, I was ready to install.

The new image installed like a charm and my machine was online.

The next stage was to install operating systems, the plan was to install the following:

  • A Windows 7 VM for access to a Windows machine when I’m not sat at my desktop.
  • An Ubuntu 12.04 machine for general use (IRC shell, persistent tmux session and that kind of stuff).
  • A Debian machine for web development.
  • Another Debian machine for general programming and development, as I didn’t want simple web development tasks to get broken by me messing about with other things.
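The “persistent tmux session” bit just means the IRC client keeps running on the server while I connect and disconnect at will. A minimal sketch of that workflow (the session name is arbitrary):

```shell
# On the Ubuntu VM: start a named session and launch the IRC client inside it
tmux new-session -s irc

# ...detach with Ctrl-b d; the session (and whatever runs in it) keeps going

# Later, from any SSH login, re-attach to the same session
tmux attach-session -t irc

# List sessions if you forget what's running
tmux list-sessions
```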

Installing the operating systems was pretty much the same as installing on a physical machine; the only extra bit was setting up resource allocation in the vSphere client.

All the machines have been thick provisioned, as I know myself well enough to know I am very likely to overcommit my capacity if it’s all thin provisioned; it also seems to be good practice.
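If you change your mind about the format later, a virtual disk can be converted from the ESXi shell with vmkfstools while the VM is powered off; the datastore paths here are hypothetical:

```shell
# Clone a thin disk into an eager-zeroed thick copy (VM must be powered off)
vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk \
           -d eagerzeroedthick \
           /vmfs/volumes/datastore1/myvm/myvm-thick.vmdk
```

The original disk is left untouched, so you point the VM at the new file once you are happy with it.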

The memory was allocated by pointing my finger in the air and making an educated guess. The Windows machine got a whopping 4GB, as Windows has a tendency to eat RAM for breakfast, lunch, dinner and snacks.

The Ubuntu machine got 1GB, as it wasn’t supposed to do anything resource-heavy (or interesting, for that matter).

The web server and the other development machine were each allocated a flexible 2GB so that they can go to town on memory if they like (or if I run things in an infinite loop).

And the result:

Very satisfying. The Windows machine works like a charm over RDP, and the others are all playing along nicely too. No need to kill my file server by throwing silly tasks at it anymore; now it keeps my files safe and that’s about it.

I have mounted a shared directory on my file server on all my machines (both physical and virtual), so I have easy access to code, documents etc. I also keep backups of all the VM images on the file server, and it now lives happily with no downtime, currently at 121 days of uptime, which is roughly when I set up the ESXi box.
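On the Linux VMs that shared directory is just an NFS export mounted via one fstab line. The server name and paths below are made up for illustration:

```shell
# Debian/Ubuntu: make sure the NFS client tools are installed
sudo apt-get install nfs-common

# Add a line like this to /etc/fstab (hypothetical host and export path):
#   fileserver:/export/shared  /mnt/shared  nfs  rw,hard  0  0

# Then mount it without rebooting
sudo mkdir -p /mnt/shared
sudo mount /mnt/shared
```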

Further developments:

As I mentioned, I brought out the Dremel and cut holes to fit the Intel NIC into the machine. I then set things up so that the hypervisor itself is accessed on its own port on the Intel NIC, with the onboard NIC as a backup.

The virtual machines all talk through the last port and it works nicely.
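For reference, dedicating a port to management like this can be done in the vSphere client or from the ESXi shell with esxcli. A sketch along these lines; the vmnic numbering and names are assumptions, as they depend on how ESXi enumerated the cards:

```shell
# New vSwitch backed by the first port of the Intel PRO/1000
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1

# Port group plus a VMkernel interface for management traffic
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=Mgmt
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Mgmt
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=dhcp
```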

I also installed an Intel 320 120GB SSD into the machine, as it lost its purpose when I won two 120GB Samsung 840 SSDs in a competition. The Samsung drives now run in RAID 0 as the main OS drive in my workstation; the boot times are epic, to say the least.

I also replaced the stock case fan and CPU cooler with a Noctua NH-L9a and a 140mm Noctua case fan, as I felt the stock options were a bit noisy.

The end result looks like this:


Lots of stuff in a tiny box, still rather spacious


I am also really curious about setting up a cluster of machines, more or less only to entertain my curiosity, but that involves getting another box more or less similar to this one, plus some box to run VMware vCenter. Smells expensive all the way from here to the computer shop.

This does, on the other hand, come behind my plans of building myself a Steam Machine (the Valve console, not the choo-choo thing).

I am also thinking of swapping out the two drives in the server for one larger mSATA drive, and then using the 120GB drive + the 2TB drive in my future Steam Machine, as I have no real need for 2TB of local storage in the ESXi box when I have 9TB+ available on the file server.

I am planning to write more about ESXi and playing with virtual machines in later posts, where I will show a few features of this pretty well-known hypervisor, how I made the custom image and so on. Hopefully more posts will follow quite soon!



  • Hello,

    may I ask what type of PC case fan you’re using and what’s the diameter?

    I guess 140 mm, right?

    Thanks a lot!


    Michael, April 11, 2014
    • It’s a Noctua something, can’t remember the exact model right now, but it’s a 140mm turbine designed to fit in a 120mm fan slot; it does come with some fittings to allow mounting in a 140mm slot.

      Pretty decent fan, but mounting it in the rubber grommets that are more or less standard in newer Lian Li cases requires a bit of tinkering.

      nikolai, April 16, 2014
  • Hey Nikolai,

    Fantastic build! I have been researching a build for a mini ITX ESX server and you seem to have selected all the components that ended up on my list, rather hilariously, even down to the low profile noctua cooler! I am looking at the Fractal Node 304 case so far.

    I’m planning on the 4670S CPU, which I guess should be ok, nice and low wattage (65W). I did have an instinct to buy an 85W processor and overclock a bit, but I know it is a bit of a pipe dream in a tiny case really.

    The main reason I have got in contact is that I am a bit stuck on the motherboard, torn between yours and an asrock server model ( )

    My first choice was the server type asrock. Largely because it has IPMI remote support.

    I was wondering if you could comment on what level of remote access you can get on the Z87E-ITX? I’m planning on locating the machine in the attic so it’s quite a major one for me, although I’m glad to see your report of 120 odd days of uptime! I’ve heard you can at least switch it on and off remotely, maybe even via a mobile app, which sounds promising. I doubt there is the chance of vPro or IPMI direct remote control, but I thought it might be worth asking.

    Thanks for the blog! I will soon be creating the same ESXI disk as you lol


    Kev, April 15, 2014
    • Hi

      Well, to be honest I haven’t really looked much into how much remote access I can get with this motherboard; my machine is conveniently located at my desk at the moment, so I haven’t had the need for any form of remote control of the actual hardware.

      But for what it’s worth, I’ve found my hardware to be rock solid. The only downtime I’ve had was from pulling out the power cord by accident a little while ago, and another time when I recently removed the 2TB WD Red drive, as I’ve moved all but my most performance-hungry VMs to an NFS share on my file server.

      All in all I am really pleased with this little thing, but if I had enough space I’d put in another CPU cooler, as the temperatures run a bit on the high end, which winds up the fan quite liberally when I hit it with heavier loads. Not that it’s noisy at max or anything, but there is a bit of room for improvement. As you’re planning to keep this in the attic, though, I bet noise won’t be a problem for you.

      And let me know how your setup turns out!


      nikolai, April 16, 2014
  • Hey Cheers for the quick reply mate! Just looking at the parts now, hoping to get an order in for easter :) Will be in touch again soon. Thanks again, Kev

    Kev, April 16, 2014
  • Might try a CR-80EH Nofan instead of the noctua.

    Kev, April 16, 2014
  • Hey Nikolai,

    Hope all’s well! Was wondering, just finishing putting the server together at the moment and VT-d came into my mind. Did you have to use a specific BIOS to coax it into using VT-d?

    Best regards,


    Kev, April 25, 2014
  • Oh yes, and that cpu cooler was a massive facepalm. Crazily huge lol. Sent it back for a noctua.

    Kev, April 25, 2014
