18TB SQL and File Server


The following build was commissioned to run SQL as well as to act as an office's main file server. Access to the SQL programs is via remote desktop, whereas file serving is over the network. The brief for this server is as follows: uptime, redundancy, and performance are vital. It does not need to be rack mounted, does not need to be top of the range (yet should not be bottom either), should preferably not use SSDs, should not be too cutting edge, and should provide enough hard drive capacity for a small country.

My limited experience with SQL, combined with the research conducted on the web beforehand, all revealed that SQL will chow any memory you give it, and then ask for seconds. Filling up the memory as quickly as possible relies on fast hard drives, while cleaning out the memory requires a beefy processor. The memory is fine, as it is cheap to fill her up to the max; the CPU is easy, just get whatever the budget allows. The real problem came in choosing the hard drives. For maximum speed, you need an SSD, SAS, or PCI drive, none of which really provides a cheap enough solution for the capacity requested.

A solution to counteract the above restrictions is to have two sets of drives: one optimised for speed and SQL processing, and the other for maximum capacity and some redundancy. Having dealt with similar environments before, I know this kind of setup can easily become cumbersome and put a lot of strain on the end users, who have to move files back and forth between the two arrays. After discussion with the client, and understanding the way they will process the data, it was agreed that the focus would be maximum space rather than pure speed. As long as the drives can feed the memory at a reasonable speed, they will be happy on the SQL side as well.

The budget was flexible, not too high, but no basement specials either. Therefore, after various options, changes, discussions, and back-and-forth arguments, this is the final shopping list:

CPU: i7 950 Quad core
Motherboard: Intel x58 Smackover 2
GPU: NVidia GT220
HDD: 2 x Western Digital Caviar Blue 320GB, SATA 6Gb/s
HDD: 6 x Seagate Barracuda XT, 3TB, SATA 6Gb/s
RAID: Adaptec 6805 kit
PSU: Corsair HX850
Case: Lian-li PC V2010, full tower
RAM: 2 Corsair TR3X6G1600C8 kits, i.e. 2 x (3 x 2GB) kits (total = 12GB RAM)
Optical: Asus DVD rewriter

Total cost for all this was just over R30,000, but note that no software, keyboard, mouse, or screen is included.

Unpacking the goodies:

Central Processing Unit; CPU

The CPU choices were a bit more difficult than on other builds. Should it be Sandy Bridge, Xeon, or stick with old faithful, LGA1366? What about giving the underdog AMD another shot? If I go with AMD, will it be an Opteron or a Thuban? Will a single socket be sufficient or do we need a dual socket?

Since SQL will use all the cores it can, this tipped the decision to Intel, due to Intel's Hyper-Threading giving eight threads versus AMD's top-of-the-range CPU only having six physical cores. Server versus mainstream processors was a tad more difficult. In the end, price to performance swung the vote and resulted in a mainstream solution. This process of elimination left us with a choice between Sandy Bridge and the now matured LGA1366 socket.

Sandy Bridge uses a dual channel memory controller, whereas LGA1366 uses triple channel. As already mentioned, SQL will eat up all the memory you give it, so that made LGA1366 the ideal choice. Going with a mainstream processor meant this would be a single socket build.

After figuring out which socket our CPU would need, the next thing was to figure out which version of CPU we wanted. Looking at the i7 range (for socket LGA1366), there are effectively three splits in the price range. One can own an i7-950 or i7-960 for under R3000, an i7-970 for about R5000, and then the i7-990X for a whopping R8000. The price difference made me opt for the i7-950 or the i7-960. It is hard to find the i7-960, which meant I ended up with yet another i7-950. I used this exact CPU model on my last four builds, and it remains a very well balanced CPU: you can easily overclock it to 4GHz (with proper cooling), yet it is fast enough to act as a fully fledged workhorse at stock speeds.

Since this will be a production server, stability is vital. As such, there will be no overclocking happening today. I agree that it is almost a crime to plug in an i7-950 and not do an overclock, but hey, it happens. With no need to spend money on an aftermarket fan, a standard Intel fan is used.


Motherboard

Two options presented themselves: either the Intel Smackover 2 or the ASUS Rampage III Gene. Originally, I wanted to go with the ASUS board, but that board has now reached its end of life and no stock is available any more. That left me very little choice but to go with the Intel board. Let us just be clear here: both boards are great. ASUS is better in the overclocking department, but Intel beats it with dual Intel gigabit Ethernet ports. The one thing that I really missed in this build was the ASUS adaptor for installing the front panel connectors; however, the patented adapter from ASUS was not worth the additional cost of going to the next level up ASUS board.

The Smackover 2 is really a nice board. It has some nice lines to it, the layout is simple, no frills, no fuss, just plain good value. The connectors sit where they are supposed to, the slots are open and accessible, and the heat sinks seem stable enough, so overall a very solid board. It is missing all the “bling” that the ASUS Republic of Gamers boards bring to the party, but really, who cares? The one little item of “bling” that the Intel board offered was a wireless receiver; due to potential security concerns, this was never connected. The other benefit of the Intel Smackover 2 is that it can take 48GB of memory.

The one criticism that I have with Intel is that despite the fact that I am using an Intel motherboard, an Intel CPU, and a standard Intel fan, they still have the wires running the wrong way around the CPU block. Surely the people who designed these boards, and/or the CPU fans, should have noticed that if they ran the wires anti-clockwise, it would look a lot neater? If they had done that, it would have saved me at least 30 minutes on this build.

Graphics Processing Unit; GPU

The sole objective of having a graphics card was to install Windows. It is rare that I venture into the very lowest of the low when it comes to graphics cards, but this is almost it. I say almost, because there were one or two that were even cheaper than this one; the reason I opted for this card was that it had a fan on it. I am more concerned with potential heat issues in this build than with some noise coming from a server.

After unwrapping the box, this is what stared back at me…

Dreams are made of this stuff. Not the ones you think of, no, those dreams that keep you awake all night, that is…

I believe I read somewhere on the box that it is a graphics card, but I am not so sure. Guess we will have to take their word for it. The sad part of this packaging is that unsuspecting people may just wander into a store, see this massive “1GB Graphics Card” claim on the side of the box, and buy it expecting some rid-donkey-lous performance from it.

Hard disk drives; HDD

This is what differentiates this machine from just the normal run-of-the-mill machine. I opted for the biggest, almost meanest, 3TB drives on the market. To that end, I can confirm that the Barracuda XTs are real monsters.

The operating system drives were more a matter of cost effectiveness. I just needed two drives to mirror, nothing fancy. Opting for Caviar Blues is as good a choice as any.

If cost was no issue, another option that I was toying with was to throw in relatively large 15,000 RPM SAS drives in RAID 1 and RAID 0 configurations: the RAID 1 as an OS partition and the RAID 0 as a workspace for ultimate speed. But I decided against that and rather spent the money on a better RAID controller that could handle RAID 6 properly.

I could write a whole article just on the various RAID solutions and their various pros and cons. In fact, the start of that article can be found here.

As we said, this server requires redundancy. At the end of the day, after careful consideration, RAID 6 was the recommended, and implemented, solution due to the additional redundancy.

The benefit of RAID 6 is that you can lose any two drives and the array will continue to function. Having 6 x 3TB drives yields an effective capacity of 12TB. This is on par with a RAID 50 solution, but with the benefit that any two drives can be lost. The only other benefit of going with RAID 6 over RAID 50 is that you can use an uneven number of drives in a RAID 6 configuration, whereas for RAID 50 you need even numbers. Performance benchmarks are a bit sketchy, with some sites reporting RAID 50 to be better, while others report RAID 6 to be faster; it all boils down to the controller. Unfortunately, as will be seen later on in the post, I could not benchmark the two options. It is a pity, since if the performance of RAID 50 had been more impressive than RAID 6, I might have opted for it instead.

The fact that going the RAID 6 route presented me with an opportunity to play with a real-world, practical-excuse, 18TB RAID 6 configuration had nothing to do with the choice.
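For the curious, the capacity arithmetic behind the RAID 6 versus RAID 50 comparison above can be sketched in a few lines of Python. The formulas are the standard textbook ones; the drive count and size are this build's, and the helper names are my own:

```python
def raid6_capacity(drives, size_tb):
    """RAID 6 stores two parity blocks per stripe, so two drives'
    worth of capacity is lost regardless of array size."""
    assert drives >= 4, "RAID 6 needs at least four drives"
    return (drives - 2) * size_tb

def raid50_capacity(drives, size_tb, groups=2):
    """RAID 50 stripes across RAID 5 groups; each group loses
    one drive's worth of capacity to parity."""
    assert drives % groups == 0 and drives // groups >= 3
    return (drives - groups) * size_tb

# This build: six 3TB Barracuda XTs.
print(raid6_capacity(6, 3))   # 12 TB usable, any two drives may fail
print(raid50_capacity(6, 3))  # 12 TB usable, but only one failure per group
```

Same usable space either way with six drives; the difference is purely in which failures the array survives.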

Redundant Array of Independent Disks Controller; RAID Controller

This was the make or break point for this system. Going with the on-board RAID controller would have failed: even if we ignore the fact that trying to use 3TB drives on an ICH10 chipset really sucks, the speed would have been horrendous. As such, let us introduce some big guns here in the form of the Adaptec 6805 controller.

This beauty sports 512MB of DDR2 cache memory, more RAID levels than Patterson, Gibson, or Katz could dream of, 256-drive support, online capacity expansion, RAID level migration, 8-lane PCIe Gen2, oh, and did I mention, SATA 6Gb/s? In summary, she is a mean piece of hardware.

Saying that, I had great difficulty in sourcing this card. First the supplier did not have stock, then my back-up plan failed, then the supplier provided me with a 6805e (notice the small letter at the back), which is a piece of crap compared to its big brother, until I finally received the real deal. But more about that later.

Power Supply Unit; PSU

It is a Corsair HX850, need I say more? Very well, I will expand this section a bit.

The one area where people always go for budget equipment is the power supply. There are several options for good power supplies, but having received excellent performance from all my Corsair PSUs, choosing them over any other brand has become a no-brainer for me. They are slightly more expensive than the rest, but the beauty of them is that they work. Plug them in, and you can forget about them for the next five years at least.

Granted, we could have gone with a slightly smaller version and it would have worked just as well for now; however, if we ever opt to go full high-end and install, say, a further eight 15k RPM drives, then I would like the comfort of knowing that the PSU is not something I need to concern myself with.

As I said, it is a Corsair power supply; they just work.

Computer Chassis: Case

The client neither needed nor requested a rack mounted chassis, and this opened up the playing field nicely. I was looking for a chassis that had at least eight internal 3.5” drive bays, and preferably one able to keep the heat generated by these drives away from the CPU and memory components. I personally wanted a big enough case to allow sufficient expansion should we ever need to plug in another set of drives. After playing around with some options and doing some research into cases, I ended up going with a Lian-Li PC-V2010 full tower case. When Ursula (who so kindly collected the stuff for me) phoned me to complain that the case did not fit into her car, I knew I had a case big enough.

Opening the case for the first time did not disappoint at all. The case is huge, sturdily built, and nothing but a pleasure to work inside of. I will be dealing a lot more with the case during the build; suffice it to say that it is indeed a pleasure.

Just count these slots on the back…

Lian-Li PC-V2010

Lian-Li PC-V2010 Brushed Aluminum

The one concern I did have was that the back panel is too close to the motherboard tray, not allowing one to route cables behind the board.

On the flip side, I received enough screws to last me at least two builds.

Double data rate type three synchronous dynamic random access memory: DDR3 RAM

SQL will munch everything in the RAM department you can throw at it and still be hungry for more. Putting in as much memory as the budget allowed was as such a no-brainer. I had to choose between 4GB modules or faster 2GB modules; given the size of the databases to be processed, I chose the slightly faster 2GB modules over the more expensive 4GB ones. In the end, I chose Corsair XMS3 modules clocked at 1600MHz CL8, providing a throughput of 12,800MB/s, 12 gigabytes of them.

The motherboard is capable of taking up to 48GB of memory, so I can see this being one of the first areas to upgrade once DDR3 prices come down.
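If you are wondering where the 12,800MB/s figure comes from, it falls straight out of the module specs. A quick back-of-the-envelope sketch (the transfer rate and bus width are the standard DDR3 figures, not something off the vendor's box):

```python
# DDR3-1600: 1600 mega-transfers per second on a 64-bit (8-byte) bus.
transfers_per_s = 1600 * 10**6
bus_bytes = 8

per_channel = transfers_per_s * bus_bytes / 10**6  # MB/s
print(per_channel)       # 12800.0 MB/s, hence the "PC3-12800" label

# LGA1366 runs triple channel, so peak theoretical bandwidth is:
print(per_channel * 3)   # 38400.0 MB/s across all three channels
```

Theoretical peak only, of course; real workloads will see less, but it shows why triple channel mattered for this build.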

Rewritable Optical Laser Digital Versatile Disc: DVD Rewriter

Since this DVD writer will only be used to install software, and since all brands are created equal, my criteria for selecting this component were simply that it needed a black front bezel and had to be SATA compatible. As such, without further ado, meet the DVD rewriter…

Yes, I know it is a boring component, but we still need it.

I did consider Blu-ray or HD drives, but the client will not be keeping backups on that medium, nor really receiving data in that format. Rather use the saved money on something else.


With the toe bone connected
to the foot bone,
and the foot bone connected
to the anklebone,
and the anklebone connected
to the leg bone,
oh mercy how they scare…

Oops, sorry, that is for my other persona, shall I try that again…

With the RAM connected
to the QPI,
and the QPI connected
to the CPU,
and the CPU connected
to the ICH,
and the ICH connected
to the RJ45,
and the RJ45 connected
to the Lan,
and the Lan connected
to the ISP
and the ISP connected
to the WAN
and the WAN connected
to L4D,
oh mercy how those zombies scare…

I normally start with inserting the CPU into the motherboard followed closely by the CPU fan. I know a lot of people start with first installing the motherboard and then the CPU, but I get nervous when I cannot clearly see how far the fan still needs to go.

Once the CPU is in, the first thing I do is fix Intel's mistake in the CPU fan wiring department. I mean, come on, how can an Intel motherboard, using an Intel CPU, using an Intel fan, get the wiring so wrong that I effectively needed to redo it completely just to make it look neat? The sad part is that somebody actually went and designed the pins on the fan to route the cable the wrong way; how daft can one be?

Ok, the rant is over. The wire is now running the way it should, there are no loose points, and the wire is neatly tucked away. Just a warning: if you ever plan to route wires this close to the CPU the way I do, be careful that they won't slip into the fan when it gets colder and the wires start contracting.

CPU fan installed; next on my list, while I have easy access to the motherboard, is a ton of memory modules. I have seen uglier memory banks before.

Memory installed, time to get the motherboard in the case. As we can see, the Lian-Li has more than enough options to mount any shape and size of motherboard that you may ever need.

Motherboard mounted; next up, an excuse of a graphics card, which was so uneventful that I actually forgot to take a picture of it…

Now, time to bring out the real stars of the show: more hard drive space than most governments had 20 years ago. The hard drive mountings work very nicely indeed. You screw in four bolts, each with a rubber grommet attached; these protruding stands then glide into the Lian-Li's bays.

They fit very snugly, so leaving it at that will be fine in most circumstances. I, however, wanted to ensure that these drives don't go anywhere without my explicit permission, and Lian-Li catered for that by using the middle holes on the drives to secure them in place.

The only concern I had with mounting the hard drives was tightening the drives located at the front. There is very little space to get your hand in, even after removing the front fan. At the end of the day, I had to resort to a special screwdriver, a tool that I am sure any serious PC builder will have.

The only other option I had was to tighten the front drives from the left-hand side, but that means if I ever want to replace one of them, I would also need to remove the drives behind them. This is exactly the reason why, in my view, so many RAID 5 arrays fail when a drive needs replacement: you have lost a single drive, and then you need to fool around inside cramped spaces to replace it, only to damage or unplug a perfectly fine drive in the process, resulting in you losing your whole array.

Hard drives installed, in with the power supply.

Ok, she is all dressed up but with nowhere to go. The reason: the RAID controller that I want is delayed and I can't find another anywhere in South Africa. To make the build even more interesting, the client now requires the server urgently, as they are running out of space on their existing infrastructure.

Luckily, I have a Plan B to call upon. Since my last LAN, when people complained my server was too slow, I have been dabbling in the art of SAS controllers. I currently own two controllers: a Dell PERC H700 allowing eight internal drives, and an Adaptec 6405 allowing four internal drives. Obviously a controller with just four ports is not sufficient, since I need to connect at least six drives. Therefore, my server's controller had to become Plan B for now. Unfortunately, my server's controller is an active one and I am not keen to lose all my data. After moving in excess of 15TB of data to various drives around the house, I finally managed to pull out the PERC H700, only to find out that the PERC H700 is not compatible with the Intel board. Yes, I know I should have checked before I moved 15TB of data around. Time to create Plan C: out comes my main gaming rig's controller, a 6405 card, which is for all practical purposes the little brother of the 6805 needed for this server. Unfortunately, the 6405 only connects four internal drives, which would just have to do for now. I created a 6TB array and started informing the client that the server was ready (with the limited space on board).

As I was typing the mail, however, my supplier phoned and confirmed that they now had the controller, a week ahead of schedule. Ok then: confirming that the client could wait another two days, thereby allowing me to install and set up the new controller, made it Plan D. I received the controller and put it through a quick photo shoot, when I started noticing some slight variations between the 6405 and this controller. The first obvious one was that this new card is about 5mm shorter, ok… next I noticed that the back of the controller is not nearly as interesting as the 6405's, ok… then I noticed the letter “e” behind the version number, making this a 6805e card. Cool, I just figured Adaptec had improved their manufacturing and design process and left it there, but then a fishy smell started to fill the air.

I installed the RAID controller and booted the machine up. It was then that a stinking, rotting, dead fish of note just tail-slapped me right in the face. This RAID controller is incapable of RAID 6 and only has 128MB of memory? WTF? Stop signs, warning lights and red lights started flashing in front of my eyes! I was sure that I had done the proper research and that I had ordered and paid for the proper card, so where did this mess come from? A quick Google of the “e” behind the name revealed that this is the “entry” level card and not, as I assumed, a revised manufacturing process. I did not screw up; my supplier did. They invoiced me the 6805, but handed me a 6805e. Urgent phone calls… deadlines… client wants server… no server ready… client needs server… urgent overnight re-deliveries… RAID controller not available… client wants server…

Ok, another emergency fallback then: back to Plan C. I re-installed my 6405, connected the four drives, set up the array, and made arrangements to deliver an incomplete server. Despite the client being very understanding, this still sucked!

I did eventually receive the real controller, a proper 6805, and successfully installed it at the client's site, but boy, what a mission. I unfortunately do not have a picture with the final card installed; it might have looked a bit nerdy to do an on-site photo shoot of a server.

Here are the various RAID controllers that I had to deal with.

Adaptec 6405

Adaptec 6805e

Adaptec 6805 (finally!)

Cleaning up the mess

We now have a PC that is semi-working; all the components do what they are supposed to do, apart from the RAID controller. Most people, probably 99%, would say this PC is ready for shipment. I am obviously not part of the 99%. This wiring looks absolutely fugly. There is no way I will allow this PC to go out of my “humble workshop” looking like this, I am sorry.

I think I wrote enough for now, so I will let the pictures tell the rest of the story:

Now this is what I am talking about!

Build time

Some people ask me how long it takes to build something like this. If I look at this build, the research that I put in before making my first recommendation was about 20 hours of just reading, investigating, and trying to understand the client's real needs. Then I would say another 3 hours of back-and-forth tweaks and discussions with experts on why this and not that, and about an hour placing and collecting the order. In other words, before I actually handled the first component, I had already spent 24 hours on this build.

My approach then is to take each component out of the box and analyse it in detail, taking the pictures and making sure there is no obvious damage before I install it. I then spend time thinking about how each component will affect the final aesthetics, the wiring, the airflow, future upgrades, and ease of access; basically asking the question, where should this go for optimal results? I take a fair amount of time over this initial build and it can easily run up to 6 hours. (Yes, I know I can do it in under an hour, but that would not be one of my builds then; it would be just another fugly, overpriced, production line computer.)

Once the initial build is complete, I remove it from the “assembly line” and start installing software, update the various BIOSes, download and install all the latest drivers, and in the process stress test individual components. Stress testing the various components can take time, but it allows me to tweak the fan speeds, see if anything breaks, and basically play around on the computer to see how well it operates and what can be improved. For instance, in this build there was an annoying rattle coming from the DVD writer. I solved it by installing a couple of rubber O-rings between the drive and the case.

I only stress tested this machine for about 48 hours, but my actual time in front of it was only about 4 hours. I can confirm, after numerous experiments, that neither memory tests nor any other hardware tests run any faster, nor do they display anything new beyond what you saw the first 100 times you looked at them. Really, trust me on that, I tested it; hypothesis busted. I normally like to run my stress test for a full week; if I have not been able to break the PC in that time, I know chances are nobody else will either. In all the years that I have been doing this, I have had one computer returned to me due to bugs, and to be honest, the only bug I found was a PEBKAC one.

Once the stress testing is complete, it is back to the assembly line. Now is the time that I focus on the cable management. Cable management on this build took me only about 4 hours, given the beauty of the case and the planning that went into the original build phase.

Once the case is cable managed, I take it back to the stress testing area, make sure that everything is working, and make sure I have not broken anything with my cable management. I remove all the software that I used for testing, ensure that I have left copies of all the latest drivers on the hard drive, and change the network settings to DHCP. Then, and only then, is the PC ready to ship.

If it is a special build like this, then I also document the process as I am doing now and do the photo editing. I would say the total amount of time spent on this build is therefore in excess of 80 hours, spread over about six weeks. So, to summarise: actual time inside the PC is about 12 hours, writing this document about 8 hours, researching and finding the hardware about 24 hours, and testing 48 hours. If you do the maths, you will see we are missing a couple of hours here; do not worry, that is just me running around trying to find the right RAID controller.

The sad part is that I know only a handful of clients will ever open up their computer cases to appreciate the cable management inside, or appreciate the benefit of the stress testing. I do, however, know that if you have read this far, you are not one of those.

This here is the final photo-shoot before I started with the cable-management.

If you are wondering why my garage appears so “dark”, it is an illusion only, I assure you. The small lights on top are each 250W bulbs; the spotlight at the back is a 300W flood light, while the rest of the garage has dual fluorescent lights. When you want to do proper cable management, you need enough light to see the finest details.


I have been contemplating for a long time how to solve wiring messes inside a computer case without resorting to cutting holes into a brand new chassis. I think I have found a potential solution in the form of some seriously large Velcro strips. I need to figure out a better way to attach the Velcro strips to the case, since normal wood glue did not do the trick. I will be experimenting with options over the next couple of builds, but for me, the way the Velcro helped hide the wiring in this case is definitely a positive sign. If you know of a glue that will work on Velcro material and metal, please leave a comment for me.

The hardware itself is no slouch; we have enough hard drives for a small country (18TB of them), 12GB of high spec memory, an i7-950 CPU, a very respectable RAID controller, and a very trustworthy motherboard. I trust that this machine, with its 12TB dual-redundant disk array, will suffice not only as a file server but also as a serious SQL server.

Would I have done anything differently in this build? For starters, I do not think the masses are ready for 3TB drives yet. They are still very tricky to get up and running; even on this motherboard it was not a simple matter of plug and play. I am also not sure if my choice to go with faster RAM rather than more will backfire in a year or two. Saying all that, if you plug in a high-end graphics card, add some water-cooling, and do a little bit of overclocking, you will end up with a serious LAN monster from this build. So overall, I am very satisfied with it.

Thanks for reading, and if you have any suggestions or questions on this build, feel free to leave me a comment.

Kind regards


