Friday morning, the phone rang. “Skoups, we have a serious problem, we need speed and we need it now! The budget is between R20,000 and R25,000 (about $2,000 to $2,500) and we need the PC by Sunday at the latest.” My first response: “Hmm… Sunday? As in the day right after tomorrow’s opening match between the Springboks and Argentina? You must be joking!” Remember, my builds normally take weeks to complete. A lot of research goes into each computer, a lot of searching for the exact components, not to mention actually sourcing the components once you have decided on them, and then obviously the serious amount of testing that happens before it leaves my “workshop”. OK, obviously, this was not going to be easy. However, these are dear friends of mine and they have a real problem, and I do not know anybody else who would be able to pull this off in that timeframe either. Therefore, I figured I might just as well give it my best shot and see how much speed we could achieve in virtually no time at all.
A request for a fast PC does not tell me anything about what kind of components to get. A wise man once told me, “It is easy to pick a fast component and install it, but it is a different story to build a fast system.” A system required for analysing large Excel spreadsheets is different to one required for gaming, which is different again from one for processing SQL databases. This client’s requirement is to process large datasets, approximately 80GB in size, through a program called SAS. SAS is very similar (and I know I am going to be nailed for saying this) to SQL. In my previous life, I used to work in SAS daily and have a fair amount of experience with it. SAS processes data record by record, so sequential reads and sequential writes are very important. Realising that the bottleneck they currently experience is I/O (the speed at which they can read and write data to their hard drives), the only solution is some form of RAID paired with as many SSDs as possible.
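The way I like to sanity-check an I/O-bound claim like this is to time a big sequential read myself. A minimal Python sketch (the file path is a placeholder, and note that the OS page cache will flatter the number unless the file is much bigger than RAM):

```python
import time

def sequential_read_mbps(path, block_size=8 * 1024 * 1024):
    """Read a file front to back in large blocks and return MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)   # big blocks = a sequential access pattern
            if not chunk:
                break
            total += len(chunk)
    elapsed = max(time.perf_counter() - start, 1e-9)
    return (total / 1e6) / elapsed
```

Run something like this against one of those 80GB datasets and you get a feel for the real-world sequential throughput, which is the figure this whole build is chasing.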
The fastest RAID is RAID 0, but that causes problems from a redundancy point of view, so we will definitely need a redundant array as well. Therefore, at least two arrays are required and, depending on the operating system, perhaps a third. A dedicated RAID controller needs to go onto the shopping list; tick. This computer will act as a workstation, accessible by a single individual (at least in the interim), so we need a decent-spec motherboard, nothing ultra-high-end; tick. A fair amount of memory never hurt any system that does a lot of I/O; tick. The CPU should definitely not be a slouch, but perhaps not dual-processor XEON either. Certain SAS procedures can bottleneck the CPU, but the majority of problems come in with I/O, and that is where the bottleneck will be. I would rather save a bit on CPU power and invest more in the hard drives. The client is a Microsoft Partner, so no need to worry about software licences. Need a case, something large; I do not want to mess around in small spaces when I need to install so many hard drives in a small amount of time. Power supply, yeah, need one of those too. What else? No, that should be about it… shall we work out the quotes…
As with everything in life, you can choose two of the following three: “Time, Cost, or Quality”, but on the third you will have to sacrifice somewhat. Time was non-negotiable given the tight deadline on the client’s side, and quality is non-negotiable for me (I hate call-backs), so the only one left was cost. I prepared a couple of options for the client to choose from, ranging in price between R25k and R36k. Kudos to the client for accepting my recommendation not to go for the cheapest solution; instead they opted for my recommended R27k solution.
So, my quote, coming to R27k, consisted of the following:
RAID: HighPoint RocketRAID 2720SGL controller
SSD: OCZ Vertex 3 60GB solid state drives (I think nine should suffice)
SSD enclosure: 2 x Lian-Li BZ-525 4×2.5” HDD/SSD to 1×5.25” bay
HDD: 2 x Western Digital RE4 WD2003FYYS 2TB dual-processor units with 64MB cache
PSU: Corsair HX750, modular power supply
CPU: Intel LGA2011 Sandy Bridge-E i7-3820, 3.6GHz (turbo up to 3.8GHz)
MOBO: ASUS P9X79 Pro LGA2011 motherboard
RAM: Corsair CMX16GX3M4A1600C9, a 4x4GB DDR3-1600 kit (16GB in total)
CASE: Lian-Li PC-V2010, full tower, silver
Now, that represents the original order based on the original quote… astute readers will notice that I am missing some key ingredients that one might consider a “need-to-have” for a new computer… the following kit was missing…
The “oops, I actually need that too…” list is the following:
1. Graphics card… Seriously, I forgot a graphics card…
2. A CPU fan; LGA 2011 CPUs do not come with a fan… Promise, I have done this before!
3. Mini-SAS to SATA fan-out cables; OK, this one I’ll excuse myself for, as it is a speciality item
4. Optical drive… yeah… try installing Windows without a DVD drive… always a fun adventure
Fortunately, I remembered the GPU and the CPU fan before I collected the goods and squeezed them into the order. The suppliers came through for me big time: they did not have the ASUS P9X79 Pro board, but upgraded me to a Deluxe free of charge (more on the board later). For the first time in a long while, all the parts were available — all but one. The key ingredient, the RAID controller, was nowhere in stock. The companies that stock RAID controllers, especially the ones I want to break records with, shall we just say they are not on every street corner. OK, a chat with the client: they would be happy for me to cannibalise my gaming rig even further and sell them the second-hand controller currently in my private system. It is the exact same card as the one we were looking for, and since it is only a month old, the warranty is still valid. If the cannibalising had stopped there, I would have been happy, but I needed to cannibalise the DVD writer and the SAS cables too. Now you see why I do not like rush jobs?
When one looks at the shopping list, buying 9 x OCZ Vertex 3 SSDs for a single system is nothing short of insane — even the sales rep looked at me funny. I did consider buying the Vertex 4 drives for their higher IOPS, but given that I am more interested in sequential reads and writes of large files, I figured the Vertex 3s would be fine. As an aside, the 120GB Vertex 4s were part of the quote that ended up at R36k, so I definitely did consider them. The idea, given the arrays required, is to install eight SSDs in RAID 0, use mechanical drives in RAID 1, and use the ninth, lone SSD as the dedicated operating system drive. For the mechanical drives, I did consider 1TB VelociRaptors, or even smaller SAS drives in RAID 10, but their cost just outweighed the benefit (oh, what will I do when somebody one day gives me an unlimited budget!). In the end I opted for the Western Digital 2TB RAID Edition (RE4) drives. This setup should result in 480GB of ridiculously fast usable space (on the 8 x OCZ Vertex 3 drives) and 2TB of redundant storage (on the mechanical drives), while the spare SSD takes care of the OS for the time being. I am a bit nervous that the OS drive is not redundant, but a lot more nervous that real data will reside on an eight-drive RAID 0 array. Look, if speed is the objective, speed is what I will give them. (Oh, and obviously a speech on the importance of backups…)
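The back-of-envelope sums behind that layout, if you want to check them (marketing gigabytes, so formatted capacity will be slightly less):

```python
def raid0_capacity(drives, size_gb):
    return drives * size_gb   # striping: all capacities add up, zero redundancy

def raid1_capacity(drives, size_gb):
    return size_gb            # mirroring: only one drive's worth is usable

scratch_gb = raid0_capacity(8, 60)     # eight Vertex 3 60GB drives -> 480
storage_gb = raid1_capacity(2, 2000)   # two 2TB RE4 drives -> 2000
```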
Unpacking the goodies
I do not have a lot of experience with HighPoint RocketRAID cards. It was only when I wanted more speed than my puny Adaptec 6405 could give me, and stumbled across a TheSSDReview.com review of what they managed to achieve with this card and eight SSDs, that I bought myself one to play with at home. If I had to summarise this card in one word, “amazing” would not do it justice.
If your objective is nothing but raw, unadulterated speed, using nothing more than RAID 0, you will search a long time to match this card for performance. Price-wise, if HighPoint sold it for three times more, it would still be cheaper than any alternative I have found to date. Looking at the card itself, HighPoint markets it as a “budget-conscious” card aimed at SMBs. Each card supports up to eight directly connected SAS/SATA hard drives (or SSDs) and runs off a PCI-Express 2.0 x8 host interface.
This card is not really a RAID controller, as it does not contain a RAID chip (not one that I would call a RAID chip, in any event) on the card itself. So if you are thinking of running this card with RAID 5, just forget it now. This card sucks at that, simply because the computational power needed to perform the XOR calculations exceeds the card’s ability. (Refer to my RAID article if you want to know more about RAID 5.) I would love to get my hands on some of HighPoint’s higher-end cards, especially the 4000 series, as those seem to combine the best of the 2720SGL with the required RAID functionality included. That will just have to wait though.
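For those wondering what those XOR calculations actually are, here is the whole idea behind RAID 5 parity in a few lines of Python: parity is the byte-wise XOR of the data stripes, and any one lost stripe can be rebuilt by XOR-ing the survivors. Cheap for a couple of bytes, brutal when a card without a proper RAID chip must do it in software for every single write:

```python
def xor_blocks(*blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d1, d2, d3 = b"\x01\x02", b"\x0f\x0f", b"\xaa\x55"   # three data stripes
parity = xor_blocks(d1, d2, d3)                       # stored on the parity drive

# Lose stripe d2: rebuild it from the survivors plus parity.
assert xor_blocks(d1, d3, parity) == d2
```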
From a raw read and write perspective, plugging eight SSDs into this card is nothing short of amazing. Before recommending this card to the client, I tried to find benchmarks on competitor cards costing up to four times more, and to date I have not found a single card capable of matching, let alone beating, the RocketRAID 2720SGL’s performance with eight SSDs in RAID 0. If you do know of such a card, or if one has since been released, and you would like me to “test” it for you, please feel free to forward me a sample…
Solid State Drives
When you place an order for nine of what were, a few months ago, the fastest drives around, you know you are in for a serious build. The OCZ Vertex 3 sports a max read of up to 535MB/s, a max write of up to 480MB/s, 4K random writes of 60,000 IOPS and a maximum 4K random write of 80,000 IOPS. These numbers are not to be sneezed at individually; now plug nine of them into one system, well… yeah… drool will run.
I opted for the 60GB drives. Why, you ask? Because they only cost R660 each compared to R1,100 for the 120GB drive. Buying two 60GB drives costs me 20% more than a single 120GB drive, but the speed will be almost 90% higher when I RAID 0 them. For instance, the 480GB SSD retails for about R4,500 (about 17% less than the R5,280 for eight 60GB drives), but the speed comparison is 550MB/s versus 3GB/s. (Granted, I did not include the RAID controller, so that adds another R2,000.) If speed is the objective, it really makes a lot more sense to buy multiple small SSDs than a single big drive. That is why I am so sad to see that the new 64GB OCZ Vertex 4 drives only read at 460MB/s and write at 220MB/s, which is considerably slower than the Vertex 3 drives. In my mind, these small OCZ Vertex 3 drives are killers, and at the prices they retail for lately, I suggest you stock up on them big time.
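If you would like to check my sums (prices in rand as quoted to me at the time):

```python
price_60, price_120, price_480 = 660, 1100, 4500   # OCZ pricing at the time

two_small_premium = 2 * price_60 / price_120 - 1   # 0.20 -> two 60GB cost 20% more
eight_small_rand = 8 * price_60                    # R5,280 for 8 x 60GB
premium_vs_480 = eight_small_rand / price_480 - 1  # ~0.17 -> 17% more than one 480GB
```

…and for that roughly 17-20% premium in rand, the RAID 0 stripes deliver a multiple of the single drive’s speed.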
When one needs to install 11 drives (9 x 2.5” and 2 x 3.5”) into any case, even a Lian-Li V2010 full tower, the fun factor starts dwindling. I had been meaning to test some enclosures that allow one to install four 2.5” drives into a single 5.25” bay, and this was the ideal build to do it, so I ordered two Lian-Li BZ-525 mounting brackets. The BZ-525 allows one to mount four SSDs into a single optical drive bay. This is a great solution for people who still have small cases but are interested in a multiple-SSD setup.
I only realised late that the front panel was black and not silver like the case, but at the end of the day it worked out well. The meshed front panel matches the case perfectly, barring the colour, and is pleasing on the eye.
This was the first time I used these enclosures, and I would advise a bit of caution with them: because the bottom-right drive (as seen from the back) sits flush on the bottom of the enclosure, you need “straight” SATA power connectors. Not having the right power cables initially caused the array to be lost on occasion, so it was imperative to obtain straight power cables before releasing the machine to the client. (Now I remember why I like to take days of testing before releasing.)
In hindsight, I am sad that I did not spend more time with these hard drives. Unfortunately, given the beauty of the SSDs and the lack of time to test the system, I just did not get to them. These drives, the WD2003FYYS, are a lot more expensive than normal 7200RPM drives, but they are rated for 24/7 operation, and combined with the 64MB cache, dual processors and dual-stage actuation, it made sense to opt for them. They are expensive, but still cheaper than the SAS or Raptor solution, and since they are rated “enterprise-ready”, I just picked up two of them. The drives are mirrored in Windows, so using them results in a small CPU impact, but since the bottleneck is not the CPU, that should not be a problem. Ideally, if the budget had allowed for it, a separate RAID controller would have been first choice.
The idea of this array is that the client will store their files on these drives and use the SSD array for processing, effectively using the SSDs as “scratch space”. When a file is complete, they write it back to the HDD. Doing it this way — if they actually follow my recommendation — means that should the SSD array crash (not if, but when, with eight drives), they should only lose a small amount of actual processing. Here’s hoping they never need to test that theory.
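The workflow, in sketch form (the directory and function names are my own placeholders, not the client’s actual layout):

```python
import os
import shutil

def process_on_scratch(name, hdd_dir, scratch_dir, process):
    """Stage a file from the RAID 1 mirror onto the RAID 0 scratch array,
    process it there, and copy only the finished result back to safety."""
    staged = shutil.copy2(os.path.join(hdd_dir, name), scratch_dir)
    result = process(staged)               # all the heavy I/O hits the SSDs
    return shutil.copy2(result, hdd_dir)   # the final file goes back to the mirror
```

If the RAID 0 array dies mid-job, the source file is still sitting safely on the mirror; only the in-flight work is lost.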
When this request came, I realised that I would need the ability to install a fair amount of memory. That, combined with the need for utmost stability, meant the solution called for either a workstation build or a server build. Eliminating the i5s due to memory limits, amongst other things, the choice was between a XEON processor and an i7 on the LGA2011 socket. From a pure performance and scalability perspective, the XEON is by far the better choice; unfortunately, going with server-level CPUs would also break the bank, especially when one opts for dual-socket solutions.
That left me with one of the i7 CPUs. Unfortunately, the LGA2011 i7s do not give one many options: either the quad-core i7-3820 (R3,000), the hex-core i7-3930K (R5,800) or the “I-smoke-some-bad-stuff” i7-3960X at a wallet-shattering R11,000. The top-of-the-range i7-3960X is overpriced, with the key difference between the 3930K and the 3960X being only 100MHz on turbo boost (3.8 versus 3.9). The reason one pays almost double is the letter “X” in the name. The “X” refers to Intel’s “Extreme Edition” CPUs; these are the crème de la crème, the meanest of the mean, effectively the very best of a very elite group of chips. Apart from being the purest that money can buy, they also support an unlocked multiplier used for overclocking. Since this system is not aiming to break any CPU records, we do not need that additional functionality.
Eliminating the beast of all beasts left me with either the quad-core 3820 or the hex-core 3930K. The 3930K was very high on my list and was included in the more expensive quote (R36k). Unfortunately, when on a budget, spending that amount of money on two extra cores (four versus six) just did not make that much sense. As with any “fast” system, once the slowest component is eliminated there will always be a new bottleneck. I have been debating where that bottleneck will be on this build; it is possibly the CPU, but I think the odds are it is the motherboard, the RAID controller or the hard drives instead. Given the cost and the limited potential benefit, the quad-core i7-3820 should be fine for now.
The LGA2011 socket does not ship with a CPU cooler. This was another of my “oops, I actually need one of those as well” moments. The cooler used was a Cooler Master Hyper 212 EVO. The installation went nicely, and you have the ability to mount this cooler on pretty much any CPU ever invented (OK, not quite, but close).
The cooler worked well; unfortunately, I did not take many benchmarks for it. The one caution when installing it: please ensure your case is big enough for it to fit. This is quite a tall cooler.
As can be seen, I stick with ASUS boards as far as possible. Not only are they good quality, the after-sales service is great (not that I have needed to test it that often… but still). The board I wanted was the P9X79 Pro, but unfortunately my supplier did not have any stock, so they upgraded it to the P9X79 Deluxe. I was tempted by the P9X79-WS board, but that was quite a bit pricier.
There is not much difference between the P9X79 Pro and the Deluxe edition, barring that the Deluxe also has wireless and Bluetooth. Given that this PC will be standing in a “server room”, one does not need wireless or Bluetooth. As such, the difference between the two was immaterial to me. The P9X79-WS board, now, had they upgraded me to that, it would have been a different story: the WS board sports dual gigabit Intel LAN ports as well as server-level compatibility with RAID cards.
Some highlights of the P9X79 Deluxe board that stood out for me are the eight RAM slots that support up to 64GB of RAM; dedicated USB BIOS Flashback (normally people ignore this convenience factor, but on an immature platform, having it is great); and USB 3 ports, including an internal connector (which the Lian-Li case did not have a connection for). Overall, a nice motherboard with ticks in all the right boxes.
I had some issues getting Windows to boot in a stable manner, but that was resolved as soon as I flashed the BIOS (the “immature platform” I was talking of in the previous paragraph). Flashing the BIOS was not that straightforward: since I was updating from a much older BIOS version, I first needed to convert the BIOS file to a new format, and only then could I upgrade it. A mission to get right, but once the BIOS was flashed, everything worked smoothly.
I refuse to buy anything but a Corsair power supply. I must have bought at least 20 of them over the last two or so years. Touch wood, to date I have never had any issues with any of them. Definitely not the cheapest in the world, but they do exactly what any good power supply should: “get mounted, and then go to work.”
The PSU opted for is a Corsair HX750. This PSU was one of the few that had a sufficient number of SATA connectors (12 in total), and it also sported a nice number of Molex connectors, eight in total. The CPU power cable was a tad short for cases with bottom-mounted power supplies; ideally it should be about 10cm longer. The problem is that many cases mount the PSU at the top, in which case the default cable is too long. Perhaps the best of both worlds would be if Corsair shipped their power supplies with two CPU power cables, one aimed at top-mounted cases and one at bottom-mounted ones. Granted, that would make an expensive PSU slightly more expensive, but you get what you pay for.
After installing the PSU and booting the system, I thought I finally had my first dud Corsair: the fan did not start up. However, after a bit more research, I confirmed that this fan is temperature controlled and will only spin as fast as is needed to keep the PSU cool. These small things would not have “surprised” me had I had proper time to investigate and research beforehand.
This is the second build in which I am using the Lian-Li PC-V2010 full tower. The case has a dual-chamber design that is great when several hard drives make up the build: the second chamber keeps the heat the drives generate away from the CPU and the graphics card. The case is made from aluminium, and unlike the normal el-cheapo aluminium cases, this aluminium just feels solid. My only criticisms of this case are the lack of front USB 3 ports, and that the included CPU power cable is only four-pin and not the “standard” eight-pin we see on virtually all new motherboards.
Cable management was semi-OK, and I am sure that, given more time, I could have made it even better. At no stage did I feel cramped when working inside the case. I would recommend removing the PCI support bar when installing tall CPU coolers, as mine did have a small problem with it. Apart from that, the case lived up to expectations in all respects. I definitely like the Lian-Lis for their build quality and aesthetics. Not my first Lian-Li case build, and definitely not my last either.
Yes, oops, one of the components I forgot to include in my original quote. However, since the order consisted of nine Vertex 3 SSDs, this is an excusable offence. Nothing fancy to report on the graphics card; it is the second-“cheapest” card I could find, and it will work for the purpose we need it for. Pairing this computer with a high-end graphics card is bound to cause some excitement at LAN parties. In any event, that did not happen, and only a small graphics card with no frills and no fuss was mounted. It was such an uneventful component that I forgot to take a picture of it…
Yes, another item not on my original quote, just salvaged from my own PC. Fortunately, as long as they have a SATA connection and can read and write a DVD, they are all the same to me.
Once again, a quick word of thanks to the client for picking up the tab on the items omitted from the original quote.
So, I managed to obtain all the parts on Friday afternoon and started the build that evening. Unfortunately, Saturday afternoon was the opening match of the Rugby Championship between the Springboks and Argentina. Since this would be the only international rugby game played in Cape Town this year, I had booked the tickets months in advance and made plans with friends and family to join me for the event. (PS: international rugby tickets make great birthday presents!) Entertaining the family the whole weekend meant the PC unfortunately had to take a slightly lower priority on Saturday.
At this stage, I think it would be wrong not to mention the rugby event. I wanted to test out my new 70-200mm F2.8 Nikon lens paired with my Nikon D300s. However, I was refused entry at the gate: the organisers deemed this lens a “professional lens”. They are scared that I might take some great photos and as a result deny professional photographers revenue. I am sorry, but that is just a load of BS. If I can take a better photo from the stands than somebody standing right next to the players, then let’s swap places: I will sit on the ground and you can have my cramped little seat. Entry was only allowed once I agreed that they could seal my 70-200mm lens and that I would shoot with something else. Losing the 70-200mm F2.8 meant I was left with only my fixed-length 50mm F1.8. The fixed focal length meant no zoom capability, but at least the wider aperture allowed me a different perspective, and yes, in my opinion I still managed to get some good photos.
PS: I should add that the above photo was only cropped and no other editing was done… unlike another kind of industry that has since come under the spotlight for the “artistic manipulations” performed for magazine covers. Enough diversion; let us get on with the exciting stuff.
As mentioned, I received all the components and started the build on Friday evening. With family arriving that same evening for the rugby game, I did not get very far. Saturday morning and afternoon were spent with friends and family (some of whom came from quite far) enjoying the rugby match. Finally, on Saturday evening, with most of the family commitments done, everyone gone to bed, a nice game of rugby behind us and lots of biltong, it was 23:00 and I could finally get back to the PC. I do apologise for not taking as many photos as I normally do, but with the deadline looming, I just did not have the time.
02:00 AM. PC assembled; I now just needed to get my RAID controller out of my computer and into this build without causing any data loss on my array. This should be fun, considering that my own setup also hosts a bunch of Vertex 3 drives in RAID 0, and the array we want to move is now also my OS array. I had installed Acronis Migrate Easy on my PC a month or so earlier, when I first moved the OS across to the new array, only to realise the freeware version had since lapsed. I am a big advocate of supporting software companies, so I went ahead and bought myself a copy. This is a great piece of software, so if you are anything like me, always swapping and changing things in your computer, then I can recommend it.
By the time I was ready to make an image of the system, it was 02:30 AM. As history will confirm, 02:30 AM is not the best time to be doing complicated data transfers involving various complex arrays, especially when said arrays contain the operating system. I plugged in an old spare drive, the same one I had used previously for staging the OS when I moved the partitions around. Since I would be using it again, I cleared its partitions in anticipation of transferring my current array onto it. Moved some files around, moved some more data around, and everything was hunky-dory. One last reboot of the system, and then I could start making the image. As I waited to sign into Windows, the dreaded “unable to find operating system, please press enter to continue” appeared. This is NOT the message you want to see at 02:30 AM when you are on a tight deadline.
Bottom line, I screwed up somewhere (I still do not know where exactly); I think the screw-up happened when I created a dynamic array to mirror my “program files” drive with another. Either way, I had just lost the ability to boot into Windows. Knowing that if I unplugged the RAID controller now I would lose all chance of recovering anything, I was stuck between a rock and a very hard place. To make matters worse, the drive that I had used a month earlier to stage the OS during the controller upgrade had just had its partitions deleted in anticipation of the new image. I was stuck: unable to boot into Windows, unable to use the Acronis software to make an image of the OS array, and my previous backup freshly deleted. Smart Skoups, very freaken smart! Realising it was best to get to bed rather than cause more trouble, I left the controller and everything as is…
Sunday morning, 07:00. After a short sleep interrupted by images of me reinstalling Windows, Windows activations and reinstalling tons of software, I figured, let us just see what the damage is. I made some quick attempts at fixing my OS, called it quits, cut my losses, unplugged the RAID controller and moved it into the new PC. I knew that unplugging the controller meant permanently losing the ability to attempt a recovery on my own OS array, and it was not an easy decision to make. However, it was Sunday morning, the client was bound to pick up the PC any hour now, and I had not even powered their system on yet. Whoa, no pressure! No time to worry about my own system; we can solve that later. Let the games begin!
Install Windows Server 2008 R2 Enterprise. The first bugs started to hit: Intel has crippled the software for the Intel 82579V LAN controller so that it will not install on any Windows Server product. Intel claims this is because the controller was never designed for server environments. Sorry, that is a crap excuse; egg boxes were never designed to be sound insulators either, so Intel, please do not give me that crap. I turned to Google and found a very detailed page, written by somebody as annoyed as I was, detailing exactly how to hack the Intel driver by changing three lines. Intel drivers hacked; Ethernet up and running; next. I only had a couple of hours left, and this kind of useless attempt at forcing me to choose a different component was just wasting my time. OK, Ethernet sorted, now we can get the drivers loaded. I installed all the drivers, but had issues with the USB 3 drivers, plus an issue where the PC would lose its boot drive on reboot. This is not cool, ASUS, not cool at all! More Googling; it turned out the UEFI BIOS combined with the Marvell chipset was the culprit. OK, so let us just update the BIOS… I mean, how difficult can that be? Well, it turned out it is not just a matter of plug and play: first you need to convert the existing BIOS to a new file format, and only then can you install the new one. Done. Reinstalled the USB drivers; sweet, they work. Everything green in Device Manager. Restarted the PC several times, no problem. Switched off the PC several times, no problems. Sweet, move on; only an hour left before pick-up.
Then the moment that makes all the above stress, pressure, deadlines and adrenaline worthwhile… the results of my first benchmark: I hit 3GB per second read and 2GB per second write! No, that cannot be true… rerun… confirmed… restart… reran… confirmed… reran… confirmed… holy crap, we just broke the 3GB/s read barrier! Shall we just consider that for a minute: we have just read three gigabytes in a single second! To put 3GB in perspective, consider the Holy Bible (no offence to any other religion, but it is the most popular book in the world). Based on this version (http://www.truth.info/download/bible.htm), the Holy Bible contains 858,195 words, 31,102 lines and 4,587,478 characters (including spaces). At one byte per character, that is roughly 4.6MB of text, so this system can read the equivalent of about 650 Bibles every second, and write out about 435 every second. Compare that to the fastest “writing” speed back in the days when the first Gutenberg Bible was produced: Johannes Gutenberg produced an estimated 200 Bibles, and it took him almost five years. Here we have something that reads his entire life’s print run, three times over, every single second!
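For anyone who wants to check the Bible arithmetic, at one byte per character the sums are simply:

```python
chars_per_bible = 4_587_478         # characters incl. spaces ~= bytes of plain text
read_rate, write_rate = 3e9, 2e9    # bytes per second from the benchmark

bibles_read_per_sec = read_rate / chars_per_bible      # ~650
bibles_written_per_sec = write_rate / chars_per_bible  # ~435
```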
I unfortunately do not have a lot of benchmarks on the system, given the time I was allowed to test it, but below are some of the numbers that I stored.
The client originally requested the installation of Windows Server 2008 R2 Enterprise Edition. As requested, the operating system was loaded (they would use their own activation codes) and all the updates, patches and drivers were sorted. As mentioned, sorting out the drivers was not that straightforward, given Intel’s move to cripple them on server-based operating systems. The client arrived to collect the system just as I was running a couple more benchmarks, when suddenly all hell broke loose…
The RAID 0 array failed just as the benchmarks displayed on screen. Some expletives followed from my side. The whole morning all benchmarks had run smoothly, consistently breaking 3GB/s, and the moment the client entered, disaster! First things first: silence the alarm, check that the drives were properly plugged in. Long story short, we eventually traced it down to the two bottom drives in each of the Lian-Li 4 x 2.5” bays. The power connector requires a straight connection, not the “in-line” connection that comes standard with most power supplies. Fortunately, I had some old stock from a previous build and managed to run them off a Molex connector. Phew, that was close.
The client took the PC home, ready to install SAS. I was exhausted and decided to take a well-earned nap, when my phone rang an hour or so later. “Skoups, help! SAS does not want to install…” A quick diagnostic: the version of SAS they have is not for server architectures (he took the wrong copy); they needed Windows 7 installed. Instead of talking the client through setting up all the RAID drives, drivers and so on, it was much easier to just do it myself. The client returned the PC, I installed Windows 7, updated all the drivers, ensured everything worked as it should in the new OS, and finally helped him install SAS (but not before running one last benchmark for good measure…). We ran some quick SAS programs to see that SAS works, and boy, is she fast! I threw in a quick SAS coding tip — “proc sort data=whatever sortsize=max;” — and suddenly this PC was in her element, processing data at a ridiculous rate of 3GB/s!
Working on my first system capable of processing data at 3GB per second is an experience one will always cherish. You double-click on a 100MB file, and the file is open. You double-click on a 1GB file, and the file is open. I remember, not too long ago, when I bought my first “big” computer. It contained a 100MHz Intel Pentium processor, overclocked to 120MHz, paired with a standard 850MB drive. A couple of years later, I upgraded to a 30GB Seagate drive. Today, I could read that entire 850MB drive in a fraction of a second, and the 30GB drive in 10 seconds. I can go on about just how cool it is to work on a system this fast, but you will just need to actually do it before you will appreciate it. I have been fortunate to have a lot of “great firsts” when it comes to benchmarks — hitting my first 10,000 score in 3DMark03, or more recently building my first 18TB drive array for a client — they were all great moments, but hitting that 3GB/s read is definitely one of the best yet!
I received some feedback the following week: the client confirmed that jobs which would have taken several hours on their existing platform are now processed in a matter of minutes. So yeah, looks like it is mission accomplished…
All that I can say is that was one hell of a weekend…
PS: I managed to recover the deleted partition on my “previous” transfer drive, so I am just using that.
PPS: my supplier got the RAID cards back in stock; I picked up two of them and some extra Vertex 3 drives… watch this space, as I have a new target in mind for my own rig… mhwahahaha…
PPPS: apologies for the quality and quantity of the photos, and the complete lack of benchmarks; I just did not have sufficient time to do everything I wanted with this rig.