Areca RAID Controller and Performance
In this article I will provide a brief overview of the product and a variety of benchmarks comparing the Areca's RAID0 performance against my motherboard's on-board nForce4 RAID controller.
Areca 1210 Features
Some of the key features of the Areca 1210 include:
- PCI-e x8 bus
- 256MB On-board DDR-333 SDRAM Cache with ECC
- 4 SATA-II interfaces
- Intel IOP 332 I/O Processor running at 500MHz
- Write-through and Write-back cache support
- Support for RAID 0, 1, and 5
- Staggered hard drive spin-up
- Full BIOS configuration control
- HTTP client configuration control for remote/Windows management (screen shot below)
What separates the Areca controller from all motherboard and many other add-in RAID controllers is the 256MB of cache. This, as you will see, offers a huge advantage in burst as well as general usage since the controller can use this relatively large cache to buffer write backs and also use it for read-ahead caching.
Compatibility and Other Considerations
One of the main questions I had before purchasing this card was whether it would work with my motherboard. Here's what I learned that may be helpful to other prospective buyers:
- The Areca card has a physical x8 connector and will therefore physically slot into any x8 or x16 connector on your motherboard. The card will also negotiate available bandwidth from the bus, so if the slot only has 1, 2, or 4 PCI-e lanes wired to it, the card will use whatever it can up to its native 8 lanes. Areca confirms that there is a serious performance issue when using the card with a single PCI-e lane (capping throughput at 100Mbps), so be sure you have at least two lanes available to the card. Having said that, it's not necessary to go out of your way to run the card with a full 8 lanes... Consider that a single PCI-e lane is capable of 250MB/s of throughput in each direction, and you can see that even with 4 Raptors the card will not saturate 2 lanes.
- Not all PCI-e x16 connectors on your motherboard are created equal. On my motherboard, which has two such connectors for SLI, they run either in x16/x1 mode (for a single graphics card) or in x8/x8 for SLI. Hence, I had to enable dual graphics cards in my BIOS to get the appropriate lane configuration for my connectors. Some SLI boards may not even support anything other than a graphics card in the second slot, so check with your board manufacturer before making any decisions. Most NF4 and Intel 975X/P965 and newer motherboards should work just fine with the Areca card in the second x16 slot, but try to confirm with Areca, your motherboard maker, or other users via the forum links below before you make a purchase.
- If you are the owner of an ASUS A8N-SLI Premium like I was (at the time of this review), there is a known problem with running the Areca in the secondary PEG slot. Not surprisingly, it's a BIOS issue that ASUS never bothered to fix during the life of this product. At any rate, it required that I move my graphics card to the secondary PEG slot and install the Areca in the primary slot... a bit of a hassle given my water cooling loop configuration, but once that was all sorted out, everything worked like a charm.
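The lane-bandwidth reasoning above can be sketched as a quick back-of-the-envelope calculation. The per-lane figure is PCI-e 1.x's nominal 250MB/s; the per-Raptor streaming rate used here is an illustrative assumption, not a measured value:

```python
# Rough PCIe 1.x bandwidth check: how many lanes do N drives actually need?
PCIE1_LANE_MBPS = 250  # nominal PCIe 1.x throughput per lane, per direction

def lanes_needed(num_drives, drive_mbps):
    """Smallest negotiated link width whose bandwidth covers the drives' combined peak STR."""
    total = num_drives * drive_mbps
    lanes = 1
    while lanes * PCIE1_LANE_MBPS < total:
        lanes *= 2  # PCIe links negotiate power-of-two widths (x1, x2, x4, x8)
    return lanes

# Assuming roughly 75 MB/s peak sequential rate per Raptor (illustrative figure):
print(lanes_needed(4, 75))  # 4 drives -> 300 MB/s total -> an x2 link suffices
```

Even four Raptors at full streaming speed stay comfortably under two lanes' worth of bandwidth, which is why chasing a full x8 slot isn't worth the trouble.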
Other questions and considerations you may have as I did:
- Battery backup is not required to enable write-back caching. This does leave you exposed in the rare circumstance where your PC might suffer a power loss with unwritten data still in the cache. For home users, unless you are in South Carolina during a hurricane, the risk of loss is probably too insignificant to justify the battery backup option, and even a cheap UPS should put you at ease if this is really a concern. Areca does sell a battery backup option, but it is ridiculously overpriced.
- The controller comes with both a HSF (heatsink with fan) and a passive heat sink. The fan was clearly audible in my PC (which is water cooled and therefore fairly silent), so I replaced it with the passive sink. I have good airflow directly over the expansion cards and motherboard, which is indicated as a prerequisite for using this heatsink configuration. The card gets very warm to the touch, further reinforcing that requirement.
- Signed Vista drivers are available for both x86 and x64. The only caveat with x64 is that, as of this writing, there was still no support for the HTTP management console in Vista x64, so all configuration must be done in the BIOS. I'm not a fan of the HTTP management console anyway, as it requires an additional service to be running on your machine, and changing my RAID configuration is not something I do often enough to need a GUI for it.
The box and contents are shown in the pics below (WD Raptor not included!).
Besides the card itself, you get a passive heat sink, a half-height card bracket, a very good manual (unusual these days), driver CD, and 4 long SATA cables with right angle connectors on one end.
It's clear that Areca has its focus on the IT market and not the average modder, because the location of the SATA connectors on this card is an absolute nightmare for trying to keep cabling unobtrusive and clean... The connectors are on the top edge of the card near the bracket!
You can see in the next picture where the SATA cable connectors are... hardly an ideal location! I would have much preferred they be at the end of the card.
A good thing is that the card is only half height and not much longer than an x16 PCI-e slot connector. While it still may interfere slightly with aftermarket southbridge coolers, you can see from the following picture that at least I was able to maintain the use of my MCW-30 water block on the NF4 chip. (However, as noted above, this configuration was temporary, as I later needed to swap the Areca and GPU.)
BIOS Configuration and Drivers:
The installation and setup is very straight-forward and all options are very well explained in the accompanying manual. The basic install steps are:
- Install the physical card and connect your drives to the SATA ports on the card
- Boot your PC and wait for the Areca BIOS screen which will appear after your normal POST screen - Press TAB to enter BIOS within 5 seconds
- In the BIOS, configure your array or JBOD as you wish... be sure to enable write-back caching and read-ahead caching for maximum performance. You may also wish to turn off array truncation if you are using matched disks, to avoid losing a bit of space (truncation rounds each disk down to the nearest 10GB boundary that fits on the smallest disk). For my Raptors this reduced a 148GB dual drive array to 140GB.
- Reboot your PC and adjust the boot order in your motherboard BIOS if necessary.
- Install your OS. With Vista, you can install the RAID drivers from a USB memory stick (Yay!) instead of a floppy. However, before installing Vista on a RAID array, make sure all your other drives are disconnected from your system otherwise you risk running into the issue I had.
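The truncation behavior mentioned in the BIOS step above can be sketched numerically. This is just my reading of the 10GB rule, not Areca's exact algorithm:

```python
# Sketch of the array truncation rule: each member disk is effectively
# rounded down to a multiple of the truncation granularity (10GB here),
# based on the smallest disk in the array.
TRUNC_GB = 10

def raid0_capacity_gb(disk_sizes_gb, truncation=True):
    """Usable RAID0 capacity; with truncation, members are cut to a 10GB boundary."""
    per_disk = min(disk_sizes_gb)
    if truncation:
        per_disk = (per_disk // TRUNC_GB) * TRUNC_GB
    return per_disk * len(disk_sizes_gb)

# Two 74GB Raptors: 148GB raw, 140GB with truncation enabled
print(raid0_capacity_gb([74, 74], truncation=False))  # -> 148
print(raid0_capacity_gb([74, 74]))                    # -> 140
```

This matches the 148GB-to-140GB drop I saw on my dual Raptor array, and shows why truncation is only worth keeping on when you expect to replace a failed drive with a slightly different model.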
After an initial test to ensure the card was working properly, I removed it and swapped the fan-based heat sink for the passive one. The heat sink is retained on the card by a pair of push pins, with a thermal adhesive pad between the IOP core and the heat sink. I removed the thermal pad from the passive sink and used some higher-performing Chomerics T412 thermal tape instead.
I ran a limited series of benchmarks comparing the Areca 1210 to my motherboard's nForce4 RAID controller, with the key objective of ensuring the card I purchased was working to spec and that it was indeed faster than on-board RAID, as so many had claimed. Note that I was not attempting to prove or disprove the merits of RAID0 with this testing - that's been debated at length like a religious argument on many forums and I wasn't about to digress into it.
The test system consists of my now old 939 platform:
- AMD X2 4400+ OC'd to 2.65GHz with 2GB of RAM
- My bloated Windows XP SP2 image (using Acronis True Image Home 10 to rebuild the partition after each array reconfiguration - an amazing program that deserves a review of its own)
- 4x Western Digital 74GB Raptors with SATA-I interfaces, 8MB Cache, Firmware FLA2
- I used a typical industry stripe size defined by the simple formula: 64K/#drives... so for the dual drive arrays, I used a stripe size of 32K and for the 4 drive arrays, I used a stripe size of 16K.
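The stripe-size rule of thumb above works out as follows. This just restates the 64K/#drives formula from my test setup, not a recommendation from Areca:

```python
# Stripe-size rule of thumb: divide a 64K full stripe evenly across the
# drives, so each full-stripe access spans exactly 64K regardless of array width.
def stripe_size_kb(num_drives, full_stripe_kb=64):
    return full_stripe_kb // num_drives

print(stripe_size_kb(2))  # -> 32K for the dual drive arrays
print(stripe_size_kb(4))  # -> 16K for the 4 drive arrays
```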
The drives were mounted in a CoolerMaster 4-in-3 drive cage as shown below. I then ran the benchmarks on both 2 and 4 drive RAID0 arrays running off either the NF4 or Areca controller.
HD Tach and HD Tune
HD Tune and HD Tach both report similar metrics. I used the long bench test on HD Tach.
As you can see, the NF4 controller did NOT play well with the HD Tune benchmarking utility at all. Its results were terrible. Here are the HD Tune curves for the NF4 and Areca RAID controllers... note how the NF4 curve is a terrible saw-tooth.
The NF4 controller also didn't play well with HD Tach's long bench with 4 drives. Look at the bizarre HD Tach curve below. This was repeated on numerous runs. The short test in HD-Tach produced a smoother curve but the average STR was not much better.
Admittedly, HD Tach is not widely regarded as an accurate RAID benchmarking utility, but it can be somewhat indicative of underlying problems. I don't know what the underlying problems with the NF4 controller are, but there is definitely something not right with it, since the Areca performed exactly as expected under both benchmarking tools.
Here's where the 256MB of onboard cache on the Areca really comes into play... That's a nearly three-fold improvement in burst rates! Note that HD Tune is more conservative with its burst rate metric, but the difference between controllers is equally staggering.
Random Access Time:
These WD Raptors have individual seek times of 4.6ms, so it's not surprising to see access times increase in a RAID0 array (since both drives need to seek to the right sector). The latency for the Areca is about 15% lower, I'm guessing due to the cache or improved disk control algorithms.
CPU Usage as reported by each tool during benchmarking is interesting but hardly reliable, especially on my system where I didn't go out of my way to shutdown every background service or program.
The next set of benchmarks I ran used IOMeter, with a 2 minute write and a 2 minute read test on each array configuration. Both controllers were very competitive in these metrics, so the results require no further comment.
DiskBench is a simple utility that times the creation or reading of files from the disk system. It is highly sensitive to caching so I did the tests by rebooting between creations and reads and used file sizes that exceeded cache size for the Areca.
DiskBench timed the duration to read a 1GB file from the array (analogous perhaps to a large Outlook PST file) and the creation/writing of a 500MB file (perhaps analogous to writing a large compressed video file to disk). The results were then converted to net transfer rates in MB/s.
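The conversion itself is simple arithmetic; for clarity, here it is spelled out. The timing used below is an illustrative placeholder, not one of my measured results:

```python
# Converting a DiskBench timing to a net transfer rate:
# rate (MB/s) = file size in MB / elapsed time in seconds
def transfer_rate_mbs(file_size_mb, elapsed_s):
    return file_size_mb / elapsed_s

# Illustrative numbers only: a 1GB (1024MB) read completing in 8.0 seconds
print(transfer_rate_mbs(1024, 8.0))  # -> 128.0 MB/s
```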
As you can see, the Areca card generally performed significantly faster than the NF4 controller. Read performance with the NF4 did not scale very well going from 2 to 4 drives, whereas the Areca controller's read performance scaled well. In fact, the Areca could read files faster from just a dual drive array than the NF4 controller could from a 4 disk array! As for write performance, the Areca posted incredible write numbers regardless of the number of drives, thanks to the write-back caching feature. The NF4's write performance again lagged the Areca considerably and did not scale with added drives in the array.
Real World Applications:
I was as curious as anyone to understand how the Areca and its great benchmarking performance translated into real-world performance gains.
I used a stopwatch to time the loading of my bloated XP SP2 install. I started the timer as soon as the POST screen disappeared and stopped it as soon as the disk thrashing subsided after the desktop appeared. Realistically, I would say the times recorded below are accurate to within +/- 2 seconds.
Load Level in BF2:
Loading a level in BF2 can appear to take an eternity, especially when you are trying to get one of the last slots on a loaded server. I thought that many people could relate to this benchmark, so I timed the duration to load the Dalian Plant level in a single player BF2 game, from the time you select the map to the time it says "Join Game". Again, I would say my stopwatch accuracy is perhaps +/- 1 second on this.
As you can see, loading XP was really within the margin of error for all RAID0 configurations. This is probably explained by the fact that loading XP is not just about disk performance but is a function of your overall system, especially your CPU and memory but even things like getting an IP address from your DHCP server can affect Windows load times.
BF2 was more interesting... The Areca dual drive array, NF4 four drive array, and Areca four drive array all scored almost identically, but clearly beat the dual drive NF4 load time. I believe this illustrates that level loading in BF2 can be disk limited with slower disk systems (perhaps single disks and the dual drive NF4 array), but past a certain point disk performance plays no role in level loading, as the CPU and memory again take over as the bottleneck. Thus it may be the case that going from on-board RAID to an Areca could save you 10-15% in level loading times, but that would require more exhaustive testing to be sure.
These are all the benchmarks I had time for. I'm sure if I ran some benchmarks with large media files, I would find a significant gap between the Areca and the NF4 in these particular applications. As it was, with highly CPU intensive file loading operations like the OS and a game level, the Areca controller really doesn't buy you much (at least on a system like mine).
Is it worth buying? If you are an extreme performance enthusiast who takes pride in benchmarking their system, or you work with large media files, then absolutely. If you are just an average gamer and home productivity user, you will not likely reap any rewards from such an investment.
Now that I have Vista running, I want to redo some of these benchmarks to see what changes under Vista (i.e. how does Superfetch change things). I'm also building a quad-core rig soon and as part of that will install Vista on a small partition on the 4 disk array to see if any further gains can be had from "short-stroking". I'll report back when I've got some additional results.
Links and More Information:
- Areca Home Page (the Areca KnowledgeBase has lots of useful information and is updated regularly. Areca is also surprisingly responsive to questions.)
- XtremeSystems Storage Forum (Areca is becoming very popular here)
- 2CPU Storage Forum (heavy IT focus but lots of Areca users here)
- StorageReview (heavy IT focus but lots of storage experience here)
- Xbit Labs Areca 1220 Review