Sun, 03 Apr 2005

3ware 9500 Notes

I recently upgraded a Pogo Linux StorageWare server. The server came with a 3ware 8506-8 SATA controller and seven 250 GB drives, which we configured as a RAID 10 array with one spare (the OS is on a separate 80 GB drive). The server has 16 drive bays in total, so we decided to buy a second controller and seven more drives to be configured as another RAID 10 array with one spare.

I had a couple of problems getting the new card working. First, I had to upgrade the kernel to get the 3w-9xxx module; the Debian package for 2.6.10 worked. The next problem was getting the 3ware management utilities running. 3ware provides two utilities: a command-line tool called tw_cli and a web-based interface called 3dm2. According to their web site, the "in Engineering Phase" software should be used with a 2.6 kernel. When I tried that software, though, both tw_cli and 3dm2 reported "Application too old for controller firmware". I contacted the always helpful Pogo support folks, who advised me to use version 9.1.5.2 of the 3ware software, which is a released version. The 9.1.5.2 software was able to detect the 9500 but not the 8506, so I tried version 9.2, which was able to access both. So it seems that "in Engineering Phase" really means outdated, since that software is older than the released version.
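For reference, once a compatible version is installed, a quick way to confirm that both cards are visible is to query them with tw_cli; the controller IDs below are just examples and depend on the order the cards are detected in:

    tw_cli show          # list the 3ware controllers the software can see
    tw_cli /c0 show      # units, ports, and drive status on the first controller
    tw_cli /c1 show      # same for the second controller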

Having gotten the 3ware software working, I was able to create and delete arrays from within Linux rather than having to use the BIOS utility, which let me benchmark various RAID configurations. I tested RAID 10, RAID 50, and RAID 5 with 6 drives and with 7 drives. (RAID 10 is striping across mirrored sets of disks; RAID 50 is striping across RAID 5 arrays.) In every configuration, the 9500 blows the 8506 out of the water. I ran my benchmarks using bonnie++ (roughly as in the commands sketched below). With RAID 10, I was able to get about 170 MB/s writes and 185 MB/s reads. Setting a large read-ahead value of 16384 (blockdev --setra 16384 /dev/sdc1) significantly improves read performance, so I will have to tune it for best performance once we put the new drives into production. Striping across two 3ware RAID 5 arrays or three RAID 1 arrays using LVM gave almost as good performance as 3ware's own RAID 50 and RAID 10 arrays, respectively.
With RAID 5, using 7 drives in the array rather than 6 gives better read performance, but poorer write performance. The additional drive also seems to adversely impact file creation and deletion speeds.
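For anyone who wants to reproduce these numbers, the read-ahead tuning and bonnie++ run look roughly like this; device names, mount points, and sizes are examples, not necessarily the exact values used here:

    # check and raise the read-ahead (in 512-byte sectors) on the array
    blockdev --getra /dev/sdc1
    blockdev --setra 16384 /dev/sdc1

    # run bonnie++ against the mounted array; the file size should be well
    # above the machine's RAM so caching doesn't skew the results
    bonnie++ -d /mnt/array -s 8192 -u nobody

The LVM stripe across two 3ware units can be built with the usual pvcreate/vgcreate/lvcreate sequence, something like (again, example names and sizes):

    pvcreate /dev/sdc1 /dev/sdd1
    vgcreate vg_bench /dev/sdc1 /dev/sdd1
    lvcreate -i 2 -I 64 -L 400G -n lv_bench vg_bench    # 2 stripes, 64 KB stripe size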

XFS seems to give the best overall performance. Ext3 gives good read performance but mediocre write speeds. Reiserfs is very fast at creating and deleting files, but its throughput isn't as good.
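For completeness, each filesystem can be created on the test device with its mkfs tool before running the benchmark; /dev/sdc1 is just an example device, and these use the default mkfs options:

    mkfs.xfs /dev/sdc1
    mkfs.ext3 /dev/sdc1
    mkfs.reiserfs /dev/sdc1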

tech | Permanent Link

The state is that great fiction by which everyone tries to live at the expense of everyone else. - Frederic Bastiat