
HackPro with Hardware SSD RAID


tim2o

Just thought I'd share a brief video of my most recent build. I might post some images of the box later.

 

Specs:

Quad-core Xeon X3350

6GB DDR2 800 RAM

RocketRAID 3520 Hardware RAID controller

2X OCZ Vertex 60GB SSD Drives in RAID 0 (16K Blocks)

2X WD Velociraptor 10K RPM 150GB in RAID 0

2X Seagate 1TB Drives in RAID 0

Mac OS X 10.5.7 w/ 9.7 kernel (vanilla), all drivers working.

 

This video is of the rig opening almost every app on the system drive (SSD RAID) in a very short amount of time.

 

http://www.youtube.com/watch?v=zMOFHAO9AHo


:-| I want to go to there

 

Could you do me a favour and post an xBench benchmark for the three RAID volumes? I've been dabbling with RAID in my Hac, it's really added a big speed boost - but nothing like this!


:-| I want to go to there

 

Could you do me a favour and post an xBench benchmark for the three RAID volumes? I've been dabbling with RAID in my Hac, it's really added a big speed boost - but nothing like this!

 

I've read a lot of comments about SSD performance degradation over time. Are you aware of that? Is it really a problem?


:-| I want to go to there

 

Could you do me a favour and post an xBench benchmark for the three RAID volumes? I've been dabbling with RAID in my Hac, it's really added a big speed boost - but nothing like this!

 

SSD (2X OCZ Vertex 60GB) RAID 0 Array:

 

Disk Test    694.29
    Sequential    430.98
        Uncached Write    413.56    253.92 MB/sec [4K blocks]
        Uncached Write    791.23    447.68 MB/sec [256K blocks]
        Uncached Read    209.01    61.17 MB/sec [4K blocks]
        Uncached Read    1227.22    616.79 MB/sec [256K blocks]
    Random    1784.66
        Uncached Write    881.16    93.28 MB/sec [4K blocks]
        Uncached Write    1443.57    462.14 MB/sec [256K blocks]
        Uncached Read    8320.36    58.96 MB/sec [4K blocks]
        Uncached Read    3406.63    632.12 MB/sec [256K blocks]

 

 

Conventional 10K RPM (2X 150GB WD Raptor) RAID 0 Array:

 

Disk Test    685.97
    Sequential    426.53
        Uncached Write    410.66    252.14 MB/sec [4K blocks]
        Uncached Write    789.62    446.77 MB/sec [256K blocks]
        Uncached Read    205.36    60.10 MB/sec [4K blocks]
        Uncached Read    1239.48    622.95 MB/sec [256K blocks]
    Random    1751.01
        Uncached Write    852.14    90.21 MB/sec [4K blocks]
        Uncached Write    1449.44    464.02 MB/sec [256K blocks]
        Uncached Read    7892.01    55.93 MB/sec [4K blocks]
        Uncached Read    3398.42    630.60 MB/sec [256K blocks]

 

 

Conventional 7,200RPM (2X Seagate 1TB) RAID 0 Array:

 

Disk Test    497.28
    Sequential    425.19
        Uncached Write    409.25    251.27 MB/sec [4K blocks]
        Uncached Write    791.22    447.67 MB/sec [256K blocks]
        Uncached Read    204.89    59.96 MB/sec [4K blocks]
        Uncached Read    1220.18    613.25 MB/sec [256K blocks]
    Random    598.81
        Uncached Write    179.63    19.02 MB/sec [4K blocks]
        Uncached Write    1440.07    461.02 MB/sec [256K blocks]
        Uncached Read    8059.25    57.11 MB/sec [4K blocks]
        Uncached Read    3397.09    630.35 MB/sec [256K blocks]
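
To make the three sets of numbers easier to compare side by side, here's a trivial Python summary of just the random 4K lines above (the values are copied straight from the runs, nothing newly measured):

    # Side-by-side of the random 4K results posted above (MB/sec).
    results = {
        "2x OCZ Vertex SSD RAID 0":   {"rand 4K write": 93.28, "rand 4K read": 58.96},
        "2x WD Raptor 10K RAID 0":    {"rand 4K write": 90.21, "rand 4K read": 55.93},
        "2x Seagate 1TB 7.2K RAID 0": {"rand 4K write": 19.02, "rand 4K read": 57.11},
    }
    for array, nums in results.items():
        print(f"{array:28s}  write {nums['rand 4K write']:6.2f}  read {nums['rand 4K read']:6.2f}")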

 

I've also tried making somewhat of a hybrid RAID 0 array, mixing the SSDs with the 10Ks (CH1 SSD, CH2 10K, CH3 SSD, CH4 10K). It didn't work out so well, and it performed worse than either did on their own... I'm planning on adding another 2 SSDs to the array and seeing how well that benchmarks. The RAID controller has 8 channels, so the sky is the limit for now.

 

 

I've read a lot of comments about SSD performance degradation over time. Are you aware of that? Is it really a problem?

 

I've been using the array for about 2 months now, and I haven't noticed any slow-down. The benchmarks above were taken just now, and compared to the ones I took the day of the install, they're still up to par.

 

I'll post any findings on that front as I encounter them.


  • 1 month later...

I got a request for steps needed to piece this config together. Since I answered it anyway, I figured I'd post it so others could benefit. Here it is:

 

To get the Hardware RAID working, you first have to install and boot from a non-RAID disk. This disk should be different from the SSD disks you plan on using for the array, and should be connected to a regular SATA or PATA port on your motherboard, or non-RAID controller card.

 

Once booted from your non-RAID disk, go to HighPoint's website and download the Mac OS X driver for your hardware RAID card. Install this driver onto the non-RAID disk you're booted from. *Do not install the EFI firmware update; I've found it breaks the boot capability of these particular RAID cards.*

 

Next, enter the RAID controller's configuration utility (either by pressing the required key during boot up, or through the Web management utility the driver package can install). Set up a Striped array with the smallest possible block size. Also, just to be sure, set the newly created array as bootable through the options in the RAID utility.

 

Next, reboot to the non-RAID disk and run the Apple Disk Utility. The new array should show up there. You'll need to format the array as a GUID disk with Mac OS Extended (journaled) filesystem. If at this point you do not see the RAID array in the Disk Utility, there was likely a problem with the installation of the RAID driver, or with the creation of the RAID array.
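
If you'd rather do this step from Terminal instead of Disk Utility, a rough Python sketch of the same thing using the stock diskutil command looks like this; the disk identifier and volume name are placeholders, so check diskutil list for what your array actually shows up as:

    # Rough Terminal equivalent of the Disk Utility step above (identifiers are placeholders).
    import subprocess

    ARRAY = "disk2"            # placeholder: whatever identifier the RAID array gets
    VOLUME_NAME = "RAIDBoot"   # placeholder volume name

    # List attached disks so you can confirm the array is visible before touching anything.
    subprocess.run(["diskutil", "list"], check=True)

    # GUID partition map with a single Mac OS Extended (Journaled) volume spanning the array.
    subprocess.run(
        ["sudo", "diskutil", "partitionDisk", ARRAY, "GPT", "JHFS+", VOLUME_NAME, "100%"],
        check=True,
    )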

 

Once the disk is done partitioning/formatting, you should now see it on your desktop. At this point, I recommend using a utility like Carbon Copy Cloner or Clonetool Hatchery. Use one of these utilities (I personally prefer Carbon Copy Cloner) to copy the entire contents of the non-RAID drive you are booted from onto the new RAID volume.
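
If a GUI cloner isn't an option, Apple's bundled asr (Apple Software Restore) tool can do roughly the same clone from the command line. This is just an alternative sketch with placeholder volume paths; note that --erase wipes the target, so double-check them:

    # Hypothetical command-line alternative to Carbon Copy Cloner, using Apple's asr.
    # Volume paths are placeholders; --erase wipes the target volume.
    import subprocess

    SOURCE = "/Volumes/NonRAIDBoot"   # the disk you're currently booted from (placeholder)
    TARGET = "/Volumes/RAIDBoot"      # the freshly formatted RAID volume (placeholder)

    subprocess.run(
        ["sudo", "asr", "restore",
         "--source", SOURCE,
         "--target", TARGET,
         "--erase", "--noprompt"],
        check=True,
    )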

 

Once the data is done cloning, you'll need to install a bootloader to the new array. I recommend Chameleon 2. Open the installer package, and make sure you change the install target to the RAID array.
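
The installer package is the easy route; if it refuses to target the array for some reason, the classic manual install of Chameleon's boot files is the fallback. A rough sketch, assuming you've pulled boot0, boot1h and boot out of the Chameleon package, with the disk identifiers and paths as placeholders (get the real ones from diskutil list):

    # Hypothetical manual Chameleon install (all identifiers and paths are placeholders).
    # boot0 -> MBR of the array, boot1h -> boot sector of the OS X slice, boot -> volume root.
    import subprocess

    RAID_DISK = "/dev/rdisk1"      # placeholder: the whole RAID array device
    OSX_SLICE = "/dev/rdisk1s2"    # placeholder: the Mac OS Extended slice on the array
    VOLUME = "/Volumes/RAIDBoot"   # placeholder: where that slice is mounted
    FILES = "./i386"               # placeholder: folder holding boot0, boot1h and boot

    # Stage 0: write boot0 into the array's MBR.
    subprocess.run(["sudo", "fdisk", "-f", f"{FILES}/boot0", "-u", "-y", RAID_DISK], check=True)
    # Stage 1: write boot1h into the partition's boot sector.
    subprocess.run(["sudo", "dd", f"if={FILES}/boot1h", f"of={OSX_SLICE}"], check=True)
    # Stage 2: copy the boot file to the root of the volume.
    subprocess.run(["sudo", "cp", f"{FILES}/boot", f"{VOLUME}/"], check=True)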

 

Next, you'll need to reboot your computer and enter the motherboard BIOS. You'll need to change the settings so that the RAID Array is above the non-RAID disk in the boot order list.

 

At this point, the system should now boot from the RAID array!


  • 5 weeks later...
I want to get an OCZ Vertex 120GB, but there are a couple of things I don't know:

Can I use it with AHCI?

(I've read that in AHCI mode it sometimes freezes.)

Which kext would I need?

 

AHCI shouldn't have anything to do with this setup. It's done via a Hardware RAID controller... The controller handles the communication between the SSDs and OS X.


  • 7 months later...

I've got 3 OCZ Vertex Turbo 60GB drives (a 4th on the way), as well as 4x 320GB 7,200 RPM drives in RAID 0 on a RocketRAID 4320. Currently the 7,200 RPM drives are my system disk, but I want to make the SSD RAID the system disk, and I can't get Snow Leopard to install for the life of me!

 

It seems to get stuck processing essentials.pkg: either the install goes for an hour with no progress after reaching essentials.pkg, or it fails to read the pkg 3 times and gives the yellow-exclamation failure message. Trying the mpkg also results in the yellow-exclamation failure message.

 

It works fine installing to the spinning-drive RAID, so I can't see it being the RAID card or the disk image. I've also tried Carbon Copy Cloner, but I always seem to get a kernel panic before the cloning completes. I've also noticed that with my SSD RAID it gives an error (code 13?) when trying to mount the EFI partition.

 

Just wondering if any of you with SSD RAID setups went through this, or if maybe one of the drives in the RAID might be bad? When I just copy files back and forth it's totally fine, so it's only during an install attempt. I've tried playing around with the stripe sizes and cache policies, but so far nothing has worked...


I just finished dealing with almost the exact same issue.

 

I have a cube I built with pretty much these same steps, but with a micro-ATX motherboard and now 8GB of RAM. I was trying a simple 2-disk Vertex (standard) SSD RAID 0 and got the same KP issues while cloning the standard disk onto the RAID.

 

You won't like my results, but I gave up. I needed the computer for work, and anything more than 3 days down makes it very hard to function. I ended up removing the 3120 card (2-channel) and setting the Vertex disks up with Apple's Disk Utility RAID. At first, I tried using the ICH9R chipset on the motherboard to get it a little closer to HW speeds, but that didn't work out so well.

 

At this point, I just assumed it was a HW incompatibility with the motherboard and the RAID controller, or a combo of that and some SW issue somewhere.

 

What are your hardware specs?

 


 



What are your hardware specs?

 

Q9550 stock at 2.83 GHz, Intel DX48BT2 board, EVGA 8800 GT 1GB DDR3 Akimbo, 8GB DDR3 1333 Patriot Viper, RocketRAID 4320, 4x OCZ Vertex Turbo FW v1.5 (in a BPU-124-SS backplane), 4x WD 320GB 16MB 7,200 RPM SATA II, 700W PSU. Running 10.6.3 w/ Chameleon RC4... had no problems installing or updating to 10.6.2 or 10.6.3. The 4th SSD arrived today; it hasn't made any impact as far as installation stability.

 

Hardware should theoretically be compatible based on the RR/SSD RAID YouTube videos I've seen. If SSDs were a problem with this card, I don't see why it would work fine for normal usage but be incompatible to install on... The one common thread I've noticed with success stories is i7 chips, so I'm wondering if my C2D/X48/ICH9R can't keep up?

 

I might just have to test them in pairs to see if one might be bad. I also have to test without the backplane and swap the RR SATA cables in case there's a bad connection... Did you try using the OCZ Sanitary Erase utility? I've gotten my 4 drives separately, so I'm curious if uneven usage might be messing with the RAID if there's uncollected garbage.


I think the one thing our builds have in common is the C2D/ICH9R. Mine's technically a Xeon X3350, but it's in the same family as your C2D. I'm pretty sure there's a solution here somewhere. The fact that we have completely different motherboards probably means it's not a BIOS setting somewhere... I'm using a micro-ATX Asus P5E-VM HDMI with the G35 chipset, with 8GB DDR2 800 RAM. If this wasn't the computer I used at work, I could probably have it hammered out in a week of spare time... I might resolve to bring my backup machine into use at work so that I can free this problem child up for fixing. If I do that, I'll surely share my fix once I've got it. I doubt your drives are bad, because I know mine aren't. It could, however, be an incompatibility between the OCZ 60GB Vertex series (mine are regular, not Turbo) and the RR controllers with OS X... but that's a bit of a long shot, I guess. Let me know if you make any progress.

 



  • 3 weeks later...

I went to the NAB Convention (National Association of Broadcasters); HighPoint actually had a booth there. I was trying to talk to the rep about my problem, but her English wasn't so good, and she just kept telling me to buy the new 8-port SATA 6Gb/s 6xx-series card coming out in June, lol. But supposedly the older cards have been tested with various SSD RAIDs with no issues... One weird thing: the RR temp monitor reports 491 degrees for the SSDs; not sure if that means anything.

 

Since getting back I've tried setting each one up as a single disk (still through the card) and doing software RAID in OS X; still no dice. Do you have your disks in a backplane unit? I have mine in an iStar 5.25" 4-bay, and I'm kinda wondering if the single shared Molex power could be an issue, but I'm hesitant to unplug them, as the SATA connectors feel like they could easily break with repeated plugging/unplugging. But if you're not using one, then I can probably rule that out.


  • 4 weeks later...

Progress!

 

1) Used the Sanitary Erase utility from OCZ, since I had gotten the 4 drives over a 3-week span and had used one of them on FW v1.4 before updating to 1.5. This seemed to make the RAID drive a bit more snappy, and I didn't get any freeze/hang issues when reformatting like I had sometimes seen before. The install still failed at essentials.pkg.

 

2) Dropped the RAM from 1333 @ 1.9V to the board default of 1066 @ 1.54V, since I saw threads about essentials.pkg install failures being related to RAM issues. The machine would restart itself randomly mid-install, so I changed back to 1333.

 

3) Pulled the drives out of the backplane (shared 1x Molex power) and connected each to individual SATA power. This seemed to help at first; the install got much further than I'd ever seen before, but still failed right before the end.

 

4) Tried a Time Machine restore; no dice. It hangs around 7-12% in.

 

5) Kept messing around with RAID settings, and finally a 32K stripe with "Cache Policy: None" worked!

 

The install went fine. I can't access the SSD RAID EFI partition (error 13?), but I can still boot with my 7,200 RPM RAID partition. I had it running for a few hours last night and it seemed very stable. However, I'm a bit disappointed with the speed: apps still seem to take about the same time to load, and when I ran Xbench the disk scores were almost half what I'd seen before with my default 64K + "Cache: Write-Back". That said, I took a 10GB video and copied it back and forth between the SSD RAID and the 7,200 RPM RAID and it was 300+ MB/s both ways. Hopefully that's just the 7,200 set being the bottleneck, because with 4 Vertex Turbos I would think it would be more like 400+... Oh, and I put them back into the backplane and it still works fine, so now I'm not sure that matters.

 

It's weird that it suddenly works, as I had tried Cache: None with a few different stripe settings before with no luck. I'm tempted to try 64K, since it seemed faster, but I'm a little worried I won't get it working at 32K again; plus, I kinda worry I'm needlessly wearing down the drives by repeatedly reformatting and reinstalling. But I can't really depend on a system that just happened to work on a fluke.

 

lol, at least we know it's not a limitation of the C2Q systems.


