
Issues with trying to configure an 8 drive array

AviatorDuck
Level 7
Ok folks, what am I missing here? The Intel Rapid Storage Technology platform seems to only be able to use 6 of my 8 available SATA ports at one time. UEFI BIOS/IRST won't let me configure 8 drives in an array. When I select 8 drives for RAID 5, the option to create the array is unavailable (greyed out)... same with 7 drives... when I select only 6 drives, the button to create the array is available (white). All 8 drives are showing up as available. If I configure two 4-drive arrays, or any other array configuration that uses all 8 drives, creation is successful and they show "Normal" and healthy in BIOS, but whichever array was created last will fail a cyclic redundancy check in Win10. Grrr... If I create multiple arrays using only 6 drives, then everything works as advertised. Again, it doesn't matter which drives I choose.

Bypassing IRST and using AHCI, I can install Win10 on my NVMe drive with no issues, with all 8 SATA drives plus my NVMe drive showing up in Disk Manager as alive and healthy. BUT if I enable IRST in UEFI BIOS AND have all 8 drives plugged into the PCH controller, Win10 will NOT install and I get a BSOD for iaStorAV.sys while trying to boot into the installation routine... it never really starts, as it BSODs before the first configuration screen is launched.

So is this a configuration issue on my end, a bug in UEFI BIOS/IRST, or a limitation of IRST?

At first, I thought maybe it was one of those issues with sharing between M.2_1 and SATA, but I believe I have the BIOS configured correctly so that M.2_1 is on PCIe and not using PCH SATA at all (evidenced by AHCI mode showing 9 drives in Disk Manager... my 8 SATA drives and my NVMe drive).


Here is my configuration:

Asus ROG Strix x299 Motherboard
8 SATA Ports on PCH controller (8 Seagate 6TB 7200RPM drives)
M.2_1 on PCIe (Samsung 960 Pro 2TB NVMe)
NOTE: Dual posting on both Asus and IRST forums for assistance!

xeromist
Moderator
I was starting to speculate but I don't have this chipset, so hopefully you can get some experienced replies from the IRST forums. I will say that the graying out seems like a documentation failure on Intel's part. If there is a limit (and it sounds like there is) it should be documented.

But on a tangent hopefully someone has cautioned you against 8x6TB drives in RAID 5. Rebuilds are hard on an array and 42TB is going to take a very very long time to rebuild. The rebuild could kill a 2nd drive and you will lose everything. I wouldn't trust anything you can't afford to lose to such an array. For 8 drives you really need RAID 10 or RAID 6 to feel reasonably safe. Maybe you already know but I just wanted to make sure.
A bus station is where a bus stops. A train station is where a train stops. On my desk, I have a work station…
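(For scale, here is a rough back-of-the-envelope sketch of the numbers behind that warning. The ~100 MB/s sustained rebuild throughput and the 1-per-10^14-bit unrecoverable read error (URE) rate are illustrative assumptions, not figures from this thread.)

```python
import math

# Back-of-the-envelope numbers for rebuilding an 8 x 6 TB RAID 5.
# Assumptions (illustrative only): ~100 MB/s sustained rebuild speed and a
# consumer-class URE rate of 1 per 1e14 bits read.
DRIVES = 8
DRIVE_TB = 6
REBUILD_MB_PER_S = 100           # assumed sustained throughput under load
URE_RATE_PER_BIT = 1 / 1e14      # assumed consumer-drive spec

usable_tb = (DRIVES - 1) * DRIVE_TB                               # RAID 5: n-1 data drives
rebuild_hours = DRIVE_TB * 1e12 / (REBUILD_MB_PER_S * 1e6) / 3600

# A rebuild reads every surviving drive end to end.
bits_read = (DRIVES - 1) * DRIVE_TB * 1e12 * 8
p_ure = 1 - math.exp(-bits_read * URE_RATE_PER_BIT)

print(f"usable capacity : {usable_tb} TB")
print(f"rebuild time    : ~{rebuild_hours:.0f} hours minimum per drive pass")
print(f"P(>=1 URE)      : ~{p_ure:.0%} during the rebuild")
```

Even with these optimistic assumptions, the rebuild has to read the entire 42 TB of surviving data, which is why RAID 6 or RAID 10 is usually suggested at this size.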

xeromist wrote:
I was starting to speculate but I don't have this chipset, so hopefully you can get some experienced replies from the IRST forums. I will say that the graying out seems like a documentation failure on Intel's part. If there is a limit (and it sounds like there is) it should be documented.

But on a tangent hopefully someone has cautioned you against 8x6TB drives in RAID 5. Rebuilds are hard on an array and 42TB is going to take a very very long time to rebuild. The rebuild could kill a 2nd drive and you will lose everything. I wouldn't trust anything you can't afford to lose to such an array. For 8 drives you really need RAID 10 or RAID 6 to feel reasonably safe. Maybe you already know but I just wanted to make sure.


I use the RAID5 array for performance and sheer single volume size....not data integrity. I understand the rebuild time issues with an array that large...In the past I have worked for EMC! 🙂 And that brings me to the reason I run such a large array....along with being a gamer....I will use this computer as a "mini lab" to supplement my development servers....Work Machine Mon-Fri 9am-5pm....gaming machine the rest of the time! 🙂 So this array will be filled up with....and run.....lots of virtual machines that I use for work. It will also serve as the local backup for my development servers.

I have also found (and maybe this has changed over the years) that these consumer level RAID controllers don't do a very good job defragmenting themselves so over time they get a bit slower than when they were first built and loaded. I typically rebuild an array once a year or so to keep it fresh and running at top performance.

As for data loss...I use my development servers as a local backup....and vice versa (thus the reason for wanting such a large array), and then that gets backed up to the cloud, so in the case of a drive failure, or my house burning to the ground.... I have the choice of a local or cloud restore point....(virtual machines....because of their file sizes....tend to restore quickly, unlike a use case where one might have TBs of photos, mp3s and such that would take longer to restore from backup.)

As for the IRST forums, they have not been helpful as of yet. But your theory of max limits does seem very plausible. I will update my post over there to see if I can get the folks over there to provide some max limit data. Thanks for the tip! 🙂

AviatorDuck

AviatorDuck wrote:
I use the RAID5 array for performance and sheer single volume size....not data integrity. I understand the rebuild time issues with an array that large...In the past I have worked for EMC! 🙂 And that brings me to the reason I run such a large array....along with being a gamer....I will use this computer as a "mini lab" to supplement my development servers....Work Machine Mon-Fri 9am-5pm....gaming machine the rest of the time! 🙂 So this array will be filled up with....and run.....lots of virtual machines that I use for work. It will also serve as the local backup for my development servers.

I have also found (and maybe this has changed over the years) that these consumer level RAID controllers don't do a very good job defragmenting themselves so over time they get a bit slower than when they were first built and loaded. I typically rebuild an array once a year or so to keep it fresh and running at top performance.

As for data loss...I use my development servers as a local backup....and vice versa (thus the reason for wanting such a large array), and then that gets backed up to the cloud, so in the case of a drive failure, or my house burning to the ground.... I have the choice of a local or cloud restore point....(virtual machines....because of their file sizes....tend to restore quickly, unlike a use case where one might have TBs of photos, mp3s and such that would take longer to restore from backup.)

As for the IRST forums, they have not been helpful as of yet. But your theory of max limits does seem very plausible. I will update my post over there to see if I can get the folks over there to provide some max limit data. Thanks for the tip! 🙂

AviatorDuck


Since you've already invested in the drives I guess it's a bit late for this now, but if & when you do a hardware refresh in a few years you might consider a separate storage server running ZFS, like FreeNAS, for your VMs (FreeNAS is free and runs fine on consumer hardware, so you could repurpose this machine into a server). If most of the size is redundant binaries you could gain a lot from deduplication. However, people don't really recommend RAID-Z for VMs as the performance isn't as good as mirrors. But if you were to dedupe on a mirrored array you might save enough space to have similar effective capacity to RAID-Z/5/6 and get better performance to boot.
A bus station is where a bus stops. A train station is where a train stops. On my desk, I have a work station…
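(To make that capacity trade-off concrete, here is a small illustrative calculation. The 2.0x dedup ratio is purely hypothetical; real savings depend entirely on the data, and ZFS dedup carries its own RAM cost.)

```python
# Illustrative capacity comparison for 8 x 6 TB drives under different
# ZFS layouts. The dedup ratio is a made-up example value.
DRIVES, DRIVE_TB = 8, 6

mirror_raw = (DRIVES // 2) * DRIVE_TB    # striped mirrors (RAID 10 style)
raidz1_raw = (DRIVES - 1) * DRIVE_TB     # RAID-Z1, one drive of parity
raidz2_raw = (DRIVES - 2) * DRIVE_TB     # RAID-Z2, two drives of parity

DEDUP_RATIO = 2.0                        # assumption, not a measurement
mirror_effective = mirror_raw * DEDUP_RATIO

print(f"mirrors          : {mirror_raw} TB raw, ~{mirror_effective:.0f} TB effective at {DEDUP_RATIO}x dedup")
print(f"RAID-Z1 (~RAID 5): {raidz1_raw} TB raw")
print(f"RAID-Z2 (~RAID 6): {raidz2_raw} TB raw")
```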

Intel came through with information and answers! The limit for a single array with IRST is 6 drives! 🙂 Even though their PCH controller docs state that the only limit is the number of SATA ports on the PCH controller! So that is why I could create two 4-drive arrays but not a single 8-drive array! I have provided the link below for anyone who would like to see all the possible configurations with IRST and various controllers!

https://communities.intel.com/thread/124926

Thanks to all for your valued assistance! 🙂

Cheers,
AviatorDuck
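(For anyone hitting the same wall, here is a quick sketch of what a 6-drive-per-array cap means for an 8-drive box. The 6 TB drive size and the 3-drive RAID 5 minimum are assumptions for illustration.)

```python
# Enumerate ways to split 8 drives into RAID 5 arrays of 3-6 members
# (the apparent IRST per-array cap) and show the usable capacity of each.
DRIVE_TB, TOTAL_DRIVES, MAX_ARRAY, MIN_RAID5 = 6, 8, 6, 3

def layouts(remaining, max_part):
    """Yield non-increasing tuples of array sizes that use every drive."""
    if remaining == 0:
        yield ()
        return
    for part in range(min(remaining, max_part), MIN_RAID5 - 1, -1):
        for rest in layouts(remaining - part, part):
            yield (part,) + rest

for layout in layouts(TOTAL_DRIVES, MAX_ARRAY):
    usable = sum((n - 1) * DRIVE_TB for n in layout)   # RAID 5: n-1 data drives
    print(f"{'+'.join(map(str, layout))} drives -> {usable} TB usable")

single = (TOTAL_DRIVES - 1) * DRIVE_TB
print(f"(a single 8-drive RAID 5 would be {single} TB, but exceeds the cap)")
```

So with the cap in place, any all-8-drive layout tops out at 36 TB usable rather than the 42 TB a single 8-drive RAID 5 would give.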

AviatorDuck wrote:
Intel came through with information and answers! The limit for a single array with IRST is 6 drives! 🙂 Even though their PCH controller docs state that the only limit is the number of SATA ports on the PCH controller! So that is why I could create two 4-drive arrays but not a single 8-drive array! I have provided the link below for anyone who would like to see all the possible configurations with IRST and various controllers!

https://communities.intel.com/thread/124926

Thanks to all for your valued assistance! 🙂

Cheers,
AviatorDuck


What about IRSTe? That's what I am using, and it used to work with AHCI.

CharlieH wrote:
What about IRSTe? That's what I am using, and it used to work with AHCI.


If I follow your question properly, CharlieH....it is not a software/version/driver issue, but rather a controller limitation. But maybe I have missed something...do you have info on IRSTe that says it can do 8-drive arrays on X299? If so, please share so I can review it! 🙂

AviatorDuck wrote:
If I follow your question properly, CharlieH....it is not a software/version/driver issue, but rather a controller limitation. But maybe I have missed something...do you have info on IRSTe that says it can do 8-drive arrays on X299? If so, please share so I can review it! 🙂


Then maybe it's because I use VROC, but like I said, I have an 8-drive RAID 0 array using the IRSTe driver on the X299 platform.

G75rog
Level 10
I run an eight-drive RAID array and it works quite well. Fortunately, it is located in an Asustor NAS, where it belongs.

From reading your MB manual and your post, it appears there may be an undocumented storage size limit on the RAID 5 makeup.
Even the NASes have limits on the total size of their arrays.
All of the examples in the manual show drives measured in gigabytes, not terabytes.
Perhaps if you filled the board with 500GB drives you could have your 8-drive array.

Korth
Level 14
http://www.tomshardware.com/answers/id-2281255/x99-chipset-sata-ports-thing-find-drives-allowed-raid...

The X299 PCH supports up to 10 SATA ports. But I think 6-drive RAID is a legacy limit ... it's "always" been 6 SATA ports, and was "never" an issue until Intel PCH parts could support more than 6 SATA ports. Intel's focus tends to be on their enterprise-grade stuff, not their HEDT platforms.

RAID5 theoretically supports an "unlimited" maximum number of drives; I've seen a lot of 6+1 RAID5 setups (on C62x machines, none on X299). I don't see any technical reason a 10-port X299 can't run a 10-drive RAID. And I haven't found any reports of 10-drive arrays running on non-ASUS X299 motherboards, so I suspect the limitation is in Intel's code, not ASUS's code.
"All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others." - Douglas Adams

[/Korth]