Bad RAID0 Performance on DIMM.2 with two NVMEs

T3kno
Level 7
Hi ROG community

I set up a RAID0 with two Samsung 970 NVMe drives on the DIMM.2 module. As I can't create the RAID via VROC (it says the 970s aren't supported), I did it via PCH.
The DIMM.2 RAID works and everything; I use it as a game and work drive (Adobe suite), but performance is "bad". I only get ~3 GB/s read and ~3.5 GB/s write speeds.

Components:
i9-7900X
64GB Corsair 3200MHz RAM
2x ASUS 1080 Ti O11G
ASUS ROG Rampage VI Extreme mainboard
Samsung 970 Pro (system)
2x Samsung 970 Pro RAID 0 (games)
Samsung 840 Pro (data)

I can't imagine that this is all it can do. If so, why would anybody do a RAID0 if they get better speeds with a single drive? With 44 lanes it should be enough for 2x 1080 Ti, 1x system NVMe and 2x RAID NVMe?

Why are Samsung drives not supported via VROC? Or the other question: which Intel NVMe drives are supported, to create a better-performing RAID0 on VROC?

Greetings

JustinThyme
Level 13
This is quite normal, as the PCH sits at the other end of the DMI bus. Search VROC in this area and you can see all the ins and outs that have already been tried on this platform; I know, as I did most of the trying. Any drive post-960 Pro, with sequential reads already in the 3400 MB/s and higher range, meets and may even exceed the limits of the DMI bus, and can actually lose 4K performance where it's most critical.
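To put a number on that DMI ceiling, here's a quick back-of-the-envelope calculation (a sketch only; the practical figure varies with protocol overhead and whatever else is hanging off the PCH):

```python
# DMI 3.0 is electrically a PCIe 3.0 x4 link: 4 lanes at 8 GT/s
# with 128b/130b encoding. Everything on the PCH shares it.
lanes = 4
gigatransfers = 8e9          # 8 GT/s per lane
encoding = 128 / 130         # 128b/130b line-code efficiency
bytes_per_s = lanes * gigatransfers * encoding / 8  # bits -> bytes
print(f"Theoretical DMI 3.0 ceiling: {bytes_per_s / 1e9:.2f} GB/s")
# -> ~3.94 GB/s before protocol overhead; in practice sequential
#    transfers over the PCH top out around 3.3-3.5 GB/s, right where
#    the OP's RAID0 is landing.
```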



“Two things are infinite: the universe and human stupidity, I'm not sure about the former” ~ Albert Einstein

Hm ok, this is really bad 😞

I looked at the block diagram of the X299 and it shows "Up to 24x PCIe 3.0, 8 GT/s each x1".

So would it be the same if I used VROC with Intel NVMe drives (right now I'm doing RAID over the PCH, not VROC)?

G75rog
Level 10
If you look closely at the manual, you'll find one side of the DIMM.2 card is selectable between CPU and PCH lanes while the other side is CPU only. Can you run your RAID on the CPU lanes without having to use a Windows array?

G75rog wrote:
If you look closely at the manual, you'll find one side of the DIMM.2 card is selectable between CPU and PCH lanes while the other side is CPU only. Can you run your RAID on the CPU lanes without having to use a Windows array?

I could not RAID up 2 NVMe drives on the DIMM.2; it had to be with a MOBO-mounted M.2 drive, and still through the bottleneck.
To make it short without a lot of verbosity: if you read up on the VROC thread, about every combination with and without a key has been tried. Intel is hoarding the performance attributes of the storage on this board. The enterprise version with a commercial key can run mad amounts of any SSD they want, with stupid arrangements like 4U of rack space filled with 1TB M.2 drives all RAIDed up. The commercial boards and keys are mad $$$!! If they allowed HEDT boards to do the same thing, MOBOs would be in the same boat as the top-dog GPUs: bought out and out of reach. Yes, there are 44 lanes from the CPU, but those go elsewhere. However, dig deeper downstream: those 24 PCH lanes cover a lot of ground, and they're not a floating pool of lanes in a grab bag; they are all assigned and often shared.

Until the recent release of the 380GB 905P M.2 drive by Intel, the ASUS Hyper M.2 X16 was useless. Now it can actually do its job and put 4 fast Intel drives in a RAID array, using 16 of your 44 lanes to do it. Also a nice price tag on that venture of about $2100 USD. Jumping for that brass ring on performance when Intel holds the fishing pole gets quite expensive. I had to see if it worked, though, so yes, you can achieve the stupid speeds, but you have to pony up. I'm in $1,000 on two 280GB PCIe drives to have a fast OS drive. Two drives get you sequential reads of 5500 MB/s, and over 10,000 MB/s for 4 drives (previously done on the Hyper M.2 X16 with 4x 2.5-inch 900P U.2 drives and an M.2 adapter).
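For a rough sense of how striping scales on CPU lanes versus behind the PCH, a simple estimate (the per-drive numbers are illustrative, loosely based on the figures above, not benchmarks):

```python
# Naive RAID0 sequential estimate: striping scales roughly linearly
# with drive count until the upstream link becomes the bottleneck.

def raid0_seq_estimate(n_drives, per_drive_gbs, link_gbs=None):
    raw = n_drives * per_drive_gbs
    return min(raw, link_gbs) if link_gbs is not None else raw

# CPU-attached (Hyper M.2 X16; a x16 slot is ~15.75 GB/s, no real cap here):
print(raid0_seq_estimate(2, 2.75))              # ~5.5 GB/s, like the 2-drive figure above
print(raid0_seq_estimate(4, 2.6))               # ~10.4 GB/s, like the 4-drive figure above

# Behind the PCH: both drives share one DMI link (~3.5 GB/s practical):
print(raid0_seq_estimate(2, 3.4, link_gbs=3.5)) # ~3.5 GB/s, no real gain over one drive
```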

Definitely a pay-to-play scheme at work, and by design from its inception. It took forever to get a key in the first place because of Intel restrictions; I was the resident loudmouth here on that. I'm sure the delay was making sure the system could not be cheated, so nothing other than Intel drives could benefit from the stupid-fast connection straight to the CPU.

Search VROC in this section and read a bit. Several members here (I was one of them) pretty much wore it out trying everything. Lots of it went on at OCN too.
We all wish it wasn't so, but I get it. Allowing this would cost Intel hundreds of millions in losses. Why pay the stupid $$ on enterprise solutions if you could do the same on a prosumer level for a fraction of the cost?

EDIT: This elementary block diagram may help a bit.
[attached: X299 platform block diagram]



“Two things are infinite: the universe and human stupidity, I'm not sure about the former” ~ Albert Einstein

Where in the manual did you find that info?

If this is correct, my RAID is built with one NVMe over CPU lanes and the other over PCH. I don't think this is the case?
I selected Intel RAID Premium in the SATA controller section, then rebooted, entered the BIOS again, pressed F11, clicked on RAID and could select my 970 Pros.
But both were displayed as PCH.

T3kno wrote:
Where in the manual did you find that info?

If this is correct, my RAID is built with one NVMe over CPU lanes and the other over PCH. I don't think this is the case?
I selected Intel RAID Premium in the SATA controller section, then rebooted, entered the BIOS again, pressed F11, clicked on RAID and could select my 970 Pros.
But both were displayed as PCH.


Your RAID is over PCH, both drives. I'm guessing you used the wizard, which will reassign settings. I'm afraid either I failed in the presentation or there is another barrier between presentation and comprehension. Please read all the posts here by searching VROC and it will become clear. I don't want to be verbose when it's all here; several of us spent a lot of $$ and even more time on this. The search function is your friend.

Or feel free to continue trying. I'm simply conveying what was done, tried, tested, retested, reconfigured and retested, to save you some $$ and your blood pressure, as well as to attempt to explain why any drive other than Intel will not reap the benefits of VROC. The only option to get past the bottleneck is to put the drives on a Hyper M.2 X16 card and run software RAID. Other than that you won't be able to get more than ~3500 MB/s sequential reads with non-Intel drives.
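If you want to see where your array actually lands, a proper benchmark tool is easiest, but even a crude timing loop will show the ceiling. A minimal Python sketch (the test file path is hypothetical; use a file much larger than RAM, or the OS page cache will inflate the number):

```python
import time

def seq_read_gbs(path, block_mb=8, total_gb=4):
    """Crude sequential-read throughput check in GB/s."""
    block = block_mb * 1024 * 1024
    target = total_gb * 1024 ** 3
    done = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:   # unbuffered at the Python level
        while done < target:
            chunk = f.read(block)
            if not chunk:                      # stop at end of file
                break
            done += len(chunk)
    return done / (time.perf_counter() - start) / 1e9

# Hypothetical usage on the RAID0 volume:
# print(f"{seq_read_gbs(r'D:\testfile.bin'):.2f} GB/s")
```

On a PCH array this should flatline around 3.3-3.5 GB/s no matter how fast the member drives are.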

BTW, I also tried two M.2 Optane drives on the DIMM.2 and the MOBO slot, and they would not do VROC there, only on the Hyper M.2 X16, where it's basically a direct connection to the CPU.

Enjoy your venture!!:cool:



“Two things are infinite: the universe and human stupidity, I'm not sure about the former” ~ Albert Einstein

JustinThyme wrote:
Your RAID is over PCH, both drives. I'm guessing you used the wizard, which will reassign settings. I'm afraid either I failed in the presentation or there is another barrier between presentation and comprehension. Please read all the posts here by searching VROC and it will become clear. I don't want to be verbose when it's all here; several of us spent a lot of $$ and even more time on this. The search function is your friend.

Or feel free to continue trying. I'm simply conveying what was done, tried, tested, retested, reconfigured and retested, to save you some $$ and your blood pressure, as well as to attempt to explain why any drive other than Intel will not reap the benefits of VROC. The only option to get past the bottleneck is to put the drives on a Hyper M.2 X16 card and run software RAID. Other than that you won't be able to get more than ~3500 MB/s sequential reads with non-Intel drives.

BTW, I also tried two M.2 Optane drives on the DIMM.2 and the MOBO slot, and they would not do VROC there, only on the Hyper M.2 X16, where it's basically a direct connection to the CPU.

Enjoy your venture!!:cool:


I'm wondering, what do you think about making a RAID 0 with 2 Samsung 960 Pros over PCH?

Do you think it's a slight improvement, or is it better to use a single drive in non-RAID form?

Deepbluee wrote:
I'm wondering, what do you think about making a RAID 0 with 2 Samsung 960 Pros over PCH?

Do you think it's a slight improvement, or is it better to use a single drive in non-RAID form?


There is an ever-so-slight improvement in sequential reads but a large loss in the 4K department. IMO they are better left as single drives. I tested everything that was available at launch and even tried 2x 1TB 960 Pros... sent one back due to sticking with the 900Ps and using up all my lanes there.
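As a rough illustration of why 4K suffers (the latencies below are made-up but representative, not measured values; a QD1 4K read is latency-bound, so striping can't help and the RAID layer only adds overhead):

```python
# Why PCH RAID0 hurts 4K: a QD1 4K read touches a single stripe member,
# so striping adds no parallelism, while the RAID/DMI path adds latency.
# Latencies are illustrative assumptions, NOT benchmark results.
block_bytes = 4 * 1024
for name, latency_us in [("single NVMe", 90), ("PCH RAID0", 110)]:
    mb_per_s = block_bytes / (latency_us * 1e-6) / 1e6
    print(f"{name}: ~{mb_per_s:.0f} MB/s at 4K QD1")
# -> the extra latency directly lowers the 4K number even though
#    sequential bandwidth went up slightly.
```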

As for 2 GPUs and a Hyper M.2 X16: it can be done, but one of your GPUs will be at x8. I tested that too and noted not a bit of difference with one at x16 or both at x16. Guessing it's the SLI limitations.

I have 2 1080 Tis and 2 900Ps, so my PCIe slots are all populated. Only one 960 Pro on the DIMM.2, because one side is disabled as soon as you enable PCIE_4 for drive use. Yes, I've pushed this puppy to the limits with every single lane used up. Wish there were more; this is an attractive point for a Threadripper.



“Two things are infinite: the universe and human stupidity, I'm not sure about the former” ~ Albert Einstein

T3kno wrote:
Where in the manual did you find that info?

If this is correct, my RAID is built with one NVMe over CPU lanes and the other over PCH. I don't think this is the case?
I selected Intel RAID Premium in the SATA controller section, then rebooted, entered the BIOS again, pressed F11, clicked on RAID and could select my 970 Pros.
But both were displayed as PCH.


Page ix of the specs summary, and page 3-18.



Personally I'm using an Apex with 8 NVMe drives on board so I can play.