Thread: R6E VROC with non-intel drives?
-
01-21-2021 09:44 PM #1
marcmicalizzi
PC Specs: ROG Rampage VI Extreme · i9-10980XE · F4-4266C17-GTZR (4x8GB) · ROG Strix 3090 OC · Soundblaster X3 · 3x PG27UQ · 2x Samsung 960 512GB · 3x WD Gold 4TB · EKWB Monoblock · Phanteks Enthoo Luxe · Corsair HX1200 · ROG Strix Flare · ROG Gladius II Origin · ROG Balteus QI · 11.2 speakers on Yamaha MX-A5000 · Windows 10 Pro · ASUS XG-C100F 10G SFP+ · Soundblaster K3+ with AKG P220 · Logitech BRIO 4K webcam
Join Date: Oct 2017 · Reputation: 10 · Posts: 17
R6E VROC with non-intel drives?
I've done a lot of searching and I'm having trouble finding any up-to-date or definitive information on this topic; everything relevant seems to be from 2019 or earlier, or about the refresh models (Omega, Encore) rather than the original.
Will VROC work in RAID 1 (or eventually RAID 5, but not yet) with Samsung 960s if I get a VROCPREMMOD key? And would both NVMe drives need to be in the DIMM.2 card, or can one be in the DIMM.2 and one in the on-board M.2_1 (Socket 3)?
I'm running an i9-10980XE, if that makes any difference (upgraded from an i9-7940X last year).
It's just that RSTe performance on the RAID 1 makes it seem like they aren't even NVMe drives at all, and I'm not keen on running without RAID, for redundancy reasons.
Thanks in advance!
-
01-21-2021 11:35 PM #2
Join Date: Feb 2019 · Reputation: 20 · Posts: 156
Answer to how VROC works:
- Short answer: NO, you cannot use Samsung drives in VROC on an X299 board!
- Long answer:
These are the functional rules on any X299 board with a Core i9 (7xxx, 9xxx, or 10xxx):
- VROC ONLY works with high-end INTEL NVMe drives, e.g. the Optane 905P. If you use lesser Intel drives you may run into trouble; brands other than Intel will not work at all!
- RAID 0 requires no VROC key (you can use one anyway; it does not hurt, but as I said it is not required) - still only works with high-end Intel NVMe drives.
- RAID 1/10/5 requires the VROCISSDMOD key, and only this VROC key works with an X299 Core i9 - period! And again, it only works with high-end Intel NVMe drives.
About the other VROC keys:
VROCSTANDMOD and VROCPREMMOD work ONLY with Xeons on Xeon boards.
read more here: https://www.intel.com/content/www/us...-cpu-vroc.html
Read the thread here: https://rog.asus.com/forum/showthrea...and-PCIe-lanes
Short summary: if you put 2 M.2 Intel drives in the DIMM.2 slot, they end up on different VMDs and cannot be VROC-ed. So you need to connect them to the same PCIe slot (so they end up on the same VMD) to VROC them. For instance, I use an ASUS Hyper M.2 X16 card with 4 905P M.2 drives, and that works well in RAID 0.
Last edited by Int8bldr; 01-22-2021 at 12:32 AM.
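On Linux you can sanity-check which VMD controller each NVMe drive sits behind before trying to build an array: VMD exposes its child devices under a synthetic PCI domain (0x10000 and up), so the VMD controller's own PCI address appears in each drive's sysfs device path. A minimal sketch of that grouping logic; the sysfs paths and device names below are illustrative, not from a real system:

```python
from collections import defaultdict

def group_by_vmd(device_paths):
    """Group NVMe block devices by the VMD controller in their PCI path.

    device_paths maps a device name (e.g. 'nvme0n1') to its resolved
    /sys/block/<dev>/device path. VMD children live under a synthetic
    PCI domain >= 0x10000; the path segment just before that synthetic
    domain is the VMD controller's own PCI function.
    """
    groups = defaultdict(list)
    for dev, path in device_paths.items():
        parts = path.split("/")
        vmd = "no-vmd"  # drives not behind any VMD controller
        for i, seg in enumerate(parts[1:], start=1):
            if seg.startswith("pci"):
                domain = seg[3:].split(":")[0]
                try:
                    if int(domain, 16) >= 0x10000:
                        vmd = parts[i - 1]  # the VMD controller itself
                        break
                except ValueError:
                    pass
        groups[vmd].append(dev)
    return dict(groups)

# Hypothetical topology: two drives behind one VMD controller, a third
# behind a different one (so it could not join the same VROC array).
paths = {
    "nvme0n1": "/sys/devices/pci0000:16/0000:16:05.5/pci10000:00/10000:01:00.0/nvme/nvme0/nvme0n1",
    "nvme1n1": "/sys/devices/pci0000:16/0000:16:05.5/pci10000:00/10000:02:00.0/nvme/nvme1/nvme1n1",
    "nvme2n1": "/sys/devices/pci0000:64/0000:64:05.5/pci10001:00/10001:01:00.0/nvme/nvme2/nvme2n1",
}
print(group_by_vmd(paths))
```

Drives that land in the same group share a VMD domain and are candidates for one array; drives in different groups are exactly the "different VMDs" situation described above.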
-
01-23-2021 05:03 AM #3
Join Date: Feb 2019 · Reputation: 129 · Posts: 709
The above is correct up to a point.
You can put Samsung drives in an M.2 x16 PCIe card and make a VROC RAID.
The problem: it's a data (software) RAID only and you cannot boot from it. Been there and done that; decided it wasn't worth it.
This isn't an ASUS restriction, it's an Intel restriction. They don't want their enterprise customers buying HEDT boards to run servers when they pay a hell of a lot more for server boards and the keys to run drives other than Intel's.
-
03-04-2021 04:00 PM #4
Join Date: Jun 2012 · Reputation: 10 · Posts: 52
So I also ran into this on my R6E. I bought 2x 2TB 970 Evo Plus and an Intel Core i9-10920X (upgrading from my i7-7820X) a couple of months ago, and when I finally got around to putting it together I noticed that when trying to use the two slots on the DIMM.2 with CPU RAID, the drives were being flagged as unsupported. I had no clue about the limitations of VROC. So I just ended up putting one on the DIMM.2 and the other on the mobo M.2 slot and ran them in PCH RAID 0.
Last edited by asmodyus; 03-04-2021 at 04:03 PM.
-
03-14-2021 10:23 PM #5
Join Date: Feb 2019 · Reputation: 129 · Posts: 709
The problem with PCH RAID is that you are limited by the x4 speed of the DMI bus, so you might gain 200 MB/s tops instead of doubling the speed. Add to that the latency of RAID and it's a losing proposition. The latency hits you where it counts: in 4K low-queue-depth performance.
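The arithmetic behind that: DMI 3.0 is electrically PCIe 3.0 x4, so everything hanging off the PCH (both M.2 slots in a PCH RAID, plus SATA, USB, LAN) shares roughly 3.94 GB/s of raw link bandwidth. Two drives that each read ~3.5 GB/s in isolation therefore cap out near the link limit instead of doubling. A quick back-of-the-envelope sketch; the per-drive number is illustrative, and real-world protocol overhead and shared PCH traffic shrink the gain further, toward the ~200 MB/s seen in practice:

```python
# DMI 3.0 is PCIe 3.0 x4: 8 GT/s per lane with 128b/130b encoding.
LANES = 4
GT_PER_S = 8e9                                        # transfers/s per lane
DMI_BYTES_PER_S = LANES * GT_PER_S * (128 / 130) / 8  # ~3.94e9 B/s raw

drive_seq_read = 3.5e9  # illustrative: one fast Gen3 NVMe drive, B/s

# Ideal RAID 0 scaling vs. what the DMI link can actually carry.
ideal_raid0 = 2 * drive_seq_read
ceiling = min(ideal_raid0, DMI_BYTES_PER_S)

print(f"DMI 3.0 x4 ceiling : {DMI_BYTES_PER_S / 1e9:.2f} GB/s")
print(f"Ideal 2-drive RAID0: {ideal_raid0 / 1e9:.2f} GB/s")
print(f"PCH RAID ceiling   : {ceiling / 1e9:.2f} GB/s")
print(f"Gain over 1 drive  : {(ceiling - drive_seq_read) / 1e6:.0f} MB/s")
```

Even before overhead, the best case is a few hundred MB/s of extra sequential throughput rather than the 3.5 GB/s that CPU-attached (VMD) RAID 0 could add, and none of this helps 4K low-queue-depth latency.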