Page 1 of 2
Results 1 to 10 of 15
  1. #1 BigJohnny (ROG Guru: Green Belt) · Joined Feb 2019 · Reputation 118 · Posts 552

    RVIEE VROC on DIMM.2

    Anyone running VROC on DIMM.2 with a pair of 905P M.2 drives?

    If so, did it take using the F6 drivers at Windows install?

  2. #2 deboyzfun (ROG Junior Member) · Joined Dec 2012 · Reputation 10 · Posts 2

    Quote Originally Posted by BigJohnny View Post
    Anyone running VROC on DIMM.2 with a pair of 905P M.2 drives?

    If so, did it take using the F6 drivers at Windows install?
    Yes, VROC driver Intel_VROC_win_6.2.0.1239_pv, but I used 760p drives.
    Last edited by deboyzfun; 04-04-2020 at 04:33 AM.

  3. #3 Int8bldr (ROG Guru: Yellow Belt) · Joined Feb 2019 · Reputation 16 · Posts 141

    Quote Originally Posted by deboyzfun View Post
    Yes, VROC driver Intel_VROC_win_6.2.0.1239_pv, but I used 760p drives.


    I have an R6EE with a Core i9-10980XE, which should have 48 PCIe lanes.

    I could not get VROC to work with two 380GB 905Ps in the DIMM.2 slot. Several issues:
    1. In x16, x16, x4 mode I only see one of the 905Ps. To even get the second 905P visible in the BIOS I had to set the mode to x16, x8, x8; then I see both.

    2. In x16, x8, x8 mode I can create a VROC RAID 0, BUT the two 905Ps for some reason end up on two different VMDs, AND according to the BIOS you cannot create a bootable VROC RAID 0 that works for Windows 10 across two VMDs. So while I can create a RAID 0, I cannot make it bootable (which is what I want).

    If you have a good answer for what I need to do, I would be very appreciative.

    Thanks

  4. #4 BigJohnny (ROG Guru: Green Belt) · Joined Feb 2019 · Reputation 118 · Posts 552

    I didn't have an issue with different VMDs on the R6E with two 900P AIC cards.
    I do remember that setting it up in the BIOS was worthless; it never did show up right, and the boot partition just said "Windows Boot". I had to do the F6 drivers; then it worked perfectly. They take x4 each with a direct connection to the CPU, so as long as nothing else is populated it shouldn't be an issue. The x16/x16 should then apply to PCIe cards, which covers the GPUs. We'll see what I get.

    I do recall having to load the iaStorE and then iaVROC drivers only, and install the RSTe software later, just so I could see what was going on and whether it was correctly seeing my key.


    Thanks for the input
    Last edited by BigJohnny; 04-05-2020 at 05:24 AM.

  5. #5 BenJW (ROG Member) · Joined Oct 2019 · Reputation 10 · Posts 20

    Quote Originally Posted by Int8bldr View Post

    I have an R6EE with a Core i9-10980XE, which should have 48 PCIe lanes.

    I could not get VROC to work with two 380GB 905Ps in the DIMM.2 slot. Several issues:
    1. In x16, x16, x4 mode I only see one of the 905Ps. To even get the second 905P visible in the BIOS I had to set the mode to x16, x8, x8; then I see both.

    2. In x16, x8, x8 mode I can create a VROC RAID 0, BUT the two 905Ps for some reason end up on two different VMDs, AND according to the BIOS you cannot create a bootable VROC RAID 0 that works for Windows 10 across two VMDs. So while I can create a RAID 0, I cannot make it bootable (which is what I want).

    If you have a good answer for what I need to do, I would be very appreciative.

    Thanks
    I have the same issue (see also the discussion in https://rog.asus.com/forum/showthrea...ane-Allocation).

    This seems to be "expected" according to the manual: on page 1-8, in the footnotes marked with (*) and (**), it says that running the first two PCIe slots in x16 mode disables one of the DIMM.2 M.2 slots. The result is that 4 of the PCIe lanes that come from the CPU are "magically" inaccessible on this board. I have contacted ASUS support, but so far they just replied with a screenshot of the manual cropped right before the footnotes, so it seems they don't even know (or care) what they wrote in their own manual.
    In summary, I have no idea how all CPU lanes can be used on the "Extreme Encore", so maybe we need another "Extreme Encore Again" revision that has good VRMs and makes all CPU lanes accessible.

    [Attachment: encore-expansion_slots.jpg]
    [Attachment: encore-storage.jpg]
    Last edited by BenJW; 04-05-2020 at 09:39 PM. Reason: corrected page no. & added screenshots

  6. #6 Int8bldr (ROG Guru: Yellow Belt) · Joined Feb 2019 · Reputation 16 · Posts 141

    Quote Originally Posted by BenJW View Post
    I have the same issue (see also the discussion in https://rog.asus.com/forum/showthrea...ane-Allocation).

    This seems to be "expected" according to the manual: on page 1-8, in the footnotes marked with (*) and (**), it says that running the first two PCIe slots in x16 mode disables one of the DIMM.2 M.2 slots. The result is that 4 of the PCIe lanes that come from the CPU are "magically" inaccessible on this board. I have contacted ASUS support, but so far they just replied with a screenshot of the manual cropped right before the footnotes, so it seems they don't even know (or care) what they wrote in their own manual.
    In summary, I have no idea how all CPU lanes can be used on the "Extreme Encore", so maybe we need another "Extreme Encore Again" revision that has good VRMs and makes all CPU lanes accessible.
    Thank you for confirming my suspicions!

    I think the board was not made for a 48 PCIe lane CPU, or they never enabled the BIOS to handle all 48 PCIe lanes of the Core i9-10980XE.

    The manual says this:

    [Attachment: encore 48PCIx lanes - Copy.jpg]

    Anyone reading this would be led to believe that you can run both DIMM.2_1 and DIMM.2_2 in x16, x16, x4 mode, BUT DIMM.2_2 does not show.

    But even if you change to x16, x8, x8 and get DIMM.2_2 to show in the BIOS, you cannot VROC them into a bootable RAID 0.

    My next attempt is to use my HYPER M.2 X16 CARD V2 and put them there instead, to see if I can somehow make that work.

    I'm thinking of maintaining x16, x16, x4 (two graphics cards in PCIEX16_1 and PCIEX16_2), putting the HYPER M.2 X16 CARD V2 with one 380GB M.2 905P in slot PCIEX16_3, and leaving the other 380GB M.2 905P in the DIMM.2_1 slot. What a waste of the HYPER M.2 X16 CARD.

    We'll see. The other method is to forget about running 2 graphics cards altogether and just put the HYPER M.2 X16 CARD in the PCIEX16_2 slot.
    Last edited by Int8bldr; 04-05-2020 at 06:51 PM.

  7. #7 BenJW (ROG Member) · Joined Oct 2019 · Reputation 10 · Posts 20

    Quote Originally Posted by Int8bldr View Post
    Thank you for confirming my suspicions!

    I think the board was not made for a 48 PCIe lane CPU, or they never enabled the BIOS to handle all 48 PCIe lanes of the Core i9-10980XE.

    Anyone reading this would be led to believe that you can run both DIMM.2_1 and DIMM.2_2 in x16, x16, x4 mode, BUT DIMM.2_2 does not show.

    But even if you change to x16, x8, x8 and get DIMM.2_2 to show in the BIOS, you cannot VROC them into a bootable RAID 0.

    My next attempt is to use my HYPER M.2 X16 CARD V2 and put them there instead, to see if I can somehow make that work.

    I'm thinking of maintaining x16, x16, x4 (two graphics cards in PCIEX16_1 and PCIEX16_2), putting the HYPER M.2 X16 CARD V2 with one 380GB M.2 905P in slot PCIEX16_3, and leaving the other 380GB M.2 905P in the DIMM.2_1 slot. What a waste of the HYPER M.2 X16 CARD.

    We'll see. The other method is to forget about running 2 graphics cards altogether and just put the HYPER M.2 X16 CARD in the PCIEX16_2 slot.
    Thanks for the insight on the VMDs. It's really annoying, since the obvious way to build a 2-drive M.2 RAID would indeed be to put both SSDs into the DIMM.2 slots. What's worse, imho, is that one of the main selling points of Cascade Lake-X over the previous generation CPUs is the 4 extra CPU lanes, but the Encore nullifies that advantage by keeping 4 lanes inaccessible with any 44 or 48 lane CPU, even though it is already the third incarnation of the Rampage VI Extreme series.

    If it helps, there are also simple M.2 to PCIe x4 adapters (on ebay or aliexpress) that are cheaper than the Hyper X16 card, which would probably be sufficient to use a single M.2 drive in PCIEX16_3.

  8. #8 Int8bldr (ROG Guru: Yellow Belt) · Joined Feb 2019 · Reputation 16 · Posts 141

    Quote Originally Posted by BenJW View Post
    Thanks for the insight on the VMDs. It's really annoying, since the obvious way to build a 2-drive M.2 RAID would indeed be to put both SSDs into the DIMM.2 slots. What's worse, imho, is that one of the main selling points of Cascade Lake-X over the previous generation CPUs is the 4 extra CPU lanes, but the Encore nullifies that advantage by keeping 4 lanes inaccessible with any 44 or 48 lane CPU, even though it is already the third incarnation of the Rampage VI Extreme series.

    If it helps, there are also simple M.2 to PCIe x4 adapters (on ebay or aliexpress) that are cheaper than the Hyper X16 card, which would probably be sufficient to use a single M.2 drive in PCIEX16_3.
    So I went with putting both 380GB 905Ps in my HYPER M.2 X16 CARD V2 and installed it in slot PCIEX16_2 (for now), and that worked. Both end up on the same VMD (#0) and I can create a bootable VROC RAID 0 drive.

    The conclusion is that the DIMM.2 slot is useless if you want a bootable VROC RAID 0. You can still use it for individual M.2 drives, and even VROC RAID them, but they will be non-bootable.

    The other implicit conclusion is that if you want to use all 48 PCIe lanes, you have to use the PCIe slots to do so (forget about the DIMM.2 slots). x16, x16, x8 mode worked!

  9. #9 BenJW (ROG Member) · Joined Oct 2019 · Reputation 10 · Posts 20

    Quote Originally Posted by Int8bldr View Post
    So I went with putting both 380GB 905Ps in my HYPER M.2 X16 CARD V2 and installed it in slot PCIEX16_2 (for now), and that worked. Both end up on the same VMD (#0) and I can create a bootable VROC RAID 0 drive.

    The conclusion is that the DIMM.2 slot is useless if you want a bootable VROC RAID 0. You can still use it for individual M.2 drives, and even VROC RAID them, but they will be non-bootable.

    The other implicit conclusion is that if you want to use all 48 PCIe lanes, you have to use the PCIe slots to do so (forget about the DIMM.2 slots). x16, x16, x8 mode worked!
    Thanks for sharing your knowledge. One more question:
    x16 + x16 + x8 = 40, plus 4 lanes from the one working DIMM.2 slot, makes 44 lanes. That's still 4 lanes short of 48 (DMI is counted separately). Do you know how the other 4 lanes are used/accessible on the Encore?

    Cheers
    Ben

  10. #10 Int8bldr (ROG Guru: Yellow Belt) · Joined Feb 2019 · Reputation 16 · Posts 141

    Quote Originally Posted by BenJW View Post
    Thanks for sharing your knowledge. One more question:
    x16 + x16 + x8 = 40, plus 4 lanes from the one working DIMM.2 slot, makes 44 lanes. That's still 4 lanes short of 48 (DMI is counted separately). Do you know how the other 4 lanes are used/accessible on the Encore?

    Cheers
    Ben
    I think the last 4 PCIe lanes go to the x4 slot (PCIEX4) OR to M.2_2 over the PCH (you can configure this in the BIOS, but I have not tested that).

    One thing to notice is that you cannot move the 380GB 905P M.2s to M.2_1 or M.2_2; they are too long to fit (the 905P M.2 is a 22110-length device). Even if they did fit they would be slow, because those M.2 slots are limited by PCH bandwidth, and you can only RAID them with IRST (not VROC-able).
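    To make the tally explicit (a sketch; the last x4 entry is only my guess above, not something I have confirmed):

    ```python
    # Rough tally of the 48 Cascade Lake-X CPU PCIe lanes on the Encore in
    # x16/x16/x8 mode, per this thread. DMI is counted separately, and the
    # last x4 entry is only my guess (PCIEX4 or switched over to M.2_2).
    CPU_LANES = 48
    allocation = {
        "PCIEX16_1": 16,
        "PCIEX16_2": 16,
        "PCIEX16_3": 8,
        "DIMM.2_1": 4,
        "PCIEX4 or M.2_2 (my guess)": 4,
    }
    total = sum(allocation.values())
    print(f"{total} of {CPU_LANES} lanes accounted for")
    ```

    If that guess is right, all 48 lanes are accounted for, just not in the combination we wanted.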

