  1. #11
    ROG Guru: Green Belt
    Join Date
    Feb 2019
    Reputation
    128
    Posts
    577

    You can run both with only x8 on the AIC card, but you're still in the same boat.
    This only applies to the DIMM.2, and once you set it to x16/x8/x4 it's all happy. You can run two AIC cards, one in the x4 slot (the small one that disables M.2_2) and one in the bottom slot, at x16/x16/x4. Then you still lose the DIMM.2_2 and the M.2_2 under the cover.

    Yeah, they give you 48 lanes you can't use. I'm on a 44-lane CPU but don't have the option to disable the slots I don't use, no matter what: x16 + x8 for the GPUs, x8 for two M.2s, x8 for both DIMM.2 drives, and there's your 40. The other 4 go to the DMI, which makes 44, which is where my 9940X is at. What good is another 4 lanes that can't be used? My preference would be the ability to switch off the lanes of the bottom two slots and use them where I need them.
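    For what it's worth, here is that lane count written out (just a sketch of the tally above in Python; the per-device widths are the ones from this post, not an official ASUS table):

    Code:
    # Rough tally of the 44-lane budget described above (sketch only).
    allocation = {
        "GPU 1": 16,
        "GPU 2": 8,
        "two M.2 drives on the AIC": 8,   # 2 x PCIe 3.0 x4
        "two DIMM.2 drives": 8,           # 2 x PCIe 3.0 x4
    }
    cpu_lanes_used = sum(allocation.values())
    print("CPU lanes in use:", cpu_lanes_used)                 # 40
    print("counting the x4 toward DMI:", cpu_lanes_used + 4)   # 44 -- the whole 9940X budget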

  2. #12
    ROG Guru: Green Belt
    Join Date
    Feb 2019
    Reputation
    128
    Posts
    577

    It's up and operational. The only downside is the latency of the VROC RAID, but Q1 4K is still more than twice as fast as a stand-alone Sammy 970 Plus, and sequentials are past 500. I was going to use my single U.2 905P but there's no U.2 port and no way to make it tidy. The other caveat is slower boot times, but I can deal with that. If I wasn't using my GPU's vertical mount I'd have just used the AIC 900P drives that didn't limit any lanes. At least you can use the DIMM.2 for VROC; the R6E has one of its DIMM.2 drives on the PCH. So these two, a 960 Pro and a 970 EVO Plus, 2x 1TB 850 EVOs and 3x 1TB 860 EVOs. That should do me for local storage.

  3. #13
    ROG Member
    Join Date
    Oct 2019
    Reputation
    10
    Posts
    20

    Quote Originally Posted by Int8bldr
    I think the last 4 PCIe lanes go to the x4 slot (PCIEX4) OR to the M.2_2 over the PCH (you can configure this in BIOS but I have not tested that)

    One thing to notice is that you cannot move the 380GB 905P M.2s to the M.2_1 or M.2_2; they are too long to fit (the 905P M.2 is a 22110-length device). Even if they did fit they would be slow, because the M.2 slots are limited by the PCH bandwidth and you can only RAID them with IRST (not VROC-able)
    Yes, that's correct, but it does not solve the mystery about the whereabouts of the 4 CPU lanes. Correct is that the x4 lanes to PCIEX4 (as well as to M.2_1 and M.2_2) each come from the PCH. The PCH can provide up to 24 PCIe lanes. It is connected to the CPU via the DMI, which is essentially like a x4 PCIe connection, but this connection is counted separately from the CPU lanes, as indicated in the block diagram below (the diagram is for Skylake-X with max. 44-lane CPUs):
    Attachment: intel-x299-block-diagram.jpg (Intel X299 chipset block diagram)

    This also means that the PCIEX4 as well as M.2_1 and M.2_2 cannot be used for VROC at all, since they are not connected directly to the CPU via CPU lanes. As you write they can only be used for an IRST chipset RAID, which would bottleneck the bandwidth to the shared DMI x4 connection.
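    To put a rough number on that shared link (a back-of-the-envelope sketch using the usual DMI 3.0 = PCIe 3.0 x4 figures, not a measurement on this board):

    Code:
    # Approximate DMI 3.0 ceiling: electrically ~PCIe 3.0 x4,
    # 8 GT/s per lane with 128b/130b encoding.
    lanes = 4
    transfers_per_s = 8e9          # 8 GT/s per lane
    encoding = 128 / 130           # 128b/130b line coding
    bytes_per_s = lanes * transfers_per_s * encoding / 8
    print("DMI ceiling: ~%.2f GB/s" % (bytes_per_s / 1e9))   # ~3.94 GB/s, shared by everything on the PCH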

    In summary, it is still an unsolved mystery what ASUS did with the remaining 4 CPU lanes.

  4. #14
    ROG Enthusiast rosefire
    Join Date
    May 2014
    Reputation
    10
    Posts
    41

    Okay, so if I understand this right, ASUS sells the R6EE board as made for the i9-109XX series and "Ready for the latest Intel® Core™ X-series processors to maximize connectivity", but with a lot of constraints:

    - The M.2_1 and M.2_2 slots, and the PCIEX4 slot, run on chipset lanes, not CPU lanes, so they are inaccessible to VROC but can be used with IRST.

    - My 905P M.2 SSDs are 22110s, too long to fit in M.2_1 and M.2_2, so they can't be set up with IRST.

    - To use my 905Ps in a RAID configuration, I must, therefore, install them on the DIMM.2 and buy a certain type (?) of VROC key for $100++.

    - Installing them on the DIMM.2 means PCIe_4 runs at only x4, but the PCIEX4 slot and M.2_2 remain unshared

    - I have two x16 graphics cards, which can be installed in PCIEX16_1 and PCIEX16_2

    What VROC key is needed for this board?
    Last edited by rosefire; 04-21-2020 at 01:46 PM. Reason: Correct errors in the post

  5. #15
    ROG Guru: Yellow Belt
    Join Date
    Feb 2019
    Reputation
    16
    Posts
    144

    Quote Originally Posted by rosefire
    Okay, so if I understand this right, ASUS sells the R6EE board as made for the i9-109XX series and "Ready for the latest Intel® Core™ X-series processors to maximize connectivity", but with a lot of constraints:

    1 The M.2_1 and M.2_2 slots, and the PCIEX4 slot, run on chipset lanes, not CPU lanes, so they are inaccessible to VROC but can be used with IRST.

    2 My 905P M.2 SSDs are 22110s, too long to fit in M.2_1 and M.2_2, so they can't be set up with IRST.

    3 To use my 905Ps in a RAID configuration, I must, therefore, install them on the DIMM.2 and buy a certain type (?) of VROC key for $100++.

    4 Installing them on the DIMM.2 means PCIe_4 runs at only x4, but the PCIEX4 slot and M.2_2 remain unshared

    5 I have two x16 graphics cards, which can be installed in PCIEX16_1 and PCIEX16_2

    What VROC key is needed for this board?

    1. Correct.
    2. Yes, they do not fit in M.2_1 and M.2_2. You can still potentially use IRST though if you put them somewhere else...
    3-5. More complicated answer:
    a) you can install them in the DIMM.2_x slots and use a VROC key, BUT they end up on 2 different VMDs: VMD0 and VMD1, AND you cannot create a BOOTABLE VROC RAID 0 volume that spans 2 VMDs, so if that is what you want this solution won't work either.
    b) you have to use Intel Optane drives to create a VROC volume (900P or 905P confirmed working, but non-Intel drives do not work)
    c) you can get a VROC key relatively "cheap" for $20 at EVGA here: https://www.evga.com/products/produc...W002-00-000066
    d) if you want to only use VROC RAID 0 you do not need a VROC key, but I would recommend getting one anyway because it gives you flexibility for the future, and for $20 it's not a big deal....

    to create a bootable VROC RAID 0 on 2x CPU x4 lanes you can:

    i) use one of the DIMM.2_x slots (you need to try out which one; one is tied to VMD0 and one to VMD1) and put the other drive in the PCIEX16_3 slot you have left. For instance, since you are running 2 graphics cards in, I assume, PCIEX16_1 and PCIEX16_2, you could put an M.2 adapter card in PCIEX16_3. I am not sure if PCIEX16_3 is tied to VMD0 or VMD1, but it should be one or the other and could be matched with one of the DIMM.2_x slots - need to try...
    Also, if you use PCIEX4_1, make sure you do not (plan to) use the M.2_2, because they share the bandwidth (same x4 lanes) and you can only have one enabled at a time in BIOS, AND I am not even sure you can VROC PCIEX4_1 given its PCH connection.
    Here is an example of an x4 PCIe adapter (that I have not tried): https://www.amazon.com/EZDIY-FAB-Exp.../dp/B01GCXCR7W

    ii) get an ASUS Hyper M.2 x16 card with 4 M.2 slots and put it in the last slot, PCIEX16_3. BUT its usability is going to depend on the CPU you have, see the table on page 1-8 in the user manual... the 10980XE should allow for x16/x16/x8, but in my experience it only does x16/x16/x4 (maybe a BIOS issue), so in the end you will at most be able to use 2 slots of the ASUS Hyper x16 card (e.g. if you downshift PCIEX16_2 to x8 for graphics card 2). If you have a "lesser" CPU with 44 or fewer PCIe lanes you have even fewer choices, see the table on page 1-8.
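    To make the width point concrete: a passive riser like the Hyper M.2 just splits whatever the slot provides into x4 links, so the number of usable M.2 sockets follows directly from the electrical width (a sketch assuming plain x4-per-drive bifurcation, not anything board-specific):

    Code:
    # Usable M.2 sockets on a passive x16-to-4x-M.2 riser, assuming the BIOS
    # bifurcates the slot into x4 links.
    def usable_m2_sockets(slot_width: int) -> int:
        return slot_width // 4

    for width in (16, 8, 4):
        print("slot at x%d -> %d drive(s) visible" % (width, usable_m2_sockets(width)))
    # x16 -> 4, x8 -> 2, x4 -> 1: which is why x16/x16/x4 leaves only one
    # usable socket on a riser in the last slot.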

    I have the R6EE with a 10980XE and 256GB RAM etc.
    I had the same idea as I think you had: use 2 graphics cards in PCIEX16_1 and PCIEX16_2 and put 2 Intel 905P 380GB M.2 drives in the DIMM.2 slot as a VROC RAID for the system (and I still have space in PCIEX4_1 and PCIEX16_3 for the future). AND it does not work, because they end up on two different VMDs!!!

    So in the end I used my ASUS Hyper M.2 x16 V2 card that I already had, bought 2 more 905Ps, forgot about the 2nd graphics card (wait for the 3080 Ti which is coming "real soon now") and put the Hyper x16 card in PCIEX16_2. This works 100%, with blazing 10GB/s speeds in writes and reads, with massive IOPS and so on...
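    Those speeds line up with a naive RAID 0 scaling estimate (a sketch using Intel's approximate spec-sheet numbers for the 905P of ~2.6 GB/s sequential read and ~2.2 GB/s write per drive; real results depend on the workload):

    Code:
    # Naive RAID 0 sequential estimate for four 905P drives on the Hyper card.
    # Per-drive figures are approximate spec-sheet numbers, not measurements.
    per_drive_read_gbps = 2.6
    per_drive_write_gbps = 2.2
    drives = 4
    print("read  ~%.1f GB/s" % (drives * per_drive_read_gbps))   # ~10.4 GB/s
    print("write ~%.1f GB/s" % (drives * per_drive_write_gbps))  # ~8.8 GB/s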

    I now have space for more future expansions and am not maxed out on PCIe slots - I'm quite happy and think that I made the right decision using one 2080 Ti while waiting for the 3080 Ti...

    The R6EE is a pleasure to work with (compared to the R6E) and the 10980XE is easy to overclock on this board - almost effortless to achieve a sustainable 24/7 4.6GHz (and a bit more work to get to 5GHz, though that's not sustainable 24/7). I used Folding@home to really stress test the system for a sustainable OC over days. Some of the work units are super stressful, more so than the standard suite (AIDA64, Cinebench, Pi...), and they load both the graphics card and the CPU simultaneously with heavy use of AVX, which can drive temps and power usage, both peak and sustained, really, really high.

    Good luck!
    Last edited by Int8bldr; 04-22-2020 at 05:36 PM.
