
Can't figure out how to properly run SLI and 2 M.2 drives (Rampage V Edition 10)

ariefp
Level 8
I tried every possible scenario: one M.2 drive installed in the motherboard's M.2 slot, and the other mounted in a PCIe adapter card.

If I install the second drive in the PCIEX8_2 slot, one of my graphics cards automatically drops to x8. As far as I can tell, there's no BIOS setting to override it.

If I install the second drive in the PCIEX4_1 slot, its read speed bottlenecks at 1500 MB/s (versus 2500 MB/s when installed in PCIEX8_2), due to the PCIe 2.0 limitation, I guess.

If I install the second drive in the PCIEX8_4 slot, the motherboard's M.2 slot is disabled.

Is it just not possible to utilize all 40 lanes of my processor? (SLI -> 2x16 lanes = 32 lanes; M.2 drives at x4 each = 8 lanes.) I have no other PCIe devices.
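On paper the lane budget above does add up; a quick sketch of the arithmetic (my own illustration, not from the thread):

```python
# Lane-budget check for a 40-lane LGA2011-3 CPU, using the figures from the post.
CPU_LANES = 40

devices = {
    "GPU #1 (x16)": 16,
    "GPU #2 (x16)": 16,
    "960 Pro (x4)": 4,
    "950 Pro (x4)": 4,
}

total = sum(devices.values())
print(f"Lanes requested: {total} / {CPU_LANES}")  # 40 / 40 -- fits arithmetically,
# but the board's fixed slot-to-lane routing prevents this combination in practice.
```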

Rampage V edition 10
2x GTX 1080 Ti
Samsung 960 Pro 1TB
Samsung 950 Pro 512GB

Korth
Level 14
GPU cards in PCIEX16_1 and PCIEX16_3 will run x16/x16. But populating PCIEX8_2 or PCIEX8_4 runs these GPUs at x16/x8.
And populating PCIEX8_4 disables motherboard M.2 lanes (even with 40-lane CPU).
This seems to be a "hardwired" design configuration; firmware (BIOS) apparently cannot reconfigure it.
People have requested/demanded this firmware function before, on two X99 R5E motherboard versions, and it never happened.
(I was always disappointed that X99 x16/x16/x8 was not an option, let alone Intel's promised X99 x8/x8/x8/x8/x8, haha.)

The 512GB 950 Pro has rated speeds up to 2500MB/s Read, up to 1500MB/s Write.
4xPCIe3 can support up to 3940MB/s. 4xPCIe2 can support up to 2000MB/s. 4 lanes used in CPU PCIe controller either way.
(You might be observing ~1500MB/s because your PCH-to-CPU DMI2 bandwidth is saturated by M.2, SATA, USB, network, audio, and other devices.)
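As a sanity check on those bandwidth figures (my own sketch, not from the posts above), the per-lane math follows from the transfer rate and the line encoding of each PCIe generation:

```python
# Rough per-lane PCIe throughput: GT/s x encoding efficiency, converted to MB/s.
# 1 GT/s carries 1e9 bits/s per lane before encoding overhead; /8 gives bytes.

def lane_mb_s(gt_s, payload_bits, line_bits):
    """Per-lane throughput in MB/s for a given transfer rate and encoding."""
    return gt_s * 125.0 * payload_bits / line_bits  # 1e9 / 8 / 1e6 = 125

pcie2 = lane_mb_s(5.0, 8, 10)       # 8b/10b encoding   -> 500 MB/s per lane
pcie3 = lane_mb_s(8.0, 128, 130)    # 128b/130b         -> ~985 MB/s per lane

print(f"4x PCIe2: {4 * pcie2:.0f} MB/s")   # 2000 MB/s
print(f"4x PCIe3: {4 * pcie3:.0f} MB/s")   # 3938 MB/s
```

These match the ~2000 MB/s and ~3940 MB/s figures quoted above.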

As you've observed, you can't have it all simultaneously and your best options for installing both GPUs and both M.2 SSDs on this motherboard are:
- GPU cards in PCIEX16_1 and PCIEX16_3 (SLI x16/x16) and 950 Pro adapter in PCIEX4_1 (running up to 2000MB/s), or
- GPU cards in PCIEX16_1 and PCIEX16_3 (SLI x16/x8) and 950 Pro adapter in PCIEX8_2 (running up to 2500MB/s).

Yes, it's because of the motherboard design. But none of the other X99 motherboards is any better, they each have their own performance tradeoffs (except perhaps the X99-E WS, with its PLX/PEX chip).

So which performance metric is more important to you?

Does x16/x8 actually reduce your raw fps?
Probably not. At least not very much and only rarely, especially since first GPU gets most of the unbalanced workload.
(For perspective, run your games and your benchmark engines. Actual fps matters, synthetic scores don't. Some games balance multi-GPU loads better than others and might actually see a few more fps at x16/x16 under peak loads, but most games handle multi-GPU poorly and would see no difference at x16/x8.)

Does 2000MB/s actually slow down your loading times?
Probably a little. But not much and not often, since most system access is on the faster and bigger M.2 SSD.
(For perspective, a 100MB sequential Read at this speed takes 0.05 seconds instead of 0.04 seconds. How often do you do huge sustained multi-GB Reads off your secondary SSD which would be slowed down by more than an imperceptible fraction of a second? And can your system actually use >2000MB/s of non-system data without processing slowdown anyhow?)
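The loading-time arithmetic in that aside, spelled out (my own quick calculation, using the speeds discussed above):

```python
# Time to sequentially read 100 MB at each slot's effective speed.
size_mb = 100
for speed_mb_s in (2500, 2000):
    seconds = size_mb / speed_mb_s
    print(f"{size_mb} MB at {speed_mb_s} MB/s: {seconds:.3f} s")
# 0.040 s vs 0.050 s -- a 10 ms difference per 100 MB read.
```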
"All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others." - Douglas Adams

[/Korth]

Very straightforward answer. Thank you for your reply. I guess there's not much to do except wait for an unlikely X99 firmware update addressing this issue.

Actually, I ran a real-app FPS test at x16/x8, and there was not much of a performance drop compared to x16/x16.

I work with 4K and even 5K rendering/transcoding quite frequently. I need every byte of that M.2 bandwidth, so I guess I'll go the x16/x8 route.

ariefp wrote:
Very straightforward answer. Thank you for your reply. I guess there's not much to do except wait for an unlikely X99 firmware update addressing this issue.

I had to balance fast storage throughput against fast GPU crunching throughput on my X99: same problem, same tradeoff of picking the lesser evil.

I don't seriously expect any more X99 firmware releases, except, if needed, for expanded compatibility with newer LGA2011-3 CPUs, denser DDR4 DIMMs, or a new Windows version, or a patch rushed out to seal some critical security exploit. ASUS will continue to "support" X99 motherboard models only until the 3-year or 5-year warranty expires (in Q4/2017~Q2/2020), but they've already moved on to X299 LGA2066 and beyond.

You never know, Intel might release some "Special Edition" LGA2011-3 refresh on 14nm or 10nm just to push good old X99 to its limits. Quad-channel DDR4 kits might come with 64GB or 128GB DIMMs. Microsoft could start beta on a "Windows 11" which breaks everybody's computers again. But these are all unlikely and, even if they should transpire, it would only be when other (and more compelling) post-X99 platforms are available.
"All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others." - Douglas Adams

[/Korth]

Korth wrote:
I had to balance fast storage throughput against fast GPU crunching throughput on my X99: same problem, same tradeoff of picking the lesser evil.


This will be my primary concern for my next build. I'll stick with my X99 for one or two years, so...

afshin
Level 7
Sorry, I have this problem too. Can we use slot one for one GPU and slot four for the second GPU?

I assume by slot 4 you mean PCIEX8_4. Populating this slot will disable M.2 and U.2, and your GPU will only run at x8.

Chino
Level 15
No firmware update can change how the motherboard's PCIe lane distribution was designed. However, there is a way to have both GPUs running at x16/x16 without limiting your storage performance, but it's an expensive one: ditch both M.2 drives and just grab the 2TB model. 😛

That's not really a solution though.

The idea here is to utilize as many PCIe lanes as possible (preferably all 40). When you use only one 2TB 960 PRO, it runs at x4 with a theoretical read speed of up to 3500 MB/s. But with two 1TB 960 PROs, each on a PCIe 3.0 x4 link, you get a theoretical combined read speed of up to 7000 MB/s. People who edit RED camera footage would appreciate that 7 GB/s of bandwidth.
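The aggregate figure is simply the per-drive rated speed times the drive count; a one-line check (my illustration, using the rated spec quoted above; real-world totals depend on the workload actually striping across both drives):

```python
# Theoretical combined sequential read of two 960 Pros, each on its own
# PCIe 3.0 x4 link (3500 MB/s is the rated spec cited in the post).
drive_read_mb_s = 3500
drives = 2
combined = drives * drive_read_mb_s
print(f"Combined: {combined} MB/s")  # 7000 MB/s
```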

ariefp wrote:
That's not really a solution though.

The idea here is to utilize as many PCIe lanes as possible (preferably all 40). When you use only one 2TB 960 PRO, it runs at x4 with a theoretical read speed of up to 3500 MB/s. But with two 1TB 960 PROs, each on a PCIe 3.0 x4 link, you get a theoretical combined read speed of up to 7000 MB/s. People who edit RED camera footage would appreciate that 7 GB/s of bandwidth.

https://www.asus.com/Motherboard-Accessory/HYPER-M-2-X16-CARD/

“Two things are infinite: the universe and human stupidity, I'm not sure about the former” ~ Albert Einstein