R6E Bifurcation behavior

rodval
Level 7
Hello community!

I need some advice about PCIe bifurcation on the R6E motherboard. I just finished an upgrade on my rig, replacing my old 7900X with a 10980XE. My current specs are as follows:

- R6E, BIOS 3403 (not modded), 10980xe, 64GB 3600MHz (8x8)
- PCIEX16_1 - 2080 Ti
- PCIEX8_2 - ASUS HYPER M.2 x16 VROC (with 2 670p intel NVME)
- PCIEX4_1 - Audigy Rx
- PCIEX16_3 - 2080 Ti
- PCIEX8_4 - ASUS HYPER M.2 x16 VROC (with 2 670p intel NVME)
- Onboard M.2 - Samsung 970 Evo
- DIMM2.1 - Samsung 970 Evo
- DIMM2.2 - Not in use.

The problem is that when I enter the BIOS, no matter what I do, under Advanced -> System Agent Configuration, PCIEX8_2 and PCIEX8_4 are always fixed at x4 bandwidth (even with 2 NVMe SSDs in each HYPER adapter). The BIOS recognizes both SSDs on PCIEX8_2 but fails to recognize one of the two SSDs on the second adapter located at PCIEX8_4.

To verify whether there was a problem with the board, I replaced the ASUS HYPER on PCIEX8_4 with one of my 2080 Tis, and the BIOS recognized the video adapter at x8 bandwidth.

I have tested every BIOS from 2002 to 3403, modded and unmodded, and the same behavior happens.

When I take one SSD off the ASUS HYPER adapter located at PCIEX8_4 and attach it to DIMM2.2, the SSD is immediately recognized.

With my current "working" configuration (2x 670p on the ASUS HYPER x16 at PCIEX8_2, 1x 670p on the ASUS HYPER at PCIEX8_4, and 1x 670p on DIMM2.2) I am able to boot W10 from a RAID0 array spanned across different VMDs and achieve 7 to 9.2 GB/s sequential read. But I would like to use the SSD that is currently located at DIMM2.2 in the HYPER at PCIEX8_4.
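For what it's worth, here is a rough way to sanity-check that sequential-read number outside of a benchmark tool; just a minimal Python sketch, with a hypothetical path to a large test file on the RAID0 volume (a file much larger than RAM gives a more honest figure, since the OS cache can inflate the result):

```python
import time

# Hypothetical path: point this at any large file (tens of GB) on the RAID0 volume.
TEST_FILE = r"D:\bench\large_test_file.bin"
CHUNK = 8 * 1024 * 1024  # 8 MiB sequential reads

def sequential_read_gbs(path: str) -> float:
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:  # unbuffered to cut Python-side overhead
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    return total / (time.perf_counter() - start) / 1e9  # GB/s

if __name__ == "__main__":
    print(f"Sequential read: {sequential_read_gbs(TEST_FILE):.2f} GB/s")
```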

On SA Configuration, both PCIEX8_2 and PCIEX8_4 are marked as HYPER M.2 (VROC).

Do you guys have any thoughts about what could be the problem?

FYI, I also have an Intel VROC Premium key attached to the board (I tested without the key as well, but still had no success getting 2x 670p to work in the ASUS HYPER on PCIEX8_4).

Any help/feedback will be much appreciated.

Thanks,
Rodrigo.

Nate152
Moderator
Hi rodval

Intel states a maximum of 48 PCIe lanes on the 10980xe.

From the Asus spec sheet:

4 x PCIe 3.0 x16 (x16, x16/x16, x16/x0/x16/x8, or x16/x8/x8/x8 mode with 44-LANE CPU)

It looks like you could be using more lanes than are available, especially with 2x 2080 Tis and 2x ASUS Hyper VROC M.2 cards at x16. Double-check the motherboard manual for limitations and sharing of ports.
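To make that lane arithmetic concrete, here is a tiny back-of-the-envelope tally (my own sketch, not anything from ASUS or Intel); it only counts what the four slots would ask for if every card got its full electrical width:

```python
# 48 CPU PCIe lanes on the 10980XE; what the listed cards would want at full width.
CPU_LANES = 48

requested = {
    "PCIEX16_1 (2080 Ti)": 16,
    "PCIEX8_2 (Hyper M.2 x16)": 16,  # the Hyper card wants x16 to feed all four M.2 sockets
    "PCIEX16_3 (2080 Ti)": 16,
    "PCIEX8_4 (Hyper M.2 x16)": 16,
}

total = sum(requested.values())
print(f"Requested: {total} lanes, available: {CPU_LANES} lanes")
if total > CPU_LANES:
    print(f"Over budget by {total - CPU_LANES} lanes, so some slots must drop to x8/x4")
```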

Nate152 wrote:
Hi rodval

Intel states a maximum of 48 PCIe lanes on the 10980xe.

From the Asus spec sheet:

4 x PCIe 3.0 x16 (x16, x16/x16, x16/x0/x16/x8, or x16/x8/x8/x8 mode with 44-LANE CPU)

It looks like you could be using more lanes than are available, especially with 2x 2080 Tis and 2x ASUS Hyper VROC M.2 cards at x16. Double-check the motherboard manual for limitations and sharing of ports.


Hi Nate152! Thanks for the prompt response.

In fact, when all slots are populated the board runs at x16, x4, x8, x4 (that is what appears in the BIOS). As I pointed out before, I tested PCIEX8_4 with one of my 2080 Tis and the link width was x8. I also tested a single card in each slot (sorry for not mentioning that earlier): PCIEX16_1 was at x16, PCIEX8_2 was at x8, PCIEX16_3 was at x16, and PCIEX8_4 was at x8.

But when I populate all slots, the board behaves as x16, x4, x8, x4.

Both PCIEX8_2 and PCIEX8_4 have a maximum bandwidth of x8. PCIEX16_3 is downsized to x8 when PCIEX8_4 is populated (as both share bandwidth).
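In case it helps anyone checking the same thing, the negotiated link width per device can also be read from a Linux live USB instead of the BIOS screen; a small sketch, assuming pciutils (lspci) is installed and the script is run as root:

```python
import re
import subprocess

def negotiated_widths():
    # Parse "LnkSta: Speed ..., Width x..." lines from lspci -vv; device headers start
    # at column 0, capability lines are indented.
    out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True, check=True).stdout
    device = None
    for line in out.splitlines():
        if line and not line.startswith(("\t", " ")):
            device = line.strip()  # e.g. "65:00.0 VGA compatible controller: NVIDIA ..."
        m = re.search(r"LnkSta:\s*Speed ([^,]+), Width (x\d+)", line)
        if m and device:
            yield device, m.group(1), m.group(2)

if __name__ == "__main__":
    for dev, speed, width in negotiated_widths():
        print(f"{width:>4} @ {speed:<12} {dev}")
```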

Nate152 wrote:
Hi rodval

Intel states a maximum of 48 PCIe lanes on the 10980xe.

From the Asus spec sheet:

4 x PCIe 3.0 x16 (x16, x16/x16, x16/x0/x16/x8, or x16/x8/x8/x8 mode with 44-LANE CPU)

It looks like you could be using more lanes than are available, especially with 2x 2080 Tis and 2x ASUS Hyper VROC M.2 cards at x16. Double-check the motherboard manual for limitations and sharing of ports.


This.
You cannot run 2x GPUs and a Hyper x16 card at full speed. The best you can do is put two drives in the remaining x8 slot; what is left is x4 only, and that slot is an either/or with DIMM.2_2, so the x4 gets you one drive. The only way to effectively use the x16 card is with a single GPU; then you can run 4 drives in one AIC.

Nate152
Moderator
Here is more from the ASUS spec sheet. The fact that everything looks good with just a few devices installed pretty much points to too many devices or to the configuration; your motherboard manual should tell you a lot more.


* When M.2_2(DIMM.2) is populated, PCIEx8_4 runs at x4 mode

* The PCIE_X8_4 slot shares bandwidth with M.2_2(DIMM.2)

*1 When M.2_1(DIMM.2) comes from CPU, it will be shared with U.2

*2 When M.2_1(DIMM.2) comes from PCH, It will be shared with PCIe x4 slot.
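To make those sharing notes easier to reason about, here is a toy model of just the two rules quoted above (my sketch, not ASUS's actual bifurcation logic), showing the widths you would expect for each population:

```python
# Expected link widths from the two spec-sheet notes: PCIEX16_3 shares lanes with
# PCIEX8_4, and PCIEX8_4 drops to x4 when M.2_2(DIMM.2) is populated.
def expected_widths(pciex8_4_populated: bool, dimm2_2_populated: bool) -> dict:
    widths = {"PCIEX16_3": 16, "PCIEX8_4": 0, "M.2_2(DIMM.2)": 0}
    if pciex8_4_populated:
        widths["PCIEX16_3"] = 8
        widths["PCIEX8_4"] = 8
    if dimm2_2_populated:
        widths["M.2_2(DIMM.2)"] = 4
        if pciex8_4_populated:
            widths["PCIEX8_4"] = 4
    return widths

# rodval's failing case: Hyper card in PCIEX8_4, DIMM.2_2 empty.
# The notes predict x8 here, yet the BIOS reports x4, which is why it looks wrong.
print(expected_widths(pciex8_4_populated=True, dimm2_2_populated=False))
```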

Nate152 wrote:
Here is more from the ASUS spec sheet. The fact that everything looks good with just a few devices installed pretty much points to too many devices or to the configuration; your motherboard manual should tell you a lot more.


* When M.2_2(DIMM.2) is populated, PCIEx8_4 runs at x4 mode

* The PCIE_X8_4 slot shares bandwidth with M.2_2(DIMM.2)

*1 When M.2_1(DIMM.2) comes from CPU, it will be shared with U.2

*2 When M.2_1(DIMM.2) comes from PCH, It will be shared with PCIe x4 slot.


That's absolutely correct. But I'm sorry to insist on telling you that when I tried 2x 670p on the HYPER at slot PCIEX8_4, the M.2_1 (DIMM.2) was empty and the bandwidth for PCIEX8_4 was still x4. Nevertheless, what could be the cause of PCIEX8_2 also being at x4 bandwidth, even though I have 2x 670p on that slot too?

Nate152
Moderator
Even though the sound card is just x1, it could be affecting the lanes too.

One way to diagnose this is to disconnect everything except the 2080 Tis and your boot drive and see how that looks.

Then install one device at a time, checking each time that all is good.
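If you end up doing that one-device-at-a-time test from a Linux live environment rather than the BIOS screen, something like this sketch (again assuming lspci from pciutils is available) can snapshot the PCI device list after each step and diff it against the previous step:

```python
import difflib
import pathlib
import subprocess
import sys

SNAP_DIR = pathlib.Path("pci_snapshots")  # hypothetical folder for the per-step dumps

def snapshot(step_name: str) -> pathlib.Path:
    """Dump the current PCI device list (lspci -nn) to a per-step text file."""
    SNAP_DIR.mkdir(exist_ok=True)
    out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True).stdout
    path = SNAP_DIR / f"{step_name}.txt"
    path.write_text(out)
    return path

def diff_steps(old: pathlib.Path, new: pathlib.Path) -> None:
    """Print which PCI devices appeared or disappeared between two steps."""
    a, b = old.read_text().splitlines(), new.read_text().splitlines()
    for line in difflib.unified_diff(a, b, fromfile=old.name, tofile=new.name, lineterm=""):
        print(line)

if __name__ == "__main__":
    # usage: python pci_diff.py <this_step_name> [previous_step_name]
    current = snapshot(sys.argv[1])
    if len(sys.argv) > 2:
        diff_steps(SNAP_DIR / f"{sys.argv[2]}.txt", current)
```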

G75rog
Level 10
Another thing to consider is that the Hyper card has 4 PCIe lanes hardwired to each M.2 socket and needs 16 lanes to be fully functional. If you are only feeding it 8 lanes, then 2 sockets are dead. Move a single SSD from socket to socket to identify which ones are live.
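A quick illustration of that lane-to-socket mapping (just a sketch of the 4-lanes-per-socket rule, not anything read back from the card):

```python
# The Hyper M.2 card hard-wires 4 lanes to each of its 4 M.2 sockets, so the
# bifurcated width of the slot decides how many sockets can work at all.
def live_sockets(slot_width: int, lanes_per_socket: int = 4, sockets: int = 4) -> list:
    usable = min(sockets, slot_width // lanes_per_socket)
    return [f"socket {i + 1}: {'live' if i < usable else 'dead'}" for i in range(sockets)]

for width in (16, 8, 4):
    print(f"slot at x{width}:", ", ".join(live_sockets(width)))
```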

A simple solution is to pull the second 2080 and replace it with a fully populated Hyper M.2.