Rampage VI Extreme Encore - DIMM.2 PCIE Lane Allocation

BlueScreen
Level 7
I have the Rampage VI Extreme Encore with 2080 Tis in PCIE slots 1 & 2, both running at x16. I have NVMe SSDs in slots M2_1 and M2_2. I also want to add NVMe SSDs in slots DIMM.2_1 and DIMM.2_2. Page 1-8 of the motherboard user guide seems to indicate that there will not be any PCIE lane conflict when using a 48-lane CPU, but I would like confirmation from someone who has a similar configuration before I shell out the money for the NVMe drives.

Thanks.

BenJW
Level 7
Hi, I am wondering about the same thing. Also, the manual indicates on page 1-8 that with a 48-lane CPU I can use PCIEX16_1 and PCIEX16_2 at 16 lanes each and PCIEX16_3 at 8 lanes, but only DIMM.2_1 and not DIMM.2_2 (each would get 4 lanes). That sums to only 44 usable lanes on a 48-lane CPU (!). Similarly, with a 44-lane CPU the maximum number of usable CPU lanes seems to be only 40...
What happened to the remaining 4 CPU lanes (which are the main selling point of Cascade Lake-X)?!
I am aware that there are also M2_1 and M2_2, but those are connected to the PCH (as is PCIEX4_1, which shares bandwidth with M2_2), so the DMI becomes a bottleneck if those two PCH M.2 slots are occupied by fast SSDs.
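For anyone who wants to double-check my arithmetic, here is a quick sketch (Python; the slot widths are as I read them from page 1-8 of the Encore manual, so treat it as an illustration, not an official allocation table):

```python
# Lane allocation for the Encore with a 48-lane CPU, as I read page 1-8.
# Slot names are from the manual; the widths are my reading, not ASUS's word.
allocation_48_lane = {
    "PCIEX16_1": 16,
    "PCIEX16_2": 16,
    "PCIEX16_3": 8,
    "DIMM.2_1": 4,   # DIMM.2_2 appears unavailable in this configuration
}

used = sum(allocation_48_lane.values())
print(f"CPU lanes in use: {used} of 48")  # -> 44 of 48
```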

BenJW wrote:
Hi, I am wondering about the same thing. Also, the manual indicates on page 1-8 that with a 48-lane CPU I can use PCIEX16_1 and PCIEX16_2 at 16 lanes each and PCIEX16_3 at 8 lanes, but only DIMM.2_1 and not DIMM.2_2 (each would get 4 lanes). That sums to only 44 usable lanes on a 48-lane CPU (!). Similarly, with a 44-lane CPU the maximum number of usable CPU lanes seems to be only 40...
What happened to the remaining 4 CPU lanes (which are the main selling point of Cascade Lake-X)?!
I am aware that there are also M2_1 and M2_2, but those are connected to the PCH (as is PCIEX4_1, which shares bandwidth with M2_2), so the DMI becomes a bottleneck if those two PCH M.2 slots are occupied by fast SSDs.


You left out the 4 dedicated to the DMI.
If you have a 99XX, the last slot can only be x4. Whichever you have, x16/x16/x8 or x16/x16/x4, if the bottom slot is used then DIMM.2_2 is not; it's one or the other. The only thing the extra lanes are good for is running 3-way SLI or CrossFire, where you can have two cards at x16 and the last at x8. IMO, unless you are plugging in Quadro cards and using it as a CAD workstation, more than two cards are a waste. Some will argue that anything SLI is a waste, but those arguments always come from the cheap seats. Yes, some titles are not coded for SLI, but that's on the devs; there are plenty that are. I have at least 20 installed that support SLI, and scaling with RTX and NVLink is almost 100%.

BigJohnny wrote:
You left out the 4 dedicated to the DMI.
If you have a 99XX, the last slot can only be x4. Whichever you have, x16/x16/x8 or x16/x16/x4, if the bottom slot is used then DIMM.2_2 is not; it's one or the other. The only thing the extra lanes are good for is running 3-way SLI or CrossFire, where you can have two cards at x16 and the last at x8. IMO, unless you are plugging in Quadro cards and using it as a CAD workstation, more than two cards are a waste. Some will argue that anything SLI is a waste, but those arguments always come from the cheap seats. Yes, some titles are not coded for SLI, but that's on the devs; there are plenty that are. I have at least 20 installed that support SLI, and scaling with RTX and NVLink is almost 100%.


Hi BigJohnny, thanks for your reply. I have a 7980XE with an advertised 44 PCIe CPU lanes. My goal was indeed to run two GPUs in 2-way SLI in PCIEX16_1 and PCIEX16_2, consuming 16 lanes each, and use the remaining 12 lanes for 3 NVMe SSDs (4 lanes each). According to several discussions (e.g. https://linustechtips.com/main/topic/859601-skylake-x-x299-pcie-lanes/), the 4 lanes for the DMI are counted separately from the 44 PCIe lanes.

ASUS's own Prime X299 Deluxe-II can apparently use all 44 CPU lanes, with the PCIe slots in an x16/x16/x8 configuration and the additional M.2_3 slot receiving the remaining 4 CPU lanes (according to the manual https://dlcdnets.asus.com/pub/ASUS/mb/LGA2066/PRIME_X299-DELUXE_II/E15016_PRIME_X299-DELUXE_II_UM_V2...).
Likewise, the manual for ASRock's X299 Creator mainboard (https://download.asrock.com/Manual/X299%20Creator.pdf) explicitly states that with a "CPU with 48 lanes, PCIE1/PCIE2/PCIE3/PCIE5 will run at x16/x8/x16/x8" and with a "CPU with 44 lanes, PCIE1/PCIE2/PCIE3/PCIE5 will run at x16/x4/x16/x8", summing to the full 48 or 44 advertised CPU lanes respectively, and that using one or two of the M.2 slots takes lanes away from PCIE2.
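Checking those numbers against my own plan (again just a sketch in Python; the slot widths are quoted from the respective manuals as cited above):

```python
# My intended 7980XE (44-lane) build: 2-way SLI plus three CPU-attached NVMe SSDs.
my_plan = [16, 16, 4, 4, 4]
assert sum(my_plan) == 44  # uses exactly the advertised CPU lane count

# ASRock X299 Creator allocations, per its manual:
asrock_48 = {"PCIE1": 16, "PCIE2": 8, "PCIE3": 16, "PCIE5": 8}
asrock_44 = {"PCIE1": 16, "PCIE2": 4, "PCIE3": 16, "PCIE5": 8}
print(sum(asrock_48.values()))  # 48 -> every CPU lane usable
print(sum(asrock_44.values()))  # 44 -> every CPU lane usable
```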

So I do wonder what the premium ROG Rampage VI Extreme Encore is doing with the precious remaining 4 CPU lanes. If it really is wasting those extra 4 lanes, it is actually worse than the cheaper Prime Deluxe-II. That would make the evolution of the Rampage VI kind of pointless; after all, the Encore is already the third incarnation of the R6E and the one that launched with support for the Cascade Lake-X CPUs (offering 48 instead of at most 44 CPU lanes).

K01D57331
Level 9
BlueScreen wrote:
I have the Rampage VI Extreme Encore with 2080 Tis in PCIE slots 1 & 2, both running at x16. I have NVMe SSDs in slots M2_1 and M2_2. I also want to add NVMe SSDs in slots DIMM.2_1 and DIMM.2_2. Page 1-8 of the motherboard user guide seems to indicate that there will not be any PCIE lane conflict when using a 48-lane CPU, but I would like confirmation from someone who has a similar configuration before I shell out the money for the NVMe drives.

Thanks.


My setup is as follows...
Rampage VI Extreme Encore
10940X
2x 2080 Ti in slots 1 & 2
Samsung 870 Evo Pro in both M2_1 and M2_2
Intel 660p in both DIMM.2_1 and DIMM.2_2

To get both DIMM.2 drives recognized, you need to change a setting in the BIOS to PCIeX4. I am not sure of its exact wording off the top of my head.

No issues other than that I cannot find a 10980XE 🙂 Everything is running at its max speed...
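If you want to confirm the negotiated link widths after flipping that BIOS setting, here is a minimal Linux-only sketch (Python, reading the standard PCI sysfs attributes; on Windows a tool like HWiNFO can show the same information):

```python
# Print the negotiated PCIe link width/speed for each NVMe controller.
# Linux-only: relies on the standard sysfs attributes under /sys/class/nvme.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme[0-9]*")):
    pci_dev = ctrl / "device"  # symlink to the underlying PCI device
    width = (pci_dev / "current_link_width").read_text().strip()
    speed = (pci_dev / "current_link_speed").read_text().strip()
    print(f"{ctrl.name}: x{width} @ {speed}")  # expect x4 per drive
```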

BigJohnny
Level 13
Yes, this can be done. The only thing you can't do is use PCIEX4 and DIMM.2_2 at the same time; it's one or the other, and that shows up in the drives section of the BIOS where you select one or the other. The M.2 drives under the cover are connected via the PCH, so all of that is shared over the DMI bus with everything else (USB, SATA, etc.).
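To put rough numbers on that DMI sharing (a back-of-the-envelope sketch with nominal figures, not measurements):

```python
# Why two fast PCH-attached M.2 drives can saturate the DMI link.
# Nominal PCIe 3.0 throughput per lane after 128b/130b encoding: ~0.985 GB/s.
PCIE3_LANE_GBPS = 0.985
dmi_ceiling = 4 * PCIE3_LANE_GBPS  # DMI 3.0 is electrically a PCIe 3.0 x4 link

ssd_seq_read = 3.5  # GB/s, ballpark for a fast PCIe 3.0 NVMe SSD

print(f"DMI ceiling: ~{dmi_ceiling:.1f} GB/s, shared with SATA, USB, LAN, ...")
print(f"Two drives reading at once would want ~{2 * ssd_seq_read:.1f} GB/s")
```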