Zenith II Extreme Alpha experience, RAIDXpert2 and Hyper M.2 X16 Gen 4

mohsh86
Level 7
BIOS: 1402
CPU 3970X
RAM: 256 GB 3200 GSkill CAS 16 (F4-3600C16Q2-256GVK)
GPU: RTX 3060 Ti in PCIE_1
GPU2: Quadro P620 in PCIE_4

M.2_1, M.2_2, DIMM_1, DIMM_2 are Samsung 980 Pro 500 GB
M.2_3 is 980 Pro 250 GB

Hyper M.2 Gen 4 with 4 x 980 Pro 500 GB in PCIE_3
Hyper M.2 Gen 4 with 2 x 980 Pro 250 GB in PCIE_2

Initially I had a HighPoint 7505 (HW RAID, PCIe Gen 4) but returned it because of heat and noise (the thermal pad was not making contact).

Experience/problems/rants/notes:

1. This motherboard hasn't been tested to its full potential. I have every slot on this motherboard populated and it does not work as expected.
2. If the motherboard is populated with more than 2 GPUs, it won't POST (with or without the extra GPU Molex power connected), for example RTX 3060 Ti in PCIE_1 and Quadro P620 in PCIE_2, 3 or 4. It gets stuck at VGA (code D4) or at "load VGA BIOS" on the mini OLED.
3. I believe NVMe RAID has never been tested on this motherboard; I can't get any of the following working:

  • When trying to install Windows (with the RAID drivers loaded in the right order), Windows won't boot; it gets stuck at the loading screen indefinitely. With NVMe RAID switched off, Windows boots. I have opened a support ticket with AMD but am not expecting much.
  • With a Hyper M.2 Gen 4 card installed, switching the PCIe slot configuration to PCIe RAID (i.e. bifurcation on) makes all the NVMe drives show up. If NVMe RAID is then enabled on top of that, I either can't get into the BIOS or the BIOS screen loads and gets corrupted (i.e. rubbish showing on screen). So I can have either bifurcation on or NVMe RAID on, but not both, which is no good.
  • The Hyper M.2 Gen 4 card manual is wrong: it advises installing in M.2_1 and M.2_3 if you plan to use only 2 NVMe drives. That won't work in a PCIe x8 slot; you need to install in M.2_1 and M.2_2 (see the sketch below).
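
To make the lane math concrete, here is a throwaway sketch of why only the first two sockets are reachable from an x8 slot (the socket-to-lane mapping is assumed from what I observed, not taken from any official ASUS document):

```python
# Each M.2 socket on the Hyper card appears to sit on its own consecutive x4 group
# of the slot's lanes (assumed mapping, based on observation rather than official docs).
def usable_sockets(slot_lanes: int, lanes_per_socket: int = 4, sockets: int = 4):
    """Return the card sockets that still get a full x4 link in a narrower slot."""
    return [s + 1 for s in range(sockets) if (s + 1) * lanes_per_socket <= slot_lanes]

print(usable_sockets(16))  # [1, 2, 3, 4] -> all four sockets work in a full x16 slot
print(usable_sockets(8))   # [1, 2]       -> only M.2_1 and M.2_2 work in an x8 slot
```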


I specifically purchased an ASUS motherboard because I was told they have the best BIOS support, or so goes the myth... :mad:

[screenshots attached]

mohsh86
Level 7
Not to mention the poor QC this card has gone through: the fan doesn't spin because it's blocked by the fan power cable. [photos attached]

Legolas
Level 9
Sorry to hear that. The motherboard does support the Hyper M.2 x16 card according to the ASUS website - https://www.asus.com/us/support/FAQ/1037507

Could you please try reseating everything first, including the 24-pin/8-pin connectors and the GPU? I checked your pictures, and it looks like there is some signal interference, i.e. noise. Did you try reseating the video cable? Try swapping the video cables.

If it still has issues, please remove the secondary video card and see if the video noise goes away. Make sure your cables (SATA, 24-pin, 8-pin) are snug and there are no loose wires.

For the Hyper x16 card, please adjust the wiring from the fan to the fan header so the fan can move freely.
For RAID, please set the PCIe slot to PCIe RAID Mode to support the Hyper x16 card: go to BIOS -> Advanced -> Onboard Devices Configuration -> PCIEX16_1 Bandwidth and/or PCIEX16_2 Bandwidth and/or PCIEX16_3 Bandwidth and/or PCIEX16_4 Bandwidth -> PCIe RAID Mode.

CSM needs to be disabled (to allow NVMe to load on M.2), and SATA mode set to RAID.
Sincerely,
Legolas

Legolas wrote:
Sorry to hear that. The motherboard does support the Hyper M.2 x16 card according to the ASUS website - https://www.asus.com/us/support/FAQ/1037507

Could you please try reseating everything first, including the 24-pin/8-pin connectors and the GPU? I checked your pictures, and it looks like there is some signal interference, i.e. noise. Did you try reseating the video cable? Try swapping the video cables.

If it still has issues, please remove the secondary video card and see if the video noise goes away. Make sure your cables (SATA, 24-pin, 8-pin) are snug and there are no loose wires.

For the Hyper x16 card, please adjust the wiring from the fan to the fan header so the fan can move freely.
For RAID, please set the PCIe slot to PCIe RAID Mode to support the Hyper x16 card: go to BIOS -> Advanced -> Onboard Devices Configuration -> PCIEX16_1 Bandwidth and/or PCIEX16_2 Bandwidth and/or PCIEX16_3 Bandwidth and/or PCIEX16_4 Bandwidth -> PCIe RAID Mode.

CSM needs to be disabled (to allow NVMe to load on M.2), and SATA mode set to RAID.


I don't believe the issue is related to the GPUs, but rather to the motherboard / a bad BIOS program or chip. Here is why:

1. Trying to use a single GPU at a time, the same "rubbish" happens on the BIOS screen, which tells me it is not related to power.
2. Also, with NVMe RAID set to "Disabled" the problem does not happen; with bifurcation on, I can see all the NVMe drives on the Hyper card.
3. With NVMe RAID enabled and PCIe RAID Mode disabled, the problem does not happen.
4. It is only when both NVMe RAID and PCIe RAID (bifurcation) are enabled that the problem happens.
5. Reflashing the BIOS via USB / the FlashBack button (a fresh copy from the internet) does not fix the issue.

I believe the issue lies with the motherboard/BIOS; I don't believe this motherboard has been tested with its various PCIe slots populated in different configurations.

Also, I don't know why I need to move the Hyper card's fan cable myself. Isn't that something that should be done by ASUS before releasing such a card? I bet this card hasn't been tested at all.

Dimitrios1971
Level 12
So, my friend, do not activate PCIe RAID Mode right away, only after the installation. First enable NVMe RAID and install your Windows version. When you're done and everything is installed, reboot your PC, go into the BIOS, activate PCIe RAID Mode and restart the PC. Reboot your PC again, go into the BIOS and create your PCIe RAID arrays. That should work.

PS: By the way, my English is disastrous :o

Edit

Also, when you have done all of this you can switch on CSM as well.

[Attached images: OLD / NEW]

Anthalus
Level 7
The problem you are experiencing is very similar to the one I had a couple of months ago.
Both of our setups contain 11 NVMe drives, which is actually the cause of the issue.
After lots of experimenting I figured out that the system either refused to boot properly or gave me BIOS corruption as soon as 11 drives were installed.

At the time I was reading up on the AMD website and found that the maximum number of supported drives in NVMe RAID is limited to 10.
The weird thing is that I can't find the exact page anymore where this was stated.
I finally found something about it again in the readme.rtf file that ships with the drivers themselves:

Maximum Supported Controllers:
⦁ 7 NVMe + 4-SoC when x570/590 is set to RAID in the BIOS
⦁ 8-NVMe + 2-SoC + 1-PT when set to RAID in the BIOS
⦁ 10-NVMe when SoC and PT are Disabled in the BIOS
⦁ 10-NVMe + 1-PT when SoC is Disabled in the BIOS
⦁ 9-NVMe + 2-SoC when PT is Disabled in the BIOS

Known issues
⦁ Hibernate performance drop in RAID-5 on specific HDD.
⦁ Driver load issue with drvload command.
⦁ Array transformation with IO taking long time.
⦁ no support for 2 ODDs on the same port of 2 different controllers.
⦁ With 11 PCIe NVMe SSD's system boot to OS may fail.
⦁ RS5x64 OS taking long time to load the driver.

Try dropping 1 NVMe drive and you should be fine (worked in my case).
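
To make the arithmetic explicit, here is a quick tally of the drives listed in the opening post against that limit (the counts come straight from the spec list at the top; nothing else is assumed):

```python
# Tally of the OP's NVMe drives vs. the 10-drive maximum in AMD's RAID readme.
drives = {
    "onboard M.2_1 / M.2_2 / M.2_3": 3,
    "DIMM.2 slots": 2,
    "Hyper M.2 card in PCIE_3": 4,
    "Hyper M.2 card in PCIE_2": 2,
}
total = sum(drives.values())
print(total)        # 11
print(total <= 10)  # False -> one NVMe drive over the documented NVMe RAID limit
```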

I was actually thinking about getting a Highpoint 7540 to move all the NVMe drives except for the boot drive away from the motherboard.
The price of that expansion card (and some bad reviews) kinda made me hesitate, so reading about your 7505 adventure actually helped me decide NOT to move in that direction.

The Zenith II Extreme board itself is one of the best boards I have ever owned.
Only thing I don't like about it is losing 4 lanes to some lame USB controller :mad: , but I can live with that.

Anthalus wrote:
The problem you are experiencing is very similar to the one I had a couple of months ago.
Both of our setups contain 11 NVMe drives, which is actually the cause of the issue.
After lots of experimenting I figured out that the system either refused to boot properly or gave me BIOS corruption as soon as 11 drives were installed.

At the time I was reading up on the AMD website and found that the maximum number of supported drives in NVMe RAID is limited to 10.
The weird thing is that I can't find the exact page anymore where this was stated.
I finally found something about it again in the readme.rtf file that ships with the drivers themselves:

Maximum Supported Controllers:
⦁ 7 NVMe + 4-SoC when x570/590 is set to RAID in the BIOS
⦁ 8-NVMe + 2-SoC + 1-PT when set to RAID in the BIOS
⦁ 10-NVMe when SoC and PT are Disabled in the BIOS
⦁ 10-NVMe + 1-PT when SoC is Disabled in the BIOS
⦁ 9-NVMe + 2-SoC when PT is Disabled in the BIOS

Known issues
⦁ Hibernate performance drop in RAID-5 on specific HDD.
⦁ Driver load issue with drvload command.
⦁ Array transformation with IO taking long time.
⦁ no support for 2 ODDs on the same port of 2 different controllers.
⦁ With 11 PCIe NVMe SSD's system boot to OS may fail.
⦁ RS5x64 OS taking long time to load the driver.

Try dropping 1 NVMe drive and you should be fine (worked in my case).

I was actually thinking about getting a Highpoint 7540 to move all the NVMe drives except for the boot drive away from the motherboard.
The price of that expansion card (and some bad reviews) kinda made me hesitate, so reading about your 7505 adventure actually helped me decide NOT to move in that direction.

The Zenith II Extreme board itself is one of the best boards I have ever owned.
Only thing I don't like about it is losing 4 lanes to some lame USB controller :mad: , but I can live with that.


I probably forgot to mention that I have all SATA ports populated, with the SoC ones in RAID 5.

Looks like I'll install Proxmox, go with ZFS, and move on with my life. What a waste of capability...

Anthalus wrote:


Maximum Supported Controllers:
⦁ 7 NVMe + 4-SoC when x570/590 is set to RAID in the BIOS
⦁ 8-NVMe + 2-SoC + 1-PT when set to RAID in the BIOS
⦁ 10-NVMe when SoC and PT are Disabled in the BIOS
⦁ 10-NVMe + 1-PT when SoC is Disabled in the BIOS
⦁ 9-NVMe + 2-SoC when PT is Disabled in the BIOS



Would you mind explaining what SoC and PT are, mate?

Hi,

I was trying to get my Hyper M.2 X16 Gen 4 into a fully working setup but did not succeed 😞

I have wasted about 6 hours in a row trying to figure out how to make it work.

I was able to create the RAID with RAIDXpert2 in the BIOS, with 2 x AORUS 1TB Gen4 NVMe only and no other drives installed. Everything goes fine until I boot with the Windows 10 USB install tool: I'm not able to make it detect the array with the RAID drivers we have for the Z2E, so I cannot install Windows on the RAID 0 I just created.

So sad. Has anyone been able to make it work?

I would be so happy.

Mike

olivieraaa
Level 8
This might be a bit unrelated to the exact issue, but I wonder if you guys have experienced the same NVMe bandwidth issue I am having:

I am hitting a hard limit in Windows on file copy speeds of max 2.1 GB/s. It doesn't matter how I set up my NVMe drives, or even if I software-RAID them using Windows (or the new Storage Spaces): the transfer speed is limited to that 2.1 GB/s. I searched Google and saw just a couple of people mention the same limit without understanding why.

My config:
- 3990x
- Extreme Alpha
- Windows 64-bit Pro with the latest updates
- 128GB RAM @3200 and tried @3600
- 2x Samsung 980 Pro 2TB setup in M.2_1 and M.2_2
- Tried also with 2x Samsung 960 Pro 1TB

Also tried both DIMM.2 slots, with the same hard limit (actually closer to 2.0 GB/s in this config).

I have not tried the NVMe RAID option in the BIOS as I presume it is only for the NVMe RAID cards.

To clarify: benchmarking with the synthetic benchmark CrystalDiskMark, I can get 10 GB/s both read and write on software RAID (Q1T1). But if I use any other benchmark tool or just Windows file copying, I am stuck at a maximum of 2.1 GB/s.

HD Tune benchmarking peaks at 2.1 GB/s.

I understand that the NVMe drives have peak speeds and have RAM on them, and that once that fills up the average read and write speed is limited. But I never reach any higher speeds even before the drive cache fills up. Also, in software RAID with 2, 3 or 4 disks it should scale, but nada, not one byte more per second. Something is limiting the speed.
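
For what it's worth, here is the kind of crude check I would run outside of any benchmark GUI just to see the raw sequential number (a sketch only: the path is a placeholder, Python adds its own overhead, and with this much RAM the read pass will mostly be served from the OS cache):

```python
import os, time

TEST_FILE = r"D:\throughput_test.bin"   # placeholder: point this at the volume under test
CHUNK = 64 * 1024 * 1024                # 64 MiB per write/read call
TOTAL = 8 * 1024**3                     # 8 GiB total

def write_test() -> float:
    data = os.urandom(CHUNK)            # incompressible test data, generated before the timer starts
    start = time.perf_counter()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())            # make sure the data actually reached the drive
    return TOTAL / (time.perf_counter() - start) / 1e9

def read_test() -> float:
    start = time.perf_counter()
    with open(TEST_FILE, "rb") as f:
        while f.read(CHUNK):
            pass
    return TOTAL / (time.perf_counter() - start) / 1e9

print(f"write: {write_test():.2f} GB/s")
print(f"read:  {read_test():.2f} GB/s")  # likely inflated by the OS page cache
```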

When I try to bench both DIMM.2 slots at the same time, they seem to share the bandwidth almost 50/50 (which is expected since they are PCIe lanes).
When I try to bench both M.2_1 and M.2_2 at the same time, I hit a peak of 2.3 GB/s combined, which is barely higher than the limit I am hitting on a single drive; slightly better, but very far from the expected performance.

My BIOS is configured properly: I have disabled the SATA ports and even freed up the lanes for M.2_3, which I don't use.

Can anyone explain if there is some kind of limit (around the 2 GB/s mark) on AMD processors that I am not aware of? This is my first AMD.

My other (much older) Rampage VI Extreme X299 box, using the same drives, hits 3.3 GB/s with the 980 Pro 2TB and 2.2 GB/s with the 960 Pro 1TB on each drive, using the DIMM.2 slot configured with CPU lanes. And they scale almost linearly when benchmarks are executed at the same time on multiple disks... I have never hit a limit on that X299 board.
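
Back-of-the-envelope link math (standard PCIe per-lane rates, nothing board-specific assumed) also suggests the lanes themselves are nowhere near the bottleneck:

```python
# Theoretical PCIe x4 link ceilings: raw rate per lane x 128b/130b encoding efficiency.
def x4_ceiling_gb_s(gt_per_s: float) -> float:
    return gt_per_s * (128 / 130) / 8 * 4   # GT/s -> GB/s per lane, times 4 lanes

gen4 = x4_ceiling_gb_s(16.0)   # ~7.88 GB/s for a Gen4 x4 drive (the 980 Pro links here)
gen3 = x4_ceiling_gb_s(8.0)    # ~3.94 GB/s for a Gen3 x4 drive (the X299 board)

print(f"PCIe 4.0 x4 ceiling: {gen4:.2f} GB/s")
print(f"PCIe 3.0 x4 ceiling: {gen3:.2f} GB/s")
# The ~3.3 GB/s seen on X299 sits sensibly below the Gen3 ceiling, while the
# 2.1 GB/s copy limit here is far below even a single Gen4 x4 link, which points
# to a software/driver limit rather than a lane limit.
```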

Am I missing something?

Thx.