  1. #1
    ROG Member
    Join Date
    Jan 2021
    Reputation
    10
    Posts
    7

    Zenith II Extreme Alpha experience, RAIDXpert2 and Hyper M.2 X16 Gen 4

    BIOS: 1402
    CPU: 3970X
    RAM: 256 GB G.Skill CAS 16 @ 3200 (F4-3600C16Q2-256GVK)
    GPU: RTX 3060 Ti in PCIE_1
    GPU2: Quadro P620 in PCIE_4

    M.2_1, M.2_2, DIMM_1, DIMM_2 are Samsung 980 Pro 500 GB
    M.2_3 is 980 Pro 250 GB

    Hyper M.2 Gen 4 with 4 x 980 Pro 500 GB in PCIE_3
    Hyper M.2 Gen 4 with 2 x 980 Pro 250 GB in PCIE_2

    Initially had a HighPoint 7505 (hardware RAID, PCIe Gen 4); returned it because of heat and noise (the thermal pad wasn't making contact).

    Experience/problems/rants/notes:

    1. This motherboard hasn't been tested to its full potential. I have every slot on this board populated and it does not work as expected.
    2. If the board is populated with more than two GPUs, it won't POST (with or without the extra GPU Molex power connected), e.g. RTX 3060 Ti in PCIE_1 and Quadro P620 in PCIE_2, 3, or 4. It gets stuck at VGA (D4) or "load VGA BIOS" on the mini OLED.
    3. I believe NVMe RAID has never been tested on this motherboard; I can't get any of the following working:
    • When trying to install Windows (with the RAID drivers loaded in the right order), Windows won't boot; it gets stuck indefinitely at the loading screen. Switching NVMe RAID off, Windows boots. I have opened a support ticket with AMD but am not expecting much.
    • If a Hyper M.2 Gen 4 card is installed and the PCIe configuration is switched to PCIe RAID (i.e. bifurcation on), all NVMe drives are detected. If NVMe RAID is then turned on, I either can't get into the BIOS or the BIOS screen loads corrupted (i.e. garbage on the screen). So it's either bifurcation on or NVMe RAID on; I can't have both, which is no good.
    • The Hyper M.2 Gen 4 card manual is wrong: it advises installing in M.2_1 and M.2_3 if you plan to use only two NVMe drives. That won't work in a PCIe x8 slot; you need to install in M.2_1 and M.2_2 (see the lane sketch below).
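    To illustrate the lane math behind that last point, here is a minimal sketch. It assumes (my assumption, not anything from the card manual) that the Hyper card wires its sockets to consecutive four-lane groups starting at M.2_1, so a slot running at x8 can only ever reach the first two sockets:

```python
# Hedged sketch of the assumed bifurcation lane mapping: each card socket takes
# the next four lanes, so a slot running at x8 only exposes the first two sockets.
LANES_PER_SOCKET = 4
CARD_SOCKETS = ["M.2_1", "M.2_2", "M.2_3", "M.2_4"]  # socket order on the Hyper card

def visible_sockets(slot_lanes: int) -> list[str]:
    """Return the card sockets reachable when the slot runs at slot_lanes lanes."""
    return CARD_SOCKETS[: slot_lanes // LANES_PER_SOCKET]

print(visible_sockets(8))   # ['M.2_1', 'M.2_2'] -> the only combination that can work in an x8 slot
print(visible_sockets(16))  # all four sockets   -> only possible in a full x16 slot
```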


    I specifically purchased an ASUS motherboard because I had been told they have the best BIOS support, or so the myth goes...

    [Attachment: 20210314_003530.jpg]
    [Attachment: 20210313_233901.jpg]
    Last edited by mohsh86; 03-14-2021 at 02:27 AM.

  2. #2
    ROG Member
    Join Date
    Jan 2021
    Reputation
    10
    Posts
    7

    ASUS Hyper M.2 x16 Gen 4 QC

    Not to mention the trash QC this card went through: the fan doesn't spin because it's blocked by the fan power cable.
    [Attachment: 20210313_232047.jpg]
    [Attachment: 20210313_232039.jpg]

  3. #3
    ROG Guru: Orange Belt
    Legolas PC Specs
    Motherboard: Maximus X Hero Wifi AC
    Processor: Intel Core i7 8600k OC 5GHz
    Memory (part number): Samsung DDR4 16GB
    Graphics Card #1: ASUS NVIDIA GTX 1080
    Monitor: ASUS MS246
    Storage #1: Samsung MZVPV256
    Storage #2: Samsung MZVPV256
    CPU Cooler: Corsair H110I v2
    Power Supply: PC 1200W Turbo-Cool 1200
    Keyboard: ASUS SK2045

    Join Date
    Nov 2015
    Reputation
    10
    Posts
    293

    Sorry to hear that. The motherboard does support the Hyper M.2 x16 card according to the ASUS website - https://www.asus.com/us/support/FAQ/1037507

    Could you please try reseating everything first, including the 24-pin/8-pin connectors and the GPU? I checked your pictures, and it looks like there is some signal interference (i.e. noise). Did you try reseating the video cable? Try swapping the video cables.

    If it still has issues, please remove the secondary video card and see if the video noise goes away. Make sure your cables (SATA, 24-pin, 8-pin) are snug and there are no loose wires.

    For the Hyper x16 card, please adjust the wiring from the fan to the fan header so the fan can move freely.
    For RAID, set the slot to PCIe RAID Mode to support the Hyper x16 card: go to BIOS -> Advanced -> Onboard Devices Configuration -> PCIEX16_1/2/3/4 Bandwidth -> PCIe RAID Mode.

    CSM needs to be disabled (to allow NVMe to load on M.2) and SATA mode set to RAID.
    Sincerely,
    Legolas

  4. #4
    ROG Member
    Join Date
    Jan 2021
    Reputation
    10
    Posts
    7

    Quote Originally Posted by Legolas
    Sorry to hear that. The motherboard does support the Hyper M.2 x16 card according to the ASUS website - https://www.asus.com/us/support/FAQ/1037507

    Could you please try reseating everything first, including the 24-pin/8-pin connectors and the GPU? I checked your pictures, and it looks like there is some signal interference (i.e. noise). Did you try reseating the video cable? Try swapping the video cables.

    If it still has issues, please remove the secondary video card and see if the video noise goes away. Make sure your cables (SATA, 24-pin, 8-pin) are snug and there are no loose wires.

    For the Hyper x16 card, please adjust the wiring from the fan to the fan header so the fan can move freely.
    For RAID, set the slot to PCIe RAID Mode to support the Hyper x16 card: go to BIOS -> Advanced -> Onboard Devices Configuration -> PCIEX16_1/2/3/4 Bandwidth -> PCIe RAID Mode.

    CSM needs to be disabled (to allow NVMe to load on M.2) and SATA mode set to RAID.
    I don't believe the issue is related to the GPUs; rather, it's the motherboard / a bad BIOS program or chip. Here is why:

    1. Using a single GPU at a time, the same "garbage" appears on the BIOS screen, which tells me it's not power related.
    2. With NVMe RAID set to "Disabled" the problem does not happen; with bifurcation on, I can see all NVMe drives on the Hyper card.
    3. With NVMe RAID enabled and PCIe RAID Mode disabled, the problem does not happen.
    4. Only when both NVMe RAID and PCIe RAID (bifurcation) are enabled does the problem happen (summarized in the sketch below).
    5. Reflashing the BIOS through USB / the flashback button (fresh copy from the internet) does not fix the issue.

    I believe the issue lies with the motherboard/BIOS; I don't believe this motherboard has been tested with the various PCIe slots populated in different configurations.
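    For reference, here is a small sketch that just restates my observations above as data (my own findings on BIOS 1402, not anything documented by ASUS or AMD):

```python
# Restating the observed behaviour above as data; nothing here comes from ASUS/AMD
# documentation, it is only what this board (BIOS 1402) showed during testing.
observed = {
    ("NVMe RAID off", "bifurcation/PCIe RAID on"):  "boots; all Hyper-card drives detected",
    ("NVMe RAID on",  "bifurcation/PCIe RAID off"): "boots; no BIOS corruption",
    ("NVMe RAID on",  "bifurcation/PCIe RAID on"):  "BIOS screen corrupts or never appears",
}
for (nvme_raid, pcie_raid), result in observed.items():
    print(f"{nvme_raid:>13} + {pcie_raid:<26} -> {result}")
```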

    Also, I don't know why I need to move the Hyper card's fan cable myself; isn't that something ASUS should have done before releasing such a card? I bet this card hasn't been tested at all.

  5. #5
    ROG Guru: Yellow Belt
    Dimitrios1971 PC Specs
    Laptop (Model): GX700VO 1TB SSD + 64GB RAM
    Motherboard: ASUS ROG ZENITH EXTREME ALPHA
    Processor: AMD Ryzen Threadripper 2950X
    Memory (part number): F4-4000C18Q2-64TZR (3333MHz)
    Graphics Card #1: ROG Strix GeForce GTX 1080 Ti OC-Edition
    Monitor: ROG Swift PG348Q Gaming
    Storage #1: 4 x Samsung SSD NVMe 960 PRO 1TB Raid0
    Storage #2: 4 x Samsung SSD NVMe 970 PRO 1TB Raid0
    CPU Cooler: COOLER MASTER MasterLiquid ML360 RGB TR4 Edition
    Case: ROG Strix Helios
    Power Supply: ASUS ROG THOR 1200P
    Keyboard: ROG Strix Scope Deluxe & ROG Strix Scope TKL Deluxe
    Mouse: 2 x ROG Chakram
    Headset: ROG Throne Qi
    Mouse Pad: ROG Balteus Qi RGB & ROG Sheath
    Headset/Speakers: 1 x ROG Strix Go 2.4 & ROG Strix Wireless
    OS: Win10_x64
    Network Router: FRITZ!Box 6660 Cable
    Accessory #1: BW-16D1H-U PRO
    Accessory #2: ROG STRIX ARION & Asustor AS6404T
    Accessory #3: 2 x HYPER M.2 X16 GEN 4 CARD
    Join Date
    Apr 2015
    Reputation
    74
    Posts
    141

    So, my friend: do not activate PCIe RAID Mode at first, only after the installation. First enable NVMe RAID and install your Windows version. When you're done and everything is installed, reboot your PC, go to the BIOS, activate PCIe RAID Mode, and restart the PC. Reboot your PC again, go to the BIOS, and create your PCIe RAID arrays. That should work.

    PS: By the way, my English is disastrous.

    Edit

    One more thing: when you have done all of this, you can also switch CSM on.
    Last edited by Dimitrios1971; 03-20-2021 at 02:57 PM.



  6. #6
    ROG Junior Member
    Anthalus PC Specs
    Motherboard: ASUS ROG Zenith II Extreme
    Processor: AMD Ryzen Threadripper 3960X
    Memory (part number): G.Skill Trident Z F4-3600C15D-16GTZ (8 modules) - Running @ 1900 Fclk/MClk 14-8-15-13-29-43 130ns
    Graphics Card #1: Palit GeForce RTX 3090 Game Rock OC
    Sound Card: Creative Sound Blaster G6
    Monitor: LG 27GN950 / Asus PG278Q
    Storage #1: RAID0 arrays: 2x Samsung 980 Pro 500GB / 2x Samsung 980 Pro 1TB
    Storage #2: RAID10 array: 4x Intel P660 2TB
    CPU Cooler: Watercool HEATKILLER IV PRO
    Case: Alphacool ES 4U
    Power Supply: Corsair AX1200i
    Keyboard: Logitech G910
    Mouse: Logitech G502
    OS: Windows 10
    Network Router: Ubiquiti
    Accessory #1: ASUS M.2 X16 GEN 4 CARD
    Accessory #2: Intel X710-DA4

    Join Date
    Sep 2014
    Reputation
    10
    Posts
    1

    The problem you are experiencing is very similar to the one I had a couple of months ago.
    Our setups both consist of 11 NVMe drives, which is actually the cause of the issue.
    After lots of experimenting I figured out that the system either refused to boot properly or gave me BIOS corruption as soon as 11 drives were installed.

    At the time I was reading up on the AMD website and found that the maximum number of drives supported in NVMe RAID is limited to 10.
    The weird thing is that I can't find the exact page where this was stated anymore.
    I finally found something about it again in the readme.rtf file that ships with the drivers themselves:

    Maximum Supported Controllers:
    ⦁ 7 NVMe + 4-SoC when x570/590 is set to RAID in the BIOS
    ⦁ 8-NVMe + 2-SoC + 1-PT when set to RAID in the BIOS
    ⦁ 10-NVMe when SoC and PT are Disabled in the BIOS
    ⦁ 10-NVMe + 1-PT when SoC is Disabled in the BIOS
    ⦁ 9-NVMe + 2-SoC when PT is Disabled in the BIOS

    Known issues
    ⦁ Hibernate performance drop in RAID-5 on specific HDD.
    ⦁ Driver load issue with drvload command.
    ⦁ Array transformation with IO taking long time.
    ⦁ no support for 2 ODDs on the same port of 2 different controllers.
    ⦁ With 11 PCIe NVMe SSD's system boot to OS may fail.
    ⦁ RS5x64 OS taking long time to load the driver.

    Try dropping 1 NVMe drive and you should be fine (worked in my case).
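    If you want to sanity-check the controller count before enabling NVMe RAID, here is a minimal sketch; it assumes a Linux environment where the kernel exposes controllers under /sys/class/nvme (on Windows you would count them in Device Manager instead) and simply compares the total against the 10-controller ceiling from that readme:

```python
# Minimal sketch: count the NVMe controllers the Linux kernel sees and compare
# against the 10-controller ceiling quoted in the RAIDXpert2 readme excerpt above.
from pathlib import Path

RAIDXPERT2_MAX_NVME = 10  # from the driver readme, SoC/PT-disabled case

def count_nvme_controllers(sysfs_root: str = "/sys/class/nvme") -> int:
    root = Path(sysfs_root)
    if not root.exists():
        return 0
    return sum(1 for entry in root.iterdir() if entry.name.startswith("nvme"))

if __name__ == "__main__":
    n = count_nvme_controllers()
    print(f"{n} NVMe controller(s) visible")
    if n > RAIDXPERT2_MAX_NVME:
        print("Over the documented RAIDXpert2 limit - expect boot failures or BIOS corruption with NVMe RAID on")
```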

    I was actually thinking about getting a Highpoint 7540 to move all the NVMe drives except for the boot drive away from the motherboard.
    The price of that expansion card (and some bad reviews) kinda made me hesitate, so reading about your 7505 adventure actually helped me decide NOT to move in that direction.

    The Zenith II Extreme board itself is one of the best boards I have ever owned.
    The only thing I don't like about it is losing four lanes to some lame USB controller, but I can live with that.

  7. #7
    ROG Member
    Join Date
    Jan 2021
    Reputation
    10
    Posts
    7

    Quote Originally Posted by Anthalus
    The problem you are experiencing is very similar to the one I had a couple of months ago.
    Our setups both consist of 11 NVMe drives, which is actually the cause of the issue.
    After lots of experimenting I figured out that the system either refused to boot properly or gave me BIOS corruption as soon as 11 drives were installed.

    At the time I was reading up on the AMD website and found that the maximum number of drives supported in NVMe RAID is limited to 10.
    The weird thing is that I can't find the exact page where this was stated anymore.
    I finally found something about it again in the readme.rtf file that ships with the drivers themselves:

    Maximum Supported Controllers:
    ⦁ 7 NVMe + 4-SoC when x570/590 is set to RAID in the BIOS
    ⦁ 8-NVMe + 2-SoC + 1-PT when set to RAID in the BIOS
    ⦁ 10-NVMe when SoC and PT are Disabled in the BIOS
    ⦁ 10-NVMe + 1-PT when SoC is Disabled in the BIOS
    ⦁ 9-NVMe + 2-SoC when PT is Disabled in the BIOS

    Known issues
    ⦁ Hibernate performance drop in RAID-5 on specific HDD.
    ⦁ Driver load issue with drvload command.
    ⦁ Array transformation with IO taking long time.
    ⦁ no support for 2 ODDs on the same port of 2 different controllers.
    ⦁ With 11 PCIe NVMe SSD's system boot to OS may fail.
    ⦁ RS5x64 OS taking long time to load the driver.

    Try dropping 1 NVMe drive and you should be fine (worked in my case).

    I was actually thinking about getting a Highpoint 7540 to move all the NVMe drives except for the boot drive away from the motherboard.
    The price of that expansion card (and some bad reviews) kinda made me hesitate, so reading about your 7505 adventure actually helped me decide NOT to move in that direction.

    The Zenith II Extreme board itself is one of the best boards I have ever owned.
    The only thing I don't like about it is losing four lanes to some lame USB controller, but I can live with that.
    I probably forgot to mention that I have all SATA ports populated, with the SoC ones in RAID 5.

    Looks like I'll install Proxmox, go with ZFS, and move on with my life. What a waste of capability...

  8. #8
    ROG Member
    Join Date
    Oct 2015
    Reputation
    10
    Posts
    12

    This might be a bit unrelated to the exact issue, but I wonder if you guys have experienced the same NVMe bandwidth issue I am having:

    I am hitting a hard limit in Windows on file copy speeds: max 2.1 GB/s. It doesn't matter how I set up my NVMe drives, or even if I software-RAID them in Windows (or the new Storage Spaces); the transfer speed is limited to that 2.1 GB/s. I searched Google and saw just a couple of people mention the same limit without understanding why.

    My config:
    - 3990X
    - Extreme Alpha
    - Windows Pro 64-bit with latest updates
    - 128GB RAM @3200 (also tried @3600)
    - 2x Samsung 980 Pro 2TB in M.2_1 and M.2_2
    - Also tried with 2x Samsung 960 Pro 1TB

    Also tried both DIMM.2 slots, with the same hard limit (actually closer to 2.0 GB/s in this configuration).

    I have not tried the NVMe RAID option in the BIOS, as I presume it is only for NVMe RAID cards.

    To clarify: benchmarking with the synthetic benchmark CrystalDiskMark, I can get 10 GB/s both read and write on the software RAID at Q1T1. But with any other benchmark tool, or plain Windows copying, I am stuck at a maximum of 2.1 GB/s.

    HD Tune benchmarking peaks at 2.1 GB/s.

    I understand that the NVMe drives have peak speeds and RAM/cache on them, and that once that fills up the average read and write drops. But I never reach any higher speeds even before the drive cache fills up. Also, software RAID with 2, 3, or 4 disks should scale, but nada, not one byte per second more. Something is limiting the speed.
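    To take Explorer's copy engine out of the picture, one thing I can do is time raw sequential reads myself. A rough sketch (it assumes a large pre-created test file at a hypothetical path like D:\testfile.bin, and the OS file cache can still inflate the number on repeated runs):

```python
# Rough sketch: time large sequential reads of an existing test file to see whether
# the raw read path also tops out around 2.1 GB/s.
# Note: the OS file cache can inflate results on repeated runs over the same file.
import time

def sequential_read_gbps(path: str, block_mib: int = 8) -> float:
    block_size = block_mib * 1024 * 1024
    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e9  # decimal GB/s

if __name__ == "__main__":
    test_file = r"D:\testfile.bin"  # hypothetical pre-created test file
    print(f"{sequential_read_gbps(test_file):.2f} GB/s sequential read")
```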

    When I try to bench both DIMM.2 slots at the same time they seem to share the bandwidth almost 50/50 (that's expected since they're PCIe lanes).
    When I bench both M.2_1 and M.2_2 at the same time, I hit a peak of 2.3 GB/s combined, which is barely higher than the single-drive limit; slightly better, but very far from the expected performance.

    My BIOS is configured properly; I disabled the SATA ports and even freed up lanes for M.2_3, which I don't use.

    Can anyone explain if there is some kind of limit (around the 2 GB/s mark) on AMD processors that I am not aware of? It's my first AMD.

    My other (much older) Rampage VI Extreme X299, using the same drives, hits 3.3 GB/s for the 980 Pro 2TB and 2.2 GB/s for the 960 Pro 1TB on each drive in the DIMM.2 slot with CPU lanes configured. And they stack almost linearly when benchmarks run on multiple disks at the same time... I have never hit a limit on that X299 board.

    Am I missing something?

    Thx.

  9. #9
    ROG Member
    Join Date
    Jan 2021
    Reputation
    10
    Posts
    7

    Quote Originally Posted by olivieraaa
    This might be a bit unrelated to the exact issue, but I wonder if you guys have experienced the same NVMe bandwidth issue I am having:

    I am hitting a hard limit in Windows on file copy speeds: max 2.1 GB/s. It doesn't matter how I set up my NVMe drives, or even if I software-RAID them in Windows (or the new Storage Spaces); the transfer speed is limited to that 2.1 GB/s. I searched Google and saw just a couple of people mention the same limit without understanding why.

    My config:
    - 3990X
    - Extreme Alpha
    - Windows Pro 64-bit with latest updates
    - 128GB RAM @3200 (also tried @3600)
    - 2x Samsung 980 Pro 2TB in M.2_1 and M.2_2
    - Also tried with 2x Samsung 960 Pro 1TB

    Also tried both DIMM.2 slots, with the same hard limit (actually closer to 2.0 GB/s in this configuration).

    I have not tried the NVMe RAID option in the BIOS, as I presume it is only for NVMe RAID cards.

    To clarify: benchmarking with the synthetic benchmark CrystalDiskMark, I can get 10 GB/s both read and write on the software RAID at Q1T1. But with any other benchmark tool, or plain Windows copying, I am stuck at a maximum of 2.1 GB/s.

    HD Tune benchmarking peaks at 2.1 GB/s.

    I understand that the NVMe drives have peak speeds and RAM/cache on them, and that once that fills up the average read and write drops. But I never reach any higher speeds even before the drive cache fills up. Also, software RAID with 2, 3, or 4 disks should scale, but nada, not one byte per second more. Something is limiting the speed.

    When I try to bench both DIMM.2 slots at the same time they seem to share the bandwidth almost 50/50 (that's expected since they're PCIe lanes).
    When I bench both M.2_1 and M.2_2 at the same time, I hit a peak of 2.3 GB/s combined, which is barely higher than the single-drive limit; slightly better, but very far from the expected performance.

    My BIOS is configured properly; I disabled the SATA ports and even freed up lanes for M.2_3, which I don't use.

    Can anyone explain if there is some kind of limit (around the 2 GB/s mark) on AMD processors that I am not aware of? It's my first AMD.

    My other (much older) Rampage VI Extreme X299, using the same drives, hits 3.3 GB/s for the 980 Pro 2TB and 2.2 GB/s for the 960 Pro 1TB on each drive in the DIMM.2 slot with CPU lanes configured. And they stack almost linearly when benchmarks run on multiple disks at the same time... I have never hit a limit on that X299 board.

    Am I missing something?

    Thx.

    Thank you for hijacking the post..

  10. #10
    ROG Member
    Join Date
    Jan 2021
    Reputation
    10
    Posts
    7

    Quote Originally Posted by Anthalus

    Maximum Supported Controllers:
    ⦁ 7 NVMe + 4-SoC when x570/590 is set to RAID in the BIOS
    ⦁ 8-NVMe + 2-SoC + 1-PT when set to RAID in the BIOS
    ⦁ 10-NVMe when SoC and PT are Disabled in the BIOS
    ⦁ 10-NVMe + 1-PT when SoC is Disabled in the BIOS
    ⦁ 9-NVMe + 2-SoC when PT is Disabled in the BIOS
    Would you mind explaining what SoC and PT are, mate?
