Page 2 of 2
Results 11 to 13 of 13
  1. #11
ROG Member
    Join Date
    Nov 2018

At the time index in your YT link he's talking about a PLX chip as well. So the WS Z390 Pro (the board I'm asking about VRM details for) is exactly that: the Intel platform Asus did the same thing for. ;-)

  2. #12
ROG Enthusiast Carlyle2020 — PC Specs
Motherboard: Asus Maximus X Hero Wifi AC / NCT6793D
Memory (part number): CMK16GX4M2B3200C16 / 16-18-36 / 1.350 V
Graphics Card #1: RTX 2080 Ti FTW3 ULTRA
Sound Card: Asus Xonar STX / modified for soundstage
Monitor: 21:9 LG 34 inch / flat
Storage #1: 1 TB 960 Evo / slot 2 / Noctua mini fan / 3D-print-bracket-modified
Storage #2: 500 GB 970 Evo / slot 1 / heatsink
CPU Cooler: Noctua NH-D14 / OTT 3-fan cfg / 2 r fine
Case: Silverstone FT2 / 3x 180 mm on the bottom
Power Supply: Corsair HX1000W / 9 years old
Keyboard: G15 / Corsair K63 + lapboard
Mouse: M510
Headset: Sennheiser PC360 + Philips X2HR
Headset/Speakers: Klipsch 4.1 PC speakers / cooling modified
OS: Win 10 64-bit
Network Router: yeah right...
Accessory #1: Silverstone EBA01 headphone stand / black
Accessory #2: radio keyfob to turn the PC on and off from the garage
Accessory #3: Adata USB 3.1 512 GB SSD @ 480 MB/s ul/dl

    Join Date
    Mar 2018


And a true 12 lanes are available to the three M.2s, yes?
So it's the best setup for the very high IO counts you mentioned.
    Last edited by Carlyle2020; 11-20-2018 at 02:07 AM.

  3. #13
ROG Member
    Join Date
    Nov 2018

Depends on how you look at it. It's 16 PCIe lanes multiplexed to three PCIe x4 NVMe drives (12 lanes total) plus a graphics card with potentially 16 lanes. Of course the bandwidth at any given instant is still limited to 16 lanes. But since I'm not doing any GPU-bandwidth-intensive tasks, the GPU will use those 16 lanes only rarely and only for very short periods (if at all). So viewed time-multiplexed, there is definitely (except for a few brief exceptions) full bandwidth available for the NVMes.
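The lane math above can be sanity-checked with a quick back-of-envelope calculation. This is just a sketch: it assumes the platform's PCIe 3.0 figures (8 GT/s per lane with 128b/130b encoding), not anything specific to the WS Z390 Pro's PLX switch.

```python
# Rough check of the multiplexing claim: do three x4 NVMe drives fit
# inside a 16-lane uplink while the GPU is idle? (PCIe 3.0 assumed.)
GBPS_PER_LANE = 8 * 128 / 130 / 8  # 8 GT/s, 128b/130b -> ~0.985 GB/s/lane

uplink_lanes = 16      # CPU lanes behind the PLX switch
nvme_lanes = 3 * 4     # three x4 NVMe drives = 12 lanes

uplink_bw = uplink_lanes * GBPS_PER_LANE
nvme_bw = nvme_lanes * GBPS_PER_LANE

print(f"uplink: {uplink_bw:.2f} GB/s, three NVMes: {nvme_bw:.2f} GB/s")
# The 12 NVMe lanes (~11.8 GB/s) fit within the 16-lane uplink
# (~15.8 GB/s) whenever the GPU isn't pulling traffic.
```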

As for saturating those SSDs, I have a very unusual use case. One component is IO-intensive database access, so it's more about max IOPS at high queue depth than about max sequential transfer. And I have other components accessing the NVMes which I can't have competing with each other for SSD access; that's why I'm not using RAID but separate SSDs. And in that scenario, it's even more important to have the SSDs on direct CPU-attached lanes.
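The IOPS-vs-sequential distinction above is easy to put in numbers. The figures below are hypothetical round numbers for illustration, not measurements of the 960/970 Evo drives:

```python
# Illustrative arithmetic: small random IO at high queue depth moves far
# less data than a sequential stream, even on the same drive.
# Assumed (not measured) figures: 400k random-read IOPS at QD32, 4 KiB IOs.
io_size_kib = 4
iops_at_qd32 = 400_000

random_bw_gib = iops_at_qd32 * io_size_kib / (1024 * 1024)
print(f"random 4K @ QD32: {random_bw_gib:.2f} GiB/s")
# -> ~1.53 GiB/s: well under a ~3 GB/s sequential rating, so for this
#    workload the win from dedicated lanes is less about raw bandwidth
#    and more about keeping workloads out of each other's queues.
```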

As for cooling, I'm not quite sure whether high-IO access strains the SSD's controller even more than sequential access does (resulting in even more heat output). But as far as I know, the kryoM.2 evo adapters ( ) plus enough airflow should do the trick. I will replace the stock thermal pads with 14 W/mK pads ( ), though.

HiVizMan said he is trying to get a few details while I'm still waiting for my CPU to be shipped. If even he can't get any details in time, I will probably have to buy the board to check out the VRM and hope for the best.

