Page 2 of 2 · Results 11 to 13 of 13
  1. #11
ROG Member
    Join Date
    Nov 2018

At the YouTube time index in your link he's talking about a PLX chip as well. So the WS Z390 Pro (the board I'm asking about VRM details for) is exactly that: the Intel platform for which Asus did the same thing. ;-)

  2. #12
ROG Guru: White Belt
Carlyle2020 PC Specs
Motherboard: Asus Maximus XI APEX w/o VRM heatsink
Memory (part number): Corsair LPX 4000 CL19 @ 17.17.37/360/2T/1.375
Graphics Card #1: OOO RTX 2080 Ti FTW3 ULTRA
Graphics Card #2: GTX 1060
Sound Card: OOO Xonar STX modified
Monitor: 21:9 LG 34 inch / flat
Storage #1: 1TB 960 Evo
Storage #2: 2TB 970 Evo Plus
CPU Cooler: Noctua NH-D15
Case: Silverstone FT2 + 200 bucks of sound dampening
Power Supply: Seasonic Prime Titanium 1000W
Keyboard: Cherry KC 1000 / OOO Corsair K63 + lapboard
Mouse: MX Master 2S
Headset: Sennheiser PC360 + Philips X2HR
Headset/Speakers: Klipsch 4.1 PC speakers / cooling modified
OS: Win 10 64-bit
Network Router: yeah right...
Accessory #1: Silverstone EBA01 headphone stand / black
Accessory #2: radio keyfob to turn the PC on and off from the garage
Accessory #3: Adata USB 3.1 512Mb SSD @ 480Mb ul/dl

    Join Date
    Mar 2018


And a true 12 lanes are available to the three M.2 slots, yes?
So it is the best setup for the very high IO counts you mentioned.
    Last edited by Carlyle2020; 11-20-2018 at 02:07 AM.

  3. #13
ROG Member
    Join Date
    Nov 2018

Depends on how you look at it. It's 16 PCIe lanes multiplexed to three PCIe x4 NVMe drives (12 lanes total) plus a graphics card with potentially 16 lanes. Of course, the bandwidth at any given time is still limited to 16 lanes. But as I'm not doing any GPU-bandwidth-intensive tasks, the GPU will use those 16 lanes only rarely and only for very short periods (if at all). So if you look at it time-multiplexed, there is definitely (except for a few short exceptions) full bandwidth available for the NVMe drives.
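To put rough numbers on that lane budget, here is a small back-of-the-envelope sketch. It assumes roughly 0.985 GB/s of usable bandwidth per PCIe 3.0 lane (8 GT/s with 128b/130b encoding); the exact figure varies with overhead:

```python
# Rough PCIe 3.0 bandwidth arithmetic for the PLX-switch setup described above.
# Assumption: ~0.985 GB/s usable per PCIe 3.0 lane (8 GT/s, 128b/130b encoding).
GBPS_PER_LANE = 0.985

uplink = 16 * GBPS_PER_LANE          # switch uplink to the CPU (16 lanes)
nvme_total = 3 * 4 * GBPS_PER_LANE   # three x4 NVMe drives downstream (12 lanes)

print(f"uplink ≈ {uplink:.1f} GB/s, NVMe aggregate ≈ {nvme_total:.1f} GB/s")
```

So the 12 downstream NVMe lanes (~11.8 GB/s combined) fit comfortably inside the 16-lane uplink (~15.8 GB/s) whenever the GPU is mostly idle, which is the time-multiplexing argument above.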

As for saturating those SSDs, I have a very unusual use case. One component is IO-intensive database access, so it's more about the maximum IOPS at high queue depth than about maximum sequential transfer. And I have other components accessing the NVMe drives which I can't have competing with each other for SSD access; that's why I'm not using RAID but separate SSDs. And with that scenario, it's even more important to have the SSDs on direct CPU-attached lanes.
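For readers unfamiliar with the workload shape being described, here is a hypothetical sketch of what "high queue depth, small random reads" means in practice: many small reads kept in flight at once, rather than one large sequential stream. A real benchmark would use a dedicated tool such as fio; this Python version only illustrates the access pattern against a scratch file (point `path` at an NVMe-backed location to exercise a real drive):

```python
# Hypothetical sketch of a database-style random-read pattern:
# many 4 KiB reads in flight concurrently (high queue depth),
# not one big sequential transfer. Not a real benchmark.
import os
import random
import tempfile
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096       # 4 KiB blocks, typical for database-style IO
QUEUE_DEPTH = 32   # number of reads kept in flight at once

def random_read(fd, size):
    # Read one block from a random, in-bounds offset.
    offset = random.randrange(0, size - BLOCK)
    return os.pread(fd, BLOCK, offset)

# Create a 4 MiB scratch file as the read target.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * 1024))
    path = f.name

fd = os.open(path, os.O_RDONLY)
size = os.fstat(fd).st_size
with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
    results = list(pool.map(lambda _: random_read(fd, size), range(1000)))
os.close(fd)
os.unlink(path)

print(f"completed {len(results)} reads of {BLOCK} bytes each")
```

With separate SSDs, each drive services its own stream of these small reads without the head-of-line contention that a shared RAID volume would introduce.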

As for the cooling, I'm not quite sure whether high-IO access strains the SSD's controller even more than sequential access does (resulting in even more heat output). But as far as I know, using the kryoM.2 evo adapters ( ) and enough airflow should do the trick. I will replace the thermal pads they ship with with 14 W/mK pads ( ), though.

HiVizMan said he is trying to get a few details while I'm still waiting for my CPU to be shipped. If even he can't get any details in time, I will probably have to buy the board to check out the VRM myself and hope for the best.

