I just built a new system, and having multiple GPUs plugged into any combination of PCI-E slots causes major system instability.
CPU: Intel Core i7-6700k
Mobo: ASUS Maximus VIII Extreme
GPU: nVidia GTX 1080
RAM: Corsair Vengeance 64GB DDR4 3200
Storage: Two Samsung 850 Pro 512GB SSDs and two 6TB WD Black Series HDDs
PSU: Corsair AX860i
Case: Corsair 750D
Okay, so I plugged my two older GTX 770s from my previous Haswell rig into the new Skylake rig while I'm waiting for my GTX 1080 to arrive, and the system became very unstable. My fresh Windows 10 installation crashes and the entire PC reboots the second I try to enable SLI with my 1440p monitor hooked up.
If I unplug the monitor and only use my old secondary 1080p screen, I can enable SLI, but the moment I launch a game or anything that uses the GPUs, the entire PC goes into slow motion. Whether it's the desktop UI, a game, or the sound, it all slows down like it's tripping on acid, then everything freezes with the screen showing MASSIVE artifacts, and then the PC auto-reboots. This happens whether SLI is actually enabled or not, as long as both GPUs are plugged in.
I've tried the last several sets of nVidia drivers. Nothing.
I've tried putting the GPUs back in the older Haswell system, and SLI works fine there, so it's not the GPUs. My next thought was that the PSU must be faulty, since the system acts like it isn't getting the power it needs, even though the AX860i is a higher-grade unit with over 100 W more capacity. So I swapped the Haswell rig's HX750 into the new rig in place of the AX860i, and the PC crashed exactly the same way. A voltmeter also shows the AX860i delivering very smooth power on the 12 V rail, so the PSU is fine.
I tried running with only one RAM stick installed, and after testing all four sticks individually, the system still crashes.
So then my heart sank as I realized it's got to be the Maximus VIII Extreme motherboard. I tried reinstalling Windows 10, and that didn't help.
To be clear, the system works 100% when only a single GPU is used. I've tried changing the power management settings in both the UEFI and Windows, tried four different UEFI revisions, and tried forcing the PCI-E slots to Gen 2. It's acting like either the motherboard isn't supplying enough power to the cards, or inserting two GPUs is triggering some kind of invisible overclock. I have everything set to stock settings and factory defaults, yet it behaves like an unstable overclock, despite temps across all parts sitting around 20 °C.
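For what it's worth, one way to sanity-check that the Gen 2 setting actually took effect is to query the cards' current PCI-E link state with NVIDIA's nvidia-smi tool, which ships with the driver (on Windows it typically lives under C:\Program Files\NVIDIA Corporation\NVSMI). This is just a diagnostic sketch; the exact fields depend on the driver version exposing them:

```shell
# Report each GPU's current PCI-E generation and lane width.
# With the UEFI forced to Gen 2, pcie.link.gen.current should read 2
# under load; a x16/x8 split between the two slots is also visible here.
nvidia-smi --query-gpu=index,name,pcie.link.gen.current,pcie.link.width.current --format=csv
```

If one card reports an unexpectedly narrow link (e.g. x4 instead of x8), that can point at a slot or riser problem rather than the GPUs themselves.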
Windows Event Viewer shows no record of the crashes either.
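Side note for anyone digging through logs: hardware faults sometimes land in the WHEA (Windows Hardware Error Architecture) provider rather than the usual Application/System crash entries, and they're easy to miss in the default Event Viewer view. A quick PowerShell one-liner to pull any such records (assuming there are any; it errors out if the log has none):

```shell
# Dump the 20 most recent hardware error events logged by WHEA, newest first.
Get-WinEvent -FilterHashtable @{ LogName='System'; ProviderName='Microsoft-Windows-WHEA-Logger' } -MaxEvents 20 |
    Format-List TimeCreated, Id, Message
```

An empty result here, combined with instant reboots and massive artifacts, tends to support a hardware-level fault (board power delivery or PCI-E signaling) rather than a driver crash, which would normally leave a trace.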
Is this a known issue? Should I RMA the board?