
Missing Drives in BIOS Come Back after Power Cycle (Sometimes)

rosefire
Level 7
Trying to overclock to 4.5GHz, my setup is almost stable in long-term stress testing, but there is one very weird exception: a symptom that does not seem to improve with higher core or memory voltages. Here are the defining symptoms:

- System freezes after an hour or so of RealBench stress testing. Core temps are a modest 65C at 85 Watts of power.
- After hitting reset there are drives missing. They are not present at all in Windows 8.1 Disk Management.
- Reset again and look for the drives in BIOS; they are not there either.
- Cycle power (pull the plug) until the board LEDs go out.
- Power up and the missing drives are usually back, UNLESS (I think) one of the missing drives is the boot drive.
- If the boot drive (again, I think) is not back, they all remain missing when I do the following:
- One missing drive was on a PCIe IDE adapter; removed the adapter - still missing (not shown in BIOS).
- Unplugged the power and SATA cables from all drives except the boot drive - still missing.
- Loaded Optimized Defaults in BIOS - still missing.
- Forced a BIOS swap (reverts from 0701 to 05xx, an older BIOS) - still missing.
- Swapped the boot drive's power cable to a different PSU cable feed - still missing.
- Replaced the boot drive's SATA cable - still missing.
- Moved the boot drive's SATA cable to a different MB connector - it's back!
- Hooked up all the other drives (optical, HDD, SSD, both IDE and SATA) one at a time and they are all fine.
Elapsed debugging time: about 4 hours.
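
For anyone hitting the same thing: a script that logs exactly which drives Windows can see makes the before/after comparison much easier to diff. This is just a sketch of my own (not one of the steps above), assuming Python is installed on the Windows 8.1 box and using the built-in wmic utility:

    # Log the physical drives Windows can currently see, with a timestamp,
    # so logs taken before and after a freeze/reset can be compared.
    import datetime
    import subprocess

    def visible_drives():
        out = subprocess.check_output(
            ["wmic", "diskdrive", "get", "Model,SerialNumber"],
            text=True,
        )
        # The first line is the column header; skip it and blank lines.
        return [line.strip() for line in out.splitlines()[1:] if line.strip()]

    if __name__ == "__main__":
        drives = visible_drives()
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        with open("drive_log.txt", "a") as log:
            log.write("--- %s ---\n" % stamp)
            for d in drives:
                log.write(d + "\n")
        print("Logged %d drive(s) at %s" % (len(drives), stamp))

Run it after a clean boot and again after a reset; diffing the two log sections shows exactly which drives dropped out.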

Whenever this happens, the wall power must be removed to get the drives back. Twice now the missing drives persisted even after the computer was unplugged and after backing out all OC settings. There is more going on here than just instability. Is there UEFI information, stored on the boot drive or in NVRAM, that could keep drives from being seen if it is corrupted?
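
One thing that can at least be inspected from inside Windows is the set of UEFI boot entries the firmware keeps in NVRAM. This is just my own idea for poking at it (it only wraps the stock bcdedit command and shows boot entries, not the full variable store), run from an elevated prompt:

    # Dump the firmware (UEFI/NVRAM) boot entries visible to Windows.
    # Requires an elevated (administrator) prompt.
    import subprocess

    result = subprocess.run(
        ["bcdedit", "/enum", "firmware"],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)

A stale or mangled entry pointing at the boot drive would at least show up here, even if the deeper variable store can't be read this way.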

If it can't find the boot drive, it seems as though there may be UEFI or other NVRAM data on the MB that is getting corrupted. Is it possible that this is a MB defect that shows up only when the board gets heated during OC? If that is the case, higher voltages won't help, which is what I've seen so far. I pushed VDRAM up to ~1.675V, Vcore to ~1.18V, and VCCSA to ~1.19V, and increasing these voltages seems to make the problem happen sooner.

Has anyone seen this problem before?

RIVBE, i7-4930, 32GB G.Skill Trident DDR3 kit operating at 2666MHz, GTX760, 1050W PS, Corsair H110 water cooler
Future Pic
Platform.......Rampage VI Extreme Encore / i9-10940x
Memory.........G.Skill F4-4266C17Q-32GTZR 32GB Kit
Graphics ......Radeon Pro Vega 56
Boot Drive.....2X Intel 380GB, 905P M.2 SSD
Storage........2x Samsung 1TB 970 EVO M.2 SSD
Cooling........MCP355 Pump, Swiftech SKF Block, EK360 60mm Radiator




Praz
Level 13
Hello

Drive detection issues as you describe are common with an unstable system. Normally, after removing power from the system and clearing the UEFI if necessary, switching the drive to a different SATA port will force redetection.

rosefire
Level 7
This is good news; it means I can proceed to look for the cause of the instability rather than suspecting the hardware.

Based on your reply (thanks, by the way!), my guess is that elevated CPU temperatures from overclocking are causing the IMC on the same die to fail. I can test this theory by running the CPU at stock speed while deliberately reducing my cooler's efficiency to raise temperatures. If artificially elevated IMC temperatures cause instability at stock CPU speeds, that would implicate the IMC as the source of the instability. Thermally induced IMC instability would also explain why raising both Vcore and VCCSA has not been terribly effective: the benefit of the higher VCCSA is partially offset by the higher die temperatures that come from raising both voltages.
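
To put a number on it, I could log time-to-failure under a simple memory-hammering loop while varying the die temperature. This is only a rough sketch of the idea (a dedicated tool like MemTest86 is far more thorough); nothing here is a vetted tester:

    # Rough IMC/memory stress: repeatedly fill ~1 GiB with random data,
    # copy it, and verify the copies match. A mismatch means a bit was
    # corrupted somewhere between the CPU and DRAM. Stop with Ctrl+C.
    import os
    import time

    BLOCK = 64 * 1024 * 1024    # 64 MiB per buffer
    COUNT = 16                  # ~1 GiB resident working set

    start = time.time()
    passes = 0
    while True:
        originals = [os.urandom(BLOCK) for _ in range(COUNT)]
        copies = [bytearray(b) for b in originals]    # forces a full write pass
        for i, (a, b) in enumerate(zip(originals, copies)):
            if a != b:                                # full read/compare pass
                print("Mismatch in block %d after %.0fs" % (i, time.time() - start))
                raise SystemExit(1)
        passes += 1
        print("pass %d clean, %.0fs elapsed" % (passes, time.time() - start))

If this dies noticeably sooner with the cooler throttled than at full speed, that points at the IMC (or DRAM) being thermally sensitive rather than the core clocks.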

Before I try the above, it would be helpful to know whether this type of instability is normally associated with the CPU or with the memory system.

Thanks in advance!



rosefire
Level 7
This problem went away. I am now overclocking at a higher CPU frequency, 4.5GHz with XMP 2666MHz memory, with Vcore at 1.2V and VCCSA at 1.195V, and have complete stability (with the exception that LuxMark crashes under RealBench - but this happens the same whether or not I am overclocking, and the four other stress tests I've tried all pass). I was even able to tighten many secondary memory timings. I don't actually know what caused this problem. My best guess is a bad hard drive or a bad hard drive power connection. I moved the data off and entirely removed some old mechanical hard drives from my computer. Perhaps one of them was putting a lot of noise on its power connection due to an intermittent electrical contact or some type of supply noise filter failure. I'm not going to put them back to find out; I'll just be happy the problem is gone. 😉



Praz
Level 13
Good to hear. 🙂