Ok, let's see. What you describe sounds like the drives momentarily failing and the controller marking them as such. I will list several things that can cause this apart from drivers, which I'll cover at the end. You should check all of them.
1) Faulty SATA cables. If you have not tested with different cables, do so.
2) Not enough juice from the PSU on boot. Mechanical drives need as much as 35-40W each during the spin-up stage (depending on the drive). This is why the "Staggered spin up" option exists on real hardware controllers. It's not so they can spin up properly, it's so the drives don't all spin up at the same time, causing a huge spike on the PSU. This matters most in data centers of course, due to the number of drives involved, but it can still happen on normal systems with multiple drives if the PSU is going bad or simply is not up for that load. Overclocked systems with modern graphics cards do tend to pull a lot from the PSU. Add a few mechanical drives to such a system operating with a PSU close to its limit and you've got yourself a problem. The 12V rail is what feeds the spindle motor and that's the easiest one to starve out.
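To see why simultaneous spin-up is the dangerous case, here's a back-of-envelope calculation of the 12V load. The per-drive and GPU wattages below are rough assumptions for illustration, not measurements; check your drives' datasheets for real figures.

```python
# Rough 12V rail load at spin-up. Figures are assumptions, not measurements.
SPINUP_W_PER_DRIVE = 38   # mechanical drives pull roughly 35-40 W while spinning up
IDLE_W_PER_DRIVE = 8      # far less once the platters are up to speed

def simultaneous_load_12v(n_drives, other_12v_load_w=0):
    """Worst-case 12V draw if every drive spins up at the same moment."""
    return n_drives * SPINUP_W_PER_DRIVE + other_12v_load_w

def staggered_load_12v(n_drives, other_12v_load_w=0):
    """Peak 12V draw with staggered spin-up: one drive spinning up while
    the already-started drives only draw idle power."""
    return SPINUP_W_PER_DRIVE + (n_drives - 1) * IDLE_W_PER_DRIVE + other_12v_load_w

# Example: 6 drives plus a GPU assumed to pull ~250 W from 12V at boot.
print(simultaneous_load_12v(6, 250))  # -> 478
print(staggered_load_12v(6, 250))     # -> 328
```

The difference (here 150W of transient spike) is exactly what staggered spin-up exists to avoid, and what a marginal PSU can't absorb.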
3) Are you connecting the drives via case hotbays? If so, try connecting them directly to the controller, removing the hotbays from the equation. 90% of case hotbays I've seen (even on expensive cases like the HAF-X I currently use for my dev system) are faulty, causing frequent disconnects on the SATA channels. With the LSI cards I use this is easy to diagnose, as the disconnects show up in the logs. With onboard fakeraid it's impossible unless you bypass the hotbays. Most fakeraids won't complain about this problem at all; it will just manifest as a performance drop, unless it's severe, in which case the symptoms are exactly what you describe.
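Even without controller logs, the drives themselves keep a counter for this: SMART attribute 199 (UDMA_CRC_Error_Count), visible in `smartctl -A` output, increments on link-level transfer errors and almost always points at a bad cable, connector, or backplane rather than a failing drive. A minimal sketch of pulling that counter out of the output; the sample text below is illustrative, not from a real drive:

```python
# Extract UDMA_CRC_Error_Count (SMART attribute 199) from `smartctl -A`-style
# output. A nonzero/rising raw value usually means cabling or backplane trouble.
# SAMPLE is a made-up excerpt in the standard smartctl attribute-table layout.
SAMPLE = """\
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       17
"""

def crc_error_count(smart_output):
    """Return the raw value of attribute 199, or None if it's not listed."""
    for line in smart_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "199":  # attribute ID in the first column
            return int(fields[-1])         # raw value is the last column
    return None

print(crc_error_count(SAMPLE))  # -> 17, so suspect the cabling/backplane
```

If the count keeps climbing after you've swapped cables and bypassed the hotbays, then (and only then) start suspecting the drive or controller.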
Now regarding RSTe and RST:
1) There are some versions of the RSTe drivers that are quite problematic, causing issues that range from dropouts to stop errors (I have diagnosed some of the latter myself here from memory dumps provided by users). Unfortunately I cannot recall the version numbers as it's been quite a while.
2) The plain RST driver won't install if the board is loading the RSTe Option ROM, unless you use a driver with a modified .inf (hardware signature added), which requires running Windows in "Test Mode" or disabling Driver Signature Enforcement on every boot. Alternatively, you can use one of my patched UEFI versions (link in signature) to load the RST Option ROM. Some of the UEFI versions support loading the RST UEFI driver instead, but then you have to install Windows in UEFI mode and use the UEFI driver rather than the Option ROM, either via the CSM settings or by disabling the CSM module entirely in the UEFI. The latest ones pack the RST Option ROM as well, but you have to select it in the UEFI prior to Windows installation. So what you do in this department depends on which UEFI version you are on/want to use. Use this information to make your choice :)
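If you go the modified-.inf route, "Test Mode" is toggled with the standard bcdedit switches from an elevated Command Prompt (note that on systems with Secure Boot you'll likely have to disable Secure Boot first, and a reboot is needed either way):

```shell
:: Run from an elevated Command Prompt, then reboot.
:: Puts Windows in "Test Mode" so the re-signed RST .inf can load.
bcdedit /set testsigning on

:: Once you're done testing, turn it back off and reboot again:
bcdedit /set testsigning off
```

This is the persistent alternative to pressing F7 ("Disable driver signature enforcement") in the boot menu on every single boot.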
Regarding FastBoot:
This is entirely irrelevant. All FastBoot does is skip initializing certain components during POST (some of which are configurable in the UEFI) so that the system boots faster. The disk subsystem is certainly not one of them. And since you're on RAID, the initialization of member drives is handled by the OpROM or the UEFI driver. In other words, it's entirely outside FastBoot's domain.
Regarding your last edit:
System instability can drop disks out of arrays. Make sure your system is stable. Also, BCLK overclocking can have a detrimental effect on your data, as it can cause data corruption very, very easily. Do not overclock BCLK unless you know exactly what you're doing and/or have no critical data.