I thought I'd post this as a new thread since there's been some talk about this in a couple of places.
Here's the question: Why can I top 270K on image editing when the next-closest score is a solid 18K lower? If anyone has something they want me to try, like recording my secondary/tertiary memory timings, I'll get to it if it sounds reasonable and post my results.
The system I'm running is based around an Intel i7-5960X (pre-binned), currently on an ASUS X99-A/USB 3.1 motherboard. Storage is dual Samsung 950 Pro NVMe drives (no RAID). Memory is G.Skill Ripjaws V (32 GB quad-channel) with XMP at 3200/14-14-14-34; I run 1T. Video is dual GTX 960s in SLI. CPU cooling is via a dedicated custom loop.
Edit: Forgot to mention, Windows 7 Professional.
Here's the first test I've run:
CPU = 4,900 MHz
Cache = 4,500 MHz
DRAM = 3,200 MHz @ 14-14-14-34-1T
Let's see if the storage device makes a difference.
I installed various storage devices and made 5 runs of the RealBench image editing benchmark on each. Standard tweaks: Diagnostic Mode, 800x600, Explorer closed, and RealBench at realtime priority. NTFS for all. The best score for each device is recorded below.
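If anyone wants to script the realtime-priority tweak instead of setting it by hand, here's a rough sketch using Python and psutil. The process name is a guess on my part, so check Task Manager for the real one, and you'll need to run it elevated.

```python
# Rough sketch: bump an already-running RealBench process to realtime priority.
# Assumes psutil is installed and that the process is named "realbench.exe" (a guess).
# Run as Administrator, since Windows won't grant realtime priority otherwise.
import psutil

TARGET = "realbench.exe"  # assumed process name; verify in Task Manager

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == TARGET:
        proc.nice(psutil.REALTIME_PRIORITY_CLASS)  # Windows-only priority class
        print(f"Set PID {proc.pid} to realtime priority")
```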
Samsung 950 Pro NVMe (PCIe) - 271,794
Toshiba Q Series Pro SSD (SATA) - 271,562
WD 7200 RPM (SATA) - 271,128
Toshiba 7200 RPM (SATA via USB 3.0) - 271,345
USB 1.0 flash drive - 108,717
USB 2.0 flash drive - 250,689
USB 3.0 flash drive - 152,198
It ain't the storage. NVMe and SATA, whether solid state or spinner, all clustered at 271K, including a SATA spinner over USB 3.0. The USB 1.0 flash drive introduced a real bottleneck, but the performance hit for the USB 2.0 flash drive was only about 21K. The USB 3.0 flash drive (PNY, 32 GB) was interesting in that it performed nowhere near as well as the USB 2.0 flash drive (DataTraveler 100 G2, 32 GB). Not all flash memory is the same; apparently my USB 3.0 memory stick sucks.
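For anyone who wants the drops spelled out, here's the quick arithmetic against the NVMe baseline (just the numbers from the list above, nothing new):

```python
# Deltas versus the Samsung 950 Pro baseline, using the best scores listed above.
scores = {
    "Samsung 950 Pro NVMe (PCIe)": 271_794,
    "Toshiba Q Series Pro SSD (SATA)": 271_562,
    "WD 7200 RPM (SATA)": 271_128,
    "Toshiba 7200 RPM (SATA via USB 3.0)": 271_345,
    "USB 1.0 flash drive": 108_717,
    "USB 2.0 flash drive": 250_689,
    "USB 3.0 flash drive": 152_198,
}
baseline = scores["Samsung 950 Pro NVMe (PCIe)"]
for name, score in scores.items():
    delta = baseline - score
    print(f"{name}: -{delta:,} ({delta / baseline:.1%} off the baseline)")
```

The internal drives and the USB 3.0 spinner all land within about a quarter of a percent of each other; only the flash sticks show a real spread.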
Next, we'll drop the cache from 4,500 MHz to 3,000 MHz and see what we get.