Page 1 of 2 (Results 1 to 10 of 14)
  1. #1
    KeithMyers (ROG Enthusiast; joined Jun 2014; reputation 10; 45 posts)
    PC Specs:
    Motherboard: Crosshair VII Hero
    Processor: Ryzen 7 3900X
    Memory (part number): F4-3200D-16GTZ
    Graphics Card #1: GTX 1080 Hybrid
    Graphics Card #2: RTX 2080 Hybrid
    Graphics Card #3: RTX 2080 Hybrid
    Storage #1: Samsung 960 EVO M.2 256GB
    CPU Cooler: XSPC 360 V3 radiator, Aqua Computer Kryos Cuplex Next
    Case: Thermaltake Level 20 XT
    Power Supply: Corsair AX1200
    OS: Ubuntu 18.04.3 LTS
    Network Router: ASUS RT-AC68R

    Someone explain why RealBench downclocks my graphics card during the test

    Can someone explain how RealBench works? I am wondering why, during the benchmark test, it only uses one graphics card (in my case, a GTX 1070 FE). Also, why does it downclock the video clock speed from its normal 1.92 GHz to 1.5 GHz? I have a mild memory and video core overclock set on both cards through Nvidia Inspector. However, I notice that when you stress test the system, it uses both cards at their normal overclocks. It doesn't touch the memory clock during the benchmark, just the video clock. Can someone explain the benchmark methodology, please? I can't find any help file or text document explaining what RealBench does and how it works.

    I can't link to any benchmark result yet, since it appears that the latest available version, 2.44, is broken and not able to log a user in to upload their result file.
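    One way to pin down what the clocks actually do during the run is to log them from the driver while the benchmark executes. This is a minimal sketch (not part of RealBench), assuming the NVML Python bindings are installed (pip install nvidia-ml-py) and that the card under test is GPU index 0:

        import time
        import pynvml  # NVML bindings; ships as the nvidia-ml-py package

        pynvml.nvmlInit()
        gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumption: card under test is index 0

        try:
            while True:
                pstate = pynvml.nvmlDeviceGetPerformanceState(gpu)  # 0 = P0, 2 = P2, ...
                core = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)  # MHz
                mem = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_MEM)        # MHz
                temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
                print(f"P{pstate}  core {core} MHz  mem {mem} MHz  {temp} C")
                time.sleep(1)
        except KeyboardInterrupt:
            pynvml.nvmlShutdown()

    Run it in a second window while the benchmark is going; the P-state column makes it obvious whether, and when, the driver moves the card out of P0.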

  2. #2
    MMikhail (ROG Guru: White Belt; joined Jan 2017; reputation 13; 90 posts)
    PC Specs:
    Laptop (Model): ASUS N76V
    Motherboard: ASUS Z170-A
    Processor: 6700K
    Memory (part number): Crucial Ballistix DDR4 32GB
    Graphics Card #1: STRIX GTX 1070 OC
    Monitor: AMVA3+ LCD 24 FHD
    Storage #1: Crucial SSD
    Storage #2: WD HDDs, stack
    CPU Cooler: EVO 212
    Case: Fractal Define R5
    Power Supply: HX750i
    Keyboard: Razer Deathstalker
    Mouse: Razer Deathadder
    Mouse Pad: Func Ultra
    Headset/Speakers: Creative T30
    OS: Win10 x64
    Network Router: Zyxel Ultra

    I think there is nothing special happening here: just normal overheating, and the card relaxes a bit to stay alive under heavy load.

  3. #3
    KeithMyers (specs as in post #1)

    Quote Originally Posted by MMikhail:
    I think there is nothing special happening here: just normal overheating, and the card relaxes a bit to stay alive under heavy load.
    Uhhh... not the case. I have never exceeded 65°C on any of my 1070s. They are under SIV control with a 6-point fan control curve. They run flat out 24/7 on BOINC projects, crunching math. They are running at 60°C right now as I type, crunching for SETI at 95-100% utilization with a +40 MHz core clock boost and a +400 MHz memory clock boost. They have never clocked down. The video clock is at 1.92 GHz right now and the memory clock is at 4.2 GHz.

    When I run the benchmark test, the memory clocks don't clock down, just the video clock. When I run the stress test, both cards stay where I put them and never cross 60°C. When I run the benchmark, the temps are only 48°C. It's almost as if the benchmark wants to standardize on 1.5 GHz for all tests. That is why I asked: is that how it is supposed to work? I couldn't find any documentation on the methodology of the benchmark. What is it supposed to do, and how does it achieve it?
    Last edited by KeithMyers; 03-09-2017 at 04:28 AM.

  4. #4
    Nodens (Super Moderator - Coderat; joined Mar 2012; reputation 266; 4,389 posts)
    PC Specs:
    Laptop (Model): ASUS GL702VW-GC004T
    Motherboard: Rampage V Extreme Edition 10
    Processor: Intel 3930K C2
    Memory (part number): CMZ16GX3M4X2133C11
    Graphics Card #1: ASUS STRIX-GTX970-DC2OC-4GD5
    Monitor: ASUS VG278H
    Storage #1: 3x OCZ Vertex 3 240 (RAID5 - LSI 9361-8i)
    Storage #2: 3x WD2003FYYS (RAID5 - LSI 9361-8i)
    CPU Cooler: Corsair H100i + 4x Noctua NF-P12
    Case: Coolermaster HAF-X
    Power Supply: Seasonic X-1250
    Keyboard: Razer Anansi
    Mouse: Logitech G502
    OS: Windows 10 Pro x64
    Network Router: Linksys E4200v2 (running TomatoUSB/Toastman firmware)

    RB does not downclock your card at all. If your card downclocks, it happens at the hardware level directly or in the driver, and I have no control over it. Keep in mind that the Luxmark load is a "COMPUTE" OpenCL load and not a regular 3D load; cards behave differently on these loads than they would on a game or 3DMark, for example.
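    To illustrate what a "COMPUTE" load means here (this is not Luxmark's or RealBench's actual code, just a hypothetical minimal example assuming pyopencl is installed): any OpenCL kernel dispatched like the one below is classified by the NVIDIA driver as compute work and clocked accordingly, no matter how simple the kernel is.

        import numpy as np
        import pyopencl as cl  # pip install pyopencl

        ctx = cl.create_some_context()   # pick an OpenCL device (the GPU)
        queue = cl.CommandQueue(ctx)

        # Trivial kernel: the driver treats any OpenCL dispatch as a compute load.
        prg = cl.Program(ctx, """
            __kernel void scale(__global float *a) {
                int i = get_global_id(0);
                a[i] = a[i] * 2.0f;
            }
        """).build()

        a = np.random.rand(1 << 22).astype(np.float32)
        buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                        hostbuf=a)

        for _ in range(1000):                     # keep the GPU busy long enough
            prg.scale(queue, a.shape, None, buf)  # to observe the compute P-state
        queue.finish()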
    RAMPAGE Windows 8/7 UEFI Installation Guide - Patched OROM for TRIM in RAID - Patched UEFI GOP Updater Tool - ASUS OEM License Restorer
    There are 10 types of people in the world. Those who understand binary and those who don't!

    RealBench Developer.

  5. #5
    KeithMyers (specs as in post #1)

    Your answer still doesn't make sense

    Quote Originally Posted by Nodens:
    RB does not downclock your card at all. If your card downclocks, it happens at the hardware level directly or in the driver, and I have no control over it. Keep in mind that the Luxmark load is a "COMPUTE" OpenCL load and not a regular 3D load; cards behave differently on these loads than they would on a game or 3DMark, for example.
    That still doesn't make sense to me, since the cards are never doing anything OTHER than pure compute. I don't game and have never gamed. The cards run 24/7 doing pure compute: OpenCL tasks for SETI@Home, MilkyWay@Home and Einstein@Home. All that work is pure compute. I run two tasks per card, which keeps them at 95-100% utilization all the time. I have never had any of my cards (six total across three machines) downclock either the core clock or the memory clock.

    I guess I will have to run this Luxmark test outside of your benchmark suite and see if it alone can downclock my cards. You don't have a handy link sitting around, so I don't have to go searching for it, do you?

  6. #6
    Nodens (specs as in post #4)

    Like I said, if downclocking happens, it happens at the hardware or driver level. The distributed computing apps you are using are indeed also compute loads, which makes them not entirely unrelated (unlike 3D game/benchmark loads), BUT they are still not very relevant either. I am guessing you are using the CUDA client on BOINC and not OpenCL (if not, you should, since the performance is way higher), so that in itself makes what you are currently running entirely irrelevant. If you are indeed using the OpenCL client and it does not exhibit the same behaviour, then it's probably something specific to Luxmark's dataset (although it's quite possible that it will, if you test that).

    In any case, here is the LuxMark download site: http://www.luxrender.net/wiki/LuxMark#Binaries

    RB up to 2.44 uses LuxMark version 2, which underperforms on Nvidia cards anyway. The latest 2.54b beta uses 3.1.
    Last edited by Nodens; 03-15-2017 at 05:39 PM.

  7. #7
    KeithMyers (specs as in post #1)

    No, only OpenCL work on my cards

    Quote Originally Posted by Nodens:
    Like I said, if downclocking happens, it happens at the hardware or driver level. The distributed computing apps you are using are indeed also compute loads, which makes them not entirely unrelated (unlike 3D game/benchmark loads), BUT they are still not very relevant either. I am guessing you are using the CUDA client on BOINC and not OpenCL (if not, you should, since the performance is way higher), so that in itself makes what you are currently running entirely irrelevant. If you are indeed using the OpenCL client and it does not exhibit the same behaviour, then it's probably something specific to Luxmark's dataset (although it's quite possible that it will, if you test that).

    In any case, here is the LuxMark download site: http://www.luxrender.net/wiki/LuxMark#Binaries

    RB up to 2.44 uses LuxMark version 2, which underperforms on Nvidia cards anyway. The latest 2.54b beta uses 3.1.
    Thanks for the link; I will go check it out. Just FYI, on SETI with BOINC I am running only OpenCL work. I haven't run any CUDA work for almost two years. The OpenCL app is five times faster than the current CUDA app (on Windows, that is). MilkyWay and Einstein have only OpenCL apps, no CUDA apps. If I were to move to a Linux platform for SETI, I could get access to a beta CUDA 8.0 app which is five times faster than the OpenCL app on Windows. That app is 50 times faster than the old CUDA 5.0 app on Windows.

    If the Luxmark test alone downclocks the card, then I will apologize for denigrating your benchmark suite. I just didn't understand why it downclocked only the video clock and left the memory clock alone, and why stress testing the cards for 30 minutes didn't downclock either the video or the memory clock.

    I see you mentioned 2.54b. I thought I had the latest at 2.53b. I will go find the latest. Thanks for your responses.

  8. #8
    KeithMyers (specs as in post #1)

    No downclock with Luxmark 3.1

    OK, I downloaded Luxmark 3.1 and ran it. No downclocks seen. I then tried running the new 2.54b RealBench suite. It wouldn't run the H.265 encoding properly: it said the first job would take 47 hours. It then ran the Luxmark part of the suite, and that still downclocked for some reason. It then moved on to the multi-tasking part and never completed the first job, running only one core at 12% while all the other cores sat idle for some reason. I assume a multi-task test should run on all cores. All the tests in the 2.44 suite ran with no issues. I had downloaded the 2.53b version, but I had not got around to running it yet.

    I've attached some images of the Luxmark standalone test results, along with a desktop image taken while the test was running, showing the graphics cards at their full overclocked video and memory clocks via my SIV monitoring tool, which I always have running on the desktop.

    I can't yet explain the difference between RB and the standalone Luxmark test. Also, for me, the 2.54b version is a step backwards from the 2.44 version. I had wanted to run the 2.53b version because it was supposed to have the upload-results problem fixed. I still have not accomplished that, since I can't get 2.54b to run properly.
    Attached Thumbnails: Luxmark 3.1 test.jpg, Luxmark 3.1 test no downclock.jpg


  9. #9
    jab383 (ROG Guru: Blue Belt; joined Feb 2014; reputation 107; 848 posts)
    PC Specs:
    Motherboard: 24/7 rig: Maximus VI Extreme
    Processor: i7 4790K
    Memory (part number): 16GB Mushkin Redline 2400 10-12-12-28 + 16GB Corsair Vengeance 2400 10-12-12-31
    Graphics Card #1: AMD Firepro W5000
    Sound Card: M6E Supreme FX
    Monitor: Dell U2413
    Storage #1: Kingston SH103S3240G SSD
    Storage #2: Seagate ST1000DM003 1TB
    CPU Cooler: Custom water loop, delidded, liquid metal TIM
    Case: CoolerMaster HAF XM
    Power Supply: Corsair HX-750
    Keyboard: Logitech G710+
    Mouse: Logitech M705
    OS: Windows 7 64 Pro

    The clock slow-down seen in OpenCL is done by the nVidia driver, not RealBench. It has to do with the workload the driver sees:

    nVidia graphics cards use various performance states for various tasks. P8 (performance level 8) is standby and has very low idle clocks. P5 is 2D, with clocks a little higher, and is often the fallback level when the driver crashes. P0 is high-performance 3D, with the highest clocks. P2 is for direct compute applications like OpenCL and uses slightly lower clocks, including the memory clock.

    I often start an overclocked benchmark in P0, go through a driver reset, and wind up with the really slow clocks of P5. It often takes a reboot to break out of P5 and be able to set clock speeds again.

    GPUTweak and other graphics card tuning software (PrecisionX, Afterburner) work on the P0 clocks. In a direct compute benchmark like Luxmark, the GPU clock speed roughly follows the GPUTweak setting, but the memory clock is locked at the lower level of the P2 performance state. nVidia Inspector is the only tuning software that will let us tweak P2 speeds directly. Check out nVidia Inspector: it shows the P-state running at the time, and its overclocking tab can be set to operate on any of the P-states. You may have to experiment with setting voltages and clocks in the right tool and in the right order. Have fun.
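    As a side note (a sketch under assumptions, not from jab383's post): the P-state and current clocks can also be read without any tuning software, since nvidia-smi ships with the driver on both Windows and Linux. The query fields used below (pstate, clocks.gr, clocks.mem) are listed by nvidia-smi --help-query-gpu:

        import subprocess

        # Ask the driver for each GPU's performance state and current clocks.
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=index,pstate,clocks.gr,clocks.mem",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        for line in out.stdout.strip().splitlines():
            idx, pstate, core, mem = [f.strip() for f in line.split(",")]
            print(f"GPU {idx}: {pstate}, core {core}, mem {mem}")

    Running this during Luxmark versus during a 3D load should show P2 versus P0, with the P2 memory clock sitting at the lower locked value.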

  10. #10
    KeithMyers (specs as in post #1)

    No slowdown in OpenCL for Luxmark

    Thanks for the comment. I guess I didn't make myself clear in my previous post: I don't see any slowdown in the overclocks with the standalone OpenCL Luxmark benchmark. That is shown in the image I posted. I do, however, see a downclock in the video clock with RealBench.

    I know about the Nvidia driver forcing the cards into the P2 state for compute. That is why I have used Nvidia Inspector for over a year to set where I run the cards in the P2 state. The overclocks are for the P2 state. The cards never fall out of their overclock in P2 during my normal 100% compute utilization for distributed computing, which is 100% OpenCL-based work. They didn't fall out of my P2 overclock for the standalone Luxmark benchmark either. I still can't explain the difference in downclocking between the standalone test and RealBench.

    Cheers.

