ASUS Z77 UEFI Memory Timings
DRAM Timing Control: Takes us to the DRAM Timing sub-section.


Most of these settings can safely be left at Auto unless you wish to tune the system for optimal scoring in benchmarks. The primary timings will be set in accordance with the memory module SPD at a given frequency or fall back on ASUS defaults as memory bus frequency is increased.


If you do wish to overclock memory then we suggest starting off by entering the Memory Preset subsection and selecting a memory profile that is based upon the ICs that are used on your memory modules.











DRAM CAS Latency: Column Address Strobe latency defines the time it takes for data to be ready for burst after a read command is issued. As CAS factors into every column read transaction, it is considered the most important timing for memory read performance.


To calculate the actual time period denoted by the number of clock cycles set for CAS we can use the following formula:


tCAS (in nanoseconds) = (CAS × 2000) / effective memory frequency (MT/s)


This same formula can be applied to all memory timings that are set in DRAM clock cycles.
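As a quick sanity check, the conversion above can be expressed as a small helper (a minimal sketch; the function name and example values are our own):

```python
def timing_ns(clocks, ddr_rate_mts):
    """Convert a DRAM timing in clock cycles to nanoseconds.

    ddr_rate_mts is the effective (double data rate) frequency,
    e.g. 1600 for DDR3-1600. The factor of 2000 combines the DDR
    halving of the clock with the MHz-to-nanosecond conversion.
    """
    return clocks * 2000 / ddr_rate_mts

# CAS 9 at DDR3-1600:
print(timing_ns(9, 1600))  # 11.25 ns
```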


DRAM RAS TO CAS Latency: Also known as tRCD. Defines the time it takes to complete a row access after an activate command is issued to a rank of memory. This timing is of secondary importance behind CAS, as memory is divided into rows and columns (each row contains 1024 column addresses). Once a row has been accessed, multiple CAS requests can be sent to the row to read or write data. While a row is “open” it is referred to as an open page. Up to eight pages can be open at any one time on a rank (a rank is one side of a memory module).


DRAM RAS# PRE Time: Also known as tRP. Defines the number of DRAM clock cycles it takes to Precharge a row after a page close command is issued, in preparation for the next row access to the same physical bank. As multiple pages can be open on a rank before a page close command is issued, the impact of tRP on memory performance is not as prevalent as that of CAS or tRCD, although the impact does increase if multiple page open and close requests are sent to the same memory IC and, to a lesser extent, the same rank (there are 8 physical ICs per rank and only one page can be open per IC at a time, making up the total of 8 simultaneously open pages per rank).


DRAM RAS Active Time: Also known as tRAS. This setting defines the number of DRAM cycles that elapse before a Precharge command can be issued. The minimum clock cycles tRAS should be set to is the sum of CAS+tRCD+tRTP.
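The rule of thumb above is simple enough to encode (a sketch; the example values are chosen purely for illustration):

```python
def min_tras(cas, trcd, trtp):
    # Per the guideline above, tRAS should be no lower than CAS + tRCD + tRTP.
    return cas + trcd + trtp

# e.g. CAS 9, tRCD 9, tRTP 6:
print(min_tras(9, 9, 6))  # 24 clocks
```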


DRAM Command Mode: Also known as Command Rate. Specifies the number of DRAM clock cycles that elapse between issuing commands to the DIMMs after a chip select. The impact of Command Rate on performance can vary. For example, if most of the data requested by the CPU is in the same row, the impact of Command Rate becomes negligible. If however the banks in a rank have no open pages, and multiple banks need to be opened on that rank or across ranks, the impact of Command Rate increases.


Most DRAM module densities will operate fine with a 1N Command Rate. Memory modules containing older DRAM IC types may however need a 2N Command Rate.


Latency Boundary: This setting contains presets for the third timing section below. A higher number is less aggressive. We recommend you start with a setting of 14 and then decrease one step at a time, running a stress test after each change to check that the system is stable.
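The suggested procedure — start at 14 and walk downwards one step at a time while stress testing — amounts to the loop below (a hypothetical sketch: `is_stable` stands in for a full stress-test run on real hardware):

```python
def tune_latency_boundary(is_stable, start=14, floor=1):
    """Lower the Latency Boundary preset one step at a time,
    returning the last value that still passes the stress test."""
    value = start
    while value > floor and is_stable(value - 1):
        value -= 1
    return value

# With a hypothetical system that is stable down to 11:
print(tune_latency_boundary(lambda v: v >= 11))  # 11
```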


Secondary Timings



DRAM RAS to RAS Delay: Also known as tRRD (activate to activate delay). Specifies the number of DRAM clock cycles between consecutive Activate (ACT) commands to different banks of memory on the same physical rank. The minimum spacing allowed at the chipset level is 4 DRAM clocks. Setting this any lower will result in the chipset reverting to 4 clocks internally.


DRAM Ref Cycle Time: Also known as tRFC. Specifies the number of DRAM clocks that must elapse before a command can be issued to the DIMMs after a DRAM cell refresh.


DRAM Write Recovery Time: Defines the number of clock cycles that must elapse between a memory write operation and a Precharge command. Most DRAM configurations will operate with a setting of 9 clocks up to DDR3-2500. Change to 12~16 clocks if experiencing instability. Minimum possible setting internally is 5 clocks.


DRAM Read to Precharge Time: Also known as tRTP. Specifies the spacing between the issuing of a read command and tRP (Precharge) when a read is followed by a page close request. The minimum possible spacing is limited by DDR3 burst length which is 4 DRAM clocks. Most 2GB memory modules will operate fine with a setting of 4~6 clocks up to speeds of DDR3-2000 (depending upon the number of DIMMs used in tandem). High performance 4GB DIMMs (DDR3-2000+) can handle a setting of 4 clocks provided you are running 8GB of memory in total and that the processor memory controller is capable. If running more than 8GB expect to relax tRTP as memory frequency is increased.







DRAM Four Activate Window: Also known as tFAW. This timing specifies the number of DRAM clocks that must elapse before more than four Activate commands can be sent to the same rank. The minimum spacing is tRRD*4, and since we know that the minimum value of tRRD is 4 clocks, we know that the minimum internal value for tFAW at the chipset level is 16 DRAM clocks.


As the effects of tFAW spacing are only realised after four Activates to the same DIMM, the overall performance impact of tFAW is not large; however, benchmarks like Super Pi 32M can benefit from setting tFAW to the minimum possible value.


As with tRRD, setting tFAW below its lowest possible value will result in the memory controller reverting to the lowest possible value (16 DRAM clocks or tRRD * 4).
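The clamping behaviour described for tRRD and tFAW can be summarised as follows (a sketch of the floor logic only, not actual chipset code):

```python
def effective_trrd(requested):
    # The chipset enforces a floor of 4 DRAM clocks for tRRD.
    return max(requested, 4)

def effective_tfaw(requested, trrd):
    # tFAW cannot drop below tRRD * 4 (16 clocks at the minimum tRRD).
    return max(requested, effective_trrd(trrd) * 4)

# Asking for tFAW 10 with tRRD 3 reverts to the internal minimum:
print(effective_tfaw(10, 3))  # 16
```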


DRAM Write to Read Delay: Also known as tWTR. Sets the number of DRAM clocks to wait before issuing a read command after a write command. The minimum internal spacing is 4 clocks. As with tRTP this value may need to be increased according to memory density and memory frequency.


DRAM CKE Minimum Pulse width: This setting can be left on Auto for all overclocking. CKE defines the minimum number of clocks that must elapse before the system can transition from normal operation to a low-power state and vice versa.


DRAM RTL & IOL: Unlike other timings, DRAM RTL and IOL are measured in memory controller clock cycles rather than DRAM bus cycles. These settings can safely be left on Auto for all normal use. The RTL and IOL parameters define the number of memory controller cycles that elapse before data is returned to the memory controller after a read CAS command is issued. The IOL setting works in conjunction with RTL to fine tune DRAM buffer output latency. Both settings are auto-sensed by the memory controller during the POST process (memory training).


Manual adjustment should not be necessary unless the system is being used to obtain maximum DRAM frequency screenshots (limited stability), or if running speeds in excess of DDR3-2400, where some drift may manifest in read/write levelling between AC power cycles of the system (cold boot). In such cases, it is worth noting the RTL and IOL values that the system was previously stable at and then applying offsets manually to improve system stability.

Third Timings



Most of these timings can be left on Auto unless tweaking for Super Pi 32M. The best way to tune these settings when benchmarking is to set them to their maximum value and then decrease one step at a time, monitoring stability after every change. The Latency Boundary setting above is an easy way of doing this. Should you still wish to tune manually, we have color-coded the text within this section to highlight the more important timings over the lesser ones.


[COLOR=red]Red = more important[/COLOR]


Black = less important


On some settings, Intel has enforced a 2-clock preset to which the UEFI-set value is added; on others, the memory controller calculates a minimum delay to which the UEFI value is added.


[COLOR=red]tRWDR (DD)[/COLOR]: Sets the delay period between a read command that is followed by a write command, where the write command requires the access of data from a different rank or DIMM. A setting of 1 clock works with some high performance DIMM configurations (dependent upon CAS). Relax to 2~7 clocks only if you are experiencing stability issues when running in excess of 4GB of memory over DDR3-2300.


[COLOR=red]tRWSR[/COLOR]: Sets the delay between a read command followed by a write command to the same rank. A setting of 2 is possible with high performance 2GB DIMMs, though it may need relaxing to 3 at higher frequencies. If experiencing instability or non-POST with CAS 8 or 9, try a setting of 4+. To use a setting of 3 with CAS 9, try setting Stretch_ODT to 8 clocks using MemTweakIt and monitor for performance impact or change.


tRR (DD): Sets the read to read delay where the subsequent read requires the access of a different DIMM. For high performance DIMMs start with a setting of 2 and increase to 3+ if you experience no POST.


tRR (DR): Sets the delay between read commands when the subsequent read requires the access of a different rank on the same DIMM. This setting is an additive to an internally calculated value.









[COLOR=red]tRRSR[/COLOR]: Sets the delay between read commands to the same rank. From a performance perspective a setting of 4 clocks is optimal.


tWW(DD): Sets the write to write delay where the subsequent write requires the access of a different DIMM. 4 clocks will work with most configurations; increase if using 4GB or 8GB DIMMs with all slots populated.


tWW(DR): Sets the write to write delay where the subsequent write command requires the access of a different rank on the same DIMM; increase if using 4GB or 8GB DIMMs with all slots populated.


[COLOR=red]tWWSR[/COLOR]: Sets the delay between write commands to the same rank. From a performance perspective a setting of 4 clocks is optimal.




Misc Settings





MRC Fast Boot: Bypasses longer memory training routines during system reboot. Can help speed up boot times. If using higher memory frequency divider ratios (DDR3-2133 and over), disabling this setting while trying to achieve stability can be beneficial. Once the desired system stability has been obtained, enable this setting to prevent the auto-sensed parameters from drifting on subsequent reboots.


DRAM CLK Period: Defines memory controller latencies in conjunction with the applied memory frequency. A setting of 5 gives best overall performance though may impact stability.


Transmitter Slew & Receiver Slew: A setting of around ‘3’ on Transmitter Slew is a good starting point with most DIMMs and may yield the best results. Tweaking these settings will require some time, but can extend overclocking headroom for DRAM frequency. It’s best to adjust one step at a time and then run a memory intensive benchmark or stress test, monitoring for changes in failure rates to find the optimal settings.


After changing Transmitter Slew, one should go through the same steps tuning Receiver Slew. Both settings should be tuned before relying on an increase of voltage.


MCH Duty Sense CHA & CHB: These settings can be left on Auto most of the time. If experimenting, start at the middle value of 15, check for impact on stability, then move up by +2 and re-check. Tuning will be system and DIMM specific and will depend on operating frequency.


CHA & CHB DIMM Control: Allows a user to disable a channel without physically removing the DIMM. Leave on Auto unless experimenting or testing individual channels for stability.


DRAM Read and Write Additional Swizzle: Leave these settings on Auto unless experiencing instability at high DRAM frequency. Toggling from enabled to disabled or vice-versa may help pass a benchmark where the DIMMs were otherwise unstable.









GPU.DIMM Post: This takes us to a sub-menu where we can check that DIMMs and GPUs have been detected at POST.


CPU Power Management: Takes us to a sub-menu that allows configuration of non-Turbo ratio CPU multipliers as well as set power thresholds for Turbo multipliers. Information is provided within UEFI with regards to the usage of each option.






DIGI+ Power Control:











Each of the settings within the DIGI+ VRM section has an explanation listed in the right-hand column of UEFI. All settings have been configured to scale on Auto in accordance with overclocking. We recommend you leave the thermal control parameters as is for all operating conditions. We’ll highlight some of the other settings below for clarification purposes.


Load-Line Calibration: AKA LLC, sets the margin between applied and load voltage. For 24/7 use a setting of 50% is considered optimal, providing the best balance between set and load voltage in a manner that complements the VRM for all loading conditions. Some users prefer using higher values, although this will impact overshoot to a small degree.


VRM Spread Spectrum: Assigns enhanced modulation of the VRM output in order to reduce the peak magnitude of radiated noise coupled into nearby circuitry. This setting should only be used at stock operating frequency, as the modulation routines can impact transient response.

All Current Capability settings: A setting of 100% on all of these settings should be ample to overclock processors using conventional cooling methods. If pushing processors using Ln2 or other sub-zero forms of cooling then increase the current threshold to each voltage rail respectively. A setting of 140% should ensure OCP does not trip during benchmarks.





CPU Voltage: There are two ways to control CPU core voltage: Offset Mode and Manual Mode. Manual Mode assigns a static level of voltage for the processor. Offset Mode allows the processor to request voltage according to loading conditions and operating frequency. Offset Mode is preferred for 24/7 systems as it allows the processor to lower its voltage during idle conditions, thus saving a small amount of power and reducing unnecessary heat.


The caveat of Offset Mode is that the full load voltage the processor will request under load is impossible to predict without loading the processor fully. The base level of voltage used will increase in accordance with the CPU multiplier ratio. It is therefore best to start with a low multiplier ratio and work upwards in 1X steps, checking for stability at each increase. Enter the OS, load the CPU and use CPU-Z to check the voltage the CPU requests from the buck controller. If the level of voltage requested is very high, you can reduce the full load voltage by applying a negative offset in UEFI. For example, if our full load voltage at a 45X CPU multiplier ratio happened to be 1.40V, we could reduce that to 1.35V by applying a 0.05V negative offset in UEFI.
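The offset arithmetic from that example is trivial but worth spelling out (a sketch; the helper name is our own):

```python
def vcore_offset(observed_load_v, target_load_v):
    """Signed offset to enter in UEFI to move the observed full-load
    voltage to the target; negative values reduce voltage."""
    return round(target_load_v - observed_load_v, 3)

# 1.40V under load, 1.35V desired:
print(vcore_offset(1.40, 1.35))  # -0.05
```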


Most of the information pertaining to overclocking Sandy Bridge CPUs has already been well documented on the internet. For those of you purchasing retail Ivy Bridge CPUs, we expect most samples to achieve 4.3-4.5GHz with air and water cooling. Higher overclocks are possible although full-loading of the CPUs will result in very high temperatures even though the current consumed by these processors is not excessive. We suspect this is a facet of the 22nm process.


iGPU Voltage: Sets the rail voltage of the integrated GPU. Same function as CPU Vcore with regards to Manual and Offset Mode. Should this option be available when you are not using the iGPU, you may force the iGPU to Disabled via System Agent Config > Graphics Config > Internal Graphics. Doing so ensures better memory stability and overclocking headroom.

DRAM Voltage: Sets voltage for the memory modules. 1.50V DIMMs qualified on Sandy Bridge and Ivy Bridge processors are recommended for use on this platform.


IMC-DRAM Offset Sign: Selects whether to add or subtract voltage via the IMC-DRAM Offset function below.


IMC-DRAM Offset: This setting offsets the DRAM voltage seen by a portion of the memory controller. The base voltage at Auto is DRAM voltage. By using the positive or negative setting above, we can offset processor-side DRAM voltage in 0.00661V steps. The reason this setting has been added is that we found some DIMMs exhibit more stability when a specific processor-side DRAM pin has its voltage set above or below the voltage supplied to the memory modules.


Usually a setting of 5 steps above or below DRAM voltage (circa 0.033V) is sufficient to help in memory intensive benchmarks. In my testing to date, the memory ICs that seem to respond best to an offset are Elpida BBSE based modules. Don’t stray too far from Auto.
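Using the 0.00661V step size quoted above, the effective offset for a given number of steps works out as follows (a sketch; the sign argument mirrors the IMC-DRAM Offset Sign setting):

```python
STEP_V = 0.00661  # per-step granularity quoted above

def imc_dram_offset(steps, sign=1):
    # sign: +1 adds to DRAM voltage, -1 subtracts from it.
    return sign * steps * STEP_V

# Five steps above DRAM voltage:
print(round(imc_dram_offset(5), 3))  # 0.033
```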











VCCSA Voltage: Sets the voltage for the System Agent. Can be left on Auto for most overclocking.




VCCIO: May need adjustment on Sandy Bridge processors if using 16GB of memory or memory modules that contain ICs that present a tough load to the memory controller. 1.05V is the base; if adjusting, increase in 0.025V steps and check stability at each increment. Maintaining a DC delta between this setting and DRAM voltage may be beneficial if using very high DRAM voltages (on Ivy Bridge, too).
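Starting from the 1.05V base and stepping in 0.025V increments gives a simple ladder to test against (a sketch using the figures quoted above):

```python
BASE_VCCIO = 1.05  # base voltage quoted above
STEP = 0.025       # suggested increment per stability check

def vccio_after_steps(n):
    # VCCIO after n upward adjustments from the base.
    return round(BASE_VCCIO + n * STEP, 3)

print(vccio_after_steps(2))  # 1.1
```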


CPU PLL Voltage: For most overclocking, the minimum voltage requirements will be centred around 1.80V. If using higher processor multiplier ratios or DRAM frequencies over DDR3-2200, a small over-voltage here can aid stability. Do note that the processor will become increasingly sensitive to PLL voltage changes at sub-zero temperatures and when nearing the maximum frequency the CPU is capable of.


Skew Driving Voltage: Base is 1.05V. Adjustment is only required when running sub-zero processor temperatures or very high BCLKs. We have taken the time to enter offsets for you to work with within the Ln2 profiles.



2nd VCCIO Voltage: Split from the VCCIO power rail to allow you to adjust both separately. As a starting point keep this close to VCCIO, then try setting it to a different value if chasing maximum processor overclocks (benchmarking use).




PCH Voltage: Can be left at default values for all overclocking. We have not observed any relationship between this voltage rail and any other in our testing to date.


VTTDDR: Supplies power to the VTT input pin on DRAM memory modules. In most cases this setting can be left on Auto. At high DRAM clocks (in excess of DDR3-2400) increasing this voltage may help improve stability. Start with 0.85V and work up. Traditionally this setting should be at 50% of VDIMM, however in our testing we have found 0.85V a good starting point for improving stability in Super Pi 32M. A one or two step change above or below that can help 32M pass where it would otherwise fail.


DRAM DATA and CTRL References for all channels: Allow adjustment of the DRAM read/write reference voltages for the DATA and CTRL signal lines. A setting of Auto defaults to 50% of VDIMM which should be adequate for almost all overclocking. Adjustment can sometimes be required when benchmarking memory at very high operating frequencies. In such instances a small reduction or increase (one step) above or below 50% can help aid stability in memory intensive benchmarks. Also if processors are sub-zero cooled, there may come a point where the memory controller becomes unstable regardless of operating frequency. This is where fiddling with these voltages can sometimes help pass benchmarks that would be otherwise unstable.


BCLK and other Skews: For air/water overclocking, a slight change to BCLK skew (+/- 1) can help improve processor stability at high memory frequencies. At sub-zero temps, tweaking these settings is beneficial for achieving maximum processor and BCLK frequency. There is no hard and fast rule for these settings; it is best to adjust one setting at a time and monitor for impact on stability before moving on to the next.


CPU Spread Spectrum: Modulates processor core frequency in order to reduce the peak magnitude of radiated noise emissions. We recommend setting this to Disabled if overclocking, as the modulation can interfere with system stability.




BCLK Recovery: When enabled, this setting will return BCLK to a setting of 100 MHz (default) if the system fails to POST. Disabling it will NOT return BCLK to 100MHz when OC Failure is detected.