Issue
Among the timing functions time, clock, getrusage, clock_gettime, gettimeofday and timespec_get, I want to understand clearly how they are implemented and what their return values are, in order to know in which situation I have to use them.
First we need to classify functions returning wall-clock values as opposed to functions returning process or thread values. gettimeofday returns a wall-clock value, clock_gettime returns either a wall-clock value or a process or thread value depending on the clock parameter passed to it. getrusage and clock return process values.
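For example, here is a minimal sketch (assuming a POSIX system; error checking omitted) showing how the same clock_gettime call reads either the wall clock or the per-process CPU clock depending on the clock ID:

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec wall, cpu;

    /* Wall-clock time: can jump if the system time is changed. */
    clock_gettime(CLOCK_REALTIME, &wall);

    /* CPU time consumed by this process (user + system). */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu);

    printf("wall: %lld.%09ld\n", (long long)wall.tv_sec, wall.tv_nsec);
    printf("cpu:  %lld.%09ld\n", (long long)cpu.tv_sec, cpu.tv_nsec);
    return 0;
}
```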
The second question concerns the implementation of these functions and, as a consequence, their accuracy: which hardware or software mechanisms do these functions use?
It seems that getrusage uses only the kernel tick (usually 1 ms long) and as a consequence can't be more accurate than the millisecond. Is that right?
The gettimeofday function seems to use the most accurate underlying hardware available. As a consequence its accuracy is usually the microsecond (it can't be better because of the API) on recent hardware.
What about clock? The man page speaks about an "approximation"; what does that mean?
What about clock_gettime? The API is in nanoseconds; does that mean it can be that accurate if the underlying hardware allows it? And what about monotonicity?
Are there any other functions?
Solution
The problem is that there are several different time functions available in C and C++, and some of them vary in behavior between implementations. There are also a lot of half-answers floating around. Compiling a list of clock functions together with their properties would answer the question properly. For starters, let's ask what the relevant properties are that we're looking for. Looking at your post, I suggest:
- What time is measured by the clock? (real, user, system, or, hopefully not, wall-clock?)
- What is the precision of the clock? (s, ms, µs, or faster?)
- After how much time does the clock wrap around? Or is there some mechanism to avoid this?
- Is the clock monotonic, or will it change with changes in the system time (via NTP, time zone, daylight savings time, by the user, etc.)?
- How do the above vary between implementations?
- Is the specific function obsolete, non-standard, etc.?
Before starting the list, I'd like to point out that wall-clock time is rarely the right time to use, since it changes with time zone changes, daylight savings time changes, or when the wall clock is synchronized by NTP. None of these things are good if you're using the time to schedule events or to benchmark performance. It's only really good for what the name says: a clock on the wall (or desktop).
Here's what I've found so far for clocks in Linux and OS X:
- time() returns the wall-clock time from the OS, with precision in seconds.
- clock() seems to return the sum of user and system time. It is present in C89 and later. At one time this was supposed to be the CPU time in cycles, but modern standards like POSIX require CLOCKS_PER_SEC to be 1000000, giving a maximum possible precision of 1 µs. The precision on my system is indeed 1 µs. This clock wraps around once it tops out (this typically happens after ~2^32 ticks, which is not very long for a 1 MHz clock). man clock says that since glibc 2.18 it is implemented with clock_gettime(CLOCK_PROCESS_CPUTIME_ID, ...) on Linux. (A usage sketch follows this list.)
- clock_gettime(CLOCK_MONOTONIC, ...) provides nanosecond resolution and is monotonic. I believe the 'seconds' and 'nanoseconds' are stored separately, each in 32-bit counters, so any wrap-around would occur only after many dozen years of uptime. This looks like a very good clock, but unfortunately it isn't yet available on OS X. POSIX 7 describes CLOCK_MONOTONIC as an optional extension. (A usage sketch follows this list.)
- getrusage() turned out to be the best choice for my situation. It reports the user and system times separately and does not wrap around. The precision on my system is 1 µs, but I also tested it on a Linux system (Red Hat 4.1.2-48 with GCC 4.1.2) and there the precision was only 1 ms. (A usage sketch follows this list.)
- gettimeofday() returns the wall-clock time with (nominally) µs precision. On my system this clock does seem to have µs precision, but this is not guaranteed, because "the resolution of the system clock is hardware dependent". POSIX.1-2008 says that "applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function", so you should stay away from it. Linux x86 implements it as a system call.
- mach_absolute_time() is an option for very high resolution (ns) timing on OS X. On my system, this does indeed give ns resolution. In principle this clock wraps around; however, it stores ns in a 64-bit unsigned integer, so wrapping shouldn't be an issue in practice. Portability is questionable.
- I wrote a hybrid function based on this snippet that uses clock_gettime when compiled on Linux, or a Mach timer when compiled on OS X, in order to get ns precision on both Linux and OS X. (A rough sketch of the general approach follows this list.)
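Here is a minimal sketch of CPU-time measurement with clock(); the division by CLOCKS_PER_SEC converts clock ticks to seconds, and the loop is just a stand-in for the work being measured:

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();

    /* Stand-in workload; replace with the code being measured. */
    volatile double x = 0.0;
    for (long i = 1; i <= 10000000L; i++)
        x += 1.0 / (double)i;

    clock_t end = clock();

    /* clock() returns ticks of user + system CPU time. */
    printf("CPU time: %.6f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}
```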
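And a sketch of elapsed-time measurement with clock_gettime(CLOCK_MONOTONIC, ...); the seconds and nanoseconds fields have to be combined by hand, and on older glibc you may need to link with -lrt:

```c
#include <stdio.h>
#include <time.h>

/* Difference end - start in seconds, as a double. */
static double elapsed_seconds(struct timespec start, struct timespec end)
{
    return (double)(end.tv_sec - start.tv_sec)
         + (double)(end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... work to be timed ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    printf("elapsed: %.9f s\n", elapsed_seconds(start, end));
    return 0;
}
```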
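getrusage() usage looks like this (a sketch; ru_utime and ru_stime are struct timeval, so this interface cannot report anything finer than 1 µs):

```c
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>

int main(void)
{
    struct rusage usage;

    if (getrusage(RUSAGE_SELF, &usage) == 0) {
        /* User and system CPU time are reported separately. */
        printf("user:   %lld.%06ld s\n",
               (long long)usage.ru_utime.tv_sec, (long)usage.ru_utime.tv_usec);
        printf("system: %lld.%06ld s\n",
               (long long)usage.ru_stime.tv_sec, (long)usage.ru_stime.tv_usec);
    }
    return 0;
}
```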
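The hybrid function itself isn't reproduced here; the following is only a rough sketch of the general idea, assuming clock_gettime(CLOCK_MONOTONIC, ...) on Linux and mach_absolute_time() plus mach_timebase_info() on (pre-Sierra) OS X:

```c
#include <stdint.h>

#ifdef __MACH__
#include <mach/mach_time.h>
#else
#include <time.h>
#endif

/* Monotonic timestamp in nanoseconds (sketch: no error handling, and the
 * OS X multiplication can overflow after very long uptimes). */
static uint64_t monotonic_ns(void)
{
#ifdef __MACH__
    /* mach_absolute_time() counts in ticks; the timebase gives the
     * numerator/denominator that converts ticks to nanoseconds. */
    static mach_timebase_info_data_t timebase;
    if (timebase.denom == 0)
        mach_timebase_info(&timebase);
    return mach_absolute_time() * timebase.numer / timebase.denom;
#else
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    return (uint64_t)t.tv_sec * 1000000000ULL + (uint64_t)t.tv_nsec;
#endif
}
```

Timing a region of code then amounts to subtracting two monotonic_ns() values.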
All of the above exist in both Linux and OS X except where otherwise specified. "My system" in the above is an Apple running OS X 10.8.3 with GCC 4.7.2 from MacPorts.
Finally, here is a list of references that I found helpful in addition to the links above:
- http://blog.habets.pp.se/2010/09/gettimeofday-should-never-be-used-to-measure-time
- How to measure the ACTUAL execution time of a C program under Linux?
- http://digitalsandwich.com/archives/27-benchmarking-misconceptions-microtime-vs-getrusage.html
- http://www.unix.com/hp-ux/38937-getrusage.html
Update: for OS X, clock_gettime has been implemented as of 10.12 (Sierra). Also, both POSIX- and BSD-based platforms (like OS X) share the rusage.ru_utime struct field.
Answered By - Douglas B. Staple
Answer Checked By - Willingham (WPSolving Volunteer)