Tuesday, November 16, 2021

[SOLVED] Digging deeper into System.currentTimeMillis(): how precise can we expect it to be? What does millisecond precision actually mean?

Issue

Please only answer when you fully comprehend the question.

Please do not close this as a duplicate; there is no similar question.

I am aware that System.nanoTime() gives nanoseconds measured from an arbitrary origin fixed when the JVM starts, and I am aware that System.currentTimeMillis() only gives millisecond precision.

What I am looking for is PROOF, so please keep an open mind, for the hypothesis that the millisecond transitions are not exact once we try to define what exact means.

Exact would, in my world, mean that every time a new millisecond is registered, say we go from 97 ms to 98 ms to 99 ms and so forth, each update lands precisely on a millisecond boundary. Whatever the underlying mechanism, we cannot expect Java, at least as observed, to give us nanosecond precision at those switches.

I know, I know, it sounds weird to expect that, but then the question becomes: how accurate are the millisecond switches?

It appears that when you call System.nanoTime() repeatedly, you get a linear graph with nanosecond resolution.

If we call System.currentTimeMillis() right after System.nanoTime() and disregard the variance in the cost of the calls themselves, the result does not appear to be a linear graph at the same resolution. The millisecond graph jitters by roughly +-250 ns.

This is to be expected, yet I cannot find any information on the error margin, or on the accuracy of the millisecond value.
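For reference, the kind of measurement I am describing looks roughly like the sketch below. It is not a benchmark: the loop count is arbitrary, the cost of the calls themselves is not compensated for, and the class and variable names are my own.

public class MillisTransitionProbe {
    public static void main(String[] args) {
        // Spin until the first millisecond switch so measurement starts on a boundary.
        long ms = System.currentTimeMillis();
        while (System.currentTimeMillis() == ms) { /* busy-wait */ }
        ms = System.currentTimeMillis();
        long boundaryNs = System.nanoTime();

        // Record the nanoTime() gap between successive observed switches.
        // Ideally each gap would be exactly 1,000,000 ns per elapsed millisecond.
        for (int seen = 0; seen < 20; ) {
            long nowMs = System.currentTimeMillis();
            if (nowMs != ms) {
                long nowNs = System.nanoTime();
                long expectedNs = (nowMs - ms) * 1_000_000L;
                long gapNs = nowNs - boundaryNs;
                System.out.printf("%d -> %d ms, gap %d ns, deviation %d ns%n",
                        ms, nowMs, gapNs, gapNs - expectedNs);
                ms = nowMs;
                boundaryNs = nowNs;
                seen++;
            }
        }
    }
}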

The same issue exists for second precision, and for hour, day, and year precision, and so forth. When the year ticks over, how big is the error?

When the millisecond ticks over, how big is the error in terms of nanoseconds?

System.currentTimeMillis() cannot be trusted to stay linear against System.nanoTime(), and we cannot expect System.currentTimeMillis() to keep up with nanosecond precision.

But how big is the error? In computing in general? In Java? On Unix systems?
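To make the linearity claim concrete, the drift check I have in mind is roughly the following sketch. The sampling interval and iteration count are arbitrary choices of mine; the point is only that if the two clocks ticked at exactly the same rate, the printed offset would never change.

public class DriftProbe {
    public static void main(String[] args) throws InterruptedException {
        // Offset between the wall clock (scaled to ns) and the monotonic clock.
        long baseOffsetNs = System.currentTimeMillis() * 1_000_000L - System.nanoTime();
        for (int i = 1; i <= 10; i++) {
            Thread.sleep(1_000);    // arbitrary one-second sampling interval
            long offsetNs = System.currentTimeMillis() * 1_000_000L - System.nanoTime();
            System.out.printf("after %2d s the offset has moved by %d ns%n",
                    i, offsetNs - baseOffsetNs);
        }
    }
}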


Solution

From the documentation:

"Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.

See the description of the class Date for a discussion of slight discrepancies that may arise between "computer time" and coordinated universal time (UTC)."

So both the precision and the accuracy of the call are undefined. The documentation passes the buck to the OS and shrugs. I doubt that 250 ns is an accurate measure of its quality; the gap is likely much larger than that. "Tens of milliseconds", as per the documentation, is a much more likely value, especially across multiple systems.
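If you want to see what your particular OS actually delivers, one rough, unscientific check is to spin and record the smallest jump currentTimeMillis() is observed to make. This is only a sketch (the two-second sampling window and the class name are arbitrary); on many modern desktop systems it prints 1 ms, but nothing in the specification guarantees that.

public class GranularityProbe {
    public static void main(String[] args) {
        long last = System.currentTimeMillis();
        long deadline = last + 2_000;          // sample for roughly two seconds
        long smallestJump = Long.MAX_VALUE;

        // Spin and record the smallest non-zero increment the clock makes.
        while (last < deadline) {
            long now = System.currentTimeMillis();
            if (now != last) {
                smallestJump = Math.min(smallestJump, now - last);
                last = now;
            }
        }
        System.out.println("Smallest observed increment: " + smallestJump + " ms");
    }
}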

They also essentially disavow any tight relationship with UTC. "Slight discrepancies" are allowed, whatever that means. Technically that permits almost any value, because what exactly is "slight"? It could be a second or a minute depending on your point of view.

Finally, the system clock could simply be misconfigured by the person operating the system, and at that point everything goes out the window.



Answered By - markspace