Issue
I was running a C++ program that provides a service, and noticed that it was taking 100% of a CPU even when serving no requests. I narrowed the problem down to a while loop that calls std::this_thread::sleep_for in order to prevent the service from exiting.
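A hypothetical reduction of that loop (keep_running is an illustrative name, not from the actual program): if sleep_for returns immediately, this spins and pins a core at 100%.

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> keep_running{true}; // hypothetical shutdown flag

int main()
{
    while (keep_running)
        std::this_thread::sleep_for(std::chrono::hours::max()); // returns at once on the affected systems
}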
To test, I compiled and ran this simple test program:
#include <chrono>
#include <thread>

int main(int argc, char* argv[])
{
    std::this_thread::sleep_for(std::chrono::hours::max());
}
My expectation was that this would sleep for a very long time, and indeed when I tried it on my M1 Mac I saw the expected behavior. However, when I ran it on a Red Hat Enterprise Linux 8 machine it returned immediately. I also tried it in a Rocky Linux 8 Docker container running on the Mac, and this also returned immediately. This suggests the behavior occurs on RHEL 8 systems in general, or at least with gcc 8.5.0, since that compiler version is the same on both Linux systems (the compiler on the Mac is the Apple-provided clang).
This explains why my service was taking 100% of a CPU, since it was calling sleep_for in a while loop. But I've never heard of this behavior. Has anyone else?
Of course, I can easily solve the problem by sleeping for std::chrono::seconds(1) instead. I'm only asking this question out of intellectual curiosity.
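A minimal sketch of that workaround, using the same hypothetical keep_running flag as above:

while (keep_running)
    std::this_thread::sleep_for(std::chrono::seconds(1)); // wakes once per second; avoids the overflow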
Solution
This is a bug in libstdc++ (https://godbolt.org/z/vce44vjx5); it looks like an overflow. The compiled code inlines the nanosleep() call with

timespec req{ -3600, 0 }; // -1 hour.
main: # @main
push rbx
sub rsp, 16
mov qword ptr [rsp], -3600
mov qword ptr [rsp + 8], 0
mov rbx, rsp
.LBB0_1: # =>This Inner Loop Header: Depth=1
mov rdi, rbx
mov rsi, rbx
call nanosleep@PLT
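The -3600 is consistent with wraparound in the hours-to-seconds conversion: hours::max().count() is 2^63 - 1 for the common 64-bit rep, and 3600 * 2^63 is a multiple of 2^64, so the product wraps to -3600 modulo 2^64. A minimal sketch reproducing that arithmetic (this demonstrates the suspected overflow, not libstdc++'s exact code):

#include <chrono>
#include <cstdint>
#include <iostream>

int main()
{
    // hours::max().count() == 2^63 - 1 for a 64-bit rep.
    std::uint64_t hrs = static_cast<std::uint64_t>(
        std::chrono::hours::max().count());

    // Wrapping 64-bit multiply: (2^63 - 1) * 3600 mod 2^64.
    std::int64_t secs = static_cast<std::int64_t>(hrs * 3600u);

    std::cout << secs << '\n'; // prints -3600
}

Since nanosleep() rejects a negative tv_sec with EINVAL, and the loop around the call only retries while the sleep is interrupted, sleep_for returns immediately.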
Answered By - 273K
Answer Checked By - David Marino (WPSolving Volunteer)