Issue
I am reading Robert Love's "Linux Kernel Development", and I came across the following passage:
> No (Easy) Use of Floating Point
>
> When a user-space process uses floating-point instructions, the kernel manages the transition from integer to floating point mode. What the kernel has to do when using floating-point instructions varies by architecture, but the kernel normally catches a trap and then initiates the transition from integer to floating point mode.
>
> Unlike user-space, the kernel does not have the luxury of seamless support for floating point because it cannot easily trap itself. Using a floating point inside the kernel requires manually saving and restoring the floating point registers, among other possible chores. The short answer is: Don't do it! Except in the rare cases, no floating-point operations are in the kernel.
I've never heard of these "integer" and "floating-point" modes. What exactly are they, and why are they needed? Does this distinction exist on mainstream hardware architectures (such as x86), or is it specific to some more exotic environments? What exactly does a transition from integer to floating point mode entail, both from the point of view of the process and the kernel?
Solution
Because...
- many programs don't use floating point or don't use it on any given time slice; and
- saving the FPU registers and other FPU state takes time; therefore
...an OS kernel may simply turn the FPU off. Presto: no state to save and restore, and therefore faster context switching. (That is all "mode" meant here: floating-point mode just means the FPU is enabled, and integer mode means it is not.)
If a program then attempts an FPU op, it traps into the kernel; the kernel turns the FPU on, restores any saved FPU state that may already exist, and then returns to re-execute the FPU op.
At context-switch time, the kernel knows whether the outgoing task actually used the FPU and, if so, goes through the state-save logic. (It may then turn the FPU off again for the incoming task.)
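As a rough sketch of that scheme (mine, not from the original answer), here is some C pseudocode modeled loosely on x86, where "turning the FPU off" means setting the TS bit in CR0 so that the next FPU instruction raises the #NM (device-not-available) trap. Every identifier below is hypothetical, not a real Linux name:

```c
/* Hypothetical pseudocode for lazy FPU context switching. On x86,
 * fpu_disable() would set the TS bit in CR0 and handle_fpu_trap()
 * would be the #NM handler. None of these are real Linux identifiers. */

struct fpu_state { unsigned char regs[512]; };  /* e.g. an FXSAVE area */

struct task {
    struct fpu_state fpu;   /* buffer for this task's saved FPU registers */
    int has_fpu_state;      /* has this task ever executed an FPU op? */
};

/* Assumed architecture hooks, declared but not defined here. */
extern int  fpu_is_enabled(void);
extern void fpu_enable(void), fpu_disable(void), fpu_init(void);
extern void save_fpu_state(struct fpu_state *);
extern void restore_fpu_state(struct fpu_state *);

static struct task *current_task;    /* the task running on this CPU */

void context_switch(struct task *prev, struct task *next)
{
    if (fpu_is_enabled()) {          /* prev used the FPU this time slice, */
        save_fpu_state(&prev->fpu);  /* so its registers must be kept */
        prev->has_fpu_state = 1;
    }
    fpu_disable();                   /* the next FPU op will now trap */
    current_task = next;
}

void handle_fpu_trap(void)           /* first FPU op after a switch */
{
    fpu_enable();                    /* stop trapping FPU ops */
    if (current_task->has_fpu_state)
        restore_fpu_state(&current_task->fpu);
    else
        fpu_init();                  /* first-ever use: clean registers */
    /* Returning from the trap re-executes the FPU instruction,
     * which now succeeds. */
}
```

The payoff is that a task which never touches the FPU never pays for an FPU save or restore at all.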
By the way, I believe the book's explanation for the reason kernels (and not just Linux) avoid FPU ops is ... not perfectly accurate.¹
The kernel can trap into itself and does so for many things (timers, page faults, device interrupts, and others). The real reason is that the kernel doesn't particularly need FPU ops and also has to run on architectures without an FPU at all. It therefore avoids the complexity and runtime cost of managing its own FPU context by simply not performing ops for which there are always software alternatives.
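As one concrete example of such a software alternative (my illustration, not the answerer's): kernels usually reach for scaled-integer (fixed-point) arithmetic where user space would use floats. The sketch below keeps 11 fractional bits, the same representation Linux's load-average code uses (FSHIFT/FIXED_1 in the kernel sources):

```c
#include <stdio.h>

/* Fixed-point arithmetic with 11 fractional bits: the integer v stands
 * for the real number v / 2048. */
#define FSHIFT  11
#define FIXED_1 (1UL << FSHIFT)             /* 1.0 in fixed point */

/* Exponential moving average, load = load*decay + sample*(1 - decay),
 * done entirely with integer ops; all three values are fixed point. */
static unsigned long calc_load(unsigned long load, unsigned long decay,
                               unsigned long sample)
{
    return (load * decay + sample * (FIXED_1 - decay)) >> FSHIFT;
}

int main(void)
{
    unsigned long decay  = 1884;            /* ~0.92, i.e. exp(-5/60) */
    unsigned long sample = 3 * FIXED_1;     /* pretend 3 runnable tasks */
    unsigned long load   = 0;

    for (int i = 0; i < 5; i++) {
        load = calc_load(load, decay, sample);
        printf("avg = %lu.%02lu\n", load >> FSHIFT,
               (load & (FIXED_1 - 1)) * 100 / FIXED_1);
    }
    return 0;
}
```

Everything here is ordinary integer arithmetic, so no FPU state ever needs to exist, let alone be saved or restored.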
It's interesting to note how often the FPU state would have to be saved if the kernel wanted to use FP... every system call, every interrupt, every switch between kernel threads. Even if there were a need for occasional kernel FP,² it would probably be faster to do it in software.
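As an aside beyond the original answer: when modern x86 Linux kernel code really does want SIMD/FP, for instance in crypto or RAID-checksum routines, it must bracket the region explicitly so the FPU state is saved and restored around it. The helper below is hypothetical; kernel_fpu_begin()/kernel_fpu_end() are the real x86 API:

```c
#include <asm/fpu/api.h>  /* x86 Linux: kernel_fpu_begin()/kernel_fpu_end() */

/* Hypothetical helper; only the begin/end calls are the real x86 API.
 * Code between them must not sleep, because preemption is disabled. */
static void xor_words_simd(unsigned long *dst, const unsigned long *src,
                           unsigned int n)
{
    unsigned int i;

    kernel_fpu_begin();   /* preempt off; live FPU state saved for us */

    /* SSE/AVX instructions (or compiler-vectorized loops like this
     * one) are only safe inside the begin/end bracket. */
    for (i = 0; i < n; i++)
        dst[i] ^= src[i];

    kernel_fpu_end();     /* restore the saved state, preempt back on */
}
```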
1. That is, dead wrong.
2. There are a few cases I know about where kernel software contains a floating-point arithmetic implementation. Some architectures implement the traditional FPU ops in hardware but leave certain complex IEEE FP operations to software (think: denormal arithmetic). When one of these odd IEEE corner cases comes up, the hardware traps to software that contains a pedantically correct emulation of the ops that can trap.
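To make footnote 2 concrete, here is a small user-space illustration (mine, not the answerer's) that decodes raw IEEE-754 single-precision bit patterns and flags the denormal case that such trap-and-emulate schemes exist for:

```c
#include <inttypes.h>
#include <stdio.h>

/* Decode an IEEE-754 single: 1 sign bit, 8 exponent bits, 23 fraction
 * bits. Exponent 0 with a nonzero fraction is a denormal -- the case
 * some FPUs punt to a software emulation. */
static void classify(uint32_t bits)
{
    uint32_t exp  = (bits >> 23) & 0xffu;
    uint32_t frac = bits & 0x7fffffu;

    if (exp == 0 && frac != 0)
        printf("0x%08" PRIx32 ": denormal (may trap to software)\n", bits);
    else if (exp == 0xffu)
        printf("0x%08" PRIx32 ": inf/NaN (another common trap case)\n", bits);
    else
        printf("0x%08" PRIx32 ": zero or normal (pure hardware)\n", bits);
}

int main(void)
{
    classify(0x00000001u);  /* smallest positive denormal */
    classify(0x3f800000u);  /* 1.0f, an ordinary normal number */
    classify(0x7fc00000u);  /* a quiet NaN */
    return 0;
}
```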
Answered By - DigitalRoss