The current time subsystem in the Linux kernel is built on top of jiffies, with a typical resolution ranging from 1 ms to 10 ms. Periodic timer interrupts advance jiffies, so time and timers typically have a resolution of one jiffy, which is not satisfactory for many applications.
There is an open-source project that tries to improve the resolution, but it has serious flaws.
My idea is to completely revamp the time system with native high resolution, from the hardware all the way up. The existing jiffy time subsystem can initially run on top of an emulation layer and later be removed entirely once users of the time subsystem have moved to the new interface.
The main idea of the current HRT project is to introduce a new, higher-resolution time unit called the sub-jiffy. A time instant is represented as jiffies + sub-jiffies. Existing time and timer code is extended to deal with the new sub-jiffy notion.
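The two-part representation can be sketched as follows. The struct and function names here are hypothetical, chosen only to illustrate the jiffies + sub-jiffies idea; the real HRT patch defines its own:

```c
#include <stdint.h>

/* Hypothetical two-part time instant: a coarse jiffy count plus a finer
 * remainder in hardware-dependent sub-jiffy units. */
struct hrt_instant {
    uint64_t jiffies;     /* coarse part, advanced by the timer tick */
    uint32_t sub_jiffies; /* fine part, arch-specific units          */
};

/* Compare two instants: jiffies first, sub-jiffies break the tie. */
static int hrt_before(const struct hrt_instant *a,
                      const struct hrt_instant *b)
{
    if (a->jiffies != b->jiffies)
        return a->jiffies < b->jiffies;
    return a->sub_jiffies < b->sub_jiffies;
}
```

Every comparison, addition, and timer-list insertion has to handle both fields, which hints at the ownership problem described next.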
The biggest problem comes from the fact that jiffies and sub-jiffies are managed by different entities in the kernel: jiffies are managed by the common time code, while sub-jiffies are necessarily managed by arch-specific code.
This split in ownership causes a lot of problems, mainly convoluted code that is prone to tricky bugs.
There are some other issues as well, such as aliasing of the same time instant.
The key to the new idea is something called monotonic time. See mtime.h.
Monotonic time is built on top of an abstraction of the hardware clock, called a tock. See tock.h.
Wall time, or calendar time, is built on top of monotonic time. See wtime.h.
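A minimal sketch of the three layers, with hypothetical names and units (the real interfaces live in tock.h, mtime.h, and wtime.h): a raw hardware counter (tock) is scaled into monotonic nanoseconds (mtime), and wall time (wtime) is monotonic time plus a settable offset:

```c
#include <stdint.h>

static uint64_t tock_counter;          /* stands in for the hardware clock  */
static uint64_t tock_ns_per_tick = 10; /* assumed calibration: 10 ns/tick   */
static int64_t  wtime_offset_ns;       /* set when the time-of-day is set   */

/* Layer 1: tock, the raw hardware counter. */
static uint64_t tock_read(void) { return tock_counter; }

/* Layer 2: mtime, monotonic nanoseconds derived from tock. */
static uint64_t mtime_ns(void) { return tock_read() * tock_ns_per_tick; }

/* Layer 3: wtime, wall/calendar time = monotonic time + offset. */
static int64_t wtime_ns(void) { return (int64_t)mtime_ns() + wtime_offset_ns; }
```

Setting the clock of day only changes the wtime offset; mtime never jumps, which is exactly what timers want to be built on.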
I also made PowerPoint slides talking about this idea. See them here.
The three-layered notion of time (tock, mtime, and wtime) is simple yet technically sound. It should satisfy all potential time and timer needs.
It is a simple way to provide high resolution.
It transforms Linux into a tick-less operating system, which also makes power management easier.
The transition path from jiffies to jiffy-less is clear and easy.
64-bit manipulation is inefficient on 32-bit machines. A couple of tactics can lessen this effect, but won't eliminate it.
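One such tactic, sketched below with illustrative names, is to keep a 64-bit value as two 32-bit halves and touch the high half only when necessary; most comparisons and increments then cost a single 32-bit operation:

```c
#include <stdint.h>

/* A 64-bit counter kept as two 32-bit halves (illustrative only). */
struct split64 {
    uint32_t lo;
    uint32_t hi;
};

/* Increment: the high word is updated only when the low word wraps. */
static void split64_inc(struct split64 *v, uint32_t delta)
{
    uint32_t old = v->lo;
    v->lo += delta;
    if (v->lo < old) /* low word wrapped around */
        v->hi++;
}

/* Compare: the high words almost always differ or match cheaply. */
static int split64_lt(const struct split64 *a, const struct split64 *b)
{
    if (a->hi != b->hi)
        return a->hi < b->hi;
    return a->lo < b->lo;
}
```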
The jiffy subsystem gets less efficient on the emulation layer. The overhead mainly comes from re-inserting the timer back into the list to emulate jiffy interrupts. First of all, this overhead is not big. Secondly, a couple of things could help: make use of the tock's periodic alarm, and/or introduce periodic timers in addition to one-shot timers.
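The re-insertion overhead can be pictured with a small userspace sketch (all names hypothetical): a one-shot timer whose expiry handler advances jiffies and immediately re-arms itself at the next jiffy boundary:

```c
#include <stdint.h>

static uint64_t jiffies;                   /* emulated jiffy counter       */
static uint64_t next_expiry_ns;            /* next firing of the emulation */
static const uint64_t JIFFY_NS = 10000000; /* assume 10 ms per jiffy       */

/* One-shot expiry handler: advance jiffies, then re-insert the timer.
 * The re-insertion on every tick is exactly the overhead in question. */
static void jiffy_emulation_expire(void)
{
    jiffies++;
    next_expiry_ns += JIFFY_NS;
}

/* Drive the emulation forward to a given monotonic time. */
static void advance_to(uint64_t now_ns)
{
    while (now_ns >= next_expiry_ns)
        jiffy_emulation_expire();
}
```

A tock-level periodic alarm or a native periodic timer type would replace the repeated re-insertion with a single arming operation.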
With the new high-resolution interface, it is conceivable that the system will incur more overhead when many timers expire within a short period of time. One possible countermeasure is to allow timer users to specify the resolution they want. A low resolution can be the system-wide default. Timers with lower resolution are coerced to align to their resolution boundaries.
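The coercion amounts to rounding each requested expiry up to the next boundary of the timer's resolution, so low-resolution timers cluster and fire together in one interrupt. A sketch, with illustrative names and nanosecond units:

```c
#include <stdint.h>

/* Round a requested expiry up to the next multiple of the timer's
 * resolution. A resolution of 1 (or 0) means full precision. */
static uint64_t coerce_expiry(uint64_t expiry_ns, uint64_t resolution_ns)
{
    if (resolution_ns <= 1)
        return expiry_ns;
    return ((expiry_ns + resolution_ns - 1) / resolution_ns) * resolution_ns;
}
```

Two timers requested at 10.2 ms and 10.9 ms with 1 ms resolution both land on the 11 ms boundary and are serviced by a single expiry.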