In real-time operating systems, interrupt latency is the time that elapses between the generation of an interrupt by a device and the servicing of the device which generated the interrupt. For many operating systems, a device is considered serviced as soon as the device's interrupt handler begins executing. Interrupt latency may be affected by interrupt controllers, interrupt masking, and the operating system's (OS) interrupt handling methods.
Minimum interrupt latency is largely determined by the interrupt controller circuit and its configuration. The controller and its configuration also affect the jitter in the interrupt latency, which can drastically affect the real-time schedulability of the system. The Intel APIC architecture is well known for producing a large amount of interrupt latency jitter.
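This jitter can be characterized empirically by timestamping handler entry against the moment the interrupt was generated. The following is a minimal bare-metal sketch; TIMER_COUNT, TIMER_COMPARE, and timer_isr() are hypothetical names, not any particular vendor's register interface:

```c
/* Latency-measurement sketch for a hypothetical bare-metal target.
 * A free-running counter raises an interrupt when it matches a
 * compare register; the difference at handler entry is the latency. */
#include <stdint.h>

#define SAMPLES 1024

extern volatile uint32_t TIMER_COUNT;    /* free-running hardware counter */
extern volatile uint32_t TIMER_COMPARE;  /* raises an interrupt on match  */

static volatile uint32_t latency[SAMPLES];
static volatile uint32_t n;

void timer_isr(void)
{
    /* Cycles elapsed between interrupt generation (counter == compare)
     * and the first instruction of this handler. */
    uint32_t delta = TIMER_COUNT - TIMER_COMPARE;
    if (n < SAMPLES)
        latency[n++] = delta;
    TIMER_COMPARE += 10000;              /* schedule the next interrupt */
}
```

The jitter is then the spread (maximum minus minimum) across the recorded samples.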
Maximum interrupt latency is largely determined by the methods an OS uses for interrupt handling. For example, most processors allow programs to disable interrupts, deferring the execution of interrupt handlers in order to protect critical sections of code. During the execution of such a critical section, all interrupt handlers that cannot execute safely within a critical section are blocked; the minimum amount of information required to start the handler is saved until all critical sections have exited. The interrupt latency for a blocked interrupt is therefore extended to the end of the critical section, plus the time taken to service any interrupts of equal or higher priority that arrived while the block was in place.
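A minimal sketch of this pattern follows, assuming hypothetical irq_save()/irq_restore() primitives (analogous to local_irq_save()/local_irq_restore() in the Linux kernel). Any interrupt raised inside the section stays pending in the controller and is dispatched only after interrupts are re-enabled:

```c
/* Critical section protected by interrupt masking (illustrative). */
#include <stdint.h>

extern uint32_t irq_save(void);       /* mask interrupts, return old state */
extern void irq_restore(uint32_t);    /* re-enable per the saved state     */

static volatile uint64_t shared_counter;

void update_shared_counter(uint64_t delta)
{
    uint32_t flags = irq_save();
    shared_counter += delta;          /* handlers cannot observe a torn value */
    irq_restore(flags);               /* pending interrupts are dispatched here */
}
```

The length of the longest such section directly bounds the worst-case latency of every interrupt the section blocks.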
Many computer systems require low interrupt latencies, especially embedded systems that must control machinery in real time. Such systems often use a real-time operating system (RTOS). An RTOS guarantees that no more than an agreed-upon maximum amount of time will pass between executions of subroutines. In order to do this, the RTOS must also guarantee that interrupt latency never exceeds a predefined maximum; exceeding this limit results in a soft (recoverable) or hard (non-recoverable) error.
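As a rough illustration (all figures below are invented), the worst-case latency an RTOS can guarantee is bounded by the hardware dispatch time, the longest interrupt-disabled section, and the run time of all equal- or higher-priority handlers:

```c
/* Back-of-the-envelope worst-case latency bound; every figure here
 * is invented for illustration only. */
#define CONTROLLER_DISPATCH_US      2   /* hardware: raise -> vector fetch  */
#define LONGEST_IRQ_OFF_SECTION_US 20   /* longest interrupt-disabled code  */
#define HIGHER_PRIO_HANDLERS_US    15   /* sum of higher-priority handlers  */

/* An RTOS can promise no better than this sum: 37 microseconds. */
#define WORST_CASE_LATENCY_US \
    (CONTROLLER_DISPATCH_US + LONGEST_IRQ_OFF_SECTION_US + HIGHER_PRIO_HANDLERS_US)
```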
Modern hardware also implements interrupt rate limiting. This helps prevent interrupt storms or livelock by having the hardware wait a programmable minimum interval between the interrupts it generates. Interrupt rate limiting reduces the amount of time spent servicing interrupts, allowing the processor to spend more time doing useful work.
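Hardware typically enforces this interval internally; the sketch below shows an equivalent software pattern in a driver. The device_*() and start_oneshot_timer() names are hypothetical, not a real driver API:

```c
/* Software-assisted interrupt rate limiting: after each device
 * interrupt, keep the source masked until a minimum interval elapses. */
#include <stdint.h>

#define MIN_INTERVAL_US 100   /* at most one device interrupt per 100 us */

extern void device_mask_irq(void);
extern void device_unmask_irq(void);
extern void device_service(void);
extern void start_oneshot_timer(uint32_t us, void (*cb)(void));

static void reenable_cb(void)
{
    device_unmask_irq();      /* accept the next interrupt */
}

void device_isr(void)
{
    device_mask_irq();        /* throttle: no further interrupts for now */
    device_service();
    start_oneshot_timer(MIN_INTERVAL_US, reenable_cb);
}
```

The trade-off is added latency for any event that arrives during the masked interval, so the interval must be chosen against the device's deadline requirements.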