
Thread Scheduling (ctd): quanta, switching and scheduling algorithms

This page continues our discussion of thread scheduling from part 1. On the next page, we'll discuss the Java implications of thread scheduling.

Quanta and clock ticks

Each thread has a quantum, which is effectively how long it is allowed to keep hold of the CPU if:

  • it remains runnable;
  • the scheduler determines that no other thread needs to run on that CPU instead.

Thread quanta are generally defined in terms of some number of clock ticks. At every clock tick, the scheduler decides whether to preempt the currently running thread (assuming the thread hasn't already ceased to be runnable for some other reason). As a rough guide:

  • a clock tick is typically 10-15 ms under Windows; under Linux, it is 1ms (kernel 2.6.8 onwards);
  • a quantum is usually a small number of clock ticks, depending on the OS:
    • either 2, 6 or 12 clock ticks on Windows, depending on whether Windows is running in "server" mode:
      Windows mode    Foreground process    Non-foreground process
      Normal          6 ticks               2 ticks
      Server          12 ticks              12 ticks
    • between 10 and 200 clock ticks (i.e. 10-200 ms) under Linux, though some granularity is introduced in the calculation; see below.
  • a thread is usually allowed to "save up" unused quantum, up to some limit and granularity.

Because clock ticks are fairly coarse units, and because we don't want to waste time making fine-grained measurements, certain simplifications are made when assessing how much CPU time a thread has used. In Windows, for example, a thread that enters a wait state part-way through a clock tick is assumed to have used a third of a tick during that timeslice. This also shows that quanta need not be accounted in whole numbers of clock ticks: Windows uses a base unit of one third of a clock tick.
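To get a very rough feel for these numbers on your own machine, the sketch below busy-spins on System.nanoTime() and reports any gap between successive samples that is much larger than the cost of the call itself; on a loaded machine, such gaps usually correspond to the thread being scheduled out for one or more clock ticks. This is an illustrative experiment only: the class name and the 1 ms reporting threshold are arbitrary choices, and the results depend heavily on the OS, the hardware and the current load.

    public class TimesliceProbe {
        public static void main(String[] args) {
            final long thresholdNanos = 1_000_000L;      // only report gaps longer than 1 ms
            final long runForNanos = 5_000_000_000L;     // sample for roughly 5 seconds
            long start = System.nanoTime();
            long prev = start;
            while (System.nanoTime() - start < runForNanos) {
                long now = System.nanoTime();
                long gap = now - prev;
                if (gap > thresholdNanos) {
                    // The thread was (most likely) scheduled out between the two samples.
                    System.out.printf("Off the CPU for roughly %.1f ms%n", gap / 1e6);
                }
                prev = now;
            }
        }
    }

Running a few other CPU-bound processes alongside the probe makes preemptions, and hence the reported gaps, much more frequent.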

In Windows, a thread's quantum allocation is fairly stable. In Linux, on the other hand, a thread's quantum is dynamically adjusted when it is scheduled, depending partly on heuristics about its recent resource usage and partly on a nice value. (See our discussion of thread priority for illustrations of the relationship between Java thread priority and CPU allocation under Linux.)
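For completeness, here is a minimal sketch of the Java side of this. Thread.setPriority() is the only portable knob Java offers, and whether it translates into an OS priority or nice value at all is JVM- and platform-dependent (on Linux it is often ignored unless the JVM is started with particular options and privileges). The spinning workload and thread names below are purely illustrative.

    public class PrioritySketch {
        public static void main(String[] args) throws InterruptedException {
            // CPU-bound workload: count iterations for ~3 seconds of wall-clock time.
            Runnable spinner = () -> {
                long iterations = 0;
                long end = System.nanoTime() + 3_000_000_000L;
                while (System.nanoTime() < end) {
                    iterations++;
                }
                System.out.println(Thread.currentThread().getName() + ": " + iterations + " iterations");
            };
            Thread low = new Thread(spinner, "low-priority");
            Thread high = new Thread(spinner, "high-priority");
            low.setPriority(Thread.MIN_PRIORITY);
            high.setPriority(Thread.MAX_PRIORITY);
            low.start();
            high.start();
            low.join();
            high.join();
        }
    }

Note that on a machine with spare CPUs, the two threads will typically report similar counts regardless of priority: priority only matters when there are more runnable threads than processors.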

Switching and scheduling algorithms

At key moments, the thread scheduler considers whether to switch the thread that is currently running on a CPU. These key moments are usually:

  • periodically, via an interrupt routine, the scheduler will consider whether the currently running thread on each CPU has reached the end of its allotted quantum;
  • at any time, a currently running thread could cease to be runnable (e.g. by needing to wait, reaching the end of its execution or being forcibly killed);
  • when some other attribute of the thread changes (e.g. its priority or processor affinity4), meaning that the choice of which threads should be running needs to be re-assessed.

At these decision points, the scheduler's job is essentially to decide, of all the runnable threads, which are the most appropriate to actually be running on the available CPUs. Potentially, this is quite a complex task. But we don't want the scheduler to waste too much time deciding "what to do next". So in practice, a few simple heuristics are used each time the scheduler needs to decide which thread to let run next:

  • there's usually a fast path for determining that the currently running thread is still the most appropriate one to continue running (e.g. storing a bitmask of which priorities currently have runnable threads, so the scheduler can quickly determine that none has a higher priority than the thread currently running; a toy sketch of this idea follows this list);
  • if there is a runnable thread of higher priority than the currently running one, then the higher priority one will be scheduled in3;
  • if a thread is "preempted" in this way, it is generally allowed to keep its remaining quantum and continue running when the higher-priority thread is scheduled out again;
  • when a thread's quantum runs out, the thread is "put to the back of the queue" of runnable threads with the given priority, and if there's no queued (runnable) thread of higher priority, the next thread of the same priority will be scheduled in;
  • at the end of its quantum, if there's "nothing better to run", then a thread could immediately get a new quantum and continue running;
  • a thread typically gets a temporary boost to its quantum and/or priority at strategic points.
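As a purely illustrative toy model (not actual OS code), the following sketch shows the flavour of the first and fourth points above: one FIFO queue per priority level, plus a bitmask recording which levels currently have runnable threads, so that the highest runnable priority can be found with a single bit operation rather than a scan. All class and method names here are invented for the example.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    public class ToyRunQueue {
        private static final int LEVELS = 32;                 // priorities 0..31, higher = more urgent
        private final List<Deque<String>> queues = new ArrayList<>();
        private int runnableMask = 0;                         // bit i set => queue i is non-empty

        public ToyRunQueue() {
            for (int i = 0; i < LEVELS; i++) {
                queues.add(new ArrayDeque<>());
            }
        }

        /** A thread (represented here just by a name) becomes runnable: queue it at the back of its level. */
        public void makeRunnable(String thread, int priority) {
            queues.get(priority).addLast(thread);
            runnableMask |= (1 << priority);
        }

        /** Pick the next thread to run: the head of the highest-priority non-empty queue. */
        public String pickNext() {
            if (runnableMask == 0) {
                return null;                                  // nothing runnable: the CPU would idle
            }
            int highest = 31 - Integer.numberOfLeadingZeros(runnableMask);
            String next = queues.get(highest).pollFirst();
            if (queues.get(highest).isEmpty()) {
                runnableMask &= ~(1 << highest);
            }
            return next;
        }

        public static void main(String[] args) {
            ToyRunQueue rq = new ToyRunQueue();
            rq.makeRunnable("worker-A", 5);
            rq.makeRunnable("worker-B", 5);
            rq.makeRunnable("gui-thread", 10);
            System.out.println(rq.pickNext());  // gui-thread (highest priority wins)
            System.out.println(rq.pickNext());  // worker-A (FIFO within a priority level)
            System.out.println(rq.pickNext());  // worker-B
        }
    }

Real schedulers track far more state than this (quanta, affinities, per-CPU queues), but "per-priority queues plus a bitmask" captures the essence of the fast path described above.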

Quantum and priority boosting

Both Windows and Linux (kernel 2.6.8 onwards) implement temporary boosting. Strategic points at which a thread may be given a "boost" include:

  • when it has just finished waiting for a lock/signal or I/O5;
  • when it has not run for a long time (in Windows, this appears to be a simple priority boost after a certain time; in Linux, there is an ongoing calculation based on the thread's nice value and its recent resource usage);
  • when a GUI event occurs;
  • while it owns the focussed window (recent versions of Windows give threads of the owning process a larger quantum; earlier versions give them a priority boost).

In Linux, threads can be given negative boosts or penalties. In either direction, there are generally limits to these temporary boosts.
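As a toy illustration of the "staggered", decaying kind of boost described in the footnotes below (the numbers here are invented, not Windows' actual values), a boosted priority might simply drop by one level at each quantum boundary until it returns to its base value:

    public class BoostDecaySketch {
        public static void main(String[] args) {
            int basePriority = 8;       // the thread's normal priority (made-up value)
            int boost = 3;              // hypothetical boost applied on I/O completion
            int current = basePriority + boost;
            int quantum = 1;
            while (current > basePriority) {
                System.out.println("Quantum " + quantum + ": effective priority " + current);
                current--;              // decay by one level per quantum
                quantum++;
            }
            System.out.println("Back at base priority " + basePriority);
        }
    }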

A general implementation difference is that Windows tends to base boosts on yes/no decisions ("has this thread just completed I/O?"), whereas Linux uses slightly more complex heuristics (e.g. quantifying the amount of CPU the thread has recently used).

Next: implications for Java

Having seen how thread scheduling generally works under the hood, in the next section we'll look at how thread scheduling affects Java.

Further reading

In this discussion, we've tried to concentrate on the generalities of thread scheduling. If you want to look at even lower-level details for a particular operating system, then the following are recommended as a starting point:

Windows

Windows Internals (5th ed, due Feb 2009): see the chapter on Threads and Processes, and the subsection on Thread Scheduling. The book gives details of the specific priority and quantum boosts made in certain circumstances, and discusses modifications made on multiprocessor systems.

Web resources include the Process and Thread Reference section of the MSDN library.

Linux

In Linux, the ultimate resource (though not the most understandable) is the kernel source code, of course.

For a slightly friendlier but quite detailed introduction to the nitty-gritty of Linux scheduling, an excellent starting point is Bovet & Cesati (2008) Understanding the Linux Kernel (3rd Ed).


3. There are corner cases where this won't happen. For example, a scheduler will not break the processor affinity of a thread, which means that a high-priority thread could sit waiting for its preferred processor to become available even though another processor is already free.
4. Although not directly settable from Java, processor affinity is an indication of which processor(s) a given thread is allowed to run on. Usually, any thread can run on any processor (though for cache performance reasons, a scheduler will tend to favour scheduling a thread on the same processor it has recently run on).
5. The way in which thread priorities are boosted on Windows is subtly different depending on the cause of the boost. After I/O completion, for example, the thread effectively receives a "staggered" boost over a couple of quanta, with the priority decaying by 1 after each quantum.


