Tutorial: Operating System Implementation of Events/Signals/Wait Handles


Out of curiosity, I was wondering how operating systems implement waking threads that are waiting on events/handles, etc.

For example, say an OS thread continually scans through a list of wait handles and schedules the respective threads when necessary. Not that I believe it's implemented this way, as it would seem inefficient.

I think it's more likely that the OS sets hardware interrupts on a region of memory that contains the synchronisation primitives associated with the exposed wait handles/events, and then, when they are triggered, it can schedule the thread, taking care not to schedule it more than once?


Actually, I guess what I was more specifically trying to think about, but didn't quite get to the root of, was: what happens to wake up a sleeping core so it can run a blocked thread?


In order to understand it in detail, you'll have to take a course in operating systems (or at least read a good book on the subject), because it actually involves quite a few subsystems.

Basically, however, it relates to how thread state is managed. A thread is in one of a few different states at any one time: sleeping, ready, or running (there are usually more, but that's all that is needed for the purposes of this discussion). A thread in the running state is actually running, and code in the thread is executing. A thread in the "sleeping" state is not running, and the scheduler will skip over it when deciding who to run next. A thread in the "ready" state is not currently running, but once another thread goes to sleep or its timeslice runs out, the scheduler is free to choose to schedule that thread to go into the running state.
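The states and the "skip over it" rule above can be sketched as a minimal data structure. This is purely illustrative (names like `pick_next` are made up, and real schedulers use run queues rather than a linear scan):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative thread states; real kernels track more (zombie, stopped, ...). */
typedef enum { SLEEPING, READY, RUNNING } thread_state;

typedef struct {
    int id;
    thread_state state;
} thread;

/* Pick the next thread to run: skip anything that isn't READY. */
thread *pick_next(thread *threads, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (threads[i].state == READY)
            return &threads[i];
    return NULL; /* nothing runnable: the core can idle */
}
```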

So basically, when you call "wait" on a mutex object, the OS checks whether the object is already owned by another thread and if so, sets the current thread's state to "sleeping" and also marks the thread as "waiting on" that particular mutex.
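That "wait" path can be sketched roughly like this. It's a simplified user-space model, not real kernel code: the structures and `mutex_wait` are invented for illustration, and a real kernel would do this atomically under its own locks:

```c
#include <assert.h>
#include <stddef.h>

typedef enum { SLEEPING, READY, RUNNING } thread_state;

typedef struct thread {
    int id;
    thread_state state;
    struct thread *next_waiter; /* link in a mutex's wait list */
} thread;

typedef struct {
    thread *owner;   /* NULL when the mutex is free */
    thread *waiters; /* list of threads sleeping on this mutex */
} mutex;

/* Sketch of the "wait" call: either take the mutex immediately,
 * or mark the caller sleeping on its wait list. Returns 1 if acquired. */
int mutex_wait(mutex *m, thread *current) {
    if (m->owner == NULL) {
        m->owner = current;          /* uncontended: acquire and keep running */
        return 1;
    }
    current->state = SLEEPING;       /* scheduler will now skip this thread */
    current->next_waiter = m->waiters;
    m->waiters = current;            /* record that we're "waiting on" m */
    return 0;
}
```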

When the thread that owns the mutex is finished, the OS loops through all of the threads that were waiting on it and sets them to "ready". The next time the scheduler comes around, it sees a "ready" thread and puts it in the "running" state. The thread starts running and checks whether it can get a lock on the mutex again. This time nobody owns it, so it can continue on its merry way.
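The release side, waking every waiter so it can re-try the lock, can be sketched the same way (again a toy model with invented names, mirroring the wait-path structures):

```c
#include <assert.h>
#include <stddef.h>

typedef enum { SLEEPING, READY, RUNNING } thread_state;

typedef struct thread {
    int id;
    thread_state state;
    struct thread *next_waiter;
} thread;

typedef struct {
    thread *owner;
    thread *waiters;
} mutex;

/* Sketch of the release path: mark every waiter READY so the scheduler
 * may pick it again; each woken thread then re-tries the lock itself. */
void mutex_release(mutex *m) {
    m->owner = NULL;
    for (thread *t = m->waiters; t != NULL; t = t->next_waiter)
        t->state = READY;  /* eligible for scheduling again */
    m->waiters = NULL;     /* wait list is drained */
}
```

Note that waking *all* waiters like this invites the "thundering herd" inefficiency mentioned below; real implementations often wake only one.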

In reality, it's a lot more complicated than that, and a lot of effort goes into making the system as efficient as possible (for example, to avoid waking a thread only to have it go immediately back to sleep, or to avoid having a thread starve on a mutex that has lots of other threads waiting on it, etc.).


The introductory textbook answer is that when one thread sleeps waiting on an event to happen, it gets put onto a queue of waiting threads. The thread gets marked as "waiting," so the operating system's process scheduler skips over the thread when looking for things to run on a processor. Eventually (in a correct program), another thread will wake up one or all threads that are waiting on an event queue. Then the threads are marked as "ready" and the OS starts scheduling them again.
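The same sleep-on-a-queue / wake-one-or-all pattern is exposed to user space as POSIX condition variables, which may make the textbook description concrete. A small example (illustrative; requires linking with `-pthread`):

```c
#include <pthread.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  event = PTHREAD_COND_INITIALIZER;
static int ready = 0;
static int woken = 0;

static void *waiter(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!ready)                        /* re-check: wakeups can be spurious */
        pthread_cond_wait(&event, &lock); /* sleep on the event's wait queue */
    woken++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Start two waiters, then wake them all at once; returns how many woke. */
int run_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, waiter, NULL);
    pthread_create(&t2, NULL, waiter, NULL);

    pthread_mutex_lock(&lock);
    ready = 1;
    pthread_cond_broadcast(&event);       /* mark *all* waiters runnable */
    pthread_mutex_unlock(&lock);

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return woken;
}
```

Under the hood, `pthread_cond_wait` is exactly the "mark me waiting and put me on a queue" operation, and `pthread_cond_broadcast` is the "set them all to ready" loop.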

Of course, how that's actually implemented is rather tricky. I think this is your real question. For Linux, the mechanism you're looking for is called a futex, and they're too complex for me to do them justice here. If the Wikipedia blurb piques your interest, dig into those external links at the bottom of the wiki page.
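To give a flavour of the mechanism without doing a full treatment: on Linux, user space can ask the kernel to sleep on, or wake sleepers at, a plain memory address via the raw `futex` system call (glibc provides no `futex()` wrapper, so it goes through `syscall`). These thin wrappers are a sketch of the two core operations, not a complete lock:

```c
#define _GNU_SOURCE
#include <errno.h>        /* errors are reported via errno */
#include <linux/futex.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sleep only if *addr still equals `expected`; the kernel checks this
 * atomically, which closes the lost-wakeup race between the check and
 * the sleep. Returns -1/EAGAIN if the value already changed. */
static long futex_wait(uint32_t *addr, uint32_t expected) {
    return syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
}

/* Wake up to `nthreads` threads sleeping on this address;
 * returns the number actually woken. */
static long futex_wake(uint32_t *addr, int nthreads) {
    return syscall(SYS_futex, addr, FUTEX_WAKE, nthreads, NULL, NULL, 0);
}
```

A futex-based mutex spins briefly in user space and only makes these syscalls when there is actual contention, which is where much of the "effort to make it efficient" mentioned above lives.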


The implementation is simpler, I believe: a thread is put in a list of waiting threads (all the threads that are waiting for a certain event/handle/mutex/etc.). When the synchronization primitive is signaled, all the threads are moved to the ready state and the list is cleared.
