I’ve been fascinated by the idea that a computer could do more than one thing at the same time since before I got my hands on my first Pentium Pro back in the ’90s. I was so fascinated by the concept that I decided to build a dual-processor Pentium III machine as soon as it became reasonably affordable for a 20-year-old kid to have one at home. I was captivated by the idea of unlocking the potential of that second processor, and predicted that maybe one day all home computers would have multiple processors, or multiple cores on a single processor. I was right.
I generally obsess over threads and multi-processing, and now I run multiple machines with both Intel i7 processors (8 virtual cores) and AMD chips featuring 8 integer cores coupled with 4 floating point cores. It can be quite the challenge to take full advantage of 8 cores. You really need to have something significant for each core to do. Additionally, unless you’re smart about your threading, you can waste valuable CPU cycles and time just creating and destroying threads.
Thread pooling is certainly no new thing. But I found that if you do a Google search for how to do multi-threaded programming, even at the simplest, most basic level, you’ll find 50 different answers.
The reason for this is that there’s a lot to consider about multi-threaded programming that is very specific to the task at hand. Will the thread run constantly in the background? Will the thread be fired only once? Do you wish to “forget” the thread after you start it, letting it finish on its own and clean up its own resources? How much work is there for this thread to do? How many total work units are there, and how long are they expected to take to complete?
I thought long and hard about it, and figured, “Why can’t there be a single thread class and a single programming pattern that handles all the possible threading scenarios simply and elegantly?”
Is it possible to come up with one thread design to rule them all? Is there a single construct that will do basically everything you need without becoming a mess or a maze? I think I have found such a thread design, and I now use it exclusively over any other. In fact, if you were to give me some code right now that employed threads, the first thing I’d feel compelled to do is refactor it to use my ManagedThread class.
For the record, I also have a C# version of this code, but this example was written in Delphi.
In the abstract, this is a simple flow chart that describes part of the design. It doesn’t describe the thread-pooling part in detail, though it does mention it briefly. It also doesn’t mention the “ThreadManager” or the “CommandProcessor”. More on those some other time.
But what this chart does do, for the sake of simplicity, is describe a thread class with a uniform set of use conventions that are simple to follow, predictable, efficient, and powerful. These conventions can be applied to many more situations than TThread can handle on its own. If you use TThread outright, you’ll probably suffer from spaghetti code before you know it.
In a nutshell: unless what you’re doing is painfully simple, you should never just create threads outright. In Delphi you would normally call TThread.Create, debate with yourself about which parameters are appropriate for your situation, and then pray that the thread stops when you call Terminate(). I got sick of all the spaghetti code. I wanted something universally useful.
Similar to TThread, my TManagedThread class has an overridable Execute method. In fact, TManagedThread inherits directly from TThread. I put all kinds of fun stuff in the original Execute method, so I called my override point “DoExecute”. A silly convention of mine.
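As a rough sketch of the shape this takes (only DoExecute and the inheritance from TThread are stated above; the event fields and method names below mirror ones discussed later in this post and may differ from the real unit):

```delphi
type
  // Hedged sketch of a minimal TManagedThread skeleton, not the real source.
  TManagedThread = class(TThread)
  protected
    evStart, evStarted: TEvent;   // start handshake events (from System.SyncObjs)
    procedure Execute; override;  // fixed plumbing: waits, loops, calls DoExecute
    procedure DoExecute; virtual; abstract; // your per-iteration work goes here
  public
    procedure Start;      // BeginStart followed by EndStart
    procedure BeginStart; // signal evStart so the thread may proceed
    procedure EndStart;   // wait until the thread acknowledges via evStarted
  end;
```

The key design point is that subclasses override DoExecute rather than Execute, so the base class keeps control of startup, looping, and pooling.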
This thread can be spawned outright with TManagedThread.Create with optional parameter specifying a Manager (if nil it will use the default global thread manager), or it can be spawned/reused from a thread-pool of your choosing or a global “MasterThreadPool” singleton that I always have available.
Unlike TThread, TManagedThread offers you no option to create it suspended. I’ve found over the years that working with Suspend and Resume can be incredibly flaky. I’m not sure why, but it can be really frustrating when you’re dealing with mission-critical, high-availability servers that serve 200–400 requests every second, every day, and something goes wrong. I speculate that suspend/resume rely on asynchronous messages, and I know of no good way to handshake with those messages. As a result, some threads, despite having Suspend called on them, will not officially suspend right away, and if you stress a thread by calling Suspend and Resume frequently… bad things happen… just avoid them. Furthermore, you don’t want to suspend a thread when it could be holding resources that other threads need. Regardless of your threading model, you need to be extremely careful about calls to Suspend(). A safer alternative, in my opinion, is to have the threads wait on events.
TManagedThread will therefore always be started initially, but it will be waiting on an event until you set its work parameters and then call Start() which signals the thread to move forward.
Effectively it is the same thing as creating a suspended thread, but from the perspective of the OS, it is running, just waiting for a “Start” event to occur.
Another reason that I standardize on this Blocked/Waiting on creation paradigm is because it makes it much easier to create threads that are universally compatible with Thread Pooling. You’ll appreciate having confidence that when you want a thread it’ll either be from the thread-pool, waiting for your start() command, or it’ll be brand-new, waiting for your start() command… no exceptions.
Before you call Start(), you should set any properties on the thread that you might need for your specific task. Then call Start(), not Resume() as you would with TThread.
Calling TManagedThread.Start() effectively calls BeginStart() followed by EndStart().
BeginStart() simply signals a member variable called evStart, allowing the thread to continue from its current waiting state.
EndStart() waits on another signal, a member variable called evStarted. It is sometimes important to wait for the thread to be officially started, so Start() waits by calling EndStart(). If you don’t wish to wait, you can call BeginStart() alone, then either call EndStart() later or hope that no one checks the evFinished signal before the thread is officially started. It is always advisable to call EndStart() at some point, but if your primary thread is timing-sensitive, you can safely postpone the check for a few precious milliseconds. If you don’t wait (not advisable), the thread might still be in the “Finished” state from its previous run, particularly if it was pooled. EndStart() will not return until the “Finished” flag is cleared.
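The start handshake described above might look roughly like this (a hedged sketch; the real bodies surely do more, such as clearing the “Finished” flag before returning from EndStart):

```delphi
// Hedged sketch of the Start()/BeginStart()/EndStart() protocol.
procedure TManagedThread.BeginStart;
begin
  evStart.SetEvent;            // let the thread proceed past its wait
end;

procedure TManagedThread.EndStart;
begin
  // Blocks until the thread acknowledges the start; by the time this
  // returns, any leftover "Finished" state from a pooled run is cleared.
  evStarted.WaitFor(INFINITE);
end;

procedure TManagedThread.Start;
begin
  BeginStart;
  EndStart;
end;
```

Splitting Start() into Begin/End halves is what lets a timing-sensitive caller fire off several threads and only pay the acknowledgement wait once, later.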
After the thread is started, the design deviates from the standard thread quite a bit more.
Instead of writing your own “while not Terminated do …” loop in your thread’s Execute method, this thread is designed to optionally loop automatically, and it works best when your DoExecute method takes care of exactly one iteration of the loop. You can set the Loop property to true or false at any time to enable or disable the looping, but you should set it before you call Start() if your thread is going to do more than one loop. If you don’t do it this way, the elegance of being able to easily stop, pool, and pause the thread becomes your responsibility… and since this thread aims to handle all those uses in a single package, you’re better off just doing one iteration in DoExecute.
Once DoExecute finishes, control returns to the old-fashioned Execute, and if the Loop flag is set, the next iteration starts. However, some other tools are now employed. Before DoExecute is called, I check a couple of events: one I call “evHasWork” and another “evRunHot”. The HasWork signal can be used to pause the thread more or less elegantly. If you clear this signal, the thread will pause after it completes the current iteration of DoExecute(). Think of it like a water faucet: signal HasWork to turn the faucet on, and unsignal it when you want it turned off.
For example, for a thread that handles a network FIFO you could say something like “HasWork := (DataAvailable > 0)”. Just keep in mind that if you’re going to do this inside your DoExecute() method, you should wrap it in “Lock; try … finally Unlock; end;”, because by the time you’ve evaluated whether data is available, another thread might have changed the state of HasWork before the value is actually assigned by the current thread. It is one of those really rare conditions that might only happen one time in a million. Don’t get bit by it.
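Putting that pattern together (DataAvailable is a hypothetical property of your own FIFO; Lock/Unlock are the thread’s own serialization methods as described above):

```delphi
// Inside your DoExecute override: guard the check-then-assign so another
// thread can't flip HasWork between the evaluation and the assignment.
Lock;
try
  HasWork := (DataAvailable > 0);  // faucet on only while data is queued
finally
  Unlock;
end;
```

Without the lock, a producer thread could set HasWork := true between your test and your assignment, and your assignment of false would silently stall the pipeline.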
The other event available to you is the “RunHot” signal. This is similar to the HasWork signal, but more relaxed. It is intended for situations where you’re either a bit unsure of your code and don’t want to risk a thread being stuck forever in a “HasWork = false” state, or where the potential race conditions around HasWork are unsolvable. An example would be wanting to do something like the HasWork evaluation, but without intervention from a second thread. It is not possible to evaluate HasWork from the current thread (without locking up), because the first time you set HasWork to false, your thread will pause until another thread comes along and sets it to true; evaluation of HasWork always requires a second thread. Using the RunHot signal instead, a single thread can survive on its own.
RunHot, when signaled, tells the thread to loop continuously, as fast as possible. Removing the signal throttles the thread back, and the loop will only execute once every x milliseconds (specified by the ColdRunInterval property). Using this method you can rely on just the single thread to manage CPU resources intelligently, throttling up when there’s work to do and backing off when there is none.
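The internal loop implied by all of the above might look something like this (a hedged sketch reconstructed from the description; the real Execute also handles the “Finished” flag and returning to the pool):

```delphi
// Hedged sketch of the base-class Execute loop: wait for Start, then loop,
// pausing on HasWork and throttling on RunHot as described in the post.
procedure TManagedThread.Execute;
begin
  while not Terminated do
  begin
    evStart.WaitFor(INFINITE);      // block until Start() (or pool reuse) signals
    evStarted.SetEvent;             // acknowledge: EndStart() can now return
    repeat
      if evRunHot.WaitFor(0) <> wrSignaled then
        Sleep(ColdRunInterval);     // cold: at most one pass per interval
      evHasWork.WaitFor(INFINITE);  // the "faucet": pause while unsignaled
      DoExecute;                    // exactly one iteration of your work
    until Terminated or (not Loop);
    // real implementation: mark Finished, reset events, return to pool
  end;
end;
```

Note how Sleep plus a zero-timeout check on evRunHot gives the self-throttling behavior: a lone thread can clear its own RunHot signal without deadlocking itself, which is exactly what HasWork cannot do.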
Stopping and Pooling threads.
Windows and most other operating systems don’t support thread pooling outright. You have to implement pooling on your own. The trick to thread pooling is to never let the thread officially terminate. Instead, you send it back to the beginning of its operation and have it wait on an event. TManagedThread, whether you’re pooling it or not, gives you a simple, uniform interface to pooled and non-pooled threads.
I have a singleton ThreadPoolManager class that gives you threads when you need them, and takes them away from you when you don’t. All threads it returns are guaranteed to be in a state where they are waiting for the “Start” signal, however, their properties might still reflect the state of the previous run depending on your implementation. Obviously we assume that you’ll be resetting the properties.
Use of the Thread Pool is quite simple and similar to creating threads outright.
To create, execute, and wait for an unpooled thread (the standard, “dumb” way):
thr := TMyManagedThread.Create();
try
  thr.SomeProperty := 'whatever';
  thr.Start;
  thr.Stop;  // waits elegantly for finish
  someresult := thr.SomeOtherProperty;
finally
  thr.DetachAndFree;  // iOS and Android don't support Free(); this is a substitute
  thr := nil;         // officially frees for Android/iOS if there are no other references
end;
To use the thread pool instead (the “smart” way):
thr := TPM.NeedThread<TMyManagedThread>();
try
  thr.SomeProperty := 'whatever';
  thr.Start;
  thr.Stop;  // waits elegantly for finish
  someresult := thr.SomeOtherProperty;
finally
  TPM.NoNeedThread(thr);
end;
One last thing… if you wish to “forget” the thread, it will add itself back to the pool upon completion if you set “FireForget := true”. If you simply used TThread’s FreeOnTerminate property, you’d find there’s no good way to manage the shutdown of your application while the thread is potentially still unfinished in the background. In short: don’t ever use FreeOnTerminate… ever. My TThreadManager class, combined with the TThreadPool class, watches over all the threads created in the system and makes sure that destruction happens elegantly and in the right order when the app is shutting down.
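Fire-and-forget then looks like a trimmed-down version of the pooled example (a sketch under the assumptions of the snippets above; the exact release semantics belong to the real TThreadPool):

```delphi
// Hedged sketch: fire-and-forget via the pool instead of FreeOnTerminate.
thr := TPM.NeedThread<TMyManagedThread>();
thr.SomeProperty := 'whatever';
thr.FireForget := true;  // thread returns itself to the pool when done
thr.Start;               // no Stop/NoNeedThread; don't touch thr after this
```

Because the thread goes back to the pool rather than freeing itself, the ThreadManager still knows about it at shutdown, which is exactly the guarantee FreeOnTerminate can’t give you.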
In conclusion:
TManagedThread will always work the same, regardless of whether you’re creating it outright or pooling it, looping it or one-shotting it, running it casually, aggressively, or in response to occasional stimulus. Obviously I recommend pooling, as it is much, much faster than creating threads outright. I’ll post more about my thread pooling class, command classes, and ThreadManager classes, and maybe even some real source, soon. For now, I’m off to the pub.