
Cooperative Multitasking

I've been deep in the bit mines all week, wired on Mountain Dew, reviewing the 35,000 lines of code that implement the actual graphing classes of Graphing Calculator, making the Compute method of each graphing algorithm thread-safe and able to run preemptively. It's been fun to revisit a decade's worth of code evolution and simplify it with the benefit of hindsight. Much of the complication came from the need to preserve state across multiple calls to each class's Compute method so that it could be time-sliced to keep the UI responsive in a single-threaded environment.

As I sat down to describe this, I remembered that I had already written what I wanted to say four years ago to a colleague who asked: "I've always wanted to know how to do threading. Under OS X that will become more and more valuable as multi-processor machines become increasingly common. Where is there a good discussion of using threads?"

At 3:05 PM -0800 10/25/02, Ron Avitzur wrote:

At the time GC 1.0 was written there were four or five different incompatible threads packages available for the Mac, all with fundamental design problems of different natures. None of them had yet been ported to or QA'd against PowerPC so the risks introduced by relying on them were too great and we did it the hard way.

All the compute loops in GC that you might expect to look like this:

for (i = 0; i < dataWidth; i++)
	for (j = 0; j < dataHeight; j++)

actually look like this:

while (OurTickCount() < endTime) {
	if (lastV >= dataHeight) {
		lastV = 0;
		lastU++;
		if (lastU >= dataWidth)
			return;	// finished

	}
	int i = lastU;
	int j = lastV;
	// ... compute one point at (i, j) ...
	lastV++;
}

where lastV and lastU are class variables that persist across calls to graph->Compute(endTime). The endTime argument tells Compute how long it has before we want it to return, whether it is done or not; typically that is less than 10 ticks (1/6 of a second) before the event loop checks for keyboard, mouse, and update events. The actual code is a bit more complex, to allow everything to happen in the same application thread.
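To make the pattern concrete, here is a compilable sketch of that cooperative-multitasking idea in modern C++. It uses a point budget in place of OurTickCount()/endTime so it runs deterministically anywhere; the names Grapher, ComputePoint, and PointsComputed are illustrative, not GC's actual API.

```cpp
// Sketch of a cooperative Compute method: state lives in member
// variables so work can resume exactly where it left off.
// All names here are hypothetical, not from Graphing Calculator.
class Grapher {
public:
    Grapher(int width, int height)
        : dataWidth(width), dataHeight(height) {}

    // Compute at most `budget` points, then return so the caller
    // can service the event loop. Progress survives in lastU/lastV.
    void Compute(int budget) {
        while (budget-- > 0 && !done) {
            int i = lastU;
            int j = lastV;
            ComputePoint(i, j);           // one unit of real work
            if (++lastV >= dataHeight) {  // advance the saved position
                lastV = 0;
                if (++lastU >= dataWidth)
                    done = true;          // whole grid finished
            }
        }
    }

    bool Done() const { return done; }
    int PointsComputed() const { return computed; }

private:
    void ComputePoint(int, int) { ++computed; }  // stand-in for real work
    int dataWidth, dataHeight;
    int lastU = 0, lastV = 0;  // persist across calls to Compute
    bool done = false;
    int computed = 0;
};
```

An event loop would simply call Compute repeatedly until Done() returns true, handling UI events between calls; substituting a tick-count deadline for the budget recovers the original scheme.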

Modern OS architectures would allow each computation to spawn a separate thread to which the OS gives time slices, but I can't advise you on that as I've never studied it. Threads introduce a different sort of complexity: from a debugging standpoint they appear to run concurrently, and on a machine with multiple CPUs they may actually run concurrently. Then you have to worry about locking or semaphores around any access to shared data, and about deadlock or race conditions when there is contention for a shared resource. This is an active area of research in the C++ community and one of the more common sources of difficult bugs.
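The locking concern above can be illustrated with the C++11 standard threading library (which postdates this post). This is a minimal sketch, and the names SharedTotal and SumInParallel are made up for the example: several threads update one shared value, and a mutex serializes the access that would otherwise be a data race.

```cpp
#include <mutex>
#include <thread>
#include <vector>

// A shared accumulator guarded by a mutex. Without the lock,
// concurrent `value += n` from several threads is a data race.
struct SharedTotal {
    long value = 0;
    std::mutex m;

    void Add(long n) {
        std::lock_guard<std::mutex> lock(m);  // released on scope exit
        value += n;
    }
};

// Spawn nThreads workers, each adding 1 to the total perThread times.
long SumInParallel(int nThreads, long perThread) {
    SharedTotal total;
    std::vector<std::thread> workers;
    for (int t = 0; t < nThreads; ++t)
        workers.emplace_back([&total, perThread] {
            for (long k = 0; k < perThread; ++k)
                total.Add(1);
        });
    for (auto& w : workers)
        w.join();  // wait for every worker before reading the result
    return total.value;
}
```

The lock_guard pattern avoids one classic deadlock source (forgetting to unlock on an early return), though ordering problems between multiple locks remain the programmer's burden.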

I could give more useful advice about threading APIs these days. In another month or two I might even have useful experience in re-architecting old code for thread-safety.

