

I don't multitask well. I am most productive when I have twelve to eighteen hours completely uninterrupted in silence to focus on a problem. Even minor distractions can destroy my train of thought. When I am able to stay "in the zone" for long periods I can be very productive. Many folks have written about this phenomenon. It explains some of the reclusive or anti-social nature of hackers and their nocturnal habits. Joel remarks that it is surprising that any programming can be accomplished in the cube farms typical of software companies.

Computers, on the other hand, have no trouble multitasking. They switch tasks many times a second. Swap out the registers, stack, and other CPU state, and a processor will continue merrily along on a different problem with no complaints. We programmers have the difficult job of architecting our software to use this multitasking to improve our users' experience of our software.

Once upon a time, computers were simpler. You could understand what they were doing following a single thread of execution. Decades of Moore's Law have accustomed us to ever larger disks, more memory, and faster processors. Working on the same software for twenty years, I've watched features that were once unthinkable become interactive and taken for granted. When Graphing Calculator 1.0 shipped, the feature that really demonstrated the speed of the original Power Macs with 60 MHz 601 PowerPCs, even more so than the visually impressive 3D graphing, was the ability to graph inequalities such as x²+y²+sin²(4x)+sin²(4y) < n². What took several minutes a frame in 1993 now animates many frames per second.

As a designer, I continually ask myself: how can the latest technology benefit my users? Performance is one of the fundamental usability issues. When something can happen many times a second, it is interactive and playful. When something takes seconds or minutes, users are forced into a batch or query-response mode of interaction, where they must spend more of their attention formulating the problem for the computer rather than thinking about the problem itself.

What does this all have to do with multi-tasking? The Free Lunch Is Over. Programmers can no longer rely on ever-faster clock speeds. To take advantage of the power of new machines, we now have to design for parallelism. Apple now ships quad-core machines, and is expected to soon ship eight-core machines. (Let's not forget the GPU.)

I've spent the last month reading about threads, semaphores, mutexes, recursive mutexes, read-write locks, condition variables, work queues, critical sections, deadlocks, race conditions, word tearing, fences, barriers, atomic operations, and all manner of synchronization and communication tools and issues. Threading APIs are a maze of twisty passages, all different: Posix threads, Carbon MP threads, Cocoa threads, OpenMP, Win32/MFC, .Net, Mach threads, and Boost threads.

The real challenge is the software archaeology: reviewing twenty years of source code line by line to expose and make explicit the hidden assumptions about synchronization and communication. Those assumptions could safely be ignored with a single thread of execution, but they require careful thought when different parts of the code execute in parallel.
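A contrived example of the kind of hidden assumption I mean: single-threaded code can treat "++count" as one indivisible step, but with two threads the read-modify-write interleaves and updates are silently lost. The sketch below (my own illustration) fixes it with an atomic counter; a mutex around the increment would do as well.

```cpp
#include <atomic>
#include <thread>

// With a plain `long`, two threads incrementing concurrently would
// lose updates: each ++ is really a read, an add, and a write, and
// those steps interleave. std::atomic makes the increment indivisible.
long concurrent_count(int increments_per_thread) {
    std::atomic<long> count{0};
    auto work = [&] {
        for (int i = 0; i < increments_per_thread; ++i)
            ++count;  // atomic read-modify-write: no updates lost
    };
    std::thread a(work), b(work);
    a.join();
    b.join();
    return count.load();
}
```

Change the counter's type to a plain long and the function will usually return less than 2 × increments_per_thread; the bug is timing-dependent, which is exactly what makes these assumptions so hard to dig out of old code.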

This is already a long post, so I'll leave it at that for now.




I'm the exact opposite. I multitask so well that when I hear that someone can't do 2 things at once while I'm doing 20 I don't quite know how to respond.

Good article even for neophytes like myself. (Long time reader; first time poster.) The Free Lunch Is Over article is especially excellent as well.

I wonder if Laurie will still have this POV after producing a large practical program. That is, do one major pgm (or even 5 tasks, much less "20") for a multi-core env.
