Wired on Friday: The 21st century has been, to date, rather lax at keeping to its appointed schedule. It's 2005 and still we do not have flying cars, food pills, or bicycle-wheel space stations. For the computer industry, everything since 2003 has been a little tardy, too.
Plot the clock speed of the past 30 years of PCs - the megahertz and gigahertz (GHz) of computer advertisements - on a graph, and you'll notice a sharp kink around early 2003.
From 1971 until then, computer processor speeds increased at a remarkably consistent - and ever-doubling - rate. Extrapolate from that trend and you'd expect us to be buying 10GHz processors around now.
As it is, PC users are currently stuck at 3.4GHz, with no plans to move much beyond 4GHz. Computers have hit a wall.
The main problem is heat and power consumption. A chip has to stay small to stay fast (the limiting factor here is the speed at which electrical signals travel: make a chip faster, and the signals no longer have time to cross as much silicon real estate in each tick of the clock). But like any machine, chips leak power in the form of heat. Chip designers are fighting to keep down the power demands of their chips, because more power means more heat, and that heat has to be dissipated somehow.
The densest chips long ago passed the power density of a furnace, and are now approaching that of a nuclear reactor. Chip designers struggle to find ways of keeping their tiny, hot specks of silicon cool.
The solution for processor manufacturers has been to abandon the traditional leaps in speed and seek to improve their chips in other ways. For Intel and rival AMD, the solution has been to offer a two-for-one deal: put two previous-generation processors on the same die and link them. Both CPUs run at the same time, so code written to use them both can see something close to the traditional doubling in speed.
That means chip company marketers can still promise speed improvements. But it's a new headache for computer programmers. As Microsoft's Herb Sutter notes in the March edition of the coders' magazine Dr. Dobb's Journal, for them, the free lunch is over.
Until now, programmers could rely on the hardware to speed up their programs. If your code ran slowly, the old saying had it, wait 18 months and it would be a speed demon.
That was true when clock speeds were leaping ahead. Coders, however, write programs to do one thing at a time, one operation after another. Write a program that operates in this way - as most applications currently do - and it'll run exactly as fast as before, using just one of the new machine's two CPUs.
Writing programs that can be shared between processors is harder. A lot harder.
It's a bit like the difference between throwing and catching a ball, and juggling. The first is easy: throw ball, wait, catch ball. It doesn't matter how fast the ball goes up, or comes down, the instructions are the same.
Juggling, even though it is the same in principle, has far more potential scenarios to cope with. What if the ball arrives fractionally earlier than you expected? What happens if two balls head towards the same hand? How do you put all the balls in the air to begin with?
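To make the analogy concrete, here is a minimal sketch of our own in C++ (using the modern standard library's std::thread, which postdates this article; the function names are purely illustrative). The sequential version is the throw-and-catch; the two-thread version is the simplest possible juggle, with the work split so the two hands never reach for the same ball.

    // A sequential sum and a two-thread sum, side by side.
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Throw and catch: one operation after another, on one CPU.
    long long sum_sequential(const std::vector<int>& data) {
        return std::accumulate(data.begin(), data.end(), 0LL);
    }

    // Juggling, in its simplest form: split the work between two threads,
    // one per core, each summing its own half of the data.
    long long sum_two_threads(const std::vector<int>& data) {
        auto mid = data.begin() + data.size() / 2;
        long long first = 0;
        std::thread worker([&] { first = std::accumulate(data.begin(), mid, 0LL); });
        long long second = std::accumulate(mid, data.end(), 0LL);  // this thread takes the other half
        worker.join();  // wait for the other ball to come down
        return first + second;
    }

    int main() {
        std::vector<int> data(10'000'000, 1);
        std::cout << sum_sequential(data) << '\n';   // 10000000
        std::cout << sum_two_threads(data) << '\n';  // 10000000, computed on two cores
    }

Even this toy example has to be careful: the two threads work on disjoint halves of the data, and the main thread must wait (join) before combining the results. Let the halves overlap, and the trouble starts.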
"Current concurrent programming is fraught with peril," says Mr Sutter. "We desperately need something better."
Microsoft is involved in research projects to break its programs down into more concurrently running blocks, without introducing more bugs. But it's very early days.
"These innovations are currently black magic: for geniuses only. You can't expect mortal programmers to write this stuff," says Mr Sutter.
Sooner or later such ingenious approaches will be better understood and simplified so that everyday coders will be able to use them.
However, the prominent computer scientist John Ousterhout, of Electric Cloud, has previously argued that the very complexity of concurrent programming means the risk of untraceable bugs far outweighs the advantages. He is optimistic, instead, that we will find other, less demanding ways to utilise the new chips.
Mr Ousterhout suggests that the compromise may not be programs that are painstakingly written to run on more than one processor, but a change in overall computing habits.
As we struggle to eke out improvements from our PCs, we may learn to install and manage programs that run in the background: programs that occupy their own corner of the multiheaded CPUs of the future, and which will speed up a main, plodding, program we use on a single core in the foreground.
Search programs such as Google's desktop search, which silently index our files while we work, or clever utilities that anticipate our next actions so the work is done before we ask for it, are examples of background applications that are already growing in popularity, and may represent the trend of the future.
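In code, that division of labour is far simpler than true juggling, as this sketch of ours suggests (the index_files routine is hypothetical, standing in for whatever a real desktop-search utility does; the foreground task never shares data with it):

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    std::atomic<bool> done{false};

    // Hypothetical background job: quietly indexes files until told to stop,
    // occupying its own core and touching nothing the foreground uses.
    void index_files() {
        while (!done) {
            // ... scan and index a few files here ...
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }

    int main() {
        std::thread indexer(index_files);  // one core for the background job

        // The main, plodding, single-threaded program runs undisturbed.
        for (int step = 0; step < 10; ++step) {
            std::cout << "foreground step " << step << '\n';
        }

        done = true;     // ask the indexer to stop
        indexer.join();  // and wait for it
    }

Because the two jobs share nothing but a single stop flag, there is almost nothing to get wrong: each occupies its own core and minds its own business.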
"I'd suggest that it will be the users, not the programs or the programmers, that learn best how to optimise for multiple processes in the future," says Mr Ousterhout. We may learn to juggle before our computers do.