
The singularity and the knife-edge

September 27th, 2005

I’ve been too busy thinking about all the fun I’ll have with my magic pony, designing my private planet and so on, to write up a proper review of Ray Kurzweil’s book, The Singularity is Near. The general response seems to have been a polite version of DD’s “bollocks”, and the book certainly has a high nonsense to signal ratio. Kurzweil lost me on biotech, for example, when he revealed that he had invented his own cure for middle age, involving the daily consumption of a vast range of pills and supplements, supposedly keeping his biological age at 40 for the last 15 years (the photo on the dustjacket is that of a man in his early 50s). In any case, I haven’t seen anything coming out of biotech in the last few decades remotely comparable to penicillin and the Pill for medical and social impact.

But Kurzweil’s appeal to Moore’s Law seems worth taking seriously. There’s no sign that the rate of progress in computer technology is slowing down noticeably. A doubling time of two years for chip speed, memory capacity and so on implies a thousand-fold increase over twenty years. There are two very different things this could mean. One is that computers in twenty years’ time will do mostly the same things as at present, but very fast and at almost zero cost. The other is that digital technologies will displace analog for a steadily growing proportion of productive activity, in both the economy and the household sector, as has already happened with communications, photography, music and so on. Once that transition is made, these sectors share in the rapid growth of the computer sector.
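To spell out the arithmetic behind the thousand-fold figure: a doubling every two years means ten doublings in twenty years, so

\[ 2^{20/2} = 2^{10} = 1024 \approx 1000. \]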

In the first case, the contribution of computer technology to economic growth gradually declines to zero, as computing services become an effectively free good, and the rest of the economy continues as usual. Since productivity growth outside the sectors affected by computers has been slowing down for decades, the likely outcome is something close to a stationary equilibrium for the economy as a whole.

But in the second case, the rate of growth for a steadily expanding proportion of the economy accelerates to the pace dictated by Moore’s Law. Again, communications provides an illustration – after decades of steady productivity growth at 4 or 5 per cent a year, the rate of technical progress jumped to 70 per cent a year around 1990, at least for those types of communication that can be digitized (the move from 2400-baud modems to megabit broadband in the space of 15 years illustrates this). A generalized Moore’s law might not exactly produce Kurzweil’s singularity, but a few years of growth at 70 per cent a year would make most current economic calculations irrelevant.
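For a sense of what growth at 70 per cent a year compounds to (my own back-of-the-envelope figures, not Kurzweil’s):

\[ 1.7^{5} \approx 14, \qquad 1.7^{10} \approx 200, \qquad 1.7^{15} \approx 2900, \]

as against roughly \(1.03^{10} \approx 1.3\) for a decade of ordinary 3 per cent growth.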

One way of expressing this dichotomy is in terms of the aggregate elasticity of demand for computation. If it’s greater than one, the share of computing in the economy, expressed in value terms, rises steadily as computing gets cheaper. If it’s less than one, the share falls. It’s only if the elasticity is very close to one that we continue on the path of the last couple of decades, with continuing growth at a rate of around 3 per cent.
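A minimal way of seeing the dichotomy, assuming a constant-elasticity demand curve purely for illustration: if the demand for computation at price \(p\) is \(Q = A p^{-\varepsilon}\), then spending on computation is

\[ pQ = A p^{1-\varepsilon}, \]

which rises as \(p\) falls when \(\varepsilon > 1\), falls when \(\varepsilon < 1\), and is constant only at \(\varepsilon = 1\).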

This kind of result, where only a single value of a key parameter is consistent with stable growth, is sometimes called a knife-edge. Reasoning like this can be tricky – maybe there are good reasons why the elasticity of demand for computation should be very close to one. One reason this might be so is if most problems eventually reach a point, similar to that of weather forecasting, where linear improvements in performance require exponential growth in computation (I still need to work through this one, as you can see).
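Here is one toy version of that story (my own sketch, nothing from Kurzweil): suppose the benefit of computation is logarithmic, because each extra unit of performance, an extra day of forecast lead time say, needs \(k\) times as much computing. A user who gets benefit \(b \ln C\) from computation \(C\) at price \(p\) per unit solves

\[ \max_C \; b \ln C - pC \;\Rightarrow\; \frac{b}{C} = p \;\Rightarrow\; C = \frac{b}{p}, \]

so the elasticity of demand is exactly one and spending \(pC = b\) stays constant no matter how cheap computing gets.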

So far it seems as if the elasticity of demand for computation is a bit greater than one, but not a lot. The share of IT in total investment has risen significantly, but the share of the economy driven primarily by IT remains small. In addition, non-economic activity like blogging has expanded rapidly, but also remains small. The whole thing could easily bog down in an economy-wide version of ‘Intel giveth and Microsoft taketh away’.

I don’t know how much probability weight to put on the generalized Moore’s Law scenario. But as Daniel points out, even a small probability would make a big difference to a mean projection of future growth. Since the Singularity (plus or minus pony) has already been taken, I’ll claim this bit of the upper tail as my own pundit turf.

  1. September 27th, 2005 at 16:50 | #1

    Singularity? Twice in one week? I got this in my weekly John Mauldin briefing, which you may find interesting. I’m not even close to a duality, let alone a plurality, so singularities (can there be more than one?) are way beyond my ken.

    See http://www.frontlinethoughts.com/printarticle.asp?id=mwo092305

  2. September 27th, 2005 at 17:22 | #2

    The way I see it, in discussions like this elasticity isn’t a driver but a symptom – the direction of causality runs the other way. If computing can take over practically everything, it will, and – surprise – we get one set of possible elasticities. But if it can take over only in certain respects, with others unaffected, that is what will happen instead – and then the knife edge is what you get, not an unlikely condition you somehow have to hit.

    Consider this real use of knife edges. For almost any longish object, say a golf club, no matter its precise shape or variations in its friction along its length, you can find its centre of gravity using two knife edges. First, balance it on them when they are wide apart – this is easy enough to arrange. Then slowly move the blades together. Momentum can be ignored, so what happens is that first one and then the other balance point slips, according roughly to how far each is from the centre of gravity. When they finally come together, the object is effectively balanced on a single knife edge.
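    If anyone wants to see the mechanism in action, here is a toy simulation (my own sketch; the starting positions and step size are made up, and real friction is messier than this, but the logic is the one described above):

    # Toy simulation of finding a centre of gravity with two knife edges.
    # The numbers are made up purely for illustration.

    def balance_on_two_edges(cg, x_left=0.0, x_right=1.0, step=0.01, tol=0.02):
        """Slide two supports together under a rigid object whose centre of
        gravity is at `cg`. The support carrying less weight (the one farther
        from the cg) has less friction, so it is the one that slips inward."""
        while x_right - x_left > tol:
            span = x_right - x_left
            # Static equilibrium: each support carries a load proportional to
            # the distance from the cg to the *other* support.
            load_left = (x_right - cg) / span
            load_right = (cg - x_left) / span
            if load_left < load_right:
                x_left = min(x_left + step, cg)    # lighter-loaded edge slips
            else:
                x_right = max(x_right - step, cg)
            # (In the real trick the two edges alternate as the loads cross over.)
        return 0.5 * (x_left + x_right)

    print(balance_on_two_edges(cg=0.37))   # converges to ~0.37, the centre of gravity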

    It looks at that point like someone struck very lucky, but it’s actually a necessary consequence of what was done. Knife edge situations aren’t always unrealistic.

  3. wilful
    September 27th, 2005 at 17:39 | #3

    With computing, it all comes down to whether or not you think AI is possible. If not, then computers can get as cheap and ubiquitous as you like, but they’ve still got to operate with and for humans, who haven’t had a hardware upgrade in a long time (we all seem to have installed Obesity 2.0, though). If AI can exist, that’s when it becomes more singularityesque.

  4. GDP
    September 27th, 2005 at 18:16 | #4

    I’ll confess in advance to being something of a nanocyberAIscifitechnoutopiabooster, but that and price elasticity aside, I seriously doubt “computers in twenty years time will do mostly the same things as at present, but very fast and at almost zero cost.”

    Computers today do vastly more than did the computers of 20 years ago. 20 years ago you may not have had one on your desk. You certainly didn’t carry one home with you. They had almost nothing to do with communications.

    Now we all carry them around with us (mobile phones), they run our diaries (and dairies), we communicate using them in at least 7 different ways (voice, sms, video, IM, email, blogging, moderated forums, newsgroups, etc). We shop with them. They control our cars and our appliances.

    And the pace is only accelerating. Looking back, I could never have predicted the last 20 years, so I won’t pretend to have any idea what computers will be doing 20 years from now (designing molecules for biotech?). But if it all stops today and just gets faster for the next 20 years, I’ll eat my shorts.

  5. ab
    September 27th, 2005 at 18:24 | #5

    wilful,

    In his book, Kurzweil makes the amusing point that the term ‘AI’ is often used to describe whatever hasn’t been invented yet. Once a particular computing problem has been solved, its solution is no longer called ‘AI’; it becomes just another accepted method for getting the job done.

  6. Factory
    September 27th, 2005 at 18:34 | #6

    “But Kurzweil’s appeal to Moore’s Law seems worth taking seriously. There’s no sign that the rate of progress in computer technology is slowing down noticeably. A doubling time of two years for chip speed, memory capacity and so on implies a thousand-fold increase over twenty years.”
    1) Moore’s law only refers to transistor density.
    2) Growth in chip speed has slowed down; we will not be seeing the large increases that we have seen previously.

  7. September 27th, 2005 at 20:11 | #7

    AB, I’ve heard the same thing claimed about philosophy. The idea is, if philosophers come up with something useful, they put it in another box and call it something else. An example given was economics, stemming from Adam Smith’s involvement with moral philosophy. But calling economics useful is rather ambiguous evidence at best, requiring further support in its turn. Remember Keynes’s crack about the world being a better place if only economists were as useful as dentists?

  9. Andrew Reynolds
    September 27th, 2005 at 23:45 | #9

    Factory is correct on Moore’s law – it does not refer to what we do with computers. What I do with my PC has moved on a considerable amount since I got my first over 20 years ago (pre-IBM PC) – but I am not getting 1000 times more out of this laptop than I was out of my Vic 20. It looks a lot nicer and the HDD performance is much better than using a cassette tape, though.
    Some (on this blog inter alia) may argue that my thoughts were clearer then than they are now – I was a social democrat then. Greater processor density has not been the change.

  10. SJ
    September 28th, 2005 at 00:19 | #10

    digital technologies will displace analog for a steadily growing proportion of productive activity, in both the economy and the household sector, as has already happened with communications, photography, music and so on. Once that transition is made these sectors share the rapid growth of the computer sector.

    I take exception to this statement. Take photography as an example. Digital photography displayed the typical exponential growth, and has pretty much wiped out film photography.

    But so what? One technology replaced another. Thousands of jobs were generated during the transition (in digital camera design and manufacture), and thousands were lost in film camera design and manufacture and in film manufacture. The transition has now been completed, and those thousands of digital camera designers and manufacturers are being laid off.

  11. September 28th, 2005 at 00:19 | #11

    Factory makes a good point; here’s some other food for thought:

    In my view, the productivity gains from Moore’s Law come in fits and starts: at a certain point chip technology advances far enough that using a new “digital solution” becomes a superior option. Over the next few years, that technology is broadly adopted and from then on is only incrementally improved by the continuation of Moore’s Law. A good example is digital cameras: a couple of years ago they reached a tipping point where they were better value (in most applications) than film. Now they’ve become ubiquitous, but while they continue to improve, the actual value of the improvements isn’t that great.

    Now the question arises – how many unsolved problems are there out there that are waiting for a few more generations of more densely packed chips (or hard disks or network bandwidth) to solve? As far as computation goes, there’s not that many off the top of my head. There’s a few for hard disk space (imagine a hard disk that stores your entire life’s worth of accumulated information, for instance), and there’s certainly some obvious ones for network bandwidth.

    And on to Kurzweil. His argument boils down to the idea that “real AI” is one of those problems which just requires us to progress our hardware development enough and it’ll be solved. Kurzweil is not the first to make this argument; Alan Turing himself speculated that this might be the case way back in his seminal paper on AI in 1950. He at least had the excuse that he was making a stab in the dark without the benefit of actual experimentation in the field. Others continued to make the same claim through the late 1970s; a notable one was in a BBC series called “The Mighty Micro” by a guy called Christopher Evans. Computers had at that point existed for 35-odd years and the fastest ones were roughly 5000 times more powerful than the first generation of computers. Despite this, efforts to create “real AI” had shown only very limited success. 25 years on, the fastest computers are (very roughly) 540 times more powerful than the computers of that time, and still there’s no sign of that miraculous breakthrough in AI taking advantage of all this extra speed and capacity. And, as far as I know (and I would almost certainly know), there isn’t any existing AI technique that would suddenly give us human-like AI if we had a computer another 5000 times faster and more capacious than the ones we currently have.

    Now, I’m not denying that such a thing is possible; for instance, it may be that at a certain point it will become feasible to completely and accurately enough model significant parts of the human brain and achieve general machine intelligence that way. But to claim it is inevitable, as Kurzweil does, shows a wilful neglect of over 50 years of history.

    But anyway, back to the central point; will we suddenly get a massive growth spurt from computer technology? There are scenarios where this might occur, even excluding machine AI, that come back to the “knife edge” problems you’ve identified.

    The computational problems that perpetually suck up as much computation as we can throw at them tend to be similar, in a hand-waving way, to the weather forecasting issue you described earlier. In a technical sense, many have nothing better than “exponential-time” algorithms: for a problem of “size” n, the time they take to solve is proportional to k^n, where k > 1. A whole bunch of interesting scheduling and optimization problems have this property. Others are not technically exponential-time, but in an economic sense have the property you describe, where an exponential increase in the computation applied to them only gives a linear improvement in the utility gained.

    But concentrating on the actual exponential-time problems for a moment. To give a rough explanation, many of these problems have the property that, while *finding* a solution appears to be very difficult, *verifying* that you have a solution is easy. However, despite 30 years of trying, it’s been found exceedingly difficult to *prove* that it’s more difficult to find a solution than to verify one! And, even more intriguingly, many of these problems have the property that if you find a way to solve one type of problem efficiently, you can solve all of these “easy to verify, hard to find” problems!
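    To make that concrete, here is a toy example of my own (not from the post) using subset-sum, one of the standard “easy to verify, hard to find” problems. Verifying a proposed answer takes time proportional to the size of the list; the obvious way of finding one tries every subset, of which there are 2^n:

    # Subset-sum: checking a candidate answer is easy, finding one by brute
    # force means trying up to 2**n subsets. Purely illustrative numbers.
    from itertools import combinations

    def verify(numbers, subset, target):
        # Verification: just add up the proposed subset.
        return sum(subset) == target and all(x in numbers for x in subset)

    def find(numbers, target):
        # Exhaustive search: exponential in len(numbers).
        for r in range(len(numbers) + 1):
            for combo in combinations(numbers, r):
                if sum(combo) == target:
                    return list(combo)
        return None

    nums = [267, 961, 1153, 1000, 1922, 493, 1598, 869, 1766, 1246]
    target = 267 + 1922 + 1598 + 1246        # a total we know is achievable
    solution = find(nums, target)            # the slow part
    print(solution, verify(nums, solution, target))   # the instant part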

    There are a couple of ways an efficient solution method might be found for these problems. First, some clever grad student somewhere might figure out an efficient algorithm for solving one of them. Secondly, some theoretical physicists have suggested it might be possible to build quantum computers of a type that can solve these problems (note: this is not the same type of quantum computer of which there have already been proof-of-concept demonstrations; those can solve some interesting problems that current computers can’t yet solve, but aren’t believed to be able to solve problems of the type I’m describing). Either way, the implications would be profound.

    The most mind blowing, however, is the observation that most mathematical proofs, in the right form, can (theoretically) be verified mechanically and efficiently, but finding them is extremely difficult. But with our magic algorithm (or quantum computer) finding proofs is no more difficult than verifying them! Mathematics would be revolutionised; the effort of finding proofs would be gone. As the conventional computer has done for numerical computation, this new computation would do for symbolic mathematics.

    Anyway, I’m not holding my breath waiting for either scenario to occur, but they *are* tantalizing possibilities. Great fun for sci-fi writers…

  12. kyangadac
    September 28th, 2005 at 02:24 | #12

    Sorry I can’t source this, but somebody recently pointed out that increases in storage capacity have also been occurring at the rate described by Moore’s Law or faster, and that new storage devices might have real consequences in the economy (on the positive side of the knife edge, as it were).

    But I can’t help thinking of dinosaurs and the ‘bigger is better’ theory of evolution. To write text like this (and even use a browser) I don’t need half the processing power that I’m using. But I’m using the extra power because that’s the way the browser is designed. In computer jargon the term bloatware has been coined to describe this phenomenon.

    Perhaps the question to ask first is: what is the evolutionary benefit of a brain? Asking this question gives me a Wittgensteinian moment of queerness. The answer is that a brain cannot exist without a body. Similarly, IT cannot exist without serving a purpose beyond the provision of information for itself. Even art exists for social purposes beyond aesthetics, no matter how idealistic the artist.

    The dark side of this debate is how much real knowledge is being produced with this new technology. It’s my gut feeling that there has been a fairly precipitous drop in the amount of basic research being done, especially over the last 10 years or so. So while there is the tantalising possibility that many classes of NP-hard problems are close to solution in mathematics, the collapse of civilization as we know it might well intervene!

  13. September 28th, 2005 at 08:05 | #13

    You are all forgetting manufacture; focussing on chips and memory alone is an idée fixe, singularly perhaps, and let’s put nanotech to one side for the mo…

    3D printers will destroy a lot of manufacturing as we know it: first there will be a 3D printer in your state, then in your town, then in your street and then, maybe, in your house.

    They’ll do one-offs first, and eventually, through digital design and memory, everything we use will be either a stock design or a one-off.

    There are 3D-printed structural members on the International Space Station; they were prototypes, but they tested so well that nobody bothered to remake them traditionally.

    China is the last site of industrialization; stuff knows what Africa will do.

    And yes, trades will disappear too: no more carpenters; maybe plumbers will get a look in, but even these currently renaissance occupations will go, and everyone will be a clerk working at Centrelink with a HECS debt, or receiving the dole, or spending auntie’s dividends.

    3D printers are descendants of the completely underrated inkjet and laser printers; fine control of particles (unlike my typing) in 2D is easily transferable to 3D.

    There will be design and there will be commodities, and not much else in between the source and the consumer.

    Once the factory is history, things will be very different.

  15. still working it out
    September 28th, 2005 at 08:23 | #15

    The reason the elasticity of demand for computing power is a little bit higher than one could be that it takes a lot of time and human effort to make use of the glut of computing power available. What happens is that increasing computer power makes a new application possible, but it takes a very long time for the complementary technologies required to make it work to be developed, for consumers to accept it, and for the requisite infrastructure to be built.

    Consider digital mobile phones. The computing chips to make them work were around for a long time before the phones became widespread. It took time to work out the best way to make mobile phones and to build the factories to produce them in large numbers. And then consumers had to get used to them, the mobile phone infrastructure had to be built, and the telcos had to learn how to run a mobile phone system.

    Or how about digital cameras? Without ubiquitous PCs, cheap digital storage, email and the web, a digital camera does not provide a lot of advantages over an old film camera. In this case the development of the supporting complementary technology and infrastructure was a prerequisite for widespread use.

    I do not believe for a second that the elasticity of demand for computing power will turn out to be less than one. It only takes a little bit of imagination to find a way for computers to improve the efficiency of practically anything we do at the moment. The limiting factor always seems to be the rate at which we develop the complementary technologies and infrastructure to make it work.

  16. Peter2
    September 28th, 2005 at 09:37 | #16

    If you plot the different exponentials that describe improvements in memory speed, processor speed, storage capacity, and network bandwidth, you find that the exponents increase in that order. That is, network bandwidth is increasing faster than storage capacity, and so on down the line.

    The very fact that processor speed has been increasing faster than memory speed causes some fundamental difficulties in making computers actually run faster. Even if transistor density does continue to double (which is doubtful, given heat dissipation constraints), and this translates directly into faster clock speeds, it will not necessarily mean computers that are twice as fast, primarily because of the memory bottleneck caused by the slower growth in memory speeds.
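    To put a rough number on that bottleneck (a stylised calculation of my own, not a measurement): if a fraction \(f\) of a program’s running time is spent waiting on memory, and only the remainder speeds up with the clock by a factor \(s\), the overall speedup is

    \[ \frac{1}{f + (1-f)/s}, \]

    which for \(f = 0.5\) and a doubled clock (\(s = 2\)) comes to about 1.33, i.e. a third faster rather than twice as fast.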

    The exponential growth in bandwidth (that is, the number of bits you can shove through a fibre optic cable) and storage capacity has more important implications, imho, and raises well-covered questions (sorry, no links) such as ‘how do things change if you never have to throw anything away?’ and ‘if network bandwidth were almost free, how do things change?’

  17. Chris
    September 28th, 2005 at 10:24 | #17

    The increase in storage is – or offers the possibility of – an increase in information, which permits such things as a rational health system (one that will almost certainly show that Kurzweil’s health regimen is worthless).

    Increasing computer power and new identification technologies have created new possibilities for social epidemiology.

    Begin by imagining a country with a universal medical card linked to a central database that records all dealings with hospitals, doctors, or pharmacies. The obvious advantages of such a move are the increased comprehensiveness of the patient record and the avoidance of confusion or ignorance. Beyond this, however, such a system would immensely extend our knowledge of the significance of an untold number of medical factors in the prevention and treatment of ill-health.

    It would be possible to monitor any increase in mortality or morbidity linked to prescriptions of any drug and, more significantly, any combination of drugs. This would constitute a significant improvement over the present system, where drug trials are largely conducted on people using no medication other than that being tested and where sample size makes it virtually impossible to identify reactions between three or more random drugs. The outcomes of surgical procedures could similarly be monitored, speeding up such discoveries as the recent finding that tube feeding does not extend life in dementia patients. Particular spikes among the clients of any practitioner could be flagged, facilitating the exposure of new Harold Shipmans.

    That central database could then be linked to identity and tax office records, expanding the realm of possible questions from linkages between medical procedures and outcomes to linkages between mortality and morbidity and income, profession, education, location, age, sex, marital status, educational record, criminal record, immigration status, or any combination of these qualities. It would be possible to contemplate extending Michael Marmot’s examination of the correlates of ill-health in English public servants across the entire population, both confirming (or refuting) his insights into the significance of hierarchy and enabling us to identify significant pockets where this relationship may not apply and where we might look for clues as to how to mitigate or reverse its ill effects.

    The mindboggling numbers of subjects necessary to test claims about the inevitably marginal changes in behaviour brought about by particular health promotion campaigns would no longer be a problem.

    The next step would come when the financial system was integrated into the knowledge base, enabling income records to be analysed in more detail and cross-referenced with credit card data and, through bankcard purchases, supermarket receipts. This would enable consumption patterns to be mapped at the level of the household, providing scope, for example, for testing the health outcomes of the Atkins Diet over months and years.

    Implantable microchips triggered when passing a network of sensors would add some extra capacity at the margin, and are certainly feasible, but are not essential. A further link to a national DNA database, however, would catapult both epidemiology and genetics into a new world, allowing close analysis of the interaction of environmental and genetic influences on health.

    Data analyses would be based on hard data, not on possibly biased questionnaire responses. Analyses would also be dealing with the total population, and not with an inevitably biased sample of those willing to participate. Some of these advantages might be gained through other means, but the bulk of the projected gains come from universality of data collection.

    At this time such a system would still be unable to track in any detail
    • Cash transactions
    • Unitemised transactions
    • Illegal transactions (illegal drug use, for example)
    • Activities and exercise
    • Social connectedness
    • Personal relationships
    • Religious, political or miscellaneous beliefs
    • Quality of life
    • Happiness

    All of these factors may relate to health status, and their omission reduces the utility of the model. For some of them effective proxy measures may be available, but a fully Laplacean description of a society may not be possible until the middle of the century, when cheap petabyte drives enable the recording of every second of each person’s entire life – audio, video, pulse rate, blood alcohol levels, Global Positioning System co-ordinates – in real time.
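    (A rough check on the petabyte figure, using bit rates I have simply assumed: a life of 80 years is roughly 700,000 hours, and at something like a gigabyte per hour for compressed audio and video plus a trickle of sensor and location data,

    \[ 7 \times 10^{5} \ \text{hours} \times 1 \ \text{GB/hour} \approx 700 \ \text{TB}, \]

    which is indeed of the order of a petabyte per person.)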

    It is also true, of course, that such a regime would not so much be open to privacy objections as incompatible with any known concept of privacy. These potentially favourable health outcomes are accompanied by a level of surveillance that would make the Stasi blush, and while it might be theoretically possible to insert some safeguards against abuse, these would do little to reduce public suspicion. It is also obvious that such a system would change our society extensively, in ways that it is not now possible to predict, or perhaps to comprehend.

    How many years of extra life expectancy would compensate the average citizen for the omnipresence of the medical gaze?

  18. still working it out
    September 28th, 2005 at 11:15 | #18

    • Social connectedness
    • Personal relationships

    It’s probably already possible to monitor these. I had an idea of using records of mobile phone calls to track this. Most people keep the same mobile number for a long time these days, and unlike landlines each mobile phone number generally represents one person. You could track the number of calls made and how many different people the calls are made to. Not perfect, but considering the amount of raw data to work with, it’s a pretty good proxy for the above factors.

    It would also be the perfect tool to map social networks throughout a society by seeing who calls whom and how often – a bit like the map of the blogosphere that was made a while back by seeing who links to whom. It’s a very dangerous idea, because it’s Big Brother on super steroids, but I wish it could be done. Some amazing things about social networks could be studied relatively simply and at low cost with the information gained from Mobile Phone Social Networks. You could identify all the communities within Australia and see which ones are relatively isolated, or map the distribution of economic and political power among communities. You could answer questions about the relation between social connectedness, health and income. You could do all this by asking someone some simple questions and then getting their mobile phone number.
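    To give an idea of how simple the analysis could be once you had the data, here is a sketch (the call records are invented, and in practice they would have to come, suitably anonymised, from a telco’s billing system):

    # Sketch of mapping a social network from call records. The records here
    # are invented; real ones would raise all the Big Brother problems above.
    import networkx as nx

    call_records = [                     # (caller, callee, number of calls)
        ("A", "B", 14), ("A", "C", 3), ("B", "C", 9),
        ("D", "E", 21), ("E", "F", 5), ("C", "D", 1),
    ]

    g = nx.Graph()
    for caller, callee, n_calls in call_records:
        g.add_edge(caller, callee, weight=n_calls)

    # Weighted degree as a crude proxy for social connectedness.
    print(dict(g.degree(weight="weight")))

    # Communities, and how isolated each one is: the share of its call
    # volume that stays inside the community.
    for community in nx.algorithms.community.greedy_modularity_communities(g, weight="weight"):
        internal = sum(d["weight"] for u, v, d in g.edges(data=True)
                       if u in community and v in community)
        touching = sum(d["weight"] for u, v, d in g.edges(data=True)
                       if u in community or v in community)
        print(sorted(community), round(internal / touching, 2))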

  19. abb1
    September 28th, 2005 at 22:01 | #19

    I don’t have a mobile number and never have. I’m in a different world.

  20. September 30th, 2005 at 02:56 | #20

    I think he gets it wrong when he assumes that so many agents will be created. So far, we have been very good at creating utilities and tools on computers, and generally failures at creating agents: we don’t know enough about how people make decisions to mimic them. To the extent that his analysis assumes we will magically get better at this (largely, to my mind), it’s not worth paying attention to.

  21. Ian Gould
    September 30th, 2005 at 21:03 | #21

    An important point to remember about Moore’s Law is that it can take 20 years or more for the full social and economic impact of technological advances to become manifest. For example, personal computers were invented in the 1970s and started entering the workplace in the early 1980s, but it seems they only started to have a serious impact on productivity in the mid-1990s.

    So even if the hyperbolic growth in capacity does stop, there’s a lot more social and economic change still in the pipeline.
