Not so deep thoughts about Deep AI

Back in 2022, after my first encounter with ChatGPT, I suggested that it was likely to wipe out large categories of “bullshit jobs”, but unlikely to create mass unemployment. In retrospect, that was probably an overestimate of the likely impact. But three years later, it seems as if an update might be appropriate.


In the last three years, I have found a few uses for LLM technology. First, I use a product called Rewind, which transcribes the content of Zoom meetings and produces a summary (you may want to check local law on this). I have also replaced Google with Kagi, a search engine which will, if presented with a question, produce a detailed answer with links to references, most of which are similar to those I would have found through an extensive Google search, while avoiding ads and promotions. Except in the sense that anything on the Internet may be wrong, the results aren’t subject to the hallucinations for which ChatGPT is infamous.

Put high-quality search and accurate summarization together and you have the technology for a literature survey. And that’s what OpenAI now offers as Deep Research. I’ve tried it a few times, and it’s as good as I would expect from a competent research assistant or a standard consultant’s report. If I were asked to do a report on a topic with which I had limited familiarity, I would certainly check out what Deep Research had to say.
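
The underlying recipe is simple enough to sketch. Below, purely as an illustration, is a minimal Python version of the “search, then summarize with citations” loop that a tool like Deep Research automates at scale. Every name in it (web_search, llm_summarize, SearchHit) is a hypothetical placeholder rather than any vendor’s actual API; a real implementation would plug a search backend and an LLM in behind the same two calls.

```python
# Hypothetical sketch of a "search + summarize = literature survey" pipeline.
# Nothing here is a real vendor API; the two placeholder functions stand in
# for whatever search engine and LLM a real tool would call.

from dataclasses import dataclass


@dataclass
class SearchHit:
    title: str
    url: str
    snippet: str


def web_search(query: str, k: int = 5) -> list[SearchHit]:
    """Placeholder: a real version would query a web or academic index
    and return the top-k hits."""
    return [
        SearchHit(f"Result {i} for {query!r}", f"https://example.org/{i}", "…")
        for i in range(1, k + 1)
    ]


def llm_summarize(question: str, sources: list[SearchHit]) -> str:
    """Placeholder: a real version would prompt an LLM to synthesize the
    sources into prose, citing each claim back to its source URL."""
    refs = "\n".join(
        f"[{i}] {s.title} ({s.url})" for i, s in enumerate(sources, 1)
    )
    return f"Draft survey for: {question}\n\nReferences:\n{refs}"


def literature_survey(question: str) -> str:
    # Step 1: retrieve sources. Step 2: summarize, constrained to cite them.
    sources = web_search(question)
    return llm_summarize(question, sources)


if __name__ == "__main__":
    print(literature_survey("Should Canada adopt the euro?"))
```

The division of labour is the point: the search step grounds the answer in real documents, and the summarization step is constrained to cite them, which is why the output behaves more like a referenced survey than free-form chatbot prose.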

Here, for example, is the response to a request to assess whether Canada should adopt the euro. I didn’t give much in the way of prompting, except to request an academic style. If we ignore developments post-Trump (which wouldn’t have been found by a search of academic and semi-academic publications) it’s pretty good, at least as a starting point. And even as regards Trump, the response includes the observation: “If, hypothetically, … political relations with the U.S. soured badly, the notion of joining a stable currency area might gain some public traction.”

So, just as GitHub Copilot has led to big changes in coding jobs, I’d expect to see Deep Research having a significant impact on a lot of research projects. That in turn implies an increase in productivity, which had been lagging in the absence of major new developments in IT for the decade or so before the recent AI boom.

None of that implies the kind of radical change often tossed about in discussions of AI, or even the kind of disruption seen when an existing analog technology is suddenly subject to digital challenge – the classic example was the “quartz crisis” in the watch industry. Coding has been progressively automated over time with the development of tools like online code libraries.

Similarly, research processes have changed year by year over my lifetime. When I was starting out, reference aids like citation indexes and the Journal of Economic Literature were brand new, and only accessible in libraries. Photocopying articles was sufficiently expensive and painful that you only did it for the important stuff (Daniel Ellsberg’s account of the work involved in producing multiple copies of the Pentagon Papers gives you an idea). Today, a well-organised research assistant with reasonable background knowledge could do the same job as Deep Research in a couple of days, and without needing to leave home (to level the playing field with AI, I’m assuming the RA is free to plagiarise at will, as long as the source is cited).

The other big question is whether efforts like this will generate profits comparable to the tens of billions of dollars being invested. I’ve always doubted this – once it became clear that LLMs were possible, it was obviously possible to copy the idea. This has been proved, reasonably conclusively, by DeepSeek (no relation to Deep Research), an LLM developed by a medium-sized Chinese company at a claimed training cost of under $6 million. It charges a lot less than ChatGPT while providing an adequate substitute for most of the current applications of LLMs.

The likely collapse of the current AI boom will have some big economic repercussions. Most importantly and fortunately, it will kill (in fact, is already killing) the projections of massively expanded electricity demand from data centres. In combination with the failure of Tesla, it will bring down the market valuations of most of the “Magnificent Seven” tech stocks that now dominate the US share market. That will be bad for the US economy, but to the extent that it weakens Trump, a good thing for the world and for the long-term survival of US democracy.

2 thoughts on “Not so deep thoughts about Deep AI”

  1. Presumably, I am permitted some not so deep comments.

    There seem to be problems at the base of assuming LLMs are or can be AGI. J.Q.’s post re uses for LLMs may well not require that LLMs are AGI. The use being envisaged there is perhaps different from any AGI application. It seems to me that even a properly curated LLM only provides a summary of curated knowledge, as good as, but no better than, the curation. It cannot understand how or why this knowledge was developed.

    As for AGI, that gets into the business of competing with (biological) evolution, including the evolution of the modern human brain, so we should perhaps remember Orgel’s second rule:

    “Evolution is cleverer than you are.”

    This rule is well known among biologists. It does not imply that evolution has conscious motives or method but that people who say “evolution can’t do this” or “evolution can’t do that” are simply lacking in imagination.

    Orgel’s second rule tells us that the process of natural selection is not itself intelligent, clever or purposeful but that the products of evolution are ingenious. – Wikipedia.

    AGI or ASI (Artificial Super Intelligence) needs to be evolutionary, and more than at the coding level. Human intelligence is also evolutionary at the firmware (wetware) level (neuroplasticity) and even at the biological hardware level in the multi-generation sense. Orgel’s first rule implies this last assertion.

    “Whenever a spontaneous process is too slow or too inefficient a protein will evolve to speed it up or make it more efficient.”

    This “rule” comments on the fact that there are a great number of proteins in all organisms which fulfil a number of different functions through modifying chemical or physical processes. An example would be an enzyme that catalyses a chemical reaction that would take place too slowly to benefit an organism without being sped up by this enzyme. – Wikipedia.

    Since no other class of compounds matches the complexity, variety and encoding capacity of proteins, only biological, protein-based intelligence will, in theory, be able to muster processing, replication, self-repair and novelty-emergence characteristics equal to or better than those of intelligent biological life, especially in generational terms. At least, that is my opinion. That doesn’t mean that specialised AI is not a powerful tool. It could be used, for example, to start designing proteins for protein-based computers. Just a wild thought.

  2. Hmmm, my above post is lousy. It looks like it was generated by a very, very poor and amateurish LLM or AI. Nay, it looks even worse than that, but I can assure you, sadly, it all came out of my wildered brain on a bad day.

    However, we should not confuse current LLMs with “AI” or “Generative AI”. Even Google’s generative AI, itself still in development, says as much, and in this case I agree with it. Current LLMs are not close even to what currently seems to pass for the AI label. I would go further and say that current “AI” is not AI, though I would need to put forward an agreed definition of AI first and then develop a full case. I doubt that is easy to do in a way that produces any consensus.

    My attempt to drag Orgel’s rules into this has, I think, some validity. Evolution has produced intelligence, and indeed conscious, self-reflective and feeling or qualia-experiencing intelligence. Humanly developed AI (so-called) has not cleared that bar yet, nor even come anywhere close, in my opinion. Proper ethical being, I deem, must be twinned with intelligence, and requires consciousness, self-reflection, feelings (qualia experiences) in general and sympathy/empathy qualia in particular, at least for higher social or eusocial species. Without all that, critical elements are missing. (Shades of Asimov, I guess.) What is produced would remain decidedly a-human, non-human and inhuman in response and behaviour, and profoundly dangerous in proportion to the autonomy granted to it or assumed in some manner by it.

    Overall, reports of genuine AI are still greatly exaggerated, but the dangers are already too real and extensive.
