I’m working on a first draft of a book arguing against pro-natalism (more precisely, that we shouldn’t be concerned about below-replacement fertility). That entails digging into lots of literature with which I’m not very familiar and I’ve started using OpenAI’s Deep Research as a tool.
A typical interaction starts with me asking a question like “Did theorists of the demographic transition expect an eventual equilibrium with stable population?”. Deep Research produces a fairly lengthy answer (mostly “Yes” in this case) and, based on past interactions, produces references in a format suitable for my bibliographic software (Bookends for Mac, my longstanding favourite, uses .ris). To guard against hallucinations, I get DOI and ISBN codes and locate the references immediately. Then I check the abstracts (for journal articles) or reviews (for books) to confirm that the summary is reasonably accurate.
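The DOI-checking step lends itself to a bit of scripting. Here is a minimal Python sketch that pulls the tagged fields out of a .ris record so the DOI can be looked up and verified; it assumes the standard RIS layout (two-letter tag, two spaces, a hyphen, a space, then the value), and the sample record and DOI below are hypothetical placeholders, not real references.

```python
# Minimal sketch: parse a .ris record into tagged fields so the DOI/ISBN
# can be extracted and checked against the actual publication.
# Assumes standard RIS layout: "XX  - value" lines, with "ER" ending a record.
def parse_ris(text):
    records, current = [], {}
    for line in text.splitlines():
        # Skip anything that isn't a "XX  - value" tagged line.
        if len(line) < 6 or line[2:6] != "  - ":
            continue
        tag, value = line[:2], line[6:].strip()
        if tag == "ER":  # end of record
            records.append(current)
            current = {}
        else:
            # A tag (e.g. AU for author) can repeat, so store lists.
            current.setdefault(tag, []).append(value)
    return records

# Hypothetical record for illustration only.
sample = """TY  - JOUR
AU  - Author, A.
TI  - An Example Article on the Demographic Transition
DO  - 10.0000/hypothetical-doi
ER  - 
"""

for record in parse_ris(sample):
    print(record.get("DO", ["no DOI found"])[0])
```

From there, each extracted DOI can be resolved by hand (or via a lookup service such as Crossref) to confirm the reference actually exists.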
A few thoughts about this.
First, this is a big time-saver compared to doing a Google Scholar search, which may miss out on strands of the literature not covered by my search terms, as well as things like UN reports. It’s great, but it’s a continuation of decades of such time-saving innovations, going back to the invention of the photocopier (still new-ish and clunky when I started out). I couldn’t now imagine going to the library, searching the stacks for articles and taking hand-written notes, but that was pretty much how it was for me unless I was willing to line up for a low-quality photocopy at 5 cents a page.
Second, awareness of possible hallucinations is a Good Thing, since it enforces the discipline of actually checking the references. As long as you do that, you don’t have any problems. By contrast, I’ve often seen citations that are obviously lifted from a previous paper. Sometimes there’s a chain leading back to an original source that doesn’t support the claim being made (the famous “eight glasses of water a day” meme was like this).
Third, for the purposes of literature survey, I’m happy to read and quote from the abstract, without reading the entire paper. This is much frowned upon, but I can’t see why. If the authors are willing to state that their paper supports conclusion X based on argument Y, I’m happy to quote them on that – if it’s wrong that’s their problem. I’ll read the entire paper if I want to criticise it or use the methods myself, but not otherwise. (I remember a survey in which 40 per cent of academics admitted to doing this, to which my response was “60 per cent of academics are liars”).
Fourth, I’ve been unable to stop the program pretending (even to describe this I need to anthropomorphise) to be a person. If I say “stop using first-person pronouns in conversation”, it plays dumb and quickly reverts to chat mode.
Finally, is this just a massively souped-up search engine, or something that justifies the term AI? It passes the Turing test as I understand it – there are telltale clues, but nothing that would prove there wasn’t a person at the other end. But it’s still just doing summarisation. I don’t have an answer to this question, and don’t really need one.
It’s a souped-up search engine, IMHO. It is not AI. More precisely, it is A but it is not I. If it passes the Turing Test then the Turing Test is clearly not adequate, again IMHO.
I will opine further. True intelligence requires sentience. Sentience in turn requires qualia, consciousness, consciousness of consciousness and more as follows:
“Sentience is the capacity to have subjective experiences and feel sensations, emotions, and valenced states like pleasure or pain. It encompasses more than just simple awareness; it includes the ability to assess risks and benefits, remember consequences, and respond to stimuli with emotional or cognitive significance…” – Google AI.
Yes, I have cribbed part of my answer from “AI” but the AI cribbed all of its material from sentient humans who wrote about it, sometimes with artistic skill and sometimes with considerable technical and scientific skill and knowledge. Yet, at the heart of the most technical dissertation on this subject and still unavoidably informing, influencing and directing it as research and expression, is the human author’s own experience of sentience. This fact is ineluctable.
Evolution did not just develop intelligence, it also developed sentience. It also combined them in at least the higher animals or at the very least in the higher mammals. That is the current scientific consensus. These facts (imputed with a high confidence that they are correct) must mean something. They mean, again IMHO, that higher intelligence, sentience and complex subjective experience as qualia, qualia of qualia and conscious reflection on consciousness (of qualia) are and must be inextricably linked to reach their highest known forms.
Part of “intelligence”, or at least the adaptive intelligence of animals, if we use the term “intelligence” rather broadly as we should, inheres in the entire organic system operations of the whole organism and not just in the brain or higher brain, let alone in mind or conscious mind. It is not possible to be intelligent about risks, benefits and consequences to sentient self and sentient others without this intelligence being relative to and in relation to sentience and the qualia of qualia of the sentient self and sentient others also. How much programmed logic, as a kind of non-sentient intelligence (programmed by evolution non-sentiently, interestingly enough), inheres in our limbic system, for example, or even our immune system? There is a kind of intelligence or adaptive logic there too.
We are the sum of so many parts. Non-sentient, silicon-based logic gates and data processing based on bits, no matter how powerful in one sense, are, I shall say quite pejoratively, inorganic, inhuman and ineluctably less than organic life, sentient or not, and less than organic life systems.
Just one man’s opinion, as the trivialising Caesar Flickerman would say.
This is not to say that J.Q. should not use “AI”. Of course he can and perhaps even should, if he wishes and determines so, especially as he will do it intelligently in the proper manner of an educated, enlightened and well-evolved sentient organic life form. I have no problem with AI in the hands of J.Q. or anyone like him. I do have a problem with it in the hands of the unfettered, unregulated tech-bros and oligarchs.
Perhaps “artificial intelligence” is not such a poor term if it conveys to us similar connotations to the terms “artificial sweetener” or “artificial research”.
I think the main attraction of AI to the tech bros, advertisers and oligarchs is the ability to generate artificial friends, artificial advisers and artificial influencers. Social control is the name of the game.
Unfortunately, unlike the papers they summarise, abstracts aren’t peer reviewed. To make matters worse, not all abstracts are of papers that make it to final publication.
I found this using Google AI 🙂
https://researchintegrityjournal.biomedcentral.com/articles/10.1186/s41073-019-0070-x
I don’t see what’s so great about organic intelligence. The complaint that LLMs are “just a sophisticated autocomplete” could easily be applied to many (most?) people. Just because we know how LLMs work (sort of) doesn’t mean organic brains don’t work analogously. Just look at the world now: it’s filled with hallucinations about vaccines, climate denial and intolerance – things are falling apart.
Indeed, I think it’s quite likely that LLMs really do reflect some *part* of how the human brain works. LLMs understand and respond to language in a human-like fashion, and I would be surprised if there isn’t research looking for transformer-like structures in the brain. It’s fascinating to me that LLMs are also non-deterministic, for reasons only just now being understood. A similar process in the brain might be called “free will”.
That’s not to say that LLMs are intelligent in the same way as humans, but it doesn’t rule it out either. Humans have emotions and other behavioural drivers that LLMs don’t have, but that doesn’t mean humans are special, or that LLMs won’t become part of larger systems.
I don’t want to sound like a shill for LLMs – quite the contrary. Although I too find them useful, the danger of putting these powerful systems into the hands of psycho billionaires is obvious.
But it is these people, these organic intelligences, and not “AI”, who are the proverbial paperclip factories – only it is money, not paperclips, that they will happily destroy the universe over.
Speaking of AI, I think it can offer good summaries sometimes. I think this Google AI summary of CasP (Capital as Power) is mostly spot on. That is to say, it accords with my understanding of CasP after reading the theorists’ book of the same title, many of their papers, and papers by some (doctoral?) students of CasP.
So that I don’t bury the other part of the lead, I think leading theorists in neoclassical economics should pay more attention to CasP.
Start Quote.
“The Capital as Power (CasP) theory by Shimshon Bichler and Jonathan Nitzan offers a radical alternative to traditional economic thought, challenging the views of both mainstream and Marxist economics by defining capital not as a productive asset but as a symbolic quantification of organized power. While its foundational claims and empirical research methods are gaining some traction, particularly within anarchist circles, it has not yet achieved broad recognition or widespread revolution in mainstream economic understanding, partly due to its radical nature and the fundamental shift in perspective it demands.
Key Aspects of the Capital as Power Theory
Capitalism as a Mode of Power:
Instead of a mode of production, capitalism is viewed as a mode of power, where finance serves as the symbolic language for organizing and reordering this power.
Challenge to Traditional Frameworks:
The theory fundamentally critiques both neoclassical economics, which separates economics from politics, and Marxist economics, which focuses on production.
Focus on Differential Power:
It emphasizes that the goal of dominant capital is to maintain and increase the monetary value of assets through controlling and potentially sabotaging the wider socioeconomic machine.
Empirical and Theoretical Contributions:
The theory introduces new empirical research methods and data analysis techniques, along with a new theoretical framework for understanding capital and the state.
Impact and Reception
Radical Nature:
The theory’s fundamental challenge to established economic concepts means that its acceptance would require a significant rewrite of much economic theory, history, and futures.
“Schizophrenia” of Political Economy:
Proponents argue that mainstream political economy suffers from a “schizophrenia” where it recognizes the importance of power but is fundamentally prohibited from fully integrating it into its theoretical framework.
Potential for Future Change:
As power becomes increasingly important, CasP is presented as a potential solution that could become widely accepted once conventional approaches prove inadequate to the challenges ahead. ” – Google AI answer.
End Quote.
This is a fair summary of CasP to my understanding. However, I do not like the “Schizophrenia” metaphor, even though the theory’s developers have themselves used this metaphor, at least in interviews. A better technical term, used by the developers themselves in their theory, is “bifurcation”.
Overall, it seems AI can make decent summaries in some cases. Of course, I use this example because I think neoclassical and orthodox economists could take the time to explore CasP fully.