I’ll be doing the Sydney launch of my new book, Economics in Two Lessons at Gleebooks tomorrow (Thursday 27 June). I’ll be talking to the always insightful Peter Martin, so it should be a great event. Details here.
Last night’s Brisbane launch, at Avid Reader with Paul Barclay (ABC Radio, Big Ideas), was very successful.

Nice man, Peter Martin.
Enjoyed him on the Drum again last night.
To add to your reading list.
Rage Inside the Machine: an insightful, brilliant critique of AI’s computer science, sociology, philosophy and economics
Smith’s own work in genetic algorithms — woven into a memoir about growing up white in the post-Jim Crow south where racial prejudice was thick on the ground — has shown that diversity is always more robust than monoculture, a theme that recurs in everything from ecology to epidemiology and cell biology.
But, as Smith demonstrates, we are being penned in and stampeded by algorithms, whose designers have concluded that the best way to scale up their statistical prediction systems is to make us all more predictable — that is, to punish us when we stray from the options that the algorithms can gracefully cope with. In a way, it’s the old story of computing: forcing humans to adapt themselves to machines, rather than the other way around — but the machines that Smith is critiquing are sold on the basis that they do adapt to us.
This is a vital addition to the debate on algorithmic decision-making, machine learning, and late-stage platform capitalism, and it’s got important things to say about what makes us human, what our computers can do to enhance our lives, and how to have a critical discourse about algorithms that does not subordinate human ethics to statistical convenience.
https://boingboing.net/2019/06/27/rage-inside-the-machine-an-in.html
And please JQ, when will we hear something of:
Epistemic & Personal Transformation:
Dealing with the Unknowable and Unimaginable workshop.
KT2,
You raise some important points. As Ulf Martin points out in his paper “The Autocatalytic Sprawl of Pseudorational Mastery”:
“2.1 Modern Rationality: Operational Symbolism

According to Sybille Krämer (1988, 1991), modern rationality is computable rationality or reason (berechenbare Vernunft). Computability is the ability to turn an argument into a Kalkül, a formal system. The algebraic formula is the archetype of such a computable form. In a formal system:

a) The construction of symbols is decoupled from their interpretation. The allowed operations do not depend on what the symbols are supposed to mean in the end.
b) Language becomes a technique (Technik); formal artificial languages, syntactic machines, can be constructed.
c) Symbols become manipulable objects.

Gottfried Wilhelm von Leibniz (1646–1716) developed a theory of symbols that can be used for ‘rational calculation’, calculus ratiocinator:

a) Symbols are objects that are manipulated according to rules.
b) Symbols are autonomous with respect to what they signify; they become a formal system whose inner order is independent of the interpretation of the symbols; and the symbols appear in a new kind of script, typographic script, which is independent of spoken language.
c) Formal systems (Kalkülisierung) and typographization turn symbols into mechanical production systems, symbolic machines; artificial languages are a technology.
d) Scientific thought produces knowledge (Erkenntnis); since knowledge production requires symbols, knowledge is the product of the operation of symbolic machines.
e) As a consequence, the objects of knowledge (Gegenstände der Erkenntnis) themselves are also generated by symbolic machines.”
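As a toy illustration of what a “symbolic machine” is (my own sketch, not anything from Krämer or Martin): a handful of rewrite rules applied to strings, where the allowed operations depend only on the shapes of the symbols and not on any interpretation of them.

```python
# Toy "symbolic machine": strings rewritten by purely formal rules.
# The rules care only about the symbols' shapes, never their meaning.

REWRITE_RULES = [
    ("ab", "ba"),
    ("aa", "a"),
    ("bb", ""),
]

def rewrite(term: str, max_steps: int = 100) -> str:
    """Apply the first matching rule repeatedly until no rule applies."""
    for _ in range(max_steps):
        for pattern, replacement in REWRITE_RULES:
            if pattern in term:
                term = term.replace(pattern, replacement, 1)
                break
        else:
            return term  # no rule matched: the machine halts
    return term

print(rewrite("abab"))  # prints "a"; the result follows from the rules alone
```

Whatever the symbols are later taken to mean, the machine’s behaviour is fixed by the rules, which is exactly the decoupling Krämer describes.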
I agree that all this in the modern context (which is the same thing your article refers to) is the latest extension of humans being required to conform to machines. Initially, as thinkers from Adam Smith to Karl Marx pointed out, man (laboring man) was required to conform to physical machines, to become the slave of the physical machine. Imagine a stoker having to feed coal into the firebox for a steam engine or a man restricted to one repetitive task in a factory implementing specialized but highly stereotyped labor.
More utopian thinkers, including Marx in his “Fragment on Machines”, envisaged a time when the motive power of all machines would be mechanical (no need for hard, repetitive physical labor) and the directing power of the machines would be automatic. The “Fragment on Machines” is uncanny in its prediction of fully automated production. What these utopians did not predict, it seems, is that the algorithms running the new automated machinery of society and decision making would not so much free humans for philosophical, artistic and cultural pursuits as tie them to a new calculus, further reducing everything to money and finance (capitalized value) operations.
This, at least, has been the outcome from non-learning algorithmic systems, and we can see what these dopey systems do when they are unleashed. A clear example is the Centrelink robo-debt system and the fallacious debts it is raising in many cases. Another, to me, is the rigid “un-service” and “anti-service” I receive from my bank. When a mistake is automated it gets made over and over, and it is very hard to get it corrected. The bank tries to force you back through its automated systems to fix the automated mistake, but those systems are mostly not efficient or flexible enough to recognize and correct their own errors on human feedback. One always has to fight to reach a human worker at the bank, and the customer spends hours of his or her own time (unpaid work) doing so. The customer essentially works for the corporation for free. To be a consumer is to be an unpaid worker for the corporations: we pump our own gas, we select our own groceries, and so on.
Learning systems will change this dynamic again. Any hope that learning systems will be used to help non-rich people in our current system is forlorn. They too will be deployed to manipulate and control ordinary people and their lives, and to funnel all profits and power (or capital-as-power) up to the rich. Instead of a dumb machine working against one, one will be faced with a smart machine working against one. Even a learning machine can only learn to maximize the objectives it is given, and those objectives are encoded in its position evaluation function (as such functions are termed in computer game-playing). Those who control the programming of the position evaluation functions will control the world (albeit a world that will also be collapsing because of climate change and limits to growth).
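To make this concrete, here is a hypothetical sketch (my own illustration, not code from any actual game engine) of what a position evaluation function looks like: whoever sets the weights decides what counts as “good”, and the learning machine can only get better at maximising that number.

```python
# Hypothetical sketch of a position evaluation function, of the kind used in
# game-playing programs. Whoever chooses the weights decides what "good"
# means; a learning system only gets better at maximising this score.

FEATURE_WEIGHTS = {
    "material_advantage": 1.0,   # e.g. piece count in chess
    "mobility": 0.3,             # number of legal moves available
    "king_safety": 0.5,
}

def evaluate_position(features: dict) -> float:
    """Score a position as a weighted sum of hand-chosen features."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

# Two candidate positions; the agent simply picks whichever scores higher.
p1 = {"material_advantage": 2, "mobility": 10, "king_safety": 1}
p2 = {"material_advantage": 1, "mobility": 25, "king_safety": 2}
best = max([p1, p2], key=evaluate_position)
```

Swap “material advantage” for profit and “mobility” for customers nudged into the default option, and you have the corporate version of the same machinery.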
Hi Ikon. It always amazes me that “nothing new under the sun” seems forever apt even with our techno wizardry… “Gottfried Wilhelm von Leibniz (1646–1716) developed a theory of symbols that can be used for ‘rational calculation’ …”
And it also amazes me how we still have blind spots, both unwitting and wilful.
“Imagine a stoker having to feed coal into the firebox for a steam engine or a man restricted to one repetitive task in a factory implementing specialized but highly stereotyped labor.”
After reading the article below: the coal stoker you mentioned is effectively digitized once and then discarded, so in future we will just use humans to provide data to AI and to cover the 1% the AI doesn’t cope with – and NOT pay for our humanness or our data. So stoke once for free, then race to the bottom on payment for covering and teaching the 1% the ‘stoking’ AI doesn’t cover.
Ikon, I am not quite as absolute as you in decrying all this, as I, for example, can’t wait to hop into an AI-driven car / transport system. But the way humans are being treated is appalling, and will be seen in the future as a massive externality ‘we’ never accounted for – and just writing that makes me want to shout “keep your data” until my throat hurts. And “AI is a threat to jobs, not work” needs a bumper sticker.
“Random pixels or made-up faces are worth nothing to these systems, but real faces take work and time to grow. Beyond their pure geometry, faces contain a wealth of information from makeup, facial hair, acne, skin tone, hairstyle, and expression. Having this data collected at the DMV only requires volunteering a few hours of a finite, irreplaceable, human life, but creating that data requires living a finite, irreplaceable, human life.
“When it appears as if AI is replacing human work, that is an illusion created by entities that are shifting work from legally recognized employment to on-demand gig work and to unacknowledged unpaid labor, putting AI where it is visible and humans where they are invisible. AI is a threat to jobs, not work, and it’s not the AI doing it but the startups and companies using it to enter a fuzzy legal space where there’s no precedent of enforcement under existing labor law.”
https://theartofresearch.org/ai-ubi-and-data/
The author of the above is Vi Hart, whose great math videos I’ve been showing my 11yo. Vi is “… currently doing research on AI, economics, and data as labor with the support of Microsoft”.
As an example of a good use of AI – though still one that a human will have to pay to use in future:
“new artificial-intelligence tool captures strategies used by top players of an internet-based videogame to design new RNA molecules. Rohan Koodli and colleagues at the Eterna massive open laboratory present the tool, called EternaBrain …”
https://www.sciencedaily.com/releases/2019/06/190627143329.htm
And one AI for which, it seems, the cosmos supplies the data:
“For the first time, astrophysicists have used artificial intelligence techniques to generate complex 3D simulations of the universe. The results are so fast, accurate and robust that even the creators aren’t sure how it all works. The Deep Density Displacement Model can accurately simulate how the cosmos would look if certain parameters were tweaked — such as the dark matter composition of the universe — even though the model never received training data where those parameters varied.”
https://www.sciencedaily.com/releases/2019/06/190626133800.htm
But if we don’t beat inequality, I fear digital apartheid will become dominant.
Finally, on learning for algorithms, Friston is your man!
“The Fristonian agent started slowly, actively exploring options—epistemically foraging, Friston would say—before quickly attaining humanlike performance.”
https://www.wired.com/story/karl-friston-free-energy-principle-artificial-intelligence/
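For a rough sense of what “epistemic foraging” means in practice, here is a toy sketch (my own illustration, not Friston’s free-energy formulation and not code from the Wired piece): the agent scores each option by its estimated payoff plus a bonus for how uncertain it still is about that option, so it explores first and settles into exploitation once the uncertainty has collapsed.

```python
import random

# Toy "epistemic foraging" agent (illustrative only, not Friston's formulation).
# Each option is scored by estimated reward plus an uncertainty bonus, so the
# agent actively explores before exploiting.

TRUE_PAYOFFS = [0.2, 0.5, 0.8]   # hidden from the agent
estimates = [0.5, 0.5, 0.5]      # the agent's current beliefs about each option
pulls = [0, 0, 0]                # how often each option has been tried

def score(i: int, curiosity: float = 1.0) -> float:
    uncertainty = 1.0 / (1 + pulls[i])   # crude stand-in for expected information gain
    return estimates[i] + curiosity * uncertainty

for step in range(200):
    choice = max(range(3), key=score)
    reward = 1.0 if random.random() < TRUE_PAYOFFS[choice] else 0.0
    pulls[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / pulls[choice]  # running mean

print(estimates, pulls)  # most of the later pulls should land on the best option
```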
And here is a fictional story of a potential AI future where learning is put to good use:
“Rounding Corrections” by Sandra Haynes
https://www.gizmodo.com.au/2018/01/read-the-into-the-black-contests-winning-story-set-in-a-future-where-economics-are-also-humane/
Because there was some discussion of the space program, and its opportunity cost, I thought you might be interested in this (or at least find some entertainment value in it):
http://www.thepaincomics.com/weekly110713.htm
Tim Kreider doesn’t use the term ‘opportunity cost’, but that is what he’s talking about, isn’t it?
Quick reply and thanks Ikon for the ref. to The Autocatalytic Sprawl of Pseudorational Mastery.
Here is the current paradigm and inherent problem stated succinctly:
“This turns the means into the end: rationality, or a rationally ordered society, becomes the goal rather than the most effective way of achieving something that has been decided on by, for example, the citizens of an autonomous society.”
Tail wagging the dog?
Section 4.3 … isn’t this also the definition of cancer? Or slime mould? Or a fire?
“Furthermore, the irrationalities of bureaucracy create the need for new bureaucracies. Like finance, bureaucratic regimes sprawl autocatalytically: they grow indefinitely by way of their own processing in a seemingly chaotic way.”
The last sentence of 6.2-C in “The Autocatalytic Sprawl of Pseudorational Mastery” is great, and sums up how easy it would be to run the world sensibly…
“find inspiration there [CERN] to organize the creation and maintenance of any other technical system (railway system, aircraft, etc.) in a relatively power-free way, since these will likely be technologically simpler. The picture here would be a world in which all the technological infrastructure was socially organized into autonomous collaborations.”
And we just have to get Hanson, Lambie and ScoMo et al. into CERN and to pool their power toward creation in a “relatively power-free way, since these will likely be technologically simpler”. Sigh.
As with long-lived musical bands, a parallel with CERN is that rights to the work and the money are owned by all, which makes them long-lived creative powerhouses. Compare that with our current capitalisation, bureaucracy and politics, reverting to more derivatives, more bureaucratic rules, nationalism, and splintering toward populists.
Run the world like CERN + U2!
U2 and the Red Hot Chili Peppers divide royalties equally no matter what each person’s relative contribution.
https://www.forbes.com/sites/ruthblatt/2014/02/03/six-surprising-things-that-u2-and-the-red-hot-chili-peppers-have-in-common-other-than-a-spotlight-at-the-super-bowl/#7bdad5fafc23
Also Ikon, your paper has prompted me to dig out and read “Present at the Creation: the Story of CERN and the Large Hadron Collider”, which has been languishing in my own “autocatalytic sprawl”. 😊
JD – opportunity cost of burning:
2579 – 3 = 2576 lives AND $’s:
Is the opportunity cost the full $432Bn, or the difference between Afghanistan and the moon: $432Bn – $145Bn = $287Bn?
I understand ‘opportunity’, but now I need a definition of ‘cost’, as cost is not usually taken to include ‘lives lost/saved’. Maybe a new rule: cost to life first, then the $ cost of the opportunity.
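Writing the two readings out explicitly (the dollar and casualty figures are just the ones quoted above, which I take to come from the linked comic, not independently checked):

```python
# Two readings of "opportunity cost", using the figures quoted above
# (taken from the comment / linked comic, not verified here).

afghanistan_cost_bn = 432   # $ billions
moon_cost_bn = 145          # $ billions

# Reading 1: the opportunity cost of the war is everything else $432Bn could have bought.
gross_opportunity_cost_bn = afghanistan_cost_bn          # 432

# Reading 2: comparing the two projects directly, the extra spend is the difference.
net_difference_bn = afghanistan_cost_bn - moon_cost_bn   # 287

# The lives comparison quoted above, on the same "difference" reading.
lives_difference = 2579 - 3                              # 2576
```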
I like the linking arrow at top labeled “Whoops!”.