Algorithms

This is an extract from my recent review article in Inside Story, focusing on Ellen Broad’s Made by Humans.

For the last thousand years or so, an algorithm (derived from the name of the Persian mathematician al-Khwarizmi) has had a pretty clear meaning — namely, it is a well-defined formal procedure for deriving a verifiable solution to a mathematical problem. The standard example, Euclid’s algorithm for finding the greatest common divisor of two numbers, goes back to 300 BCE. There are algorithms for sorting lists, for maximising the value of a function, and so on.
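
For concreteness, here is a minimal sketch of Euclid’s algorithm (in Python; the language choice is mine): a finite, mechanical procedure whose answer can be checked independently.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: greatest common divisor by repeated division with remainder."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21, and indeed 1071 = 51 * 21 and 462 = 22 * 21
```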

As their long history indicates, algorithms can be applied by humans. But humans can only handle algorithmic processes up to a certain scale. The invention of computers made those limits irrelevant; indeed, the mechanical nature of the work made executing algorithms an ideal task for computers. On the other hand, the hope of many early AI researchers that computers would be able to develop and improve their own algorithms has so far proved almost entirely illusory.

Why, then, are we suddenly hearing so much about “AI algorithms”? The answer is that the meaning of the term “algorithm” has changed.

A typical example, says Broad, is the use of an “algorithm” to predict the chance that someone convicted of a crime will reoffend, drawing on data about their characteristics and those of the previous crime. The “algorithm” turns out to over-predict reoffending by blacks relative to whites.

Social scientists have been working on problems like these for decades, with varying degrees of success. Until very recently, though, predictive systems of this kind would have been called “models.” The archetypal examples — the first econometric models used in Keynesian macroeconomics in the 1960s, and “global systems” models like that of the Club of Rome in the 1970s — illustrate many of the pitfalls.

A vast body of statistical work has developed around models like these, probing the validity or otherwise of the predictions they yield, and a great many sources of error have been found. Model estimation can go wrong because causal relationships are misspecified (as every budding statistician learns, correlation does not imply causation), because crucial variables are omitted, or because models are “over-fitted” to a limited set of data.
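
To illustrate the last of those pitfalls, here is a toy sketch (using numpy, with invented data): ten noisy points drawn from a straight line are fitted with both a two-parameter line and a ten-parameter polynomial. The over-fitted polynomial reproduces the sample exactly but predicts poorly just beyond it.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(0, 0.1, 10)      # true relationship: a simple line plus noise

line = np.polyfit(x, y, deg=1)          # well-specified model: 2 parameters
wiggly = np.polyfit(x, y, deg=9)        # over-fitted model: 10 parameters, interpolates the sample exactly

x_new = 1.2                             # a point just beyond the observed data
print("linear model predicts:      ", np.polyval(line, x_new))    # close to the true value 2.4
print("over-fitted model predicts: ", np.polyval(wiggly, x_new))  # typically far from it
```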

Broad’s book suggests that the developers of AI “algorithms” have made all of these errors anew. Asthmatic patients are classified as being at low risk for pneumonia when in fact their good outcomes on that measure are due to more intensive treatment. Models that are supposed to predict sexual orientation from a photograph work by finding non-causative correlations, such as the angle from which the shot is taken. Designers fail to consider elementary distinctions, such as those between “false positives” and “false negatives.” As with autonomous weapons, moral choices are made in the design and use of computer models. The more these choices are hidden behind a veneer of objectivity, the more likely they are to reinforce existing social structures and inequalities.
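
On the false-positive/false-negative distinction, a minimal sketch with invented numbers: the same set of predictions can look respectable on overall accuracy while distributing its errors very unevenly between the two types of mistake.

```python
import numpy as np

actual    = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], dtype=bool)  # 1 = did reoffend (invented data)
predicted = np.array([1, 0, 0, 1, 1, 1, 0, 0, 0, 0], dtype=bool)  # 1 = flagged as high risk

false_positives = np.sum(~actual & predicted)   # flagged, but did not reoffend
false_negatives = np.sum(actual & ~predicted)   # not flagged, but did reoffend

print(f"accuracy:            {np.mean(actual == predicted):.0%}")       # 50%
print(f"false positive rate: {false_positives / np.sum(~actual):.0%}")  # 3 of 7, about 43%
print(f"false negative rate: {false_negatives / np.sum(actual):.0%}")   # 2 of 3, about 67%
```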

The superstitious reverence with which computer “models” were regarded when they first appeared has been replaced by (sometimes excessive) scepticism. Practitioners now understand that models provide a useful way of clarifying our assumptions and deriving their implications, but not a guaranteed path to truth. These lessons will need to be relearned as we deal with AI.

Broad makes a compelling case that AI techniques can obscure human agency but not replace it. Decisions nominally made by AI algorithms inevitably reflect the choices made by their designers. Whether those choices are the result of careful reflection, or of unthinking prejudice, is up to us.

21 thoughts on “Algorithms”

  1. Arab mathematician, al-Khwarizmi

    He was Persian. Famous medieval Muslim scholars were generally Persian but wrote in Arabic.

  2. “Model estimation can go wrong because causal relationships are misspecified … because crucial variables are omitted, or because models are “over-fitted” to a limited set of data.”

    Aren’t the last two just subsets of the first? Teasing out what mattered causally in the past is the essence of the problem of predicting the future, whether by formal or informal modelling of the world (true even with Hume’s famous example of predicting the sun will rise tomorrow).

    It seems to me that, based on your review, Broad in criticising the AI researchers for their epistemological naivety is not really offering an alternative. Predicting based on “human agency” is just another form of informal modelling that refuses to engage with the (fiendishly difficult) epistemological, and therefore methodological, questions. Or am I doing Broad an injustice?

  3. @ M Thanks, I hadn’t realised that!

    @DD On your first point, I had in mind the more specific problems of endogeneity and reverse causation.

    On the second, I think you are doing an injustice to Broad, and I don’t think it’s true that informal modelling (necessarily) refuses to engage with the difficult questions. Rather, there’s a massive evasion on the part of people who say “the computer says X” as if that’s a good reason for believing X.

  4. A completely different, physical reason that computer model predictions can fail is “the butterfly effect”. This was discovered after detailed mathematical analysis of weather forecasts that were, from time to time, wildly inaccurate for no other apparent reason. At times the future is susceptible to very small changes in initial conditions. A boulder finely balanced on a narrow ridge above two opposite valleys might easily end up in either one; a boulder balanced 5 metres below that ridge will only end up in one place if disturbed. (A small sketch at the end of this comment illustrates the point.)

    On a brighter note, it’s perhaps surprising that productivity hasn’t increased faster given the advances in low-cost computing power and instant world-wide communication between researchers. Business-owners are seemingly unable to absorb innovations at the pace they are being created.
    A simple example where advances have helped: instruction via mobile phone by an electrician inside a home installing a new smoke detector, giving advice to an apprentice in the ceiling cavity above on how to identify the cables to which he was to wire the detector.
    Small robotic devices to detect and fix faults inside water mains are both feasible and economic, while water main failures are at present treated as “unavoidable” and are costly.
    Accidents on building sites are serious hazards to safety and delay progress even if only physical damage results. Automated devices to oversee potential hazards and avoid risk are far cheaper and easier to construct than ever before.
    Advanced technology can be developed using software tools that are now available at no cost. For example: https://www.arduino.cc/en/Main/Software
    and the low-cost microprocessors to implement advanced robotic systems cost only a few dollars. For example https://www.ebay.com.au/itm/WeMos-D1-R2-Analog-WiFi-D1-R2-ESP-12F-ESP8266-32-Mb-Flash-For-Arduino-Uno-R3/282694006366
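
    Returning to the butterfly-effect point above, here is a toy sketch of sensitivity to initial conditions, using the logistic map rather than any weather model (my example, not part of the original comment): two starting values that differ by one part in a million bear no resemblance to each other after fifty steps.

```python
def logistic_map(x0: float, steps: int, r: float = 3.9) -> float:
    """Iterate x -> r * x * (1 - x), a standard toy example of chaotic dynamics."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print(logistic_map(0.500000, 50))
print(logistic_map(0.500001, 50))  # a one-in-a-million perturbation, yet a very different result
```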

  5. Right after reading your article I noticed the word algorithm being used in a way that probably exceeds its manufacturer’s guidelines. Some engineers looked at units of electronics with a sample size of 45 and then came up with a model of how they deteriorated. But instead of calling it a model they called it an algorithm. Why did they call it that? Probably because maths class taught us that an algorithm is something that is supposed to be correct.

  6. Computer models embody a state of the world as it is (or as it is projected to be), almost always around some core of “normality” with limited exceptions/variations. Since large ones are very costly to build (not least because around half the builds fail), and equally costly to modify, the tendency is to try to force the world to fit the model, rather than the other way round. But the world is always changing, and the exceptions and variations are the stuff of change. And the model is usually a very imperfect fit anyway.

    Take name data – people change names (marriage), have “foreign” names which do not fit the first name/surname pattern, are transliterated differently, often mis-spelt anyway (particularly if the speller uses another orthography – I counted 16 different ways to spell “Ford” in international shipping data) and so on. Simply, the world is a messy place, and not getting less messy.

    What computers gain in speed, cost and transferability comes at a cost in flexibility and allowance for variation. In economic terms, they have enormous returns to scale, but limited adaptability. There are good reasons why the COBOL behemoths of the 1970s soldier on.

  7. All of these issues (and many more) are well known to skilled Machine Learning practitioners. And of course, many people who do ML don’t understand the issues. https://xkcd.com/552/

    Also, these issues aren’t limited to machines. Humans make “gut decisions” or “intuitive decisions” all the time and they’re often based on inaccurate models — e.g., judgments based on a person’s skin colour or accent. “Intuition” is little more than a model and algorithm, trained over a lifetime, and we still don’t understand very well how these are created or how accurate they are (e.g., “Thinking, Fast and Slow” by Daniel Kahneman).

    (As a nit — the “algorithms” are the things that process the model, both in creating the model from data and in generating predictions from the model. But the formulation of the model and the algorithms are so intertwined that it’s not very useful to distinguish them. e.g. https://en.wikipedia.org/wiki/Support_vector_machine https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm)
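
    A minimal hand-rolled sketch of that division of labour, using 1-nearest-neighbour classification (my toy example, not the commenter’s): the “model” is simply the stored training data, while the “algorithms” are the procedure that builds it and the procedure that queries it.

```python
import numpy as np

def fit(points: np.ndarray, labels: np.ndarray) -> dict:
    """'Training' algorithm: for 1-nearest-neighbour the model is just the stored data."""
    return {"points": points, "labels": labels}

def predict(model: dict, query: np.ndarray):
    """Prediction algorithm: return the label of the closest stored point."""
    distances = np.linalg.norm(model["points"] - query, axis=1)
    return model["labels"][np.argmin(distances)]

# Invented toy data: two small clusters labelled "a" and "b"
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array(["a", "a", "b", "b"])

model = fit(X, y)
print(predict(model, np.array([0.2, 0.1])))  # "a"
print(predict(model, np.array([0.8, 0.9])))  # "b"
```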

  8. Models (and the algorithms and heuristics they are built out of) can get things egregiously wrong if the ontology they are based on is wrong… or at least is not anywhere near right. Often, it can be very difficult to ascertain the correct ontological objects in the first place. Take economics for example. At base, economics is a philosophical subject.

    “Economists are essentially philosophers with full employment.” – Samuel Hammond.

    Hammond expands:

    – “Theoretical Statisticians are Epistemologists;
    – Welfare Economists are Applied Ethicists;
    – Social Choice scholars are Political Philosophers; and
    – Macroeconomists are Metaphysicians.”

    “Jokes aside, my glaring omission, of course, was ontology.” – Samuel Hammond.

    Take the relatively standard set of orthodox economic objects (or processes), namely resources, commodities, goods, services, monies, values, prices, economic agents, preferences and markets. This list already contains a set of unexamined ontological assumptions. The list contains real categories (physically real), and formal categories (notional items with a quantitative basis like money or a qualitative basis like value), plus amalgams of the two. I could expand on these examples of ontological confusion. The overarching unexamined question is whether these different ontological categories can be related by derived laws at all. Here, I am using the word “laws” in the sense implied in the phrase “the laws of physics”. We can note that physics only attempts to derive laws for one ontological category: everything physical in the physical relational system of the cosmos.

    I still hope that J.Q. will address ontology for economics at some stage. The more wonkish the post the better.

  9. Part of what’s going on here at the moment is that machine learning techniques used to be the preserve of experts in machine learning. Now, any idiot (including me) can download a machine learning library, throw some training data at it, and start doing classifications, without really understanding the limitations of the system.

    Throw in management reading something in the AFR about the wonders of Big Data, and you have a recipe for disaster.

  10. The “algorithm” turns out to over-predict reoffending by blacks relative to whites.

    This statement seems an oversimplification and is arguably factually incorrect, I think. Didn’t the algorithm (statistically) correctly predict the chance of reoffending? The problem is that it was racist: it would estimate a higher chance of reoffending for African Americans than for whites with otherwise identical profiles. This clearly violates the liberal principle that this kind of decision should not be based on skin colour. It’s a tough problem. If ethnicity was excluded from the training data, it is likely that the calculation would find a very good proxy for ethnicity without using it explicitly. Using humans to make this kind of decision has been found to be equally fraught. The US justice system often appears to be operating on principles similar to those of a sausage factory.
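
    A minimal synthetic sketch of that proxy problem (my example; the variable names and numbers are invented, and it assumes numpy and scikit-learn are available): even with the protected attribute dropped from the features, a correlated stand-in such as a postcode score lets the model reproduce the group difference.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                     # protected attribute, deliberately excluded below
postcode = group + rng.normal(0, 0.3, n)          # hypothetical proxy, strongly correlated with group
priors = rng.poisson(1.5, n)                      # prior offences (invented)
p = 1 / (1 + np.exp(-(0.8 * group + 0.5 * priors - 1.5)))
reoffend = rng.random(n) < p                      # outcome generated with a group effect, purely for illustration

X = np.column_stack([postcode, priors])           # note: 'group' itself is not a feature
risk = LogisticRegression().fit(X, reoffend).predict_proba(X)[:, 1]

print("mean predicted risk, group 0:", round(risk[group == 0].mean(), 3))
print("mean predicted risk, group 1:", round(risk[group == 1].mean(), 3))
# The gap persists because 'postcode' stands in for the excluded attribute.
```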

  11. @Robert Merkel

    But was the machine learning actually better (or less biased) back when the “experts” you describe were its gatekeepers than it is in the hands of today’s “idiots”? Or was there just so much less of it that nobody really cared?

  12. @Time Macknay – probably ML has always been biased, but the problem is better understood today. However, people who claim they have some ML “secret sauce” (such as predicting recidivism) should be treated very sceptically … their “trade secrets” are likely to not withstand outside scrutiny very well (statistics is *hard*). The number of people who can do *good* ML is quite small (and they are all well aware of the problems, and working hard to deal with those) and most are employed at companies such as Google, Apple, Amazon, Facebook, or at universities. (I might make a similar comment about security being very hard and therefore people who claim to have secure voting machines (for example) should also be treated very sceptically.) (In fact, as a visit to almost any website will show, writing good software systems is hard, and ML is just one example.)

  13. OK, no, I haven’t read the book. I’ll look for a good short summary that doesn’t use anecdotes. I would have expected an ‘algorithm’ that is trained on real data to beat intuition, and to be less likely to be affected by prejudice. Bad data, crap design, or multiple failures?

  14. @jimb2 Try reading the whole review, at least. There’s no guarantee that a statistical model (or “algorithm”) will do better than careful informal reasoning.

  15. I would expect that careful informed reasoning would get a better (nuanced) result but a good algorithm would beat stressed-out sausage factory human justice. When people don’t have time and their cortisol levels are up they will come up with stock judgements from the “lower brain.” Releasing a potential recidivist is inherently stressful.

  16. “a good algorithm”

    An algorithm that can differentiate between “goodness” vs the self-interest of its own creators might truly approach something resembling AI.

    But I’m not sure how training a machine to make sentencing decisions makes law and order less of a sausage factory.

  17. I’m not sure that there is such a thing as goodness; to me it’s more about reliably meeting the policy. If the policy is right, it is optimising costs and risks. It’s a similar question to self-driving cars: they can’t be perfect, but they don’t have to be that good to be an improvement on the worst drivers, or even the median driver, across the range of circumstances. I don’t know about sentencing, but I’ve read a US study of parole decisions that showed that when parole boards are fatigued and stressed (later in the day) they tend to fall back on the keep-’em-locked-up position. This is pretty clearly a worse result, irrespective of the quality of the actual parole policy, since it adds a random incarceration element. I have a feeling there are similar studies of sentencing but I haven’t read them.

    (Reliance on the “lower brain” under high cortisol is a common finding, which I think needs to be acknowledged and countered in policy and design.)
