58 thoughts on “Sandpit”

  1. Tim Macknay @ #42 said:

    You really hate being disagreed with, don’t you Jack.

    No, not if the disagreeable person tells me something I don’t know or corrects an error.

    What I really hate is the dissemination of falsities and fallacies by people trying to pass this off as expertise, and then blowing lots of smoke when they get called on their mistakes. For quite some time I was friendly with, and corresponded with, John McCarthy, generally regarded as the founder of AI. I also do occasional (non-programming) work for an AI engineer. Also this year I presented a paper on the economics of AI to a tech conference in Beijing. So I have a modicum of inside knowledge on the subject, and I plan to use it to perform some border control on interlopers.

    What I have argued is that the military-industrial complex sponsored the development of AI to give machine intelligence assistance to high-performance weapons systems (eg fighter planes) that are tasked to operate in hostile environments. This is a true statement and Tim Macknay’s endless digressions, diversions and disingenuities have done absolutely nothing to make a dent in it. But take heart, as Tom Hagen said to Michael Corleone, this hit is just business, so don’t take it personal.

    Tim Macknay said:

    This is true of modern aircraft of course, but it doesn’t change the fact that an HUD is not, and does not require AI, notwithstanding that contemporary avionics employ both technologies. My car has both HUD and AI, but it only uses AI tech for the GPS navigation system and the cruise control – the HUD is just a display.

    I already stated that “the HUD is a display”, so I don’t know why you are wasting everyone’s time by going over old ground or falsely implying that I stated that HUD is equivalent to AI.

    This does not change my fundamental point: that the rationale for installing HUD in fighter planes is to give the pilot a GUI – that’s a user-friendly Graphical User Interface – with which to make mission-critical decisions in real time, whilst the AI systems – literally under the hood – do all the grunt work of processing multiple and varying streams of data in real time and rapidly making routine decisions in regulating the flight controls. Employing high-speed machine intelligence is enough to fly a high-performance fighter, but we insist on employing lower-speed human consciousness when killing people.

    Tim Macknay said:

    Also, computers don’t require AI to “process data … at speeds much faster than human mental reaction time” – any old bog-standard digital computing can do that.

    You seem hopelessly confused about both the imperatives of AI-run platforms operating in hostile environments and the nature of AI itself. It’s true that computers “don’t require AI to process data at high speeds”. But they do require AI to make real-time decisions at high speeds, and that is the basic point of having AI in fighters and then fitting a HUD to give the pilot an easier GUI for making the critical decisions.

    Avionic AI is basically expert-system software, usually coupled with a high-end graphics processor. To fly a high-performance fighter plane you require not just high-speed data processing, but also high-speed decision making. The fastest computers in the world – Cray supercomputers? – can perform more FLOPS than you’ve had hot dinners, which is particularly useful in analyzing complex information such as meteorological data. But, as they stand, they would be as useless as tits on a bull in flying a stealth fighter.

    It’s true that many “bog standard computers” can process data at high speed. But such computers do NOT have the specialised expert software to (a toy sketch follows this list):

    – integrate multiple real-world dynamic data streams in real time,
    – compose a comprehensive “world model” with alternative simulated decision-making paths, and then
    – make reliable high-speed autonomous decisions based on the agent’s putative utility function.
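
    To make that concrete, here is a toy sketch only of that sense–model–decide loop, in Python – every name, number and the utility function below are invented for illustration and bear no relation to any real avionics software:

    import random

    def sense(streams):
        # Integrate multiple dynamic data streams into one snapshot of the world.
        # Here each "stream" is just a callable returning a number; a real system
        # would be fusing radar, inertial and engine data.
        return {name: read() for name, read in streams.items()}

    def simulate(world, action):
        # Toy world model: predict the next state if this action were taken.
        predicted = dict(world)
        predicted["altitude"] = world["altitude"] + action["climb_rate"]
        predicted["speed"] = world["speed"] - abs(action["climb_rate"]) * 0.1
        return predicted

    def utility(world):
        # Putative utility function: hold a target altitude without bleeding speed.
        return -abs(world["altitude"] - 10000) - max(0, 300 - world["speed"])

    def decide(world, candidate_actions):
        # Evaluate each simulated decision path and pick the highest-utility action.
        return max(candidate_actions, key=lambda a: utility(simulate(world, a)))

    # One pass of the loop, with made-up sensor stand-ins.
    streams = {"altitude": lambda: 9500 + random.uniform(-50, 50),
               "speed": lambda: 320 + random.uniform(-5, 5)}
    actions = [{"climb_rate": r} for r in (-100, 0, 100, 200)]
    print(decide(sense(streams), actions))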

    The very expression “real time” is, I believe, one that evolved to describe the environment for which AI systems were developed.

    Tim Macknay said:

    Yes, as I said, its use in military aviation is relatively recent – Desert Storm took place in 1990-91, thirty four years after AI research began.

    So 25+ years ago, that is a human generation, is “relatively recent”. Gimme a break. The Gulf War Was. Not. Recent. We are talking IT cycles of development, which basically go in goldfish life-cycles of two to seven years. So the 1990 Gulf War is basically the Jurassic Age so far as computers are concerned. In any case, who cares how long after the initial AI research began the fruits of AI arrived? What counts is that AI is delivering the goods and has been doing so for much of the past generation, which is not exactly recent.

    Tim Macknay said:

    Also, the F15 Eagle entered service in 1976, around the same time that the first “AI winter” began, due to the general lack of progress in the field at the time. It did not have a head-up display. Were you referring to the later iteration known as the F15E?

    I was referring to “F15s”, which implies at least one F15. When I tap in “F15 + Heads Up Display” I get 55,000,000 hits. All modern fighters have HUD. And the reason they have HUD is to relay mission-critical data whilst the AI system performs routine flight operations. Stop blowing smoke and picking nits.

    Tim Macknay said:

    It depends what you regard as the “promises”. It’s true that AI technology is in far more useful products these days than it was a couple of decades ago, but there is some definitional sleight-of-hand at work here. Also most of the areas in which AI is now finding success were supposed to arrive decades earlier, according to the predictions of leading lights in the field. Contemporary AI applications are nothing like visions of the early AI thinkers, and their grandiose goals (and those of contemporaries such as Kurzweil) are as far away as ever.

    So the founders of AI, although immense benefactors of mankind, were guilty of the sin of over-optimism as regards developmental time-horizons and product range. Big deal, so sue me. And them. At least, unlike sideline critics, they actually achieved something in their own lifetimes, perhaps by aiming high. It should not be necessary to point out that the future still has a long time to run and that AI product development is now increasing its range and momentum of roll-out.

    As regards “grandiose goals” I assume you are referring to the early AI prediction that they would produce a machine with human-level general intelligence (so-called “strong AI”, a HAL 9000 if you like) within a generation. It’s true that this momentous event has not yet gone through the formality of actually occurring – yet.

    But we have many specialised expert AI systems (so-called “weak AI”) that have achieved and exceeded human level (natural) intelligence in certain fields, eg AI interceptors, traders and maybe even translators. So really your criticism boils down to the fact that we now only have weak “special” AI, not strong “general” AI. All I can say to this is that when you are referring to the progress of the culminating intellectual achievement of mankind it may pay to be patient.

    But don’t expect an endless wait. The relentless progress of Moore’s Law actually does most of the heavy AI lifting. And there are astounding software developments coming out of the pipeline. I am referring to Hutter’s algorithm, commonly known as AIXI, which is considered the gold standard in contemporary AI science. This is convincing theoretical proof of “strong AI”. Stripped-down versions of this formula are already being integrated into practical AI systems with some pretty impressive results.
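
    For the record, and roughly as it is usually written (my own transcription, so treat it as a sketch rather than gospel), the AIXI agent picks each action to maximise expected future reward, summed over every program q consistent with its interaction history and weighted by two to the minus the program’s length:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \left( r_k + \cdots + r_m \right)
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

    Here the a’s are actions, the o’s and r’s observations and rewards, U a universal Turing machine, ℓ(q) the length of program q, and m the horizon. The sum over all programs is also why the full thing is incomputable, and why only the stripped-down approximations get built.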

    And the project of producing a human-equivalent technological brain is progressing along several developmental pathways. Markram’s “Blue Brain Project”, which aims to virtually simulate a brain, is out there getting some serious funding. There is also the Fast Analog Computing with Emergent Transient States project, which aims at a direct hardware instantiation of the brain.

    So we have a theoretical proof of strong AI, some promising applied experimental projects, plus lots of new specialised AI apps. Something is definitely afoot.

    Tim Macknay said:

    You are wrong to claim I have contemptuously dismissed them. Clearly those men (and AFAIK they were all men) had a huge influence on the development of civilisation, for both good and bad. I have a great respect for their intelligence and dynamism. But high intelligence and creative capability, and leadership of a major corporation, are both perfectly compatible with holding foolish and misguided beliefs. In fact, judging by the performance of the senior executives in many major corporations, such a combination is not uncommon.

    Well thanks for reminding us of the fallen state of man, including geeks. But this does not change the factual basis of the two points I have been making: AI was initially boosted by the M-I complex for use in areas such as fighter planes and rocket flight. And nowadays AI is starting to fulfill some of its early promise, being integrated into more and more higher-end products and processes, although it often gets disguised by terms such as “smart” or “intelligent” systems. Perhaps because customers get uneasy when something reminds them of HAL or Frankenstein’s monster.

    Tim Macknay said:

    If UAVs still require pilots, then AI isn’t living up to its promise.

    Again, you miss the point of AI in avionics. The AI systems perform most of the routine flying operations, particularly when flying on-station or en route in difficult environments when the operator is a long way off. The UAV pilots make mission-critical decisions, such as when to unleash a Hellfire missile on some unsuspecting insurgent.

    Tim Macknay said:

    And however brilliant the technology may be, I think any sane person ought to have misgivings about the rapid emergence of military drones.

    Sure, that’s reasonable, especially if insurgents or “rogue states” get hold of the technology and start flying drones filled with packs of Semtex into your local McDonald’s. But the answer to that problem is to make sure the Good Guys keep a step ahead of the Bad Guys. I don’t regard the USAF as Bad Guys, particularly given what the insurgents do to the Malalas of this world.

    Tim Macknay said:

    I am contemptuous of the grandiose fantasies of some of the leading lights in the IT industry. Not only do I consider those fantasies implausible, but in my view, the realisation of such fantasies (which is not impossible of course, despite being implausible) is incompatible with the existence of a sane and just world.

    So firstly AI developers were wrong to make bold predictions about the power of AI because these predictions have not all eventuated in quick time. And now AI developers are wrong to make bold predictions about the power of AI because even if they did eventuate they would not be “compatible…with the existence of a sane and just world”. It seems there is no way for AI developers to keep you happy with their intellectual performance, either factually or normatively.

    As it happens I do share some of your reservations about the potential for harm posed by the threat of runaway AI development – the so-called Singularity. Which is a major reason I have been obsessively following the industry on and off for most of my post-adolescent life.

    I am largely convinced that human-scale general AI will come and will occur within the lifetimes of most people on this planet. So we should be ready for it, both the down-sides (mass anomie? Dr Evil?) and the up-sides ((healthy) immortality? (wealthy) abundance? (wise) omniscience?). But first we have to get our facts straight, and not indulge in either hyperbolic boosting or nihilistic knocking.

    Tim Macknay said:

    As for Kurzweil, virtually all his predictions have been unrealistically optimistic. Virtually none of his predictions for the first decade of the 21st century have yet come to pass (although some are now very close). In that regard, he is fairly typical of AI mavens. He may well be able to progress Google’s natural language project (or he may not – that is one area where AI has conspicuous failed to live up to its promise so far – time will tell). Kurzweil has a track record in delivering product, but that has no bearing on the veracity of his wilder prognostications.

    I see that most of your contempt for AI developers is reserved for the later ones such as Ray Kurzweil, rather than the earlier ones such as Herbert Simon or Marvin Minsky. Ray does seem to get under a lot of people’s skins.

    I should think that “Kurzweil’s track record” is one of the first things to consider when evaluating his futurological veracity: has he delivered the goods in the past, both professionally and politically? On this reckoning Kurzweil scores well.

    Let’s review his 1986 predictions out to the end of the century: the mass usage of intelligent machines? Check. The Soviet Union’s collapse? Check. The mass uptake of the internet? Check. The internet would be ported by wireless rather than wired connections? Check. A supercomputer would beat a chess master? Check. Natural language text-to-speech converters? Check. In 2005 Kurzweil predicted that a supercomputer would be able to simulate the folding and unfolding of a protein within five years. This was accomplished in 2010, bang on time. I would not call these predictions “unreasonably optimistic”.

    And this is not counting his predictions regarding his own professional achievements: Kurzweil was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first commercial text-to-speech synthesizer, the first music synthesizer (the Kurzweil K250) capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition.

    In 2010 Kurzweil published an essay “How my predictions are faring”. He claims “89 out of 108 predictions he made were entirely correct by the end of 2009”. Even if he was gilding the lily a bit, we have to give him some credit for getting some major stuff right and for being accountable.

    Tim Macknay said:

    And Jack, do you seriously believe his prediction that he will be able to upload his personality into a computer and become immortal within the next twenty five years? Seriously?

    In my experience most of the gifted technologist-futurologists typically over-estimate the speed of technological progress by a factor of two to three. So no, I don’t believe that people will be able to “upload personality into a computer and become immortal within the next twenty five years”, that is by the year 2040. But I would not be bold about negating this prediction if the time horizon was stretched out to fifty or seventy-five years, say by the year 2100. If I could somehow stay alive till then I would not be hugely surprised to see mind uploading being performed. I certainly hope so.

    Tim Macknay said:

    For a useful counterpoint to the technological tunnel vision of Kurzweil and his fellow travellers, try reading “One half of a manifesto” by Jaron Lanier.

    Lanier is a smart guy and he means well. But I found most of his anti-manifesto incoherent. Okay, he has a point: intelligence is more than just hardware speed, and software quality progresses much more slowly than hardware processing quantities. Thus what Intel gives, Microsoft takes away, and then some.

    I. Get. That.

    But you can go quite a distance quickly by putting more cylinders on the block. And software is getting more elegant and less bloated, no doubt due to competition from Apple, Open Source and perhaps biological analogs.

    It is striking that the brain uses so little energy and requires so few lines of genomic code to design, and yet is capable of such computational wonders. So we obviously need to streamline computer “brains” or perhaps re-think part of the brain simulation paradigm.

    I don’t have a problem with various people such as Lanier, Rennie and even the execrable P Z Myers having a go at Kurzweil. He has over-reached on occasions and has also made some howlers with his silly “Law of Accelerating Returns” graph. I would merely point out that Kurzweil’s overall technological record, both as predictor and producer, is formidable. So watch this space.

    Anyway, I’ve been invited to a swell NYE party, so I must fly. Toodle-loo.

  2. @Neil Hanrahan
    Jack was told to move his response from the other thread.

    @jack strocchi
    Jack, from your response it appears that you are in vehement agreement with what I said on the other thread. What all the pomposity was for, I really don’t know.

  3. What is AI? Can we define it as goal-seeking behaviour in a given “data-verse” according to a set of rules? A chess program goal-seeks using algorithms and heuristics with legal move generators, position evaluations and tree-searching routines (a toy sketch of these ingredients follows). Chess is essentially a mathematical matrix problem. If a program to play chess is AI then a program in a calculator to find square roots is AI too.
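
    For illustration only, here is a toy sketch of exactly those three ingredients – a legal move generator, a position evaluation and a tree search – with every function invented for the example and making no pretence of being a real chess engine:

    def legal_moves(position):
        # Legal move generator (stub): a real chess program would enumerate
        # piece moves; here a "position" is just a number and a move shifts it.
        return [-2, -1, 1, 2]

    def apply_move(position, move):
        return position + move

    def evaluate(position):
        # Position evaluation (stub), scored for the side to move. Real engines
        # would count material, mobility, king safety and so on.
        return -abs(position)  # positions near zero are "best"

    def negamax(position, depth):
        # Tree-searching routine: look ahead `depth` plies, assuming the
        # opponent also picks whatever is best for them.
        if depth == 0:
            return evaluate(position), None
        best_score, best_move = float("-inf"), None
        for move in legal_moves(position):
            score, _ = negamax(apply_move(position, move), depth - 1)
            score = -score  # what is good for the opponent is bad for us
            if score > best_score:
                best_score, best_move = score, move
        return best_score, best_move

    print(negamax(5, 3))  # -> (best achievable score, best first move)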

    If we set a higher benchmark for AI, we would want it to behave and goal-seek more like humans. Some define an intelligent agent as a system that perceives its environment and takes actions that maximize its chances of success. A genuine intelligent agent requires not only intelligent calculation and goal-seeking routines but also perception apparatus (to perceive and acquire data) and movement (servo) apparatus. It needs ways to get data from the environment and ways to move in and manipulate the environment. Furthermore, this needs to be the real environment and not merely a virtual environment.

    I am making the argument that real intelligence cannot be separated from the need for a real body, be it an organic body or an automaton body (at this point of the argument). That is, the operative intelligence (operative agent) cannot be separated from the “afferent” and the “efferent” (taking data from the environment and affecting or manipulating the environment in turn).

    Indeed, referring to the efferent specifically and feeling and emotion more generally, we might seriously locate the impulse for goal-seeking in feeling. Without the impulse for goal seeking and the impulses to seek particular goals, intelligence would never be activated and never have a goal. The initiator and generator of goals and goal-seeking behaviour is feeling. To achieve general true AI would require body and feeling in the real physical world. Given that only certain organic structures seem to provide feeling or sentience then true AI machines are probably going to have to be organic. How this would happen or if it could, I don’t know.

  4. Another way I could say the above would be this. Only when there is a body with needs is there a need for intelligence. The need for intelligence ranks with, not above, other adaptive imperatives.

    As for the things we currently call AI, well it’s a matter of definitional taste but I would call them complex, special purpose calculators.

  5. The notion of intelligence is almost synonymous with the notion of ‘human’ itself. What we are good at, what we are the best at, will be the goal of AI – the more human, the more intelligent. (That’s Alan Turing’s test, I guess.) The thing we are best at compared to the other animals is manipulation of sign systems.
    Brains don’t need to be very big to do good things. Some ants are now known to return to the nest and teach, or explain, to each other the way to a good site for a new nest. These ants do not simply follow a scent; they use landmarks to remember the route. Our brains are smaller than Neanderthal ones. Also some dogs are 5 times bigger than other dogs but the bigger ones don’t always seem smarter than the smaller ones.

  6. Tim Macknay @#3 said:

    Jack, from your response it appears that you are in vehement agreement with what I said on the other thread. What all the pomposity was for, I really don’t know

    Let’s get the sequence straight: in your own words you “agree with [my] point that many major technological advances of the 20th century were driven by government-funded military research”. Of course every schoolboy knows that, Pr Q being a little behind the academic curve on this issue.

    On the subsequent debate over AI, you have achieved “vehement agreement” with me by the expedient method of having an each way bet on each point of contention. Thereby allowing yourself a pat on the back when you hit the truth and avoiding a kick in the pants when you make “elementary howlers”. This contemptible “bait and switch” ruse may fool some but not me.

    I will concede one point to you: I was wrong in my subsequent point about HUD. The introduction of HUD preceded the introduction of AI in military avionics. It seems that the plethora of straightforward analog controls required to fly supersonic planes already exceeded human cognition and decision-making speeds. So HUD was devised to give the pilot a way of getting his nose out of the dizzying array of the instrument panel and into the air where all the action was.

    Still, whilst HUD does not require AI, my point is that any pilot who needs AI to manage the avionics will definitely need a HUD to make the command & control decisions. You say that HUD is not AI (which I never denied) whilst acknowledging that “contemporary avionics employ both technologies”, which implies that these technologies are complementary.

    You defend your use of the term “recent” successes of AI by reference to its use in Gulf War I. This you can only do by redefining “recent” as a long time (34 years) after its invention, as if the point of origin defines the degree of recentness. And ignoring the fact that the Gulf War was 23 years ago, an eternity in IT cycles.

    You insist that the fact that “bog-standard computers” have fast data processing speeds implies that they don’t need AI to produce rapid results. Thus side-stepping the critical issue, which is that avionic computers are there to make decisions, not just perform cognition.

    You try to have it both ways with the leading lights of AI, saying the founders had “grandiose goals” (which is true), but that you have “great respect for their intelligence and dynamism”.

    You can’t make up your mind whether the predictions of strong AI have “conspicuous[ly] failed to live up to its promise”, or maybe are “impossible” – as in the case of mind-uploading – or “implausible” or simply undesirable “in a sane and just world”. So you veer from one to the other, just to cover all bases. Make up your mind, stick to one story at least so that the interrogators don’t waste everyone’s time trying to narrow down your wiggle room.

    You reserve your greatest vitriol for Kurzweil, despite the fact that he has actually realised some of AI’s goals, specifically OCR technology. You acknowledge that Kurzweil has a “good track record in delivering product” as if that has no bearing on the “veracity of his…prognostications”. And despite the fact that a fair few of his technology predictions made in the late eighties did come through in the nineties and noughties.

    Whether or not I am “pompous” is for the readers, not the contenders, to judge. I recall making at least one joke in my comment, which is at least one more than you.

  7. I’ve been reading a lot about GM foods lately. A good starting point for anyone interested is Nate Johnson’s series in Grist. Johnson dispels many of the myths peddled by his own delusional GM phobic colleagues.

    An interesting fact that gets scant media attention is that opposition to gm crops has led to an increase in the use of mutagenesis, an older technique for developing new plant strains that involves radiation and which is considered much more likely to result in unintended consequences.

    Would it be fair to say that GM delusionism on the Left is almost as bad as climate change delusionism on the Right? Remember, anti-GM thugs are threatening scientists, ripping up paddocks and destroying research facilities, with one of the latest attacks being the destruction of a Golden Rice test plot in the Philippines.

  8. @jack strocchi
    OK, Jack. For the record, I’ll reiterate my original remark from the “Mixed Thinking about Markets” thread, which set off your tirade of bluster and insults. I said (with respect to AI):

    Heads up displays are not AI, Jack. AI originated as a fantasy of the boffins who developed digital computers, and its use in military aviation is relatively recent.
    Despite having a growing number of practical applications, AI is still highly influenced by the boffins’ fantasy perspective even today.

    That said, I agree with your point that many major technological advances of the 20th century were driven by government-funded military research.

    That remark was in response to the following comment of yours:

    And of course AI was originally designed to deal with the management of fighter planes (heads up display) which operate at speeds much greater than human action.

    You then accused me of not knowing what I was talking about on the subject of AI.
    Since then you have:

    – conceded that HUD is not AI and that AI was not “originally designed to deal with the management of fighter planes”; and
    – conceded that the AI research project originated decades before the advent of its use in avionics.

    In doing so, you’ve also huffed and puffed, accused me of making ‘elementary howlers’ and ‘disseminating falsities and fallacies’, and sneered and bloviated and insulted me for no reason.

    The funny thing is, it’s clear from your subsequent rantings that your original mistake (i.e. that AI was developed for military avionics) was unintentional, as you were apparently aware that AI was not originally designed for that purpose. All you had to do to clear that up was point out that you were aware that AI had a longer history than that, and clarify that what you meant was that military avionics was a significant successful application of it. The subsequent nonsense would have been completely unnecessary.

    But no, evidently even the mildest, most civil criticism sends you into a frenzy. So instead you raged, fumed, postured and threw in a few insults for good measure. All the while making it plain that, notwithstanding your claim to insider knowledge and having a couple of mates in the field, you are no more an expert on AI than I am.

    Apart from your unhinged reaction to my comment about HUD and AI, the main source of your considerable ire appears to be the fact that my attitude towards AI differs from your own. I am ambivalent about AI (particularly “strong” AI) whereas you are clearly an enthusiast.

    However, that is something on which reasonable people can disagree. The fact that you are unable to do so strongly suggests that something other than reason motivates your fascination with AI. Could the motivation be quasi-religious? There’s certainly a curious resemblance between the anger you apparently feel at my lack of reverence, and the anger often expressed by certain Wahabists when somebody publishes a cartoon mocking Mohammed.

    But the biggest mystery is not why you are so offended by my opinions of AI, but why you are so uncivil.

  9. Mel, does the anti-GM crowd have a significant influence at the political level though? I mean I can’t see the anti-GM crowd causing a significantly worse outcome on the global level over the long term.

    Also, there are quite a few risks associated with GM that may need to be dealt with through regulation (which may or may not be in place in the current time). It’s probably something that needs to be dealt with as part of a larger consideration of quality and security of food supply though.

  10. desipis:

    Mel, does the anti-GM crowd have a significant influence at the political level though?

    Yes, heaps – hence:

    * various third world countries banning gm crops and turning back shipments of emergency food relief because of gm concerns

    * moratoriums on most gm in most Australian states, although some of these have been lifted in recent years

    * the almost complete absence of gm food crops in the EU due to crazy regulations. Irish potato farmers, for example, would like to use a blight-free gm crop but are banned from doing so. As a result:

    Farmers typically spray 10-20 times against blight; organic farmers use the dangerous Bordeaux mix, which releases toxic copper into the environment.

    Yipeeee!

  11. mel, do we have sufficient understanding (both in a scientific and a regulatory sense) of the risks involved in permitting GM food crops to determine that the bans/moratoriums are a bad idea?

  12. Desipis,

    The overwhelming majority of scientists appear to think that current regulatory arrangements are sufficient for current gm technologies. Given that literally trillions of meals with gm products have now been safely consumed I see no cause for alarm.

    To put things in perspective, food allergies claim one life each year in Oz and have an economic cost of $9.4 billion per year yet we’re seeing a major push to ban kiwi fruit, peanuts, carrots, eggs, sesame seeds and the dozens of other natural food products that cause so much harm.

  13. @desipis

    A degree of circumspection on GM doesn’t seem to be unwarranted.

    From the Hindustan Times March 2012:

    India’s Bt cotton dream is going terribly wrong. For the first time, farmer suicides, including those in 2011-12, have been linked to the declining performance of the much hyped genetically modified (GM) variety adopted by 90% of the country’s cotton-growers since being allowed a decade ago.

    Policymakers have hailed Bt cotton as a success story but a January 9 internal advisory, a copy of which is with HT, sent out to cotton-growing states by the agriculture ministry presents a grim scenario.

    “Cotton farmers are in a deep crisis since shifting to Bt cotton. The spate of farmer suicides in 2011-12 has been particularly severe among Bt cotton farmers,” says the advisory.

    Bt cotton’s success, it appears, lasted merely five years. Since then, yields have been falling and pest attacks going up. India’s only GM crop has been genetically altered to destroy cotton-eating pests.

    For farmers, rising costs —in the form of pesticides — have not matched returns, pushing many to the brink, financially and otherwise. Simply put, Bt cotton is no more as profitable as it used to be.

    “In fact cost of cotton cultivation has jumped…due to rising costs of pesticides. Total Bt cotton production in the last five years has reduced,” says the advisory.

    This could have larger implications for Asia’s third-largest economy where rural prosperity has been a key driver of overall growth.

    The note is based on observations from the Indian Council of Agricultural Sciences, which administers farm science, and the Central Cotton Research Institute, the country’s top cotton research facility.

    Yet, officials HT spoke to either denied or downplayed the advisory. Swapan Kumar Dutta, India’s deputy director-general of crop science, said he had no knowledge of the note and that Bt cotton continued to drive India’s cotton production.

    He could neither “confirm nor deny” that such a note had been sent, said Prabeer Kumar Basu, the agriculture secretary.

    Of the nine cotton-growing states, Maharashtra has seen the largest number of farmer suicides. In the state’s Vidarbha region, a cotton-growing belt comprising six districts, 209 farmers committed suicides in 2011 due to “agrarian causes”.

    In February 2010, the environment ministry put an indefinite moratorium on Bt brinjal, India’s first GM food crop, days after the country’s biotech regulator cleared it for cultivation. Among many reasons, the ministry said it was “necessary to review” the performance of Bt cotton first.

  14. I was wondering if the problem of Indian farmer suicides would be used as a political football by the anti-gm crowd on this site but then I thought, no, this site attracts a more classy clientele …

    According to Indian Government figures, 270,000 Indian farmers have topped themselves since 1995, including 14,000 in 2011. However a major peer reviewed study put the figure for 2011 at 19,000. Accordingly it appears likely that 300,000 to 350,000 might be the true range. (1)

    Also note this:

    Poverty and debt are likely a large part of the [farmer suicide] problem. As journalist Palagummi Sainath, who has long covered the issue, notes, four of these five states are in the cotton belt region of India, and the price of cotton in real terms is a twelfth of what it was thirty years ago. Furthermore, the government removed subsidies for cotton in 1997, around the time the suicide rate among farmers began becoming apparent.

    Boom bada boom bada boom …

    (1) www.bbc.co.uk/news/magazine-21077458

  15. @desipis

    There are a lot of pro-GM propagandists around (and many poorly-informed ‘anti-GM’ people, too).

    GM is a vexed issue – especially because, like climate change, there is an enormous amount of money and power interests involved on one side – and, as I said, circumspection is warranted.

    Rapidly falling crop yields in such a short time might be a cause for pause, for example.

  16. Also note that the Hindustan Times article #15 is from 2012. In that same year the Hindustan Times published a range of other farmer suicide articles that blamed everything from droughts, heavy rain and pests to punitive interest charged by money lenders and climate change for spates of farmer suicides.

    But more importantly, a range of peer reviewed studies including this one point to the success of Bt Cotton in India.

  17. I think racism is clearly part of the issue.

    From a Forbes India article published less than two months ago:

    Lohiya’s neighbour Vijay Mahadev Niwal, 49, is a farmer-cum-motorbike dealer. An engineer, his family of four brothers owns 75 acres, two-thirds of which are under cotton. Niwal says Bt cottonseed has doubled and even tripled yields because of lesser weevil damage, while incomes have improved because of fewer pesticide sprays and the abolition of Maharashtra’s compulsory procurement at depressed prices. Suicide pockets, according to him, emerge from a “legacy of debt.” If Bt cotton was loss-making, “farmers would not repeatedly plant it”.

    Presumably the anti-gm crowd think Indian farmers are too unintelligent to be left to buy seed for themselves.

  18. Mel, I’m far less concerned about the health effects of eating GM food than I am about the environmental risks (e.g. the potential for releasing something that turns into a herbicide-resistant super-weed). The potential for patents to become as big an issue for food production costs as they are for health care costs is also a risk, particularly the capacity for large corporate entities to leverage themselves into a position of significant market power.

    There are other issues such as risks associated with homogeneity of food supply species, or the ability to optimize costs/appearance at the expense of nutritional content. While these are also present when traditional breeding practices are taken to a global/industrial scale, GM amplifies their potential impact.

  19. Desi:

    … potential for releasing something that turns into a herbicide resistant super-weed

    We already have herbicide-resistant super-weeds and the problem is getting worse.

    I read a lot about this stuff because I spend a huge amount of time spraying weeds on my acreage. Integrated Weed Management is recommended to slow the development of super weeds but this problem will nonetheless continue to get worse each year. This is a much bigger problem than your imagined but non-existent gm-created superweeds, and gm itself may well be part of the solution.

  20. Desi:

    There are other issues such as risks associated with homogeneity of food supply species …

    I wouldn’t worry too much if I were you. Also check out the Diggers catalog. And the Seed Savers networks. Etc …

  21. PS:

    Rosi-Marshall took the hits hard. “I experienced it in person and in writing,” she says. “These are not the kind of tactics we’re used to in science.” She was a few years out from her PhD, she did not have tenure at Loyola and her first paper in a prominent journal was getting trashed, along with her reputation. “She’s young and was getting picked on,” says Michelle Marvier, a biologist at Santa Clara University in California who attended the NAS November 2007 meeting.

    It was at least some comfort to Rosi-Marshall and Tank that e-mails and phone calls of encouragement came pouring in from other scientists. Some of their supporters had observed similar attacks on other biotech crop papers. “The most reassuring thing we learned was that it had happened before and by the exact same people,” says Tank.

    Again, good luck!

  22. It seems to me that it’s quite as specious to embrace any technology uncritically because some of its ostensible applications are useful as to reject a technology because some of its applications are flawed or open to misuse.

    GM describes an enormous array of things in a variety of settings. Moreover, its context is intellectual property within a capitalist mode of production. Before one can consider whether any specific application of the technology will be a net good, one must examine how this context shapes its implementation in practice.

    Many of the arguments one hears about GM are really proxy arguments about the conduct of companies such as Monsanto, much as arguments over nuclear power are often proxy arguments over big energy, localism, or nuclear weapons. In principle, there’s nothing wrong with GM, but GM can’t just exist in principle. In practice, each application needs to be evaluated in its social and, as regards agriculture, its environmental context.

    Panglossian boosting really is counterproductive IMO.

  23. Julie, or the role of religion in government and politics in general. Although apparently Bernardi just wants more kids in the world so he can make them starve:

    He also advocates removing Government welfare designed to assist children from families that experience a breakdown, arguing programs like the Red Cross’s Good Start Breakfast Club remove parental responsibility and create a mentality that the state will provide.

  24. Fran, going back to Mel’s original comparison the same probably applies to proposed climate change solutions. They shouldn’t be uncritically accepted either.

  25. Although apparently Bernardi just wants more kids in the world so he can make them starve

    Cory’s concerned about their souls desi, their souls. The body is just a vessel. 😉

  26. I think we can all agree that Cory Bernardi is disgusting. I wish the media hadn’t bothered publicising his book.

  27. Although I do not entertain pretty much any of Cory Bernardi’s ideas as to what makes for a good society, I will give him credit for one thing: he has been pretty clear to the point of bluntness in stating what he believes are the components of his version of a good society. In so doing, he has given those who would like to do so an excellent opportunity to dissect and to rebut his stated position using rational arguments.

    Many of his colleagues, including our current Prime Minister, Tony Abbott, have some serious difficulties in stating what they mean, and meaning what they state—at least in the standard media interview on TV, or pre-election versus post-election, for example. Climate change politics is the classic example, where Abbott will so often make one comment, but append to it some remark that comes across as a metaphorical wink to the denialist PR industry and supporters. Bernardi has no time for such flip-floppery 🙂

    If the ALP has any collective sense, they should muscle up and hit Bernardi with some well-reasoned counter-arguments to his stated views, and avoid making it into an insult match, however much fun that may be. Of course, there is no problem with someone who is on the pointy end of Bernardi’s stick giving him a serve, it just isn’t productive in the long run. Still, I wonder if Bernardi knew or even cared that Bill Shorten was a step-father, when Bernardi tied step-parents into his tirade on the causes of kids becoming little Fagins and Irises; that was bound to be insulting. Even so, the emotional response from a leader of an opposition party, while understandable (for both personal and political reasons), misses a real opportunity to counter him in a TV interview. Surely the ALP have some staffers who are capable of constructing an intelligent, rational, and politically useful response, rather than knee-jerk reactions such as Anthony Albanese’s standard mantra?

  28. @Donald Oats

    Surely the ALP have some staffers who are capable of constructing an intelligent, rational, and politically useful response, rather than knee-jerk reactions such as Anthony Albanese’s standard mantra?

    I guess it’s possible, but I’m unaware of any evidence of it.

    It is my belief that the ALP are, collectively, devoid of anything other than a desire to continue the general neo-con project “whatever it takes”.

  29. Desipis

    Fran, going back to Mel’s original comparison the same probably applies to proposed climate change solutions. They shouldn’t be uncritically accepted either.

    Certainly not. There are people saying soil carbon schemes are cost effective abatement or who want to reward polluters for doing what they were going to do anyway. Some say nuclear power is a total solution — but of course it isn’t and can’t be in practice.

    We should be very careful in weighing the various options in pursuit of a best-fit-to-problem approach.

  30. The first episode of “Persons of Interest” last night didn’t disappoint. It mainly focused on Roger Milliss and his father Bruce Milliss. Bruce was a true-believing Stalinist who became a Maoist after Khrushchev’s “secret speech” in 1956 and what the elder Milliss saw as a “betrayal” of the revolution by the post-Stalin Soviet leadership, while Roger, who was a young man at the time, became fiercely anti-Stalinist – which, of course, did not stop ASIO from engaging in the most despicably prurient surveillance and harassment of Roger and his wife.

    Roger Milliss’s memoir, Serpent’s Tooth, published in 1984, provoked the writing of one of the most ridiculous tracts in Australia’s Cold War history – a review of the book by Frank Knopfelmacher in Quadrant magazine that equated Roger Milliss with Adolf Eichmann. Professor Knopfelmacher was perhaps Australia’s most gifted and nuanced anti-communist intellectual, and was by no means simply a “right winger” in the sense in which that term is often understood, but on that occasion he did not show himself at his best.

  31. @Paul Norton

    Yes indeed. What I found awesome, as well as the clear explanation of the way ASIO was involved in the politics of the time, was the integrity, honesty, emotion and intellectual insight that Roger Milliss and his wife were willing and able to show.

  32. Julie @38, disclosure time. I got to know Roger Milliss during the 1980s, and I have never met anyone less Eichmann-like. Therefore I found Knopfelmacher’s review to be perhaps not so much scurrilous as comical.

  33. @Paul Norton

    I’d not have been able to laugh. I would have cried.

    But as low comedy, that is the only way to read Quadrant.

    I might have an ASIO file. I do remember once being at an anti-conscription rally and someone next to me said to smile because ASIO was taking my photo, since I was standing next to him.

  34. Tim Macknay @ #9 said Jack has been mean to me:

    huffed and puffed sneered…bloviated…unhinged reaction…considerable ire…insulted me for no reason…uncivil…frenzy…raged, fumed, postured and threw in a few insults for good measure

    If TM’s nose has been put out by my occasional barbs then I do apologise. But he has a very thin skin, because I wasn’t being all that nasty. This is a full-contact blog, not a “wellness centre” for precious, pampered petals. He needs to man up, get over himself and stop carrying on like a big girl’s blouse.

    TM is right that “the main source of [JS’s] considerable ire appears to be the fact that [TM’s]…ambivalent…attitude towards AI differs from [his] own.” Well duh! Any serious difference on a significant issue is bound to raise the temperature of debate.

    But it’s not just the difference of opinion that bugs me. What really gets my goat is the equivocating, each-way-bet, wanna-have-it-both-ways attitude of the Tim Macknays and John Horgans of this world. “Ambivalent” is a weasel word that is suited to art rather than science. It immunizes theories from critical scrutiny and testing. And immobilizes citizens from making preparations for the revolution.

    The AI skeptics oscillate between Nay-Saying impossiblism and Anyone-Could-Have-Seen-That-Coming inevitabilism. Not to mention the mood swings from End-is-Nigh Jeremiahism to Best-Of-All-Possible-Worlds Panglossianism. Then they waste a lot of time fretting over the comparatively benign uses of AI, such as drone strikes on schoolgirl shooters. Whilst ignoring the 800 lb gorilla squatting in the living room.

    Namely that AI will probably devastate the employment prospects of most manual AND mental professions within a generation or so. It’s not just me, Robin Hanson and Ray Kurzweil who are saying this, although we have been banging on about it for over a decade. See recent works by Krugman, Summers, Martin Ford, Erik Brynjolfsson, Andrew McAfee and James Barrat. The recent uptick in AI developments has brought the strong AI event horizon much closer than you think.

    All TM’s brow-furrowing, hand-wringing and finger-wagging succeeds in doing is piling up the drifts of b.s. so high you need wings to rise above it. Then to muddy the waters even more TM gets us bogged down in an endless bout of pointless point scoring.

    I have been happy to acknowledge that my original comment about the role of AI in military avionics was hasty, sloppy and somewhat erroneous in details. Yes, HUD is not identical to AI, although for high-performance manned platforms they go together like ham & cheese in a sandwich. Yes, AI was not originally designed for military avionics. But of course its first major successful practical applications were in Cold War aeronautical and astronautical applications. So I grant, your nits were successfully picked.

    Sadly, TM seems to be equally happy to pass over his own blunders in stony silence. In particular he is dead wrong in denying or belittling the fact that the introduction of AI in avionics is not “recent” by IT standards, that “bog-standard computers” are not up to making rapid, utilitarian decisions in real time whatever their processing speed, and the fact that recent progress in AI – in natural language processing, autonomous vehicle navigation, supply chain management and financial analysis – has been momentous by any standard. Except that of ignorant side-liners. This blinkered attitude does little credit to any proper standard of self-critical honesty.

    If that’s not enough TM then proceeds to pronounce a secular fatwah on me, comparing me to a Wahhabist fundamentalist for committing the unpardonable faux pas of being an “enthusiast” over the industry’s brighter prospects and scornful of a certain type of habitual fence-sitter. Well ex-c-u-u-u-se me for being optimistic.

    In any case the author of the following comment is hardly guilty of “quasi-religious” techno-fundamentalism:

    I do share some of your reservations about the potential for harm posed by the threat of runaway AI development – the so-called Singularity. Which is a major reason I have been obsessively following the industry on and off for most of my post-adolescent life.

    And BTW regarding my qualifications to speak with any authority on this subject: John McCarthy (RIP) was not just any old “mate” in the AI “industry”, he pretty much founded it. My AI contractor friend, whom I have assisted in occasional technology projects for over a decade, has been in the industry for 30 years. Since the advent of the tennies I have presented a number of papers on the economics of AI. I don’t say I am an “expert” in the field, just that I am more up to speed than the average half-baked drongo on the internet.

    All this point-scoring is pretty petty in the great scheme of things. All I want is for skeptics and opponents of AI to come straight out and make some testable predictions and intelligible evaluations instead of hiding behind a smoke-screen of equivocation. Put up or shut up.

  35. @jack strocchi

    If TM’s nose has been put out by my occasional barbs then I do apologise. But he has a very thin skin, because I wasn’t being all that nasty. This is a full-contact blog, not a “wellness centre” for precious, pampered petals. He needs to man up, get over himself and stop carrying on like a big girl’s blouse.

    And now you’re descending to schoolyard level. Extraordinary, Jack.

    I don’t think you got the point about my noting your propensity for using insults – It’s not that I’m offended, it’s that insults don’t constitute an argument. In your case, you’ve used bluster and insults to cover for the fact (which you now admit) that you were “hasty, sloppy and somewhat erroneous in details”.

    You don’t seem to get that the inability to argue a point without being uncivil is a failure, of reasoning and of persuasive communication. And yet you keep on piling it on.

    Sadly, TM seems to be equally happy to pass over his own blunders in stony silence. In particular he is dead wrong in denying or belittling the fact that the introduction of AI in avionics is not “recent” by IT standards, that “bog-standard computers” are not up to making rapid, utilitarian decisions in real time whatever their processing speed, and the fact that recent progress in AI – in natural language processing, autonomous vehicle navigation, supply chain management and financial analysis – has been momentous by any standard. Except that of ignorant side-liners. This blinkered attitude does little credit to any proper standard of self-critical honesty.

    *sigh*

    You’ve already conceded that 1990-91 (twenty-odd years ago) was three and a half decades after AI began as a research programme (commenced by your late friend and correspondent, John McCarthy). The statement that twenty years ago is “relatively recent” compared with fifty-seven years ago is perfectly reasonable by any ordinary use of the term. I think it was pretty clear that that was the context of my remark. If it wasn’t sufficiently clear to you, you could have asked me to clarify. Instead you resorted to bluster, and accused me of an imaginary “howler”.

    As for my remark about the processing speeds of “bog-standard digital computers”, that was a direct response to your statement that AI was necessary to “process data … at speeds much faster than human mental reaction time”.

    Thank you for clarifying that what you actually meant was that AI can make decisions at high speeds, which makes a whole lot more sense. However, it was unnecessary for you to accuse me of “not understanding the imperatives of [military AI] platforms or the nature of AI itself”, as all I was doing was pointing out another example of your reasoning being – how did you put it – “hasty, sloppy and somewhat erroneous in details”. However, I entirely agree with your statement “It’s true that computers don’t require AI to process data at high speeds”. Although since you were agreeing with my point, it’s difficult to see why you continue to insist I was wrong.

    Any serious difference on a significant issue is bound to raise the temperature of debate.

    Entirely untrue. The ability to engage in debate without immediately getting personal is a minimal requirement for participation in intellectual endeavours where there are differences of opinion. Most people are capable of at least training themselves to do this, if they don’t have a natural propensity for it.

    If that’s not enough TM then proceeds to pronounce a secular fatwah on me, comparing me to a Wahhabist fundamentalist for committing the unpardonable faux pas of being an “enthusiast” over the industry’s brighter prospects and scornful of a certain type of habitual fence-sitter. Well ex-c-u-u-u-se me for being optimistic.

    I think you’re a little confused about what a “fatwah” is. I have no problem with people being enthusiasts for various kinds of technology or futurology, or for being optimists. But in my experience “optimists” don’t immediately resort to angry personal insults when someone expresses their doubts. That’s more characteristic of religious believers – hence my comparison.

    Which was, admittedly, a bit provocative. Obviously comparing you to Wahabists was rather insulting, and I apologise for the comparison, but not for suggesting that your angry response is more reminiscent of a religious sensibility than a rational one.

    I don’t say I am an “expert” in the field, just that I am more up to speed than the average half-baked drongo on the internet.

    Fair enough. However, you did imply rather robustly that you know a hell of a lot more about it than I do, and that I am an ignoramus on the subject. It’s entirely possible that you do know significantly more about it than I do, given your level of interest. Despite that, you haven’t yet established that anything I’ve said on the topic was wrong, although you’ve tried very hard to tell me I was wrong while indirectly conceding the point.

    All this point-scoring is pretty petty in the great scheme of things

    Amen to that. A word of advice: point scoring (petty or otherwise) can usually be avoided if one doesn’t take the first available opportunity to get personal.

    All I want is for skeptics and opponents of AI to come straight out and make some testable predictions and intelligible evaluations instead of hiding behind a smoke-screen of equivocation. Put up or shut up.

    In other words, you are upset that I am an “unbeliever”. I am unmoved.

    And one more thing: what’s with addressing me in the third person, instead of directly? Are you afraid to “look me in the eye”, Jack? 😉

  36. Tim Macknay @ #45 whined:

    And now you’re descending to schoolyard level. Extraordinary, Jack. I don’t think you got the point about my noting your propensity for using insults – It’s not that I’m offended, it’s that insults don’t constitute an argument. In your case, you’ve used bluster and insults to cover for the fact (which you now admit) that you were “hasty, sloppy and somewhat erroneous in details”. You don’t seem to get that the inability to argue a point without being uncivil is a failure, of reasoning and of persuasive communication. And yet you keep on piling it on.

    Of course “insults don’t constitute an argument”; they constitute entertainment. Barbs are not a “failure of reasoning”, they are a complement to it. Too much deadly earnest debate with no play makes Jack a dull boy. I throw in a few jibes to keep the punters laughing along with the passage of play. It also helps to ridicule an opponent, as it throws him off his game. Comedy, like intellectual debate, is not pretty. The battle of wits (note the double meaning of the word) is meant to wound the enemy as well as inform the public.

    Tim Macknay said:

    You’ve already conceded that 1990-91 (twenty-odd years ago) was three and a half decades after AI began as a research programme (commenced by your late friend and correspondent, John McCarthy).

    That was not a “concession”, it’s public knowledge that every schoolboy knows. I did not iterate and reiterate it in my initial comment because it seems pointless and time-wasting to spell out every evident fact in mind-numbing detail.

    Tim Macknay said:

    The statement that twenty years ago is “relatively recent” compared with fifty-seven years ago is perfectly reasonable by any ordinary use of the term.

    No, it’s not. TM completely misses the point. The term “recent” is an absolutist, present-centric term: it refers to the proximity of an event to us in the contemporary “now”, rather than to them in the distant “then”. Every event that succeeds another event is “relatively” more “recent”, so adding the modifier “relatively” is tautologous in general, and in this particular case mischievously misleading. Also, computer generations are much more rapid than human ones; they cycle through within a decade. So IT events that occurred 20 or so years ago are not “recent” relative to now – they are old hat. [Sigh, it’s like explaining chess to a dog.]

    Tim Macknay said:

    As for my remark about the processing speeds of “bog-standard digital computers”, that was a direct response to your statement that AI was necessary to “process data … at speeds much faster than human mental reaction time”. Thank you for clarifying that what you actually meant was that AI can make decisions at high speeds, which makes a whole lot more sense.

    I supposed that it would be bleeding obvious that the phrase “human reaction time” was in reference to pilots making mission-critical decisions. Obviously this was reckless of me and I apologise to all those out there who struggle to connect two closely positioned dots.

    Tim Macknay said:

    Entirely untrue. The ability to engage in debate without immediately getting personal is a minimal requirement for participation in intellectual endeavours where there are differences of opinion. Most people are capable of at least training themselves to do this, if they don’t have a natural propensity for it.

    If this were true then most of the great debates of history would have been forgotten shortly after they occurred. Fortunately most “intellectual endeavourers” have not followed TM’s recipe for endless boredom. Social analysis is not exactly pulse-racing material in the first place; it needs every little bit of rhetorical advantage to propel it into the forefront of human consciousness. “Getting personal” is entirely appropriate in debate – you can play the man and the ball in the same game. What is not acceptable is a debate conducted entirely on the level of personal abuse. I try, to the best of my ability, to include the salient facts in my comments and articles. But not just the facts.

    Tim Macknay said:

    in my experience “optimists” don’t immediately resort to angry personal insults when someone expresses their doubts. That’s more characteristic of religious believers – hence my comparison.

    Ignorance, stupidity and time-wasting make me angry. Doubling down with silly religious analogies doesn’t help.

    Tim Macknay said:

    It’s entirely possible that you do know significantly more about it than I do, given your level of interest. Despite that, you haven’t yet established that anything I’ve said on the topic was wrong, although you’ve tried very hard to tell me I was wrong while indirectly conceding the point.

    I certainly know more than TM about the plain-English meaning of “recent” – setting him straight on this was a Herculean effort all by itself. Oh, and then there was the comparatively easy task of taking to pieces his statement that “…contemporary AI applications are nothing like visions of the early AI thinkers, and their grandiose goals (and those of contemporaries such as Kurzweil) are as far away as ever”, which is pretty much the beating heart of this debate.

    Much more powerful AI systems are being put into place as we speak, and the coming of strong AI is not “as far away as ever”. I reiterate that “recent progress in AI – in natural language processing, autonomous vehicle navigation, supply chain management and financial analysis – has been momentous by any standard”. This progress is, in significant part, something “like the visions of the early AI thinkers”, who envisaged conversational computers (HAL, anyone?) and self-navigating vehicles (for space flight and the like).

    More importantly, the key part of the debate is about timelines, and these are palpably shrinking. Moore’s Law grinds on relentlessly in hardware. We now have Hutter’s AGI algorithm out there as a theoretical proof of the software summit. Plus wireless cloud computing and fiber-optic cabling cover all the net-ware bases. And finally a series of killer “peri-ware” apps, such as dextrous robotic joints and 3-D printers, are now coming on line. All this makes strong AI, i.e. human-equivalent intelligent computers, if not the Singularity, a genuine possibility within a generation or so. But all TM can give us is nit-picking and vague hand-waving gestures.
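
    For the uninitiated, here is a rough sketch of what Hutter’s result actually says (my own simplification of his published AIXI definition, not his exact notation): at step k, with planning horizon m, the AIXI agent chooses

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big( r_k + \cdots + r_m \big) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

    where the a’s, o’s and r’s are actions, observations and rewards, U is a universal Turing machine, q ranges over programs and \ell(q) is the length of q. It is provably optimal in a formal sense but incomputable in practice – which is why I call it a theoretical proof of the summit rather than a working program.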

    Tim Macknay said:

    In other words, you are upset that I am an “unbeliever”. I am unmoved.

    To repeat again – a habit one gets used to when debating TM – I am not upset that he is an “unbeliever”. I am upset that he overlooks basic facts in favour of squabbling over pedantic trifles, and uses rubbery (“ambivalent”) language in making assessments of AI in order to leave himself acres of wiggle room should things not turn out as expected or hoped. This goes against the principles of science, which oblige participants to make predictions as firm as possible and stick by them, for purposes of accountability. It also makes debate with TM like swimming through the ink cloud of a frightened squid: one spends an inordinate time holding one’s nose, thrashing about and peering into the inky blackness, rarely getting the chance to turn the thing inside out, slice it up and make a tasty dish of calamari.

    Tim Macknay said:

    And one more thing: what’s with addressing me in the third person, instead of directly? Are you afraid to “look me in the eye”, Jack?

    I am talking to the internet, not you. Stop taking it personally.

  37. @jack strocchi
    I don’t really think the Internet is listening, Jack. 🙂

    So all you have left now to sustain your assertions of my ignorance and stupidity is a bizarre contortion around the meaning of the expression “relatively recent”?

    I’m glad your insults make you feel better. I think you need to work on their quality, though – they might pass muster in a schoolyard, but they don’t exactly rise to the level of wit. 🙂

    All the actual “basic facts” you’ve thrown up so far have been consistent with what I’ve said. Speculation about if and when “strong AI” shows up is not a “fact”, nor, for that matter, is it “science”. It’s simply speculation. And even the Machine Intelligence Research Institute, which is as close to an actual AI cult as it gets, will readily concede that predictions, including expert predictions, about AI are notoriously unreliable.

    And Jack, after all your strident denunciations of “fence sitters” and accusations of hiding behind “rubbery” language in order to leave “wiggle room”, you now say that you merely think strong AI is only a “real possibility”? So tentative – you fence sitter, you! Still, at least you left yourself some wiggle room.

    Good night. 🙂

  38. @Tim Macknay

    As an audience member, I was particularly taken by Jack’s attack on Tim for being a Nancy-Boy in the vigorous cut-and-thrust of interweb chattospherics – Tim having pointed out that sheer crankiness isn’t an argument – after which Jack went on to explain (twice) why he loses it in crankiness when confronted with goat-getters.

    If I was remotely interested in Artificial Intelligence I would simply pick up any News Ltd publication, they are bursting with the stuff.

    It’s real intelligence we need!

  39. @Mel
    Mel verbals the ‘cultural left’, whoever they are. On another note, Pyne has appointed fierce cultural warrior and failed teacher Kevin Donnelly (to quote a cliché – ‘those who can, do; those who can’t, teach’ – what does this say about Donnelly?) to review the national school curriculum. Apparently the minister thinks that there’s not enough taught about western civilisation and Anzac Day. It looks like infantilisation will be the hallmark of this govt. I mean really, given that we have a budget emergency, how can we possibly justify spending hundreds of thousands, if not millions, on a review led by an unqualified partisan? This escapes hard-line LNP supporters, who of course are saying ‘Jolly good show and about time’ and giving young Chris a paternal pat on the head.

  40. @Megan

    It’s real intelligence we need!

    Indeed. One of the potential pitfalls in the growth of artificial intelligence, which Jaron Lanier points out, is the likelihood of proportionate growth in “artificial stupidity”. Regardless of how smart machines do (or don’t) get, there’s a very good chance that people will hand more decisions over to machines anyway, and become stupider in the process. Jack mentions the use of AI technology in finance. I think it’s fair to say that there is room for debate on whether the performance of the finance industry has dramatically improved in recent years. 😉

    Clearly I touched a sore point with Jack, and I’ve been trying to figure out why.

    My latest take on it is this: Jack is clearly passionately interested in AI, but lacks the technical knowledge to engage with the field on a deep level, as he is not a computer scientist or engineer (apparently he is an economist). This has the potential to create a degree of insecurity, in part due to the old “hard science–soft science” dichotomy, which deems that hard sciences are “better”.

    The fact that Jack evidently idolises the leading thinkers in AI (to the point of ferociously attacking anyone who criticises them) supports this impression. By correcting a mistake of Jack’s in relation to a field he clearly loves but does not feel he adequately grasps, I inadvertently touched an insecurity.

    Of course, I might be completely wrong, but the theory at least does explain why my original, very mild remark launched such a tidal wave of crankiness.

  41. “Ignorance, stupidity and time-wasting make me angry.”
    Could self-loathing account for the crankiness? Either that or an inability to reflect on what one has written before posting and then having a frustrating forehead slap moment.

  42. Does anyone think Jack might be an alpha male perhaps?

    I have noticed that these sorts of men think that they are entitled to have tantrums – they call it something different – because they are that sort of bloke. This type of ‘entitled’ behaviour means they are individuals and not sheep.

    Otherwise you could say it’s easy to ‘push his buttons’?

    Back to Bernardi and did anyone read the biography in The Monthly?

    This bit is interesting.

    “Sinead, with whom he has two sons, aged ten and 12, says they have the perfect marriage because they’re “both in love with the same man”. “Cory obviously has this huge belief in himself … If you didn’t love a guy who was so in love with himself you’d have a lot of trouble living with Cory.”

  43. PatrickB

    Mel verbals the ”cultural left’, whoever they are.

    It was a joke based on one of Jack’s obsessions, sweetpea.

  44. A person commenting under the name of “Christian Kerr” has posted this at Catallaxy:

    I’m 99.95 per cent with Samuel J on this [defence of ASIO against the “Persons of Interest” series], but there’s some real PC Plod stuff in the Lee Rhiannon files I’ve spent the last few years working – some missing cross-referencing that I would have thought any public service shinbyum would fix, least of all our intelligence agency. It’s this sort of stuff that has let people who have been the friends of our enemies dismiss the security forces and create the culture that has let SBS’ silly program come about. ASIO haven’t exactly done themselves many favours by resisting the release of files that show just how the Brezhnev era Soviets indulged their local pets – and how those lap-dogs of tyranny loved having their tummies rubbed.

    Of course I am in no position to know whether this is the “real” Christian Kerr.
