Looking for a better story
Updates for old friends, and for new followers, some links back to where we've been
It seemed provocative at the time when I suggested that ‘getting better’ was not the inevitable trajectory of generative AI models, but now we hear regularly that businesses are unhappy and salespeople are being told to ‘dial down’ the expectations.
Gary Marcus has more to say about dialing down expectations, not only of future performance for users, but of ROI (return on investment). And Ed Zitron wonders, based on recent interviews with OpenAI’s top dogs, if we have reached peak AI. These two are well-known AI sceptics, but they are also well informed.
So much hype, hope and capital has been poured into the AI story that it seems unlikely the bubble will burst. If Sam Bankman-Fried could not kill crypto, it’s hard to know what could kill AI. But I hope that policy leaders in HE, in their foresight planning, at least consider the possibility that AI may not be the whole future of graduate employment. Without some major breakthrough - beyond just scaling up - it’s possible that language models have already found their use cases, and there are really only a few of them: coding faster (though not more accurately or securely); flooding the zone with search-engine-optimised content; and generating college essays of a similarly ‘optimised’ kind.
Using generative AI to write actual research papers, it seems, has ‘unintended side effects that are largely detrimental to academia, knowledge production, and communicating research’. I had a lot more to say about harms to knowledge production in this earlier post.
When I wrote about how machine learning models are built on the labour of data workers, many of them precariously and exploitatively employed in the global south, one follower on X suggested that it was a good thing these people had work. The brilliant tech journalist Karen Hao has just reported that Remotasks has suspended all operations in Kenya, without notice, leaving many families suddenly without income. The same companies that employ data workers also ruthlessly use their work to build automated annotation models, and are now offering these to clients as cheap alternatives. Any labour that can be offshored can be automated when the costs of automation drop below the costs of cheap, exploited labour: what happens to those workers then?
More hopefully, Techworker Community Africa is supporting African data workers to organise, access training, and get a better deal.
In self-exploitation news: whole heaps of your favourite social media platforms are offering to sell user data to train future AI, and Reddit already has a multimillion-dollar deal in place for its users’ content. It’s well known that the major models were trained on Reddit threads, so here is another content provider (like the major publishers I reported on in ‘capturing content’) calculating that it’s more profitable/less risky to get in bed with big tech than to sue them for the bedsheets. These business calculations are where we can watch a belief in ‘AI’ actually building the AI future, as big tech persuades content providers that whatever lawsuits they may lose along the way, in the end they are going to win. They are going to win because they have more money. And they have more money because they have persuaded venture capitalists that they are going to win.
In safety news, Microsoft CoPilot continues to produce potentially harmful responses, such as this one to a prompt about PTSD, and an AI engineer at MS has blown the whistle on its CoPilot Designer for generating violent, sexual and copyright-violating images. I wrote about Gemini’s guardrail problems recently, but other brands are available. Meanwhile the newly established US AI Safety Institute is facing a crisis as staffers protest over the increasing influence of ‘longtermists’ and ‘effective altruists’ at senior levels in the organisation. These are people with such a zealous commitment to the ‘Future of Humanity’ that they are willing to put up with all manner of harms to people alive today to get there. Harms that less zealous staff members will keep insisting are relevant to the ‘safety’ mission.
Humanity in the abstract is also a distraction from the way AI harms different people in different ways. It’s emerged that even when they are asked questions in other languages, text-based chatbots process requests via English-language constructions, presumably because these areas of their data models are richer and more complex, and their trained weights push all the model’s ‘attention’ that way. The researchers conclude that this ‘could create cultural issues’. As I wrote in ‘who pays for authenticity’, speakers of minority and lower-resourced languages are being further disadvantaged by a ‘global’ technology that purports to ‘know’ the world when what it ‘knows’ is digital text from the minority world, mainly in English. Users of other languages are being offered bad machine translations of what is (even for English speakers) already a sub-optimal experience.
Speaking of the world, training synthetic models has been using significant fresh water reserves in areas prone to drought; and thanks to AI data centres, America seems to be running out of electricity. Environmental groups have warned that AI is likely to accelerate climate warming AND misinformation about it. The latter is indeed what I identified as the main climate cost of AI: failing to invest in more mundane technologies or in changing our political and economic system, while AI fills the horizon of what the future can be.
To end on a personal note: when I started a blog about critical approaches to technology in education, I never imagined that generative AI would fill my own horizon. It has not been entirely fun. A colleague recently described it to me as ‘the constant intellectual labour involved in having to take seriously the noise and free-floating anxiety’, and that labour feels increasingly pointless. Talking ‘AI’ down is still talking about AI: it still adds to the vortex of attention. There are many other, more important things in the world to be anxious about (though ‘AI’ seems set to make all of them worse).
AI will probably give paying users a new interface on their work and play that will be fun for a while, and then invisible - part of an ever-more-immersive life online. When ROI falters there will be another story (or a newer, better, ‘smarter’ version of the AI story) to sell hyper-productivity and automation to businesses, and to keep driving capital towards the biggest platforms. I just keep thinking that the idea that all this has something to do with knowledge or learning is so obviously detrimental to education, and so obviously stupid and wrong, that education will find a way of talking back. Or - because alternative stories are available - will tell these stories confidently, so I can think about something else.
Very selfishly I am delighted that AI filled your horizon, Helen. Your writing on the wider context, considerations and consequences is an incredibly valuable reference and reflection point for me. Thank you
Thanks Helen for another great (if slightly depressing) post. Can you really see no positives for learning and knowledge out of generative AI? I was a bit taken aback to hear you say this was so obviously a wrong idea.