Skynet scenarios and real-world risks
What are the real risks of GenAI that people in higher education should worry about?
I suggested in an earlier post that the benefits of GenAI for learning are still weakly evidenced. I argued in the same post that opportunities and risks should not be treated the same. Opportunities are always waiting to manifest themselves, just off-screen. Risks are harder to see. They often take research to find, and collective action to deal with. As just one user, alone in front of my screen, it’s easy to feel disempowered or overwhelmed by a sense of risk. In fact, I have started to feel a bit overwhelmed by the way GenAI’s risks are paraded over and over again in reviews and policy papers. They are even (sort of) acknowledged by the AI industry:

GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts. We encourage and facilitate transparency, user education, and wider AI literacy as society adopts these models. We also aim to expand the avenues of input people have in shaping our models. (OpenAI, introducing GPT-4, March 2023)
But there’s something not quite right here. While ‘working to address’ the temporary ‘limitations’ of its technology, OpenAI is keen to shift responsibility for use onto ‘society’. After all, it is ‘society’ that is plunging ahead and ‘adopting’ GPT-4 despite OpenAI’s reasonable warnings. Education has a particular responsibility to deal with any problems that might arise, through ‘user education, and wider AI literacy’. What responsibility should the education sector place on the AI industry in return, I wonder? Perhaps to keep problematic technologies away from the shared information commons we all rely on, and from five billion potentially vulnerable and/or unprepared and/or occasionally malicious users?
I do think that education has a responsibility to develop critical users of technology, and to do that we have to witness the many ways we are being asked to become uncritical users of GenAI. We are called to accept a wide range of risks and moral hazards in return for ‘keeping up’, whether we are teachers trying to keep up with students, students trying to keep up with the job market, or citizens being told ‘we’ must keep ahead of the AI being developed in other countries.
The command to adopt GenAI gathers energy from all the harms it draws attention to. If users are willing to set aside such critical issues as misinformation, bias, copyright and personal data violations, carbon costs and worker exploitation… the benefits must be worth it. Or else the capture by GenAI of our economy and society must be inevitable, so users might as well get stuck in. Addressing us as users while naming the risks and harms of use reminds us that we have freely decided to ignore them: we have given up the right to critique.
The most blatant assault on our critical thinking is the ‘open letter’ to humanity issued in March by Elon Musk, Steve Wozniak and other industry insiders. Turning the doom dial to max, they warn of a future when ‘nonhuman minds … eventually outnumber, outsmart, obsolete and replace us’. This Skynet storyline is a barely hidden brag about the power of the AI industry to make or break ‘humanity’, with every techbro taking the role of Miles Dyson.
Leaving the psychology of the AI industry for another day, I’d say all this singularity/death-drive material is clearly meant to disarm more prosaic worries about jobs and democracy. Below the ‘existential risk’ horizon are a large number of human beings facing everyday harms and losses.
In their response to the open letter, researchers at the Distributed AI Research Institute say we should focus our risk assessments on these people:
the very real and very present exploitative practices of the companies claiming to build [‘powerful digital minds’]... including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities.
The role of higher education in naming the risks
Witnesses from within the AI sector have been speaking out about these issues for years (this list of key publications from Emily Bender is a good place to start). As educators we can amplify their voices. We can also point to the particular moral jeopardy GenAI models pose to higher education. They offend against values that are important to universities: equity and justice, democratic access to knowledge, truth claims based on evidence, and the rights of authors over their intellectual work.

Their use also puts at risk ongoing efforts to decolonise the university, and emerging efforts to decolonise its technologies. As pioneers like Dalia Gebrial, Gurminder Bhambra and Taskeen Adam remind us, universities are colonial institutions in their deep infrastructure as well as their cultures and curricula. Even progressive initiatives rest on these colonial structures (estates, investments, endowments, collections), though they can help to make the structures more visible. But here in GenAI is a brand-new infrastructure that follows the same patterns of racial exclusion as earlier ones, that dispossesses people of their intellectual rights, that encloses and degrades the digital commons, and that exploits cheap labour, especially in the global south. Unlike historical infrastructures, this one is being developed, adopted and integrated into our institutions right in front of our eyes.
While these are harms that educators should care about, they affect all large organisations. I recently came across an article on AI risks and ethical concerns by Deloitte, which has a decent Trust Ethics Framework for financial institutions. Are there risks from GenAI that educators in particular should identify and speak up about? I can think of three kinds (links are to more detailed posts as I write them).
Risks to teaching jobs and conditions, and to the work of teaching
Risks to learning, and to learners
Risks to knowledge values and practices, and the wider knowledge ecology
All of these risks fall unequally, so some teachers, learners and knowledge users may be relatively unaffected. In three new posts I will try to gather evidence about each area of risk, hoping this contributes to the collective conversation. Because while OpenAI can invite ‘people’ to give ‘input’, only powerful sectors like higher education can make collective demands.