And on we go
The truth is sacked, the elephants are in the room, and tomorrow belongs to tech
Welcome to 2025, subscribers old and new. I’m happy to start the year as one of 100 women and non-binary writers on AI who are considered worth a follow on Substack. I’m less thrilled to be writing this in a week when Meta has officially sacked the truth and the mainstream media of Europe, or what is left of it, has become an echo chamber for the politics of Elon Musk. But this is only what enlightenment institutions can expect when they cosy up to the counter-enlightenment projects of big tech. And hello, higher education: what are your plans for 2025?
I’m sorry you have heard so little from me in the last few months. Blame the conjunction of a new full-time post and a descent into despAIr. MalAIse. MiserAI. At some point in 2024 it all became horribly clear. No amount of skepticism from the business sector is going to stem the supply of AI-ready graduates, whether they are wanted or not. No amount of research evidence or critical analysis of the claims about AI is going to be read in a university sector that is determined to turn research and critique, and reading along with them, over to privatised data architectures.
No, it doesn’t matter how destructive generative AI turns out to be for the environment, how damaging to knowledge systems such as search, journalism, publishing, translation, scientific scholarship and information more generally. It doesn’t matter how exploitative AI may be of data workers, or how it may be taken up by other employers to deskill and precaritise their own staff. Despite AI’s known biases and colonial histories, its entirely predictable use to target women and minorities for violence, to erode democratic debate and degrade human rights; and despite the toxic politics of AI’s owners and CEOs, including outright attacks on higher education - still people will walk around the herd of elephants in the room to get to the bright box marked ‘AI’ in the corner. And when I say ‘people’ I mean, all too often, people with ‘AI in education’ in their LinkedIn profiles.
I believe I have been right about the elephants. But I’ve been wrong about the people. Our willingness to ignore harms when they are happening to other people is less surprising, given human history etc, than our capacity to put up with crap. It takes a lot of cope to deal with the banality that is AI in the real world. To find every tool you reach for has already started off on its own chattering journey like a wind-up toy. To feel every online interaction being stripped of tone and style and personal meaning, and to tell yourself that’s fine, it’s good not to care too much, it’s probably ‘just an AI’ reading it at the other end. To give up on finding reliable information or sharing family news because that environment is now a bin fire, but it’s OK because your grammar is never going to let you down. To forget what was promised last year, or last month, because ‘the AI’ does a new thing now that’s also ‘meh’, but it’s new and it promises to be amazing. To pay the subs, to not worry about the student assignments, to pay the higher subs, to do the academic integrity training, to paper over the cracks, to call papering over the cracks ‘AI literacy’, to realise your career and reputation and working future depend on not finding any of this problematic. All of that. It is exhausting. And human agency is exhaustible.
If I had to diagnose my own malAIse more precisely: it’s not that I lack energy for the big issues, I just can’t deal with the daily thoughtlessness, the anti-intellectualism of the AI narrative. And mounting evidence about the impact of using generative AI suggests that thoughtlessness is one thing that will definitely be scaled up.
Luckily, just as I was running out of juice, the wonderful Audrey Watters - original Cassandra of edtech - re-opened her blog for business, and AI has been relentlessly in her sights. You could do worse than start the year with this post from her about reading, give her a follow, and read on.
Meanwhile on imperfect offerings you can expect some short posts about issues that poke me too hard to ignore. I have some longer pieces in development on AI as interface, AI at war, and what universities might be doing better - when the day job gives me time. And starting this week, an imperfect podcast for your listening pleasure. This turns out to be a fabulous way of having interesting people say interesting things about AI and posting them to my own credit and fame. The podcast started out as a syndication of Generative Dialogues, my series with Mark Carrigan, which continues alive and kicking whenever we have something to kick off about.
So, precious readers, if you have managed to read this far without synthetic support, I bring you interviews with Dan McQuillan, Eamon Costello, Catherine Cronin and Laura Czerniewicz, on topics from the fascist histories of AI to the capture of voice data, neo-Luddism and zines. And there are more in the pipeline. If you’d like to suggest someone I should talk to, please get in touch. Meanwhile, as 2025 shambles into focus and the spaces of imperfection become a little less tenable, please do like, subscribe, share, comment, read and now listen to this one.