9 Comments

Some time after we (the general public) first heard about ChatGPT, I wrote a letter to The Guardian in response to an article by Evgeny Morozov (30 March 2023) titled "The problem with artificial intelligence? It's neither artificial nor intelligent". In my letter I said:

AI's main failings lie in its differences from humans. AI does not have morals, ethics or a conscience. Moreover, it has no instinct, much less common sense. Its dangers in being subject to misuse are all too easy to see.


It seems that, one year later, these issues have not begun to be addressed, and the tech companies basically have no clue how to address them, nor the interest.


Your line 'AI is not a technology, it is a story about the future' - I'm in love with it!

I'm constantly reiterating in PD and in conversations with colleagues that we are at a moment when what we do now really matters.

How we teach students to see AI as something physical (to echo Crawford), with economic, political and social implications, and as extractive: this is crucial futures work.

But staying in this space of critique needs to be balanced against the practicalities of the genie that's been released. Refusing all engagement with the products risks stasis. Not blind pragmatism, of course, but, as you say, futures work.

I do look forward to the podcast. Thank you! Such important reading.


Hi Francesca, thank you! We don't tell students not to use social media or the internet, but we do provide spaces where they can encounter some of the same functionality in a safer, more developmental way. And, as you say, resources for understanding the potential harms, including harms beyond the context of use. I don't think we need to worry too much about non-use. These products are designed to be frictionless and compelling. It's resistance, 'drag' and use against the grain that requires the work.


I am heading over to read your substack :-)


A powerful analysis, Helen.

It resonates with me personally, as I've been working for years to get academics to take action on climate change.

Do you think there's an opportunity for academics to do good work with open source generative AI?


Hi Bryan, I do think so, yes. I'm not a developer, but I know and trust people who are building small language models that you can fit on a laptop. Of course, they are all still using weights that came from the original model training, so there are still dependencies there (and biases). And it is a technology that seems, unfortunately, to benefit from scale at every point. But academics are endlessly creative! It's just very, very sad that big tech has been able to capture so much of that amazing expertise in the last decade.


Mainly I hope that young people will continue turning towards the foundation economy - realising that the real work is feeding, sheltering and caring for people while restoring the natural world we are part of. Building societies that make that possible. Tech has a role to play but it's a minor one, IMO, and too often a distraction. I'm off to my community allotment right now in fact :-)


Thank you for this overview. The problem I have with AI is that the word "intelligence" implies some form of balancing of real-world (live) inputs against a pre-existing moral code plus a body of (hopefully peer-reviewed) research, thus forming a balanced and logical summation of what's going on. I realise that the "wicked problem" issue pervades most public policy challenges, which in itself precludes the "helpful" addition of AI, which is by definition backward-looking. In the interests of both the public and Conservative politicians, AI should be re-badged as "Pattern Recognition" technology and no more.


Thanks Simon, I have a number of thoughts upcoming on 'intelligence' in education and automation - hopefully we can unpick some of this tangle! Those public policy challenges are only going to get wickeder, with Michael Gove determined to replace civil servants with his AI 'crack squad'.
