10 Comments
Feb 24 · Liked by Helen Beetham

Some excellent points, as ever. Amongst many things you have raised here, Taylor Swift giving a lecture reminded me of a university many years ago discussing the possibility of employing actors to give lectures to improve module feedback evaluations. #WatchThisSpace

Feb 24 · Liked by Helen Beetham

Excellent meditation, Helen.

Question about "it’s harder to appreciate just how crap, how deadly dull, how utterly bereft of humanity all those AI-produced books and poems and narratives are, because you can just avoid reading them" - are you seeing any signs of human creators labeling their work AI-free?

Feb 24 · Liked by Helen Beetham

You have hit on a number of my concerns about AI in education but taken the discussion to a deeper level, making me consider additional concerns and adding nuances to ones I already had. This is an excellent piece. Thank you for writing and sharing it.

Feb 28 · Liked by Helen Beetham

Oh yes, I realised you were joking about the second bit 😂 though we could get there eventually. Maybe a topic for a final, more conceptual webinar?

Feb 27 · Liked by Helen Beetham

"If the pressures on students to auto-produce assignments are matched by pressures on staff to auto-detect and auto-grade them, we might as well just have student generative technologies talk directly to institutional ones, and open a channel from student bank accounts directly into the accounts of big tech while universities extract a percentage for accreditation."

I feel like we need a general theory of how these feedback loops take shape inside knowledge-intensive systems like universities. Totally agree with this being on the horizon. I think the same is true of research comms and funding applications as well.


When I first came to Substack I wrote quite a few comments making the case that AI is not a good move for humanity at this time, as it will serve as an accelerant to an already overheated knowledge explosion. The real danger may not come from AI itself, but from other powers which emerge from an AI fueled knowledge explosion. I still believe all of this. But...

My thinking took a turn when I faced the fact that nothing I have to say on the matter will change anything. I doubt anything anyone says about AI will meaningfully change the course of a coming AI era. We're traveling through a historic period, like the mechanization of agriculture or the automation of factories. Now it's the white-collar world's turn. Our power to effect change at this scale is extremely limited.

I now see further AI development as being like the weather. We're all free to complain about the weather, and may enjoy doing so, but our opinions have no effect. If that is true for AI too, what's the logical next move?

Whatever environment we inhabit, it makes sense to try to enjoy what we cannot change. If it's a rainy day, okay, let me enjoy the beauty of the rain.

And anyway, there's a reasonable chance that nuclear weapons will make all of this irrelevant at some point. Perhaps it's not rational to get all wound up about that which can so easily be swept away.
