Some excellent points, as ever. Amongst the many things you have raised here, Taylor Swift giving a lecture reminded me of a university, many years ago, discussing the possibility of employing actors to give lectures to improve module feedback evaluations. #WatchThisSpace
Excellent meditation, Helen.
Question about "it’s harder to appreciate just how crap, how deadly dull, how utterly bereft of humanity all those AI-produced books and poems and narratives are, because you can just avoid reading them" - are you seeing any signs of human creators labeling their work AI-free?
Thanks Bryan, that wasn't very clear. I meant that fewer people read for pleasure than consume (and care about) visual culture. But in answer to your question, I've come across some '100% human written' badges you can embed on a blog. They're cute, but most of them seem to be from companies promoting AI for writing, so I prefer just to have in my tagline 'often imperfect but never autocompleted'. It's imperfect for a reason - people might hate the way I write and the rather mad things I write about, but the process is all mine and I think that comes over. In the end, being wrong or out on a limb or thoroughly disliked are risks that real writers have to take.
I love Matt Novak's Paleofuture.com, which I often raid for 1950s images of the future - it's a brilliant site if you don't know it - and he just says '100% human-created content without the assistance of artificial intelligence'. I know it's true because there is so much love in what he does there.
I guess established authors can rely on readers' knowledge of their style and themes and don't have to protest too much, but it must be really hard for writers trying to establish themselves and find a voice.
"Often imperfect but never autocompleted." I love that.
The love that goes into the work is important to me as a reader/consumer of images.
You have hit on a number of my concerns about AI in education but taken the discussion to a deeper level, making me consider additional concerns and adding nuances to ones I already had. This is an excellent piece. Thank you for writing and sharing it.
Thank you Guy, it's lovely to have your appreciation.
Oh yes I realised you were joking about the second bit 😂 though we could get there eventually. Maybe a topic for a final more conceptual webinar?
"If the pressures on students to auto-produce assignments are matched by pressures on staff to auto-detect and auto-grade them, we might as well just have student generative technologies talk directly to institutional ones, and open a channel from student bank accounts directly into the accounts of big tech while universities extract a percentage for accreditation."
I feel like we need a general theory of how these feedback loops take shape inside knowledge-intensive systems like university. Totally agree with this being on the horizon. I think the same is true of research comms and funding applications as well.
I was exaggerating for effect. But I think the strange loops are real. In fact I wrote about a couple of them before, such as here https://helenbeetham.substack.com/i/139080460/cage-fight-for-the-future. You're right that it needs some theorising. It would be tempting to see strange loops as a symptom, like repetition (Lacan's automatisme de répétition??), if the economic advantages were not so manifest. Data wants to speak to data - any diversion through the human sense organs and sense-making systems (GUIs, natural language models, immersive worlds) is costly. But only where it touches human value-creating systems - that is, production and consumption - can value be generated to keep the data flowing. So the human processes of value-making have to be endlessly accelerated in the interests of capitalistic flow. Or something like that :-)
When I first came to Substack I wrote quite a few comments making the case that AI is not a good move for humanity at this time, as it will serve as an accelerant to an already overheated knowledge explosion. The real danger may not come from AI itself, but from other powers which emerge from an AI-fueled knowledge explosion. I still believe all of this. But...
My thinking took a turn when I faced the fact that nothing I have to say on the matter will change anything. I doubt anything anyone says about AI will meaningfully change the course of a coming AI era. We're traveling through a historic period, like the mechanization of agriculture, or the automation of the factories. Now it's the white collar world's turn. Our power to effect changes of this scale is extremely limited.
I now see further AI development as being like the weather. We're all free to complain about the weather, and may enjoy doing so, but our opinions have no effect. If that is true for AI too, what's the logical next move?
Whatever environment we inhabit, it makes sense to try to enjoy that which we cannot change. If it's a rainy day, ok, so let me enjoy the beauty of the rain.
And anyway, there's a reasonable chance that nuclear weapons will make all of this irrelevant at some point. Perhaps it's not that rational to get all wound up about that which can so easily be swept away.