Back to college with generative AI
Season of mists and Microsoft integrations
News and views
My discussion with Tim Fawns is now available as part of his series ‘ten minute chats on generative AI’. Tim is a wonderful advocate for deep thinking about practice with tech, and has a knack for provoking people into saying interesting things, so I’m honoured to be in this company.
I’ve also enjoyed Tim’s blog post on ‘expanding the unit of analysis’ this week. Since learning always involves some relation to cultural artefacts – not only data and algorithms, but texts and tools of much older kinds – he argues for a way of thinking about pedagogy that puts these materials at the centre.
It does seem that everyone is a sociomaterialist now. I’m still wondering what kind of sociomaterialist I am, since I wasn’t a fan of the ‘discursive turn’ back in the 1990s, and if you just keep wearing your old materialism until it comes back into fashion, are you ahead of the curve or (as my daughter would say of my literal clothes) ‘embarrassing’?
Tim points to a 2015 article by Tara Fenwick that provides a helpful run-down of different sociomaterial styles. I’ve always found ‘cultural historical activity theory’ (CHAT) attractive, because it tries to untangle the structures and agencies, entities and relations that other sociomaterialists (perhaps wisely) leave entangled. So I’m pleased to see CHAT on Fenwick’s list, though it is a bit of an afterthought. I may need to spend some time rummaging in the theory cupboard to see if I can produce something more coherent.
I have a couple more interviews coming up on other people’s podcasts, and a short series of conversations with a colleague who has a different take on generative AI to my own. So look out for more announcements soon.
Back to school with Microsoft and OpenAI
Meanwhile the generative AI business model is emerging more clearly, just in time for the new academic year. With partner Microsoft, OpenAI owns the front-running foundation models GPT-4 and DALL-E 2, which have already been embedded into thousands of platforms and enterprise systems. As well as stumping up the initial investment capital and providing the raw compute for training and generation, Microsoft is leading the business case for gen-AI-with-everything. Through Copilot, Microsoft has integrated GPT-4 into its entire Office suite – Word, Excel, PowerPoint, Outlook and Teams. Even at a higher price point than anyone expected ($30 per user per month), business users are falling over themselves for these new features.
If your organisation can afford a bespoke integration, there’s also ChatGPT Enterprise from OpenAI:
‘an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data’.
For an undisclosed sum (‘contact sales’) your organisation can buy a custom-trained model and the reassurance that ‘we do not train on your business data or conversations, and our models don’t learn from your usage’. Which does at least make clear what is going on with everyone else’s data.
Microsoft and OpenAI might seem to be competing for business users here, but in fact they are just covering the ground, as their entangled finances suggest they should. ChatGPT+ is generating revenue from people who want an enhanced experience of generative AI, through their chosen interface. The MS Copilot features appeal to business users who want familiar tools with added auto-complete. And ChatGPT Enterprise allows organisations to maximise the value of their own data while keeping it from mixing with everybody else’s. In each case there is a path to profit, and the cost of firing up all that compute every time someone clicks on ‘chat’ or ‘compose’ or ‘create content with Copilot’ is no longer being sucked up by Microsoft.
So what do these developments mean for teachers and students as the new year begins? Generative tools are now available not only as free-standing apps and interfaces that we can choose to use (or not), but deeply embedded into the Microsoft and Google work suites, behind the scenes in Bing search and the Edge browser, and integrated into education-specialist platforms. Plug-in ChatGPT APIs are available for LMS platforms such as Moodle, Canvas, edX and more, and for quizzing, revising and language learning apps. Turnitin has a controversial gen-AI-powered gen-AI-detection feature, while Anthology is partnering with OpenAI to develop native tools for Blackboard Learn, and Google is busy integrating its own latest language model, PaLM 2, into classroom applications.
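For readers curious what these LMS plug-ins actually do under the hood, here is a minimal sketch – not any vendor’s actual plug-in code – of the kind of JSON payload such an integration might assemble before posting it to OpenAI’s chat completions endpoint. The function name, prompt wording and course strings are illustrative assumptions; only the payload shape (model, a list of role/content messages) follows the published API format.

```python
import json

def build_tutor_request(student_question: str, course_context: str) -> dict:
    """Assemble a chat-completion payload for a hypothetical LMS tutor plug-in.

    The payload would be POSTed to https://api.openai.com/v1/chat/completions
    with the institution's API key; here we only build the request body.
    """
    return {
        "model": "gpt-4",
        "messages": [
            # A system message steers the model toward the course context.
            {
                "role": "system",
                "content": f"You are a study assistant for: {course_context}",
            },
            # The student's question goes in as the user turn.
            {"role": "user", "content": student_question},
        ],
        # Lower temperature keeps answers more predictable for study use.
        "temperature": 0.2,
    }

payload = build_tutor_request(
    "What is spaced repetition?", "Introductory Psychology"
)
print(json.dumps(payload, indent=2))
```

Every such click fires up the compute discussed above, which is exactly why these integrations now come with per-seat or per-call pricing attached.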
On the positive side, having chat functions pop up in the dullest of places may suck some air out of the hype bubble. The excitement of the spring and summer – the ‘sparks’ of computer sentience, the ‘revolutionising’ of economic life – may give way to an autumn of realising we have all the shortcuts we will ever need for writing Excel macros, and search doesn’t work as well as it used to. Similarities with Microsoft’s famously annoying ‘Clippy’ assistant are hard to avoid: Clippy even has a GPT-powered tribute app.
On the other hand – and more seriously – with integration it may just get harder to notice what is happening, both to our practices of writing and thinking at university, and to the wider knowledge ecology, increasingly a limbo land of undead language and disembedded data.
Pop it in your pencil case
In AI literacy and Schroedinger’s Ethics I noted that the Russell Group Principles devolve a lot of responsibility to teaching staff for dealing with these challenges. How wonderful, then, that subject specialist Dr Emily Nordmann has done exactly this, providing clear, detailed and principled guidance to students for different assignments in psychology. I wonder how many teaching staff have this kind of expertise and patience, but the results are here for other teachers to use, in and beyond psychology.
A crowd sourced book, 101 Creative Ideas to use AI in Education, edited by Chrissi Nerantzi and colleagues, provides a general collection of ideas that is open to the possibilities of generative AI but always leads with the values of responsible, ethical and critical use.
I have mentioned before that the policy statement from the University of Edinburgh seems particularly clear and supportive. There is a more detailed and equally well considered guide for students from the Library at UCL.
Finally, I often go back to the guidelines for writers produced by the Nature group of academic journals back in January and explored in my very first Substack post on language, language models and writing. They follow two basic principles:
1. Generative AI tools are not authors and should not be cited or credited as such
2. Use of such tools as part of a research (or writing) method should be reported in that context (i.e. methodologically)
A happy new academic year from me, and check back for new interviews and posts on:
A radical history of ‘artificial intelligence’ and why educators should care
Generative AI risks to the knowledge ecology
Fully functional female gynoids revisited, and a feminist take on the AI imaginary
Thanks for reading imperfect offerings! Subscribe for free to receive new posts and support my work.