In this post I recap what I have written in the past about generative AI and working futures. If you have been following for a while, I have new evidence and some new thoughts. If you’re new, I hope you find what you’re looking for. Follow the subheadings for the tl;dr version, and dive in for more detail, track-backs to previous pieces, and an assessment of how my earlier thoughts are playing out. I conclude that we need AI resilience and suggest what kinds of educational research might help.
In this piece:
Generative AI has few really valuable use cases
Employers are already falling out of love with generative AI
Generative AI might still restructure work - but not in a good way
Generative AI is not getting ‘better and better’
The power, carbon and water costs are frightening
Employers are also worried about the ethics and legality of generative AI
In the end it’s all about student learning
Why AI resilience is so urgent
1. Generative AI has few really valuable use cases
Long ago, in ‘what is it good for?’ I defined generative AI as the automated production of digital content, optimised to a norm. Digital content production was to a large extent pre-automated before the arrival of generative capabilities, thanks to the ubiquitous use of certain productivity software (for example Adobe and MS suites) and standardised workflows around them. The content that was scraped to train large language models (for example from Wikipedia, Reddit and Common Crawl) reflected the norms of the particular people and cultures that were over-represented in digitised content, and much of it was optimised for search algorithms, attention rents and click-throughs (which also have cultural biases).
The probabilistic methods that are used to train large language models, and that predict the most likely next element (word, phrase, cluster of pixels) during inference, weight the models’ outputs even further towards the cultural norms and biases embedded in the training data.
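If it helps to see that mechanism rather than take it on trust, here is a toy sketch in Python - with a made-up word distribution, nothing like a real model’s code - of how repeatedly sampling the most likely next element keeps pulling output back towards whatever was most common in the training data.

```python
# A minimal sketch (not any vendor's actual code) of why next-element prediction
# drifts towards the norm: the decoder samples from a probability distribution
# learned from the training data, so the most common continuations dominate,
# especially at low sampling temperatures.
import math, random

# Toy learned distribution over possible next words after "The scientist said".
# The probabilities stand in for frequencies in the scraped training corpus.
next_word_probs = {
    "that": 0.55,      # the most common (most 'normative') continuation
    "he": 0.25,
    "she": 0.12,
    "they": 0.06,
    "nothing": 0.02,
}

def sample_next(probs, temperature=1.0):
    """Sample one continuation; lower temperature sharpens towards the mode."""
    weights = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for word, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return word
    return word  # fallback for floating-point edge cases

# At temperature 0.3 the majority continuation wins almost every time,
# which is the statistical pull towards the training data's norms.
draws = [sample_next(next_word_probs, temperature=0.3) for _ in range(1000)]
print({w: draws.count(w) for w in next_word_probs})
```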
So pre-automation creates contexts in which large language and media models can seem to produce meaningful outputs. One obvious such use case is the production of search-optimised, clickable content. A second is short-cuts in writing code (highly standardised: already well documented via sharing sites such as GitHub and Stack Overflow). A third is providing natural language front-ends (chatbots) on standard interactions such as customer service. These uses have already been trialled extensively. None has so far produced a revolution in productivity, though staff have certainly been laid off in copy production (or have been transferred from direct production to improving the ‘AI’ output) and some boilerplate code is undoubtedly being outsourced to Copilot, though with variable results.
In chatbots, the much-touted ‘success’ of OpenAI partner, Klarna, in laying off 700 staff turns out to be a little less than it seems. (Klarna outsourced customer support in 2023, leading to major problems with backlogs and customer satisfaction. The new ‘AI’ chatbot replaces an older, presumably phone-tree style system, but it is not clear whether the improvements claimed are in relation to the original service, or the degraded one, or whether they include the earlier chatbot system in the comparison.)
In none of these cases is the impact on work ‘transformational’ for the workers involved, except in a negative sense. The introduction of generative ‘AI’ is making work that was already repetitive and highly automated even more stressful and precarious. Its use has led to a greater quantity of copy and code, but as a result the overall value of search results and the quality of online content are degrading; the same goes for the quality of code and its ease of maintenance (something we should all worry about in the immediate aftermath of the CrowdStrike outages).
Unfortunately for higher education there is another use case, and that is the production of student assignments and similar kinds of writing to a rubric (cover letters, for example). This has many implications for the relationship students have to their learning, and to their teachers, universities and courses of study. It’s a far from simple problem, one that I and many other educators have commented on and that I will return to at the end of this post. But here, I want to make a distinction between the reasons students use generative AI and the reasons we are told that educators should actively promote its use. The rationale for the second is: ‘they will have to use AI in the workplace, so they (we) had better get used to it’.
This argument is flawed. Universities are not helpless bystanders to the economies of professional work but key stakeholders and policy makers. A university education is supposed to empower students to shape their futures, including how they relate to different techno-social configurations of work. And universities have responsibilities beyond employability - to justice, equity, the disinterested pursuit of knowledge, and having a planet to live on - that demand a more critical assessment of how these technologies might reshape work, and what harms they might inflict in the process.
But there are now more pragmatic objections to this argument. If by ‘AI’ you mean specialist applications of machine learning, these are being developed and adopted gradually, with expert input and as part of existing expert workflows. Students can get on with developing the expertise demanded by their field, confident that they will then be able to participate in future ML developments (where they add value) as experts, rather than as data workers. If by ‘AI’ you mean the generic shitshow that is ChatGPT, there is no evidence that it is revolutionising work in general, or even making work significantly more productive. Here’s why…
2. Employers are already falling out of love with generative AI
Congratulations if you are so uninterested in the celebrity romance between big AI and big corporations that you haven’t noticed the cooling off that is in the air. So let me tell you that Goldman Sachs no longer sees AI as a game changer for business productivity. Cognizant, an IT consultancy that has just bet $1bn on generative AI infrastructure (so really, really wants it to work), published a survey which found that ‘up to 13% of businesses will have adopted the tech in the next three to four years’. Sorry, how many? The Harvard Business Review has also just dropped a study showing that using AI at work makes people unhappy (and so less productive).
Companies are beginning to pull out of the enterprise version of MS Copilot because its outputs are ‘middle school’ level at best.
Other high-profile AI projects that have been quietly dropped include Meta’s clutch of celebrity chat interfaces, an AI chatbot for Los Angeles school students, McDonald’s AI interface for drive-thru customers, and Elon Musk’s Grok chatbot, reported as shut down for spreading election misinformation. (Actually, as the article makes clear, it hasn’t been shut down, but it has spread misinformation.) The Guardian recently listed other walk-backs, many from the creative industries.
Hedge fund managers are now telling investors privately that AI has been ‘overhyped’, and that its proposed use cases are ‘never going to be cost efficient, are never going to actually work right, will take up too much energy, or will prove to be untrustworthy’. And the Guardian quotes a senior analyst at Forrester saying that:
a lack of economically beneficial uses for generative AI is hampering the investment case. There is still an issue of translating this technology into real, tangible economic benefit.
Tech stocks are falling, as analysts have realised there is a $600bn hole in revenue forecasts. Some of the biggest companies involved - Meta, Amazon, and Microsoft - now admit it will be ‘years’ before any of their AI products are showing a profit. Heck, Microsoft’s own Chief Financial Officer has just told investors that the company’s $56 billion investment in AI data centres is not expected to make a profit from actual, sellable AI applications for more than 15 years.
The pillow-talk promise in this love affair was that AI would make workers more productive, and therefore capital more profitable. Back in March, Gary Marcus (one of the most prescient insiders) predicted that any productivity gains would be modest.
The same pre-automation of content production that makes generative AI quite useful (if you don’t care too much about quality) means that the contexts in which it is useful were already quite efficient. Generic generative AI may provide shortcuts to simple tasks (think of MS Copilot as a Clippy update with some enterprise apps) but it makes some complex tasks harder. No doubt the potential will be talked up for a while longer, but no-one serious is talking about a productivity revolution any more.
3. Generative AI might still restructure work - but not in a good way
We should not see this moderation of the hype as a sign that AI will be less bad for workers in creative and ‘knowledge’ sectors. At every point in the hype curve it is possible to worsen working conditions and cheapen the price of labour. Brian Merchant has made much the same point.
The design and build of large synthetic media models embodies a particularly toxic approach. As I wrote in ‘Labour in the middle layer’, models are built on the unacknowledged, unpaid work of content creators in the past (training data). But they also depend on the hidden labour of thousands of data workers in the present: first to clean, curate and prepare the training data; then to evaluate, annotate and refine model outputs to produce the desired norms. This process took eight months in the case of GPT-4, and although it is a closely guarded secret how many human hours this actually represented, eighty percent of the paid hours on any AI project are estimated to be spent on data work.
In ‘Luckily we love tedious work’, I argued that graduates won’t necessarily find themselves occupying one of the two ‘expert’ groups in this labour sandwich - producing the original, valued content for models to extract, or using models to support them in their expert roles. In the sandwich economy, productivity and profit depend on paying as few experts as possible, and maxing out their value in the middle layer: this is what produces the model itself as a source of value for its proprietors. Growing at an estimated rate of nearly 30% a year, this work is typically outsourced, precarious and badly paid, but it is far from unskilled. Data outsourcing companies now specialise in sectors or industries; even large general crowdsourcing platforms prefer workers with specific expertise (observations from Muldoon et al.’s 2024 Typology of AI Data Work).
Tasks such as these from ScaleAI may relate to generic data, such as images of public streets, or to highly specialised data, such as artistic works, medical or military images, engineering drawings or business spreadsheets. As this layer of labour becomes more specialised and segmented, it has even more use for graduates.
In researching the links between ‘AI’ and the military for another post (coming soon), I discovered that the US military’s Project Maven uses community college students to annotate satellite images for potential military objects ‘for $15 an hour, with course credits thrown in’. These students are definitely ‘AI-ready’, and can move seamlessly from their college degree to the kind of work available through platforms such as ScaleAI. By the way, the UK Government recently celebrated ScaleAI choosing the UK as its centre of European operations, as though it were the next DeepMind instead of a gig-work company highlighted by the Oxford Internet Institute for failing to meet basic standards of fair work.
One way that AI companies are trying to make up for the hallucinations, biases, privacy risks and running costs of their product is by telling businesses to plug in their own data infrastructures. Companies are running MS Semantic Index on the heaps of documents they have accumulated in MS 365, or annotating valuable company IP and documentation to make it ‘AI-ready’. Generic AI models can then be ‘plugged into’ these Retrieval Augmented Generation (RAG) sources to produce business-specific insights with greater accuracy and fewer privacy risks. It’s a neat way of getting businesses to do the data work that produces most of the actual value, while still extracting rent for the generative front end. But it also means that work is being restructured towards data accumulation and management inside the businesses that are buying the generative solutions.
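Roughly, the pattern looks like this - a deliberately crude Python sketch with made-up document names and a stubbed-out model call, not any vendor’s actual API. The retrieval and curation at the top is the business’s own data labour; the ‘generative’ step at the end is the part being rented.

```python
# A minimal sketch of the RAG pattern described above: the business does the
# data work (indexing and retrieving its own documents), and a generic model
# only rephrases what was retrieved. All names here are hypothetical.

COMPANY_DOCS = {
    "refund-policy.txt": "Refunds are issued within 14 days of a valid return.",
    "onboarding.txt": "New staff complete security training in week one.",
    "travel-policy.txt": "Economy class only for flights under six hours.",
}

def retrieve(question: str, docs: dict, top_k: int = 2) -> list[str]:
    """Crude keyword-overlap retrieval standing in for a vector search index."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_prompt(question: str, passages: list[str]) -> str:
    """Stuff the retrieved company text into the prompt sent to a generic model."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this company context:\n{context}\n\nQuestion: {question}"

def call_generic_model(prompt: str) -> str:
    """Placeholder for a call to a rented foundation model API."""
    return f"[model output conditioned on]\n{prompt}"

question = "How long do refunds take?"
print(call_generic_model(build_prompt(question, retrieve(question, COMPANY_DOCS))))
```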
RAG is basically Knowledge Management for the 2020s. In case you were not there at the time, KM was a business improvement process that was very profitably touted around by management consultants in the 1990s and early 2000s. It urged businesses to capture every thought and gesture of their staff in the form of digital records, supposedly allowing the business to leverage all that knowledge without the expense of employing the knowledgeable staff who produced it (sound familiar?). Mostly it failed, though arguably today’s integrated MS systems are a hangover from the fantasy of total knowledge capture leading to vast productivity gains. Only you can judge how happy, creative and fulfilled the MS panopticon makes you in your job.
In fact, many of the conditions that led to the failure of KM also pertain to Generative AI. A recent study by Upwork (as summarised by Cory Doctorow) found that:
96% of bosses expect that AI will make their workers more productive;
85% of companies are either requiring or strongly encouraging workers to use AI;
49% of workers have no idea how AI is supposed to increase their productivity;
77% of workers say using AI decreases their productivity.
Employers who want to make their workers more productive are almost comically susceptible to hype of this kind. Once they have bought in, the onus is on their staff to make the miracle happen, even though the idea of documenting everything you do and then being expected to do more of it in less time (or gracefully resign) is not an attractive one. But as in any Ponzi scheme, there is no benefit in telling the person above you that the promise they have just been sold is an empty one. The only option is to keep pushing the promise downhill, until it arrives with those least able to push back.
The irony that attempts at automation can actually hamper productivity is not a new one. It has been researched for many years across diverse industries. In ‘Ironies of Generative AI’, Simkute et al. revisit earlier findings concerning:
a shift in users' roles from production to evaluation, unhelpful restructuring of workflows, interruptions, and a tendency for automation to make easy tasks easier and hard tasks harder…
and find them playing out in full force in the GenAI adoption crisis. But the belief among graduates and professional workers that they can be ‘replaced by AI’ is profoundly depressing and potentially undermining, and it only makes it more likely that they will accept working conditions that are more precarious, less fulfilling, more isolated and less well remunerated.
Educators should already have been asking: ‘is it really our purpose to prepare graduates to be the most productive humans-in-the-loop they can be?’ and even ‘whose productivity, for what greater human good?’ We can now add to that the question: ‘what productivity are you actually talking about?’.
4. Generative AI is not getting ‘better and better’
AI models carry on not getting exponentially better - not even very much better - exactly as I predicted here. There is no fix for hallucinations, as the CEOs of big AI admit here, here and here. The sheer size and inscrutability of the trained data structure make it impossible to fix all the connections that might spit out the ‘wrong’ answer, though this does not stop big AI spending $millions on data workers to keep patching up the worst examples. OpenAI has still not delivered GPT-5, despite many promises and trailers. The people who are saying that we should be grateful for each iterative improvement are the same people who were saying a year ago that we were ‘only at the start’ of AI’s incredible capabilities, and that they would have improved beyond recognition in a year’s time.
Many cognitive scientists have argued from the beginning that generative AI has computational and theoretical limitations: read, for example: The Cognitive strengths and weaknesses of modern LLMs, Reclaiming AI as a theoretical tool for cognitive science, or Intelligence without reasoning. All of them find it intrinsically unlikely that there will be a major breakthrough in performance. There may of course be iterative improvements due to further scaling up of parameters, or improvements in training and post-training human reinforcement learning. But these are not only costly (and investors are losing faith): they are also pushing against other real limits.
For example, AI is fast running out of quality human-generated data. These projections do not include the impact of synthetic text flooding public sources of information, poisoning the culture of text production and diluting the quality of any human text that still gets out there. (A timely article in Nature also finds that training AI on AI-generated text leads straight to model collapse - a shame, because that is precisely the solution to the lack of data being touted by people who never thought that human content was up to much anyway.)
AI is also running out of compute. There is a world-wide shortage of GPUs (AI chips) and Nvidia has delayed the launch of its next generation Blackwell chip due to production flaws. But it is probably a good thing there are not enough chips to power up the ambitions of big AI, because the world just can’t afford to power them. What Sam Altman calls an ‘energy breakthrough’, required to produce the ‘next generation’ of AI, most analysts are calling an ‘energy crisis’. Because…
5. The power, carbon and water costs are frightening
The full-cycle carbon costs of generative AI models are only just starting to be seriously researched. But all the early indications are that they are far, far higher than anyone thought when I first wrote about this issue. There are the environmental costs of chip fabrication, including the mining of rare metals and the use of largely coal-fired power in Taiwan. These costs are accelerating, as one of Nvidia’s strategies for staying on top of the market is to rapidly render obsolete and replace its highest-end GPUs. There is the power required to run the data centres where the models are trained and hosted, recently predicted to rise by 160% before the end of the decade, and there is the fresh water required in all these processes.
Above all there is all the additional compute required by the whole connected world now that power-hungry inferential processes have been embedded into so many basic operations (Luccioni et al. explain here why inference is so disastrously expensive). Emissions have soared at Microsoft and at Google, threatening any gains they have made in carbon reduction. Google alone is pouring $billions into new data centres. But most of the additional compute, and therefore most of the carbon cost, is being fired up not by the big AI companies themselves but by the companies buying into their AI solutions.
Since the same companies that sell ‘AI’ also sell IT infrastructure, aka cloud computing and cloud services, and since they have an interest in hiding (‘distributing’) all that additional carbon cost away from their own balance sheets, we will almost certainly never know the true impact. But what is really sick about all this is that finance is now piling into power and utilities stocks. The AI boom it created by piling into AI stocks may not last, but the people convinced by it are still buying chips and cloud credits. So it’s win/win for finance, so long as the little people just keep believing, and so long as no-one needs a planet to live and breathe on.
I wrote that the greatest climate threat from ‘AI’ was its ability to divert attention, financial power and political will away from the economic transition we urgently need, and into tech non-solutions like nuclear fusion or some climate engineering miracle designed by a future AI brain. But the capacity of ‘AI’ to delay the real solutions to the climate crisis is even more dangerous if, in the intervening years - and there’s a phrase to make any climate scientist despair - ‘AI’ is ramping up carbon output to the max.
6. Employers are also worried about the ethics and legality of generative AI
Employers are concerned about the environmental costs, and also about the toxic biases of generative AI and the negative implications for their DEI agenda. There is plenty of evidence that the use of AI leads to bias in recruitment, for example, and companies are worried about the legal implications. The ‘big four’ accounting firms - that offer some of the top graduate opportunities in the UK - have ‘prohibited’ the use of AI in job applications on equity grounds, and on the grounds that it might select candidates who are disposed to ‘cheating’ - not the best start in an accounting career. Of course ‘humans are fallible’ in recruitment, but few human HR managers have the reach of an AI system, and they can always be held to account. When it comes to gender and race discrimination caused by AI, there is currently no clear legal framework, something that puts people at risk immediately, but also puts companies at risk of future claims.
The issue of potential copyright infringement in the training of generative AI models is still not resolved. Major content owners are rapidly being bought out by AI companies, and copyright law itself is being undermined by campaigns like this one from OpenAI, to the effect that if a government insists on upholding laws that are inconvenient to AI, they won’t get access to the sweetie jar. Still, all this is another source of uncertainty (and therefore legal expense) to any company contemplating whether and how to adopt.
And finally, never mind the factual errors and productivity fails: generative AI is becoming a toxic product. From deepfake image-based abuse to undermining democracy and manipulating elections, from polluting science to ‘cancelling emotions’ (especially useful in those new AI-supported call centre roles), from racially biased image generation to new forms of online addiction, companies are having a hard time persuading their customers that the ‘AI’ in their own products is squeaky clean. It doesn’t help that the biggest players are often the biggest offenders. Nvidia, for example, has just been caught scraping YouTube videos for facial data, and Meta garnering clicks from AI images of extreme human suffering. But, hey, celebrity chatbots everyone!
7. In the end it’s all about student learning
It is far from clear that employers want or need (generative) AI-ready graduates. All the things that corporations are concerned about right now are things that critics - many based in universities - have been saying from the start. So where are universities in this debate? At least they could be engaging with it as a contested zone that they might still influence for the better, rather than as a predetermined ‘AI future’ they must provide for.
Existing uses of machine learning and generative AI at work show that they can automate the routine parts of tasks. But you can only know how this automation will be useful if you are already an expert in that task. You can only initiate and guide the generative component if you are already an expert in that task. You can only correct for errors and refine the outcomes if you are already an expert in that task. You can only participate in the design and development of new workflows if you are already an expert in that task. This is true whether the task is writing prose or diagnosing cancer. So universities should continue to produce graduates with expertise, confident that they will be able to accommodate any efficiencies that computation may offer down the line. Technologies are designed for ease of use: expertise is hard to acquire.
As I said at the start of all this, the real issue is not what students might be doing with generative AI in some possible future, but what they are doing with generative AI now and how that shapes their individual learning and development. Also what they aren’t doing when they are using generative AI that might be more valuable to their learning (opportunity costs). And finally, what collective cultures and practices are being shaped by their use/non-use of generative AI that students will take forward into work and life.
I am confident there are uses of generative AI by teachers that are helping students to learn. The question is whether they need to be learning in this way. Are they learning in ways that our theories of learning tell us will be beneficial to them in the long term, as workers and as people with an intellectual life and culture beyond work? Are they learning in ways that accord with our values for university learning as a set of cultural practices? I have not seen evidence about this, but then I don’t think many people are asking these questions. What I have seen is a vast amount of work being done by teachers to devise tasks with generative AI, often ingeniously, and always with commitment to student learning. I believe that many of these tasks could be at least equally beneficial to student learning if they did not pass at any point through a synthesis engine.
To give some examples:
Prompt engineering, a skill that is already obsolescent. Generative AI is basically evolving into an interface, and interfaces need to be frictionless, so most of the ‘work’ involved in crafting prompts is already being absorbed into further layers of automation (think Copilot suggestions, and any ‘AI-based’ app that is essentially poking prompts into a foundation model and serving you the results). What will never be obsolete is the skill of asking the right question, or engaging in a dialogue with genuine curiosity. So go ahead, have students devise questions (call them prompts if you like) to clarify their thinking. These can be used in discussion, or to carry out a literature search or some other iterative data query, or to frame the reading of a paper or the watching of a video. Want to stage interesting debates and scenarios? Let students devise and enact them. Or have students write their own test and revision materials. Let them write detailed prompts to an imagined language model to ‘test me on this topic’, but then complete the instructions themselves. Because they know what it is like to be a student of this topic (and they have theory of mind), students will do this far better for each other than a chatbot can, but more importantly they will learn from every part of the process.
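To make the ‘poking prompts into a foundation model’ point above concrete, here is what such an app typically amounts to - a hypothetical Python sketch with a stubbed model call, not any particular product’s code. The prompt craft is baked in once by the developer, which is exactly why it is not a durable skill for students to rehearse.

```python
# A sketch of prompt engineering being absorbed into the application layer:
# a typical 'AI-powered' study app is little more than a stored prompt template
# wrapped around a foundation model call. Everything here is hypothetical.

QUIZ_TEMPLATE = (
    "You are a tutor. Write {n} short quiz questions, with answers, "
    "on the topic: {topic}. Pitch them at first-year undergraduate level."
)

def call_foundation_model(prompt: str) -> str:
    """Placeholder for the rented model API the app calls behind the scenes."""
    return f"[generated quiz for prompt: {prompt!r}]"

def quiz_app(topic: str, n: int = 5) -> str:
    # The 'prompt engineering' lives here, written once by the app developer;
    # the end user never sees it.
    return call_foundation_model(QUIZ_TEMPLATE.format(n=n, topic=topic))

print(quiz_app("photosynthesis"))
```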
Breaking down writing tasks into component parts. This is always helpful and UK students do not get enough support with it IMO. But the components do not need to be practised with the aid of an auto-complete word-generator. Ingenious prompts can be prompts for actual writing, not for synthetic production of text. Also, the components still have to be put back together again. At that point students realise that ideation is not separate from gathering evidence, and gathering evidence is not separate from summarising/annotating it, and notes are not separate from ideas and opinions, and ideas and opinions are not separate from the words and images they are expressed with. Writing involves iterating among all of these, and testing the results against facts in the world and with other thinking people. So to invite students to outsource all the parts while they take responsibility for ‘bringing it together’ is to misunderstand the nature of writing, as well as to confuse students about it. And the nature of thinking. Also, I notice that some guides tell students to outsource the planning and overview but be sure to write in the detail themselves, while others tell students they must own the planning but can use generative AI to help with the detail. Just who is most confused about writing here?
‘Doing research’. As everyone knows, generative AI is prone to mistakes, hallucinations, non-referencing and false referencing, and bias to the norm. Many students now use ChatGPT/GLM etc. for basic research, that is ‘finding stuff out’, and happily most research-based tasks designed by educators are meant to illustrate the problems with doing this. But doesn’t this rely too much on students already thinking like experts in their field? If students can identify a problem, what resources are they using to do this, and how are they judging the reliability of those sources? If students can provide a better answer, what knowledge and expertise are they drawing on? And if what they really need is to develop those alternative resources of knowledge and expertise, any search or ‘research’ they do with ChatGPT must have an opportunity cost. So ‘spot the AI mistakes’ might be a cautionary exercise, or a one-off revision test, but I don’t think it provides the motivation or structure students need to develop information skills or disciplinary practices or foundational concepts or epistemic judgement for themselves. The experience of many students in this situation, that the AI is ‘often right but sometimes wrong but you can’t tell when’, may lead to the worst possible combination of dependency and anxiety. Instead of focusing on specific errors, such exercises are surely better directed to revealing the generic mistake of relying on an autocomplete engine at all, and explaining the biases, obscurities and injustices that make these engines so unreliable. And that leads naturally into a discussion of research strategies and methods that are trusted in the subject discipline, and why they are trusted (and so into epistemology, even if that word is never used).
If universities believe in the value of learning at university, they will offer spaces for collaboration and knowledge-building. They will provide students with models of self-development, and a culture of respect for the finite planet, and the different epistemic cultures and traditions that enrich it. They will be places where public knowledge and specialist expertise are actively being produced, not passively accumulated. Then they will develop not only ‘good workers’ but people who can build good workplaces.
8. Why AI resilience is so urgent
As I argued in ‘writing as passing’, student work is almost by definition the production of content, optimised to a norm. This makes the effects of generative AI on learning and assessment profoundly disruptive. Assessment is a performance, but we make a contract with students that by jumping through the hoops (usually content production of some kind) they will in the process be developing some practice or understanding useful to them beyond the scope of that performance. In other words, they will be learning something of value.
If a chatbot can jump through the same hoops, to something like the same normative standards, and if we can’t reliably detect the difference, every part of that contract breaks down. Students don’t have to go through the process we designed for them. Worse, we can’t follow what their process is. The models are black boxes that refuse to give up their secrets. How (for example) do the patterns they encode correspond (or not) to the schemas that experts use to organise their thinking? (Remember constructivism? The first lesson of teaching 101 is that learning involves actively building conceptual schemas and patterns of meaningful activity. Learners have to do this for themselves, on their own terms, based on their own prior experiences. The schemas and practices of experts can be valuable as models. The parametric structures and weights of a transformer model? Not so much.)
Because of the secrecy engendered by ‘academic integrity’ scares, students’ process with these models is a second black box on top of the first. We see what comes out, but we have very little idea (unless I am missing some new, more subtle research) what students are putting in. Worse still, students can no longer be sure that whatever process they are learning has value, if a computational process can (apparently) do the same. If they have followed guidance and only used generative AI for some parts of the process, what grade might they have got if they had used it for the whole process? What does that mean for the value of the part they did for themselves? What does it mean for the value of the part they didn’t do for themselves?
In February, a survey of nearly 500 students found:
preliminary evidence that extensive use of ChatGPT has a negative effect on a student’s academic performance and memory; [therefore] educators should encourage students to actively engage in critical thinking and problem-solving by assigning activities, assignments, or projects that cannot be completed by ChatGPT.
Tempting though it is to wave at these results and retire, I am not convinced they show ChatGPT is a cause of poor performance. But they do identify - at least for this sample - the kind of students, and the kind of learning situation, for which the use of ChatGPT becomes compelling, and they show that for these students the academic outcomes are poor. Another recent study on student perspectives found that the more confident students were in their own writing, the less they approved of the use of ChatGPT for writing. These studies are useful, but they raise more questions than they answer. We need more studies of this and more qualitative kinds; studies of different uses, different students, and different settings; we need far more longitudinal studies; we need more critical studies that look beyond immediate contexts of student use. But mostly we need collaborative work with students that goes beyond surveys, inviting them to explore openly their processes of production, and their experiences, both personal and socio-cultural. (What are the pressures on students to which ChatGPT/ChatGLM (etc) is the solution? How do they value their own processes of reading, writing, note-making and knowledge construction? How does the belief that ‘everyone is using it’ change their perceptions of academic work? Such experiences will be complex and differentiated.)
Instead of the AI-ready graduate, this might give us a sense of how the ‘AI-resilient’ graduate comes into being. How do they develop the expertise to critique and mediate the generative output, not just in class-based activities but in their own study and intellectual life? What skills and practices of production might help them to withstand future cycles of planned deskilling and automation? How can the promise of instant productivity and performance be mediated by other values and hopes, for their learning and for their working futures?