14 Comments
Aug 11 · Liked by Helen Beetham

Thank you so much Helen. The hype (particularly from that ignoramus, Elon) has been getting me down for some time.

I am in the process of designing an antidote - a collaborative platform for sharing ideas, specifically about Shakespeare's plays, using visualisation. If you'd like more information, let me know at roytwilliams@gmail.com. And do read Feeding the Machine - what I would describe as a 'micro-anthropology' of what goes on in the 'back rooms' of AI. Worth buying.

Best ...

Roy (Williams).


Can’t wait for the future where people are just moving AI content to different places so AI can read it, all the while using the power of a small sun to keep the whole party going.

Aug 8 · Liked by Helen Beetham

Thank you, Helen - so much to consider here. As a teaching fellow for medical undergrads, I feel conflicted, torn between the need to 'educate' students about GenAI (since they already use these tools, isn't it our responsibility to provide guidance - especially on patient and research data privacy, equity, bias, carbon footprint... - but more importantly, so that they don't outsource their learning to these tools?) and the worry of becoming a GenAI 'enabler' or 'incentiviser' (if that were a word) by providing that guidance. Does that make sense?

author

Completely makes sense, Andreia, and I hope my thoughts in response to Guy above also speak to some of what you are bringing here.


Great overview of where we are now as we slide down into what the people at Gartner call the 'trough of disillusionment'. I especially like this line: "the real issue is not what students might be doing with generative AI in some possible future, but what they are doing with generative AI now and how that shapes their individual learning and development."

author

Thanks Rob. I know the learning is where many great critical educators have been putting their energies and their thinking. But I don't feel they are being supported by the overall narrative that any 'AI' use is good for the CV.


Two comments:

1. The good and the bad news is that it is 'baked in' to 'science' that its products are, in principle, able to be used by anyone, anywhere, anytime. In short, science strips out all subjective presence and agency, and can only exist if it is 'context free'. 'AI' is the epitome of commodification - it is as close as we'll ever get to a 'frictionless' commodity. And that costs. Science and AI are 'natural' bedfellows.

2. Particularly in our times (of climate 'change' / collapse), the suitability of planetary ecologies for life is increasingly threatened by commodification. We already have 'circular economies' and 'ecological' thinking. We (just) have to wean ourselves off the selfishness / selflessness of context-stripped data, let alone abstract knowledge.

Students need to become adept at using science AND become aware of the costs and 'externalities' of science.


What a great piece! My first time reading your blog, so the recap and the summaries were all very helpful. I particularly liked this, a brilliant articulation of the need for expertise:

"Existing uses of machine learning and generative AI at work show that they can automate the routine parts of tasks. But you can only know how this automation will be useful if you are already an expert in that task. You can only initiate and guide the generative component if you are already an expert in that task. You can only correct for errors and refine the outcomes if you are already an expert in that task. You can only participate in the design and development of new workflows if you are already an expert in that task. This is true whether the task is writing prose or diagnosing cancer. So universities should continue to produce graduates with expertise, confident that they will be able to accommodate any efficiencies that computation may offer down the line. Technologies are designed for ease of use: expertise is hard to acquire."

While I share much of your scepticism, I'm still looking for the opportunities where these tools - or variants of them - might be useful. I do wonder if coding (as you've mentioned a few times) might be one. I've run GenAI coding workshops, and to someone who has never coded before, the ability to create functional code from text prompts seems almost magical. For those who can code, it can speed up some of their work.

But even over the course of a couple of hours, the non-experts start to realise their limitations - they can't fix the code when it goes wrong, and spend ages trying to get the Chatbot to fix it for them. And the experts realise that they are drawing on their existing knowledge to guide the Chatbot - and to check the quality of the output. One person remarked that the code they'd created worked, but was 'not good code'. So expertise is still required here.

I'd also offer another 'use' for GenAI that I wrote about in a piece for Cancer Research UK late last year, where I proposed a 'ChatGPT Razor', a thought experiment something like this: if GenAI can produce content indistinguishable from a person's (say, on some application form), what does that mean for the value of that question? Do we need to ask it at all, if everyone has the tools to produce an 'ideal' answer - or should we eliminate that particular element from the process? Might we be able to filter out some burdensome bureaucracy from various systems by using this kind of thought experiment (without any energy cost, copyright violation or hallucinations)? https://news.cancerresearchuk.org/2023/11/07/research-with-integrity-what-you-need-to-know-about-generative-ai/

Just a thought!

author

Great thoughts, Andrew, and thank you for the feedback - it means a lot.

I have written a bit about the coding use case, since many experts do seem to find efficiencies here (with all the caveats you also noticed about quality and expertise). But perhaps it is the exception that proves the rule. A lot of code was already boilerplate, swapped and shared via e.g. HN and Stack Overflow (or it used to be), so you'd expect Copilot to be pretty good at slotting in those standard code sections. (Important to say here that the work of development is far more than coding - but that is a different issue.) I'd point out that of all the media that have been modelled with transformer architectures, code is the most standardised. There are conventions about how you write code; source code (e.g. Python) is a set of standard specifications for calling on machine code, and all these 'grammars' are far more tightly coupled than meaning in natural language or in image making. So although I think all digitisation is a kind of pre-automation - creating the possibility for deskilling and routinisation, as well as for efficiencies - code is probably going to be out in front.
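To illustrate what 'standard code sections' means here, the following is a minimal, hypothetical sketch (the file name and column name are invented for the example): the kind of read-a-CSV-and-count-things routine that has been posted, copied and answered thousands of times in public, which is exactly where a model trained on that public code can be expected to do well.

    # A typical boilerplate pattern: read a CSV and print a simple summary.
    # The sort of highly conventional code repeated across many public
    # repositories and Q&A threads.
    import csv
    from collections import Counter

    def count_values(path, column):
        """Count how often each value appears in one column of a CSV file."""
        counts = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                counts[row[column]] += 1
        return counts

    if __name__ == "__main__":
        # 'data.csv' and 'category' are placeholder names for the example.
        for value, n in count_values("data.csv", "category").most_common():
            print(f"{value}: {n}")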

Re. your 'razor' (is that Occam's??) there is a meme doing the rounds to the effect that 'if an AI can do your job, it should do your job'. This is problematic for education, since we ask students to do many things that don't have extrinsic value (we don't ask them to write essays so there can be more essays in the world) and they are still 'worth' doing in the context of learning. But even in the workplace I think it is a piece of misdirection. As I argue in my two pieces on the Turing test, to compare human with machine outputs on a level playing field, the human (social, cultural, embodied) context has to be stripped away. The answers have to pass through a computer system, or some standardised system, to be evaluated according to equivalent metrics. And those are not the metrics by which human activity or work is in the end valued, in economies that must serve human needs. I think the injunction should rather be: if you design work that AI can do, you should design it differently.

I think you are saying something more like: if AI can answer the question, ask a different one. This could usefully be applied to assessment design, but maybe it is even better applied to students themselves. This is the move that I recommend in my writing on the Turing test, in fact, since the judge (who asks the questions) is the only mind that can really answer for itself as mind. Students asking questions of chatbots for the sake of asking questions is far preferable to students asking questions of chatbots for the sake of the answers.

A chink of light is perhaps that all this is demanding deeper and more urgent thought about thinking than we have been doing in education for a long time.


This is a great piece, both for what you say and the articles you point to. Some recent comments I have heard from our professors make me wonder if we need to:

- Begin having some evaluation of incoming students to determine if they have come to rely too much on GenAI and have not acquired the skills they need.

- Have more remediation available to them and to existing students who are letting skills atrophy due to GenAI.

- Have some form of counseling for them if we believe the problems are in some way analogous to an addiction. (I am not saying that AI literally creates an addiction but that it does create a serious form of dependence for some people.)

I do not know if I am overreacting. It is just that this is the direction of my thought on GenAI in American universities at this point.

author

Thanks, Guy. I think there are some recognisable echoes here of the arrival of 'the internet', and of how we tried to support students in their use of that new medium/technology/social system. As you may remember, there was great concern about students losing the ability to do things 'without the internet', which in retrospect seems to have been misguided (and was certainly often conservative). AI, they say, is the same kind of proposition. I think this is right in the sense that the genie is out, and we are now living in a post-generative-AI world. I think it is wrong for at least three reasons. First, it forgets the enormous investment that went into students' information literacies, evaluative skills and study habits so that 'the internet' could be a gateway to valuable knowledge rather than a distraction from it. And even so, there are many studies that show a correlation with various kinds of inattention, compulsion, surface approaches to learning, and poorer outcomes. It isn't at all clear universities are willing to invest an equivalent amount in guidance this time, let alone the individualised attention you suggest. Second, I don't believe generative AI offers nearly as many opportunities for learning, while it offers even more opportunities for distraction, compulsion and disengagement. (This probably requires another post to evidence and explore more fully.) And third, generative AI is a toxic product built on an exploitative labour model, which was not true of the internet in its early years. True, the open, convivial network was taken over by proprietary platforms. But big AI is an intensification of those forces, not a return to year zero.

Personally I don't think the bad political economy of generative AI can be separated from its bad model of learning. But again, that is a longer post.


This is fantastic. Thank you! Last week I posted about the folly of basing pedagogical decisions on AI predictions. https://www.criticalinkling.com/p/teachers-not-time-travelers-ai


as for learning how to write and think, 12 years of school taught me nothing about this. the school system seems to be better in america, horrible in england and even worse in east asia.

when i got onto the internet in 1995, nothing changed. then, in 2002, i found a new website that catered to photography critique. then i started writing seriously, even if 90% of the comments on the site were "great composition, please rate my photo +3".

eventually, in 2004, i finally got my first negative critique. it was from one of the 3 people who ran the site. it opened my eyes to not just flattering people and making things up to please them, but to honestly saying what was on my mind. this wasn't liked by the other admin (the one who critiqued me quit the site not long thereafter), who only wanted more of the +3 sort.

then, after being banned there, i went onto an art site, and had my first mental breakdown not long after. it lasted 3 days. afterwards, i decided that i should forget all the writing (and societal) rules which prohibited my creativity, and just build up from the beginning, using the rules which made sense to me.

today, i have written for about 30,000 hours. writing comes naturally to me, but it was a long process. i honestly have no idea how other people do it. capitalization reduces my writing speed by at least 10x, and introduces many spelling errors as well, as i can't write at the speed i'm thinking. i started this already 10 years prior to my awakening, which made me lose 90% of my score on an english test, merely for not using a capital "i".

i also see this problem with others, and with the introduction of smartphones, it's gotten exponentially worse. if i have 50 wpm writing on a keyboard (about an A4 page per hour), and capitalization reduces that to 5 wpm, writing on a smartphone makes it even worse. people can't be bothered these days to write more than a sentence or two (rife with spelling errors), if you're lucky. mostly it's just emoji or like spam.

this is true for all social media i've found, although slashdot and reddit are better. the problem with them is the moderation system: no matter how hard i try, i still get modded into the ground (even if i read 100 comments for each one i write, even if i take hours writing a single one). on slashdot that entails your comments not even showing up for most people (and not in search results), and at -2 you are automagically called "troll" by the system, and for anyone bothering to read what you've said, that's also how you will be treated.


AI uses everything ever posted on the internet (dating back even to ARPANET and CERN), using the CIA's archive on magnetic tapes to train their LLMs. They have a security office right next to the biggest router in the world. This also includes everything captured on smartphones, webcams, and digital surveillance. Don't fool yourself: all the cameras and microphones on all your devices stream 24/7. TikTok was the first social media site caught doing this, but everyone does it. Thanks to the nanochip in the corona vaccine, your bodily values (including thoughts and emotions) are transmitted to the closest device via Bluetooth. Nowadays it's "meth-induced gangstalking" if you think of "hamburger" and then a moment later, when scrolling Facebook, you get an ad for McDonald's, but people talked about this all the time in 2020. I have asked around some companies how big my deep data profile is in terabytes / petabytes, and how much it would cost, but haven't gotten any good answers yet. I know it's traded on the darknet. You can try searching for site:.cfd and your full name; when I tried it before I got 150k results on video sites, seemingly everything recorded where I was featured. Sometimes it's also my deep data profile in tags (just a txt with tons of words); the most common ones are in the summary.

as for "the training process". i saw the jobs mentioned in a youtube video lately. workers are paid like $0.01 per identified image. im sure though most of them dont care at all about the quality they produce. sweatshops and specially trained bots doing it is surely involved. nightcafe.studio is still really sloppy. when they announce something new, its usually worse than the major one.

wrote this for slashdot (but can't post as i'm modded -1) https://slashdot.org/comments.pl?sid=23434962&cid=64739096 "Hobbyists Discover How To Insert Custom Fonts Into AI-Generated Images"

They can't. Some months ago Nightcafe.Studio launched a text generator. It's almost the same as before: it can barely get a few letters right. Yesterday we got a new one again. i tried the 5 free generations using the prompt: text "för emma 79" cyberpunk dj, and these were the results https://imgur.com/a/QR5CpIp - as you can see, only 1 generated the text. if you look closely though https://imgur.com/a/09Yi7K4 it's based on a stolen 3d generated image (auto screenshot, see rejected slashdot article https://slashdot.org/~Tsofmia+Neptlith/submissions ) from ArtStation. (Strange, considering their controversy before https://www.theverge.com/2022/12/23/23523864/artstation-removing-anti-ai-protest-artwork-censorship )

Together with ChatGPT, these generative AI models just scream "spaghetti code created to impress investors". I don't really get the problem with adding text: as most use the latest Unreal Engine for people and settings, why can't it add font support?

"The models are black boxes that refuse to give up their secrets."

actually, no. lately there was news that Chinese hackers had stolen 77 TB of data. not long thereafter, a Chinese chatbot was released. it had the exact same responses as ChatGPT. when someone asked the company to do something about it, the bosses replied with "don't be racist".

so the information is free "for me but not for thee", just like how governments know everything about us, but we're not allowed to know anything about them.

when i went to university in 2010, one of the teachers said he wanted to grade us based on a daily blog we would write in, but couldn't figure out how to assess its value. after the term was over, we got the normal tests. i flunked every single one, and dropped out. i asked later for my score, and got 3 out of 20 possible points across all tests and essays - a complete disaster.

BTW, AI doesn't corrupt science - rather the opposite. Google released an AI based on Google Scholar. it was shut down after 2 days, because it gave too truthful results (i.e. "conspiracy theories", which were obviously based on true scientific research).
