This theme really gets to the heart of my criticisms of generative AI: the ways it has been developed, marketed and integrated into everyday life, and how it is reshaping relationships at the most personal level as well as the big stuff like work and power and money. Which is of course what technology always does, but not in a deterministic way. So it is worth asking how a computational-mathematical technique like probabilistic modelling has been adopted for some social projects rather than others, and how it might still have different uses – or perhaps how it might restructure our world in different/better ways.
Privatisation is a good theme for this week. At the UK’s ‘safer AI summit’ party, giant technology corporations showed off their relationships with government, and it was clear who was in charge. Rishi Sunak, in a cringing and cringe-making interview with Elon Musk, seemed to make a bid to sell him the UK. Behind the scenes, while Rishi and Elon gurned happily at the idea of everyone having an AI friend, the Department for Education was shovelling more public £millions into the Oak National Academy as ‘a first step towards providing every teacher with an AI lesson planning assistant’. An idea that does not seem to have had any safety assessment or evaluation before it was funded, and is strongly resisted by teacher unions.
In fact, ‘artificial intelligence’ as a term has long been associated with privatising public goods. I’ve written about this in relation to generative AI and the knowledge commons – the way foundation models scrape content and information from public sources and turn them into private assets. Users – who might have contributed to the original assets as content producers or as data subjects – become clients. They may be paying directly in subscriptions, or indirectly as clickbait targets and data providers. In return they get a new service, perhaps a chatbot front end or a ‘personalised’ recommendation system, that can be generated algorithmically at very low cost. The new asset owners might also provide services that are costly to develop but allow them to capture whole businesses and markets. Trading strategies derived from financial data, for example, or risk models derived from healthcare data, are proving so valuable to finance and insurance companies respectively that partnering with the big AI models is beginning to look like the only way to stay in business.
This is how knowledge/power/capital gets concentrated in a few corporations. Not by some emergent properties of technology but through deliberate business strategies and buy-outs. Legal frameworks that might protect the public realm are actively undermined or evaded, whether these are anti-trust laws in North America or data protection laws and AI regulation in the EU, tax regimes or copyright laws everywhere.
Beyond knowledge: privatising public services
Knowledge matters. But so-called AI projects are involved in privatising even more essential public goods such as healthcare, education and government. When public bodies invest in data and IT systems, there is always a transfer of public funds into the profits and dividends of IT companies. There is also a transfer of data and control, away from sources of public accountability such as trust boards and elected officers, into corporate structures and opaque technical systems. This has been going on since the dawn of commercial computing, as Nick Srnicek has documented in Platform Capitalism. It is difficult to know how much accelerant is being added by recent developments in ‘generative AI’, as everything IT now has to be described as ‘AI’ – which is telling in itself. But there are some distinctive new ways that public goods are being captured through machine learning and data at scale. I’m going to use healthcare as an example, as the sector is running ahead of education in the AI race, but there are plenty of parallels that will emerge as we go along.
Let’s think about two different ways ‘AI’ can bring private platforms into healthcare: through apps that privatise the patient experience at the edge, and through the capture of public data at the core. Healthcare apps are now big business, and are increasingly used to manage conditions alongside or in place of medical care. This list of ‘other digital and healthcare services’ from the NHS, for example, includes apps that track patients’ conditions, that manage access to services, and that triage symptoms to decide if they are urgent enough to need a medical appointment. Many of these apps use or claim to use machine learning across large numbers of patients and their data streams. All are paid for out of public funds.
Medical apps have been known to give dangerous advice, about eating disorders and sepsis, for example. They have shared patients’ menstrual cycle data with advertisers, and are known to have magnified health inequalities during the Covid pandemic. In fact only 20% of healthcare apps reviewed by an industry body this year (2023) met quality standards, and nearly 90% failed to keep sensitive data secure. It seems that few lessons were learned from an early partnership with Google’s DeepMind that resulted in the NHS being reprimanded by the Information Commissioner for exposing patient data. Just this year, the WHO renewed its calls for more rigorous oversight of AI in health, citing exactly these concerns.
But even when they give accurate advice and keep personal data secure, health apps have a pervasive effect on public healthcare. Through data streams and nudge behaviours they change patients’ relationships with their bodies. Healthcare providers are no longer just treating people, they are treating their data doubles: data that may be distributed among many private organisations. And since clinical appointments and treatments are expensive, apps all too easily become gatekeepers of real-world clinical care. The NHS recently published a pilot study on the use of AI-based triage systems, which found many inconsistencies in how they were being procured and used. Worryingly:
Digital triage tools are not fully clinically validated or tested by product regulators and notified bodies. We have learned that there is great variation in their clinical performance.
And in the absence of:
common safety netting advice… people are not clear about what they are using and how they should treat it.
Patients who are well-informed and well-resourced may benefit from having more information about their condition. But vulnerable people in an under-resourced healthcare system are likely to find apps being used to manage and even to frustrate their attempts to access care. The NHS pilot study notes ‘concerns’ about this ‘possibility’, but the web sites of app providers are quite open about it, promising to ‘manage increasing patient demand combined with workforce and capacity issues’ and to identify ‘low acuity’ patients who can be ‘diverted’ from the appointments system, perhaps with self-help materials.
There are parallels here with learning apps. For example, the way they arrive in the market as consumer goods that learners can buy into, and then become part of the core offer. As with healthcare, there can be a lack of clarity about who owns learners’ data. As with healthcare, data, diagnostic algorithms and associated nudge behaviours become ways of knowing learners, and of learners knowing themselves, that can stand in for more collective processes of achieving identity. Data can also be used to manage access to face-to-face services. Diagnostics and triage are perhaps inevitable when it comes to allocating scarce resources, but there should be transparency about how those decisions are made. Critically, people for whom self-help and self-determination are not working – the patients and learners who are most in need of the relational aspects of healthcare and education if they are to flourish – should not be left to their own devices.
As decisions about access are increasingly automated in private algorithmic systems, this becomes a subtle form of political influence as well. Who decides the thresholds of symptom ‘acuity’, of ‘risky learning behaviour’ that trigger an intervention? And then of course there is the work of integrating and managing it all.
“Just take the whole market”
This is where Palantir enters the game. Founder Peter Thiel and CEO Alex Karp are well-known cheerleaders for zero regulation of AI. Originally funded by the CIA, Palantir has built a business worth $32 billion on the provision of AI for missile guidance systems, wartime surveillance, battlefield decision-making, policing of political dissenters and immigration control. And yet the company employs more people in London than in Silicon Valley. Particularly since the pandemic, it has hoovered up a host of UK public sector contracts, some with its partner company Faculty (an outfit with close links to former Number 10 advisor Dominic Cummings), and it is lobbying hard for more.
Why is this guns-and-spyware business so interested in the UK public sector? One answer lies in the most detailed and exhaustive database of patient records in the world, thanks to the unique history of the NHS as a public healthcare system. Recent investigations by Wired and Bylines have shown Palantir in pole position to run the NHS Federated Data Platform. Controlling this data – effectively becoming the default operating system for the NHS – would embed Palantir into every data-based service that patients rely on, now and in the future. Once it held the key to this treasure trove of genetic and healthcare information, Palantir would be the partner of choice for dozens of other AI companies (those it has not bought up already), looking to build data models for use in drug development, insurance underwriting and risk analysis, diagnostics, monitoring and surveillance, and no doubt other areas I haven’t thought of.
The NHS database is not, in practice, a complete and integrated system, as any NHS patient can tell you. The proliferation of private apps and services doesn’t help. But this is where generative AI may come in. McKinsey, consultants of choice to the private healthcare sector, explain that generative AI can:
‘take unstructured data sets and analyze them, representing a potential breakthrough for healthcare operations, which are rich in unstructured data such as clinical notes, diagnostic images, medical charts, and recordings’.
AI will not be fixing the problems alone, though. In a move that will be all too familiar to readers of this substack, McKinsey goes on to explain how the generative interface:
adds the patient’s information in real time, identifying any gaps and prompting the clinician to fill them in.
Ah yes, I didn’t think it would be long before we turned up professionals doing data work for free. And when the conscientious doctor has finished, a web site for private insurance companies explains how the upgraded data can be used:
for insurers [to] check clients’ history, decide on a suitable risk class, form a pricing model [and] automate claims processing.
The back end here is good old fashioned data capture, but generative AI provides a shiny new front end, ‘prompting’ professionals to enhance the value of data in the guise of helping the patient. It is not hard to see how educators might end up plugging the gaps in a similar platform for learner data, and for similar reasons of professional care.
Learner data is still a poor relation to healthcare data in terms of its scale and its capacity to be levered for profit, though it is not immune from being targeted: this week I was sent ‘AI-powered’ advice via LinkedIn to ‘automate some of the repetitive or tedious tasks [in my life], such as grading’.
But for now, big players like Palantir are focused on those parts of the public realm - like healthcare, policing and border control - where data is already managed at scale. Where the state already relates to people through algorithmic processes, categories and decisions. I hope it does not need saying that these processes are often biased, oppressive and racialised – these facts should be the starting point for every discussion of what AI claims to offer by way of public good. The point I am making here is that even if fairer and more user-centred data processes were put in place, the privatisation of public data and services would still be problematic. Private corporations would still be gaining data power and algorithmic control over citizens in the most intimate areas of their lives, without democratic oversight of how they use them, or constraint on their pursuit of profit.
From communities of practice to expert systems
What we are seeing in education (so far at least) is not the privatisation of learner data sets but the capture of expertise. Professional services such as mental health and wellbeing, IT support, admissions, and aspects of learning support (also called academic practice) are increasingly outsourced to commercial providers. These make extensive use of AI in their interfaces with students and their back-end diagnostics. To take an example – nicely situated between healthcare and education – Callard et al. (2022) report:
We have tracked ongoing outsourcing of counselling provision to private providers, and a significant shift towards the procurement of digital tools (including mental health and well-being apps) and data analytics… Digital tools are not only being used to deliver online counselling, cognitive behavioural therapy and self-administered therapeutic programmes, but are being rolled out across campuses to ‘nudge’ students towards healthy/productive behaviours… Little is currently known about the impacts of such tools on student and staff mental health, or on the university as a whole.
When it comes to teaching expertise, services like teachology.ai, education co-pilot, and many others have sprung up to take advantage of generative AI. They work by retraining a foundation model with lesson plans, learning activities, assignments and marking rubrics, sometimes with exemplary pedagogic exchanges and teaching materials. Education experts may be paid to provide examples, or to refine the model outcomes to make them more teacher-friendly. Users – teachers and their employers – then pay a subscription for access to the trained model, generating their own materials, and hoping that there is enough variety in the training data to meet their learners’ needs.
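For the technically curious, the mechanics are not mysterious. Here is a minimal, hypothetical sketch of the kind of fine-tuning step involved, using the open-source Hugging Face libraries, a small stand-in model (‘distilgpt2’) and made-up lesson-plan examples – none of this reflects any particular vendor’s actual pipeline, only the general technique.

```python
# A minimal, hypothetical sketch of fine-tuning a foundation model on
# lesson-plan examples. The model name, example fields and hyperparameters
# are illustrative assumptions, not any vendor's real pipeline.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "distilgpt2"  # small stand-in for a much larger proprietary model

# Made-up training examples: a teacher's request paired with an exemplary
# lesson plan or rubric supplied (and perhaps paid for) by education experts.
examples = [
    {"prompt": "Plan a 60-minute KS3 lesson on fractions.",
     "completion": "Starter: ... Main activity: ... Plenary: ..."},
    {"prompt": "Write a marking rubric for a persuasive essay.",
     "completion": "Criteria: argument, evidence, structure, style ..."},
]

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

def tokenize(batch):
    # Join each request and its exemplary response into one training text.
    texts = [p + "\n" + c for p, c in zip(batch["prompt"], batch["completion"])]
    return tokenizer(texts, truncation=True, max_length=512)

dataset = Dataset.from_list(examples).map(
    tokenize, batched=True, remove_columns=["prompt", "completion"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lesson-planner",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    # mlm=False gives standard next-token (causal) language modelling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
# The tuned weights would then sit behind a subscription interface that
# generates lesson plans on demand.
```

In practice a service would use a far larger base model, parameter-efficient tuning and much more curated data, but the dependence on the underlying foundation model is the same – which is exactly the problem discussed next.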
As use cases go, these are far from the worst. If these specialised models were owned and managed collectively - by subject centres, for example, or professional bodies on behalf of their members – they could be integrated into professional development in genuinely helpful ways. Teachers would not only take ideas away but would have every incentive to share their own practice, just as the open education community has done for years. Over time this would enable models to offer greater diversity, to improve and perhaps to innovate.
But this is a long way from the reality. What is emerging is a marketplace in teaching productivity tools, not a public space of knowledge exchange. One reason is the general impoverishment of the public sphere. The higher education subject centres were abolished in a fit of austerity 13 years ago, and communities of common interest do not come together or stay together (as the open education community knows only too well) without some sustainable funding and a strong public ethos. Another barrier is the vast concentration of data and computing power in the foundation models. Specialist versions may be (re)trained independently but they still rest on the foundation models, their business cases and development trajectories, their labour relations, their original training data, and their scalar effects. They will, in time, be vulnerable to several kinds of capture as the big AI players turn their eye towards this burgeoning market.
(To understand something about how small, specialist and open AI projects are compromised by their relationships to the foundation models, I recommend Widder, West and Whittaker (2023) Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI. Also a critical discussion of this paper by Warso and Keller (2023) Open Source AI and the Paradox of Open, who are more optimistic.)
I think this apparently flourishing marketplace will narrow quite quickly to a small number of winners. As I mentioned, the UK Government has just paid Oak National Academy (ONA) £2 million to develop AI ‘teacher assistants’ for the schools sector. This is the same ONA, with links to right-wing donors and (once again) to Faculty AI, that the DfE describes as ‘an arm’s length body to the Department for Education – focused on supporting teachers to deliver excellent lessons and building on its success to date’. It would be hard to invent a clearer example of AI-facilitated state capture.
The immediate effect of apps like these is that teachers become consumers of recycled expertise, rather than members of a community of shared practice. The medium-term effect is that teaching know-how is invested in the platform, so education funding is diverted to subscriptions and to Oak National Academy shareholders, rather than to training, developing and supporting teachers. The long-term ambition of platform vendors, I believe, is to serve teaching materials directly to learners, backed up by automated feedback, with teachers relegated to a kind of support service for the learners who do not thrive with this approach. Online courses, as might be expected, are the wedge that can open up university teaching to full private management and automation – something that teachers are already resisting in Australia.
Emily Bender puts it like this:
instead of doing our duty as a society to provide health care and education and legal representation to everyone, people with means still get the real version of that and everybody else is fobbed off on these text synthesis machines like ChatGPT that give a facsimile of it.
Expert ‘humans in the loop’ will still be needed to train and refine the data models, while users – learners or patients – will need to adjust their expectations as they learn to interact with artificial agents in place of teachers or medical staff. But training of both kinds – training professionals for data capture, and training users as willing data subjects – seems to be well under way as the next, AI-powered wave of privatisation gathers pace.
A better kind of reason
Having rummaged around a bit in the murky relationships between AI companies and the UK government, including in the blog posts of one Dominic Cummings, I’ve come to believe that privatisation is not an incidental effect of ‘artificial intelligence’, but essential to its technological re-ordering of society. The image of society it offers is one of individual users constantly working to maximise their fitness and their smarts. Interests and desires become quantified needs, demanding just-in-time upgrades for (those who can afford) the ideal self. And the public realm becomes the place of these transactions, a data field rather than a place of negotiated interests, viewpoints, identities and values.
I will pursue this thought in other posts, but for now, to end a week in which the prime minister of the UK advertised his own government as a franchise of X, I offer a piece of classic AI reasoning.
AI researchers aim to construct a synthetic homo economicus, the mythical perfectly rational agent of neoclassical economics… Theories of normative design from economics may prove more relevant for artificial agents than human agents, with AIs that better respect idealized assumptions of rationality than people, interacting through novel rules and incentive systems quite distinct from those tailored for people.
This comes from Cummings’ own list of essential reading for those aspiring to govern. It was compiled during his ‘management’ of the Covid crisis. However it may have turned out for the rest of us, Dom’s value-maximising approach to public life seems only to have been strengthened by his time in government: he now plans to launch a new political vehicle, the ‘start-up party’. Like Sunak with his rebranding of Number 10, Dom is at least transparent about his ambition to run the public realm entirely in the image of a rapacious AI platform. I certainly look forward to blogging about the ‘novel rules and incentive systems’ that will proceed from this enterprise, recognising that ‘people’ must make way for ‘better assumptions of rationality’ than we could ever hope to express through democratic means.