8 Comments

This is spot on. Is part of the problem that we, or at least governments, corporations, and institutions, understand humanity only in contrast or relationship with machines? BTW, I do love the bits you write. Everything you have posted on this blog to date has either articulated, better than I could, things about AI that I have pondered, or sent me down new paths. Please keep it up.

Thank you, Guy. It's a labour of love that feels all the more worthwhile for comments like yours :-)

I tend to think of people as having a necessary relation to technology, however different our technologies and cultures of use might be. But as a system for designing, producing, and selling technology, I think generative AI (or rather, the companies behind it) is intentionally disruptive of other knowledge cultures. I think they probably have to be to make a profit.

I like the framing of the "double bind", and the way you've articulated the illusion act being perpetuated by the AI hype contingent. It does feel really dissonant to hear the pressure to be both more human and more model-friendly at the same time. It hurts to see how readily AI is prized (mostly by its sponsors, of course) above human connection or participation, and how some of these AI systems incentivize the perpetuation of that value scheme.

"Human intelligence" has meant something very different to the CIA for decades, lol https://www.cia.gov/readingroom/document/cia-rdp11t00973r000100350001-3

I do warm to a political economy approach, but I wonder if the notion of an 'AI industry' masks an axis within the AI field that is critical for AI literacy. For example, from an AI field perspective, isn't it problematic to say "the project of ‘artificial intelligence’ serves to define what ‘intelligence’ is and how to value it"? Of course, for Symbolic AI proponents, yes, that is their 'project': to replicate human intelligence and therefore define it. But they remain decades (generations?) away from their goal. Whereas isn't the Connectionist project about the emergence of as-yet largely unknown forms and types of machine intelligence?

There's a great example of critiquing GenAI from a Symbolic AI perspective in last Saturday's FT: Benedict Evans, 'Where is artificial general intelligence?'

https://www.ft.com/content/4cecce94-48a6-4eba-b914-dd23d1e11ac9

Technology is a subcategory of power, and power in the mammalian sphere is something that has traditionally (recently) tracked with being male (and generally, white). So technology primarily serves and reflects the needs and psyche of those in power.

AI is Elon Musk, but can it be Malala Yousafzai? Not without a lot of powerful people giving up a lot of power (and money). To quote George Harrison, "It's all in what you value."

Loved the Hamlet quote.

You always love the bits I didn't write...
