Have you had a conversation with ChatGPT, marvelled at how well-constructed, even…friendly its output feels, then read that it is a broadly unregulated global information machine which hallucinates facts and is known to reproduce racist content? Have you come across the (admittedly controversial) tech industry open letter calling for a temporary pause on AI development? Has anyone in your life idly wondered whether AI will steal all of our jobs? I suspect at least some of this rings true, and I'm sure I've not been alone in carrying around a tangled ball of thoughts over the last few months. Here are three reminders that have helped me engage more critically with the topic:
One: Large Language Models (LLMs) and deep learning are only part of the AI story
LLMs such as ChatGPT and Bard are of course deep learning models, trained on vast amounts of data. They convey information and generate content essentially by figuring out what should come next in a sequence of words (or pixels). Doing this with the speed, efficacy and…can we say artistry (?) we are now witnessing is an undeniable technical feat. But this type of computational activity arguably has limits, and it is not a new way of trying to model intelligent systems.
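To make the 'figuring out what should come next' point concrete, here is a minimal sketch using the small, open-source GPT-2 model via Hugging Face's transformers library. (ChatGPT and Bard are vastly larger, proprietary systems, so this is an illustration of the general mechanic, not their actual internals.)

```python
# A minimal sketch of next-token prediction with the open-source GPT-2 model.
# Chatbots like ChatGPT are far larger, but the core mechanic -- score every
# candidate next token and pick from the most likely -- is the same idea.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best way to learn a language is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# The model's "prediction" is simply the highest-scoring candidate next token.
next_token_id = int(logits[0, -1].argmax())
print(prompt + tokenizer.decode(next_token_id))
```

In practice, chatbots loop this step token by token and sample from the probability distribution rather than always taking the single top choice, which is what gives their output its fluent, varied feel.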
Two: Using language doesn't equal ‘being intelligent'
This paper by Bender et al. (‘On the Dangers of Stochastic Parrots') became famous in 2021 for its critique of LLMs. It coined the idea that these models, unlike humans, are ‘stochastic parrots': systems that stitch together plausible sequences of words without understanding them, while consuming vast computing resources and churning out the biased content that already exists online. Even if you are sympathetic to the view that machine learning is changing the very nature of language, and believe that machines ‘know' what they are saying by virtue of how effectively they put words together, I think Bender's warning about ‘counterfeit humans', and her view that humanity (rather than ‘intelligence') should be our moral axiom, deserve pause for thought.
Three: Intelligence is itself an elastic concept
It seems inevitable that operating at the intersection of computing, neuroscience, linguistics, robotics and biology will change how we understand intelligence, and the authors of this paper conclude that AI can help us ‘[understand] the core mechanisms of human cognition'. This is simultaneously upbeat and a little disturbing. It's tempting to think that machines will help us to appreciate, and get the most out of, our own brains and capabilities. But we'd certainly want to think hard about how, and why, that kind of insight is used.