In November 2022, ChatGPT was released and quickly garnered 100 million users around the world. Last month, GPT-4 was released. The original ChatGPT, built on GPT-3.5, scored in the 10th percentile on the notoriously difficult bar exam; GPT-4 scores in the 90th.
In this same timeframe, I have started seeing AI-generated images and video in my social media feeds. Often, I cannot identify what is AI-generated without doing a further search or seeing a platform-issued disclaimer.
This escalation in both the quantity and quality of AI-generated content has woken me up to the possibility of a seismic shift hitting us in the near term.
One of the reasons it’s difficult to comprehend such a shift is that we operate on the assumption that tomorrow will be similar to today. But the pace of AI advancement makes this increasingly unlikely. I now think it’s plausible that within the next decade AI will cause huge departures from our current norms across the board.
There are some really exciting aspects to this, which are already being realised. AlphaFold, a system built by the AI company DeepMind, predicted the 3D structure of proteins so accurately that the journal Science recognised it as their 2021 Breakthrough of the Year.
But there are other elements that should make us very cautious.
Many of those working on AI do not know how the systems they are developing truly work. Many of these systems are essentially black boxes: models viewed only in terms of their inputs and outputs, with the processes occurring in between not necessarily decipherable. This is how we can end up in a situation where an AI system can predict a patient’s race from X-rays and CT scans without anyone being able to explain how.
As the journalist Ezra Klein has written: “We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.”
He quotes from Meghan O’Gieblyn’s ‘God, Human, Animal, Machine’:
“If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations, an ‘explanation’ that would be impossible to understand.”
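O’Gieblyn’s point can be made concrete with even a toy example. The sketch below is illustrative only — a tiny numpy network trained on XOR, not any particular production system. Every internal value is fully printable, yet the printed weights are just that: a grid of numbers, not an explanation.

```python
import numpy as np

# A toy "black box": a tiny neural network trained on XOR.
# Every internal value is fully inspectable, yet no individual
# weight "explains" the model's input/output behaviour.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

initial_loss = None
for step in range(5000):  # plain full-batch gradient descent
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = float(np.mean((out - y) ** 2))
    if initial_loss is None:
        initial_loss = loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(f"loss: {initial_loss:.4f} -> {loss:.4f}")
print("the 'explanation':")
print(W1.round(2))  # a grid of numbers, meaningless in isolation
```

Scale this toy up by ten or so orders of magnitude and you have the “billions of arithmetic operations” O’Gieblyn describes.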
Without being able to understand these systems, we cannot hold them to account. The decisions they make are utterly beyond our scrutiny.
The 2022 Expert Survey on Progress in AI makes for some really interesting reading. Experts were asked “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median response was 10%.
Can you imagine working on a technology that you believed had a 10% chance of causing the ‘permanent and severe disempowerment’ of humanity?
Inevitability is frequently used as a justification for the continued development of AI. I think this is shockingly poor reasoning: inevitability can only be a justification if you are a fatalist. There is nothing inevitable about the changes we are anticipating – there are many different possibilities for how we approach and govern this development, and it is critical that we take the time to assess them.
(As an aside: I think it is telling that many companies developing AI who use inevitability as justification (‘if we don’t, someone else [who is worse] will do it first’) do not seem to think seriously about the mechanisms for protecting their work from being stolen by that very ‘someone else’ they are apparently worried about.)
We are now repeatedly seeing exponential growth run up against a limited capacity to cope: if you wait until you can see the impact, it is already too late for prevention.

We saw this with the coronavirus pandemic, where non-linear growth created disproportionate impacts as thresholds were hit (e.g. the moment you run out of hospital beds), and where reactive decision-making frequently misjudged risks and had unintended consequences (e.g. women giving birth alone, with the lasting increase in trauma outweighing the Covid risk).

We see it with climate change, where systemic but disorderly impacts (e.g. Spanish temperature records being broken by five whole degrees this week) create sudden crises that outstrip our recovery systems (e.g. entire regions becoming uninsurable, climate-vulnerable countries taking on more debt to recover from humanitarian disasters), further increasing systemic risks in other areas.
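This threshold dynamic can be stated with toy numbers (a sketch for illustration only — the figures are made up, not epidemiological estimates): with cases doubling every three days against a fixed bed capacity, the system still looks comfortable a single doubling before it is overwhelmed.

```python
# Toy numbers, for illustration only: cases doubling every 3 days
# against a fixed capacity of 1,000 hospital beds.
capacity = 1000
cases, day = 10, 0
history = []
while cases <= capacity:
    history.append((day, cases))
    day += 3
    cases *= 2

# Three days before the threshold is crossed, the system still
# looks comfortable; three days later it is overwhelmed.
print(f"day {history[-1][0]}: {history[-1][1]} cases "
      f"({history[-1][1] / capacity:.0%} of capacity)")
print(f"day {day}: {cases} cases ({cases / capacity:.0%} of capacity)")
# → day 18: 640 cases (64% of capacity)
# → day 21: 1280 cases (128% of capacity)
```

The gap between “well within capacity” and “overwhelmed” is a single doubling period — which is why reactive, impact-triggered decision-making arrives too late.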
We urgently need to examine our capacity to work together to manage new domains of risk and danger. Even where we have high levels of scientific knowledge and modelling (e.g. climate change), we are not planning for what is predictable. We seem to have an inability to act now to forestall future consequences.
None of this is inevitable. I think a great portion of ‘inevitability’ is based on some of the (market) orthodoxies of our time, which prize individual freedom over collective safety, efficiency over resilience and independence over interdependence. Our unthinking belief in markets will kill us if we let it; the problem-solving that delivers profit and value for shareholders is not the same as the problem-solving we need to live and thrive.
My biggest concern about AI is not technological but political. We cannot and will not develop AI ‘objectively’; we will develop it in our own flawed image, complete with every bias, orthodoxy and (dis)value. We should be wise to this.
Excellent and detailed info on AI at present and its unpredictable future. That is why caution must be at the forefront.
As St. Paul said: “I know the difference between right and wrong, yet it is so easy to do the wrong.” (paraphrase)
We see this in such social media as FB. When it first came out it was meant simply as a way to keep in touch with folks so often lost to time and distance. But as you said, the poison of politics ruins the best intentions every time. The FBI is another example.
Ben Franklin said: “If we sacrifice independence for security, we find that we will have neither.”
I agree with Elon Musk that there must be a hardware safeguard that can shut it down at any time. We can’t rely on such a power to police itself; man himself has shown he fails in that measure.
With any new idea brought to reality, there is good and evil that can be made of it. Over time, evil will overrun the good in this world. Just look about us.