Unqualified Opinions on AI
For years now, there have been people warning about the dangers of powerful AI. These people spent careers in fancy-sounding institutes researching AI safety. Yet when actual advances in AI came on the scene, the safety crowd was completely sidelined. They had never envisioned progress in AI through the techniques that actually ended up producing it.
This should have been predictable from the thought experiments they would use to try to show the dangers of unsafe AI. I'll admit that yours truly also failed, for a long time, to see what should've stood out as glaring flaws in these thought experiments.
The classic example is that of a paperclip maximizer. It goes something like this: assume that we can create an AGI (artificial general intelligence). Once AGI is achieved, it can be turned upon itself to improve its own intelligence, meaning it will almost instantaneously become a superhuman intelligence. Now assume that you own a factory that makes paperclips. Naturally, you want to produce more paperclips, so you instruct your new superintelligent computer friend to help you maximize the number of paperclips. (The idea here is that you're giving it a very benign goal – we're not even dealing with any nefarious intent.)
Your superintelligent computer friend obliges. It uses its vast knowledge of the laws of physics to turn the entire universe into paperclips, obliterating the human race in the process. This allegory is supposed to show that even with benign intent, if we are not careful about what goals we instil in a being far smarter than ourselves, there may be disastrous outcomes.
Here's the key flaw that should've been quite obvious to anybody before they published a book seriously floating this notion: we're simultaneously assuming that the AGI is a god-like being that can bend the universe to its will, and that it behaves just like the old, dumb computer programs we're used to, where you have to give very careful, explicit, step-by-step instructions or else it's prone to misunderstanding and doing the wrong thing.
Here's the less-obvious flaw: in your humble correspondent's opinion, the whole notion of turning AI upon itself to birth a god-like intelligence is flawed. Firstly, how "AI" is achieved matters. The current techniques, amazing as they are, use existing works by humans to train the AI models. So while the resulting model can theoretically know the entire sum of human knowledge, it's unclear whether it is capable of making new discoveries.
But I'll grant that new methods of training AIs will be devised to address this shortcoming. A more important point is this: no amount of intelligence exists outside of the laws of physics. There is a reason why we humans are as smart as we are and not smarter. An intelligence far beyond ours will still run into physical constraints on energy and materials. Yes, it's possible that it will do far better at resolving these constraints, but it will be constrained nonetheless. There does exist an upper limit to its level of intelligence.
One thing we learned through COVID was never to trust unfalsifiable models that can produce any result the author wants to show. When it comes to AI safety, much of the doom-and-gloom is based on something even weaker: thought experiments, mostly by a group of myopic techies who imagine themselves just smart enough that their unfalsifiable brain droppings should be grounds for federal policy and international law. These people need to be ignored and moved out of the way of progress.