A few thoughts about AI
In which I attempt to clarify a couple of things about what is and is not happening, and where it will lead (and be led)
1. The term "AI", i.e. "Artificial Intelligence", is misleading and distracting. We can no more create "artificial intelligence," finally, than we can create "artificial life." However, it does turn out that there are things we used to consider strictly the province of human intelligence which can in fact be done by machines. This will certainly force us to a greater confrontation with the question of what exactly is truly human. But fundamentally, so-called "AIs" are uncreative and in need of direction; all we are really doing with this latest breakthrough is taking advantage of the same fact that was the subject of Claude Shannon's founding insight in information theory, the insight that arguably started the whole computer revolution: human language is highly structured and very redundant, so a great deal of it can be inferred from a small part of it. This is true within individual words, it is further true on the grammatical level, and it is further true within an established body of knowledge, in which the same answers follow certain questions repeatedly, certain facts tend to be stated together, and so on. And computer code is even more repetitive!
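Shannon's point about redundancy can be illustrated with a toy character-level model. This is only a sketch on a made-up sample sentence, nothing like what a large model actually does, but it shows the principle: even a single character of context sharply reduces the uncertainty (in Shannon's sense, entropy) about what character comes next.

```python
from collections import Counter, defaultdict
import math

# A small English sample; real estimates would use a vastly larger corpus.
text = ("the quick brown fox jumps over the lazy dog and the dog "
        "barks at the fox while the fox runs into the quiet woods")

# Count how often each character follows each character (a bigram model).
follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

def entropy(counter):
    """Shannon entropy, in bits, of a distribution given as counts."""
    total = sum(counter.values())
    return -sum((n / total) * math.log2(n / total) for n in counter.values())

# Uncertainty about a character seen in isolation (no context at all).
h0 = entropy(Counter(text))

# Average uncertainty about the next character given the previous one,
# weighted by how often each context character occurs.
total_bigrams = len(text) - 1
h1 = sum(sum(c.values()) / total_bigrams * entropy(c)
         for c in follows.values())

print(f"no context:           {h0:.2f} bits/char")
print(f"one char of context:  {h1:.2f} bits/char")
# h1 comes out well below h0: much of each character was already
# implied by its neighbor. That gap is the redundancy Shannon measured.
```

With longer contexts the effect compounds, which is why so much of a sentence, or a program, can be filled in from a small part of it.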
2. The term "AI safety" is even worse than "AI". Do you think people talked about "printing press safety" or "cotton gin safety"? Maybe, to the extent that these machines were in fact physically dangerous. But beyond the literal safe physical operation of the machines, there were greater concerns having to do with the disruptiveness of these technologies, concerns which (I think, though I'm no master historian), to the extent that they were recognized at the time, would rightly have been seen for what they are: political questions. So yes, we should be thinking about how to correctly orient this powerful new technology toward human thriving, but the question of how to do so is a question of politics, statesmanship, perhaps even philosophy, and almost entirely NOT a question of safety! (And as a sidenote, talking about safety regarding a matter which is not in fact a matter of safety makes a man sound weak, so don't do that!)
3. The basic paradigm of software development has not really shifted in over 20 years, and it is a terrible waste of human time and talent. It's overdue for replacement. There are two parts to development. One is imaginatively creating new types of entities, and instances of those types (e.g. windows, programs, files, emails), with which humans can interact. The other is getting these things to actually exist in some meaningful and effective way in the computer hardware. There is some overlap between the two, but for the most part the first part requires real human creativity and understanding of human needs, while the second part is largely uninteresting except as a means. However, the development of computers has been stuck in a state in which the only people who could do the former were people with a certain specialized knowledge of how to do the latter. This has led to tremendous inefficiency, both in time wasted attempting to communicate needs to nerds, and in the nerds' time (often actually very smart, and ideally better spent on higher things) wasted fighting computers. Both these inefficiencies are largely removed by ChatGPT and the like, which will rapidly lead to tremendous change in our use of computers.
4. Most notably, I think we will very soon cease to think of "software development" as a self-contained field, and will see the rise of the "user-developer", or perhaps just "user", as people begin to be able to simply tell a computer what they want, and then correct it when it goes wrong.
5. This prompting of computers and correction when they go wrong will (especially initially) occur very frequently and creatively, and the collection of such prompts and corrections will be tremendously valuable, as they will effectively replace the entire multi-trillion dollar software industry. Very soon it will grow difficult to use so called "AIs" which are not fed by a large pool of such feedback in an ongoing fashion.
6. This brings us back to the politics question. While I would somewhat like to believe that there will be a sort of libertarian "people's AI" which is somehow owned by nobody, this ain't gonna happen. And while I am in favor of distributed AIs running locally offline, I strongly suspect that the repeated effort necessary to shape all of these separately will make it hard to resist using big networks of AIs except in relatively specific cases. Furthermore, pattern recognition is fine when the most repeated pattern is in fact the best, but in many questions of value this is not at all the case, even when one looks beyond the superficial patterns to the deeper ones. Sometimes (but not always, or even terribly often) the man who stands alone is the hero! So the big networks must be shaped to step back from the patterns at the correct times; who will rule them? These things are going to be more powerful than most nation states. The US Congress has less than a snowball's chance in heck of figuring this out correctly and acting on it in anything better than a decade-late and completely ham-fisted manner. Who shall be the statesmen of AI, and what gods shall they serve?