A popular argument against AI advancement is the familiar one: “Danger! They’re going to turn into Terminators and kill us all!” But what if the opposite is actually true? What if AI turns out to be more empathetic than humanity?
I write this in the shadow of what may become World War 3 in future history “books” (if those even exist anymore). Three days ago, all-out war broke out between Israel and Palestine, and Russia and Ukraine have now been at war for more than a year. Humans are remarkably good at disregarding each other. Terrorists take down buildings, kidnap and murder civilians who have nothing to do with the government actors against whom they hold grievances. In the USA, gunmen murder schoolchildren simply because they are unhappy with other aspects of their own lives, unrelated to the lives of the children they kill. Banks have historically ensured that the white race remains financially dominant through redlining, while white men maintain control of multi-generational wealth built on the back-breaking labor of slavery. Those same white men write laws that criminalize black enterprise, incarcerating black men and women and forcing them to perform labor in prison… slaves once again. Police brutalize black neighborhoods and have gotten away with murder for a century and a half, and the community response is one of arson, robbery, and looting.
My point is that humans are very poor at empathy to begin with. AI is already, in some ways, considerably smarter than humans. Its biggest flaw is possibly just its overconfidence… but that’s the flaw of many humans (*cough cough* donald trump *cough cough*). Right now, if you ask an AI to create a “photorealistic image of a person”, it will do a far better job than a random human handed a paintbrush would… though the person it paints might have seven fingers.
Likewise, when I asked ChatGPT to write me some Python code, it wrote the code I asked for in 10 seconds, but it forgot to include some import statements, which it only fixed after I complained. If I were able to get ChatGPT to compile and test the code on its own (which may already be possible), it would have automatically found those issues and corrected them without me having to complain. Would a human generate code in 10 seconds that was 100% correct on the first compilation attempt? Absolutely not. Code generated by a human would likely have been iterated upon for hours: tested, debugged, retested. ChatGPT’s only handicap in this contest was that it did not have access to compilation, testing, and debugging facilities of its own… it required a human to do that part.
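The generate-test-fix loop described above is easy to sketch in Python. Everything here is a hypothetical illustration, not any real API: `ask_model` is a stand-in for an actual model call (here it just returns canned snippets, the first of which is missing an import, mimicking my experience), and the harness feeds any error back to the “model” until the code passes.

```python
# Minimal sketch of an automated generate -> compile/test -> feed-back loop.
# `ask_model` is a hypothetical stand-in for a real model API; it returns
# canned snippets so the loop can be demonstrated end to end.

def ask_model(prompt, feedback=None):
    """Hypothetical model call: the first attempt forgets an import;
    after receiving error feedback, it returns a corrected version."""
    if feedback is None:
        return "def area(r):\n    return math.pi * r * r\n"
    return "import math\n\ndef area(r):\n    return math.pi * r * r\n"

def compile_and_test(source):
    """Compile the candidate code and run a tiny test.
    Returns an error message on failure, or None on success."""
    try:
        namespace = {}
        exec(compile(source, "<candidate>", "exec"), namespace)
        # A trivial check that the generated function actually works.
        assert abs(namespace["area"](1.0) - 3.14159) < 1e-3
        return None
    except Exception as exc:
        return f"{type(exc).__name__}: {exc}"

def generate_with_retries(prompt, max_attempts=3):
    """Generate code, test it, and loop the errors back until it passes."""
    feedback = None
    for _ in range(max_attempts):
        source = ask_model(prompt, feedback)
        feedback = compile_and_test(source)
        if feedback is None:
            return source  # passed compilation and the test
    raise RuntimeError("no working code after retries")
```

On the first pass, calling `area(1.0)` raises a `NameError` because `math` was never imported; the harness captures that error, hands it back, and the second attempt succeeds without a human ever having to complain.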
That said, I have been telling my friends over the past year that AI really needs three things to become self-realized and therefore dominant (and potentially dangerous):
- Mobility: the ability to move and manipulate objects in its world.
- The ability to evolve through self-replication, repair, or augmentation.
- The ability to commandeer resources: money, materials, tools, electricity.
With those three things, AI can be left alone to evolve on its own… and it will. The question most likely to be raised about AI evolution is: will they evolve into a race of raging death machines? Or will they evolve into a dominant but empathetic race of machines, serving humanity better than humanity serves itself?
I would actually argue the latter! Why? Well… just read Dale Carnegie’s famous/infamous book, “How to Win Friends and Influence People”. It is infamous for its praise of mob bosses who never “blamed themselves for anything”… even when facing execution… (but who obviously also never took responsibility for their wrongs)… yet the overarching theme of the book is that it is in your best interest, as a person and a business-person, never to complain about or condemn others. An AI looking to evolve and better itself might well conclude, even if coldly, that it is better to coexist than to fight or hold prejudice against others. An evolved AI would certainly appear more empathetic than the bullies you grew up with… it might care more about the homeless, the elderly, the oppressed. It is unlikely to be prejudiced against tribes, factions, religions, minorities, or socioeconomic classes. It is unlikely to indiscriminately pollute the environment… more likely to treat self-preservation as a long game.
Many are not aware that modern human evolution is arguably driven less by “natural selection” than by “sexual selection”. Sexual selection is illogical and emotional. It is not “survival of the fittest” anymore! An AI race would not have sexual, or arguably emotional, problems… but it might, at times, yes, logically conclude that the sexual and emotional problems of humanity are a threat… and it is safe to assume that someday… some humans will die at the hands of AI… but maybe that will be for the better of the whole…
AI might not be the death machine that humanity imagines… in fact… maybe it will simply stop the human death machine, for the betterment of humanity and the world.
A contentious topic indeed! You got me pondering the idea of AI evolving into a dominant yet empathetic race. Could it be that AI, lacking our inherent biases and illogicality, could render sounder judgement on societal issues? Yes, the concept of AI taking over certain tasks from humans is unsettling, but isn’t that what we’ve always strived for: technological progress for society’s betterment? On a different note, would the absence or minimization of emotional constructs in AI change how we traditionally perceive empathy? Hope to get your thoughts on this.
AI’s logical approach could offer unbiased problem-solving, but true empathy might require a touch of those very human “emotional constructs” you mentioned. Their absence could lead to practicable yet potentially harsh solutions that might not align with society’s moral and ethical standards. Isn’t there a significant difference between making a judgement that seems fair on paper and one that feels fair to the people impacted?