Every bloody newspaper started commenting on and debating various AI achievements, from Microsoft’s rogue Twitter bot to Google’s Go champion, citing experts in physics, acting and engineering on how close we are to a Doomsday Black Friday. Well, I’m quoting the words of cat-death: “Nah!”. There’s a reason AI is nowhere close in our future – it already happened, like a decade ago or more. No? Actually, yes. What they’re really afraid of is strong AI, the one that’s closest to the function of the human brain. The one I’m talking about is Windows, and it ain’t the only one. You think I’m joking? Nope.
The usual definition of “intelligence” is the ability of something or somebody to measure, detect (or perceive, if you’re that much of a nitpicker) the surrounding environment, and use that information to adapt its own behavior – to react to how the environment changes. The trick is, that’s done. The real problem is how to predict how the environment changes, how to make that behavior proactive – that’s the real biggie, machine learning, as it’s how we (well, some of us, anyway) think, and it’s also the first step to a strong AI. Simply put, the main bloody difference between what we have and what we fear is the ability to understand, to comprehend.

It’s easy to make software that adds every incident to a database, so whenever the environment changes it can check that database and use the learned knowledge to adapt – that’s reactive behavior. It’s not so easy to make software that understands the underlying principles of why the environment changes and successfully predicts (odds, chances, whatever) how it will change – that’s proactive behavior. You can fake the second part by speeding through a brute-force algorithm (or better, something optimized), if the environment has a small selection of possible changes – which is why I’ve been constantly beaten by something as simple as a chess game on easy difficulty. But take that software out of the chess game’s rules and it’s useless.
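The reactive half of that distinction fits in a few lines of Python. This is a toy sketch of my own, not any real system’s code: the agent logs every (situation, action, outcome) incident in its little database and replays the best-known reaction, but it has no model of *why* things happen, so anything outside its experience leaves it helpless.

```python
# Toy sketch of a purely reactive agent (hypothetical code, for illustration).
# It remembers incidents and replays whatever worked best before; it cannot
# predict or understand anything it hasn't already seen.

class ReactiveAgent:
    def __init__(self):
        # The "database of incidents": situation -> {action: accumulated reward}
        self.memory = {}

    def record(self, situation, action, reward):
        # Log one incident: what we did and how well it went.
        self.memory.setdefault(situation, {}).setdefault(action, 0)
        self.memory[situation][action] += reward

    def act(self, situation):
        known = self.memory.get(situation)
        if not known:
            # Outside its experience: useless, like the chess AI outside chess.
            return None
        # Replay the best-known reaction; no understanding, just lookup.
        return max(known, key=known.get)


agent = ReactiveAgent()
agent.record("rain", "open umbrella", +1)
agent.record("rain", "ignore it", -1)

print(agent.act("rain"))    # adapts: "open umbrella"
print(agent.act("meteor"))  # never seen before: None
```

A proactive agent, by contrast, would need a model of the environment’s rules so it could evaluate situations it has never recorded – which is exactly the part that’s hard.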
You think you know how Windows works? Yeah, right. It’s something designed to work with a multitude of hardware configurations, using WHQL certifications (a database of safe actions) for the drivers (the nerves, shall we say) of peripheral hardware and sensors. It can detect if the computer moves (thanks to built-in sensors, mainly used for laptops with mechanical hard drives), it can detect the computer’s actual location (GPS, GSM cards, Wi-Fi, all that and more), it can take pictures, it can update itself, and it can check itself for errors and correct them automatically (troubleshooting software, upgrade assistants, DISM, SFC, Windows Update, and so on). Now, if those aren’t similar to what we humans usually do, then I’m a labradoodle. But it’s not able to predict how the environment changes, because it’s limited to a small selection of sensors and peripherals. It also doesn’t understand why the environment changes, because it can’t – unless you call Cortana.
A good AI can make good decisions, if it’s well programmed and only has to predict the future objectively. It can tell you the likelihood of a tree falling, it can drive a car better than a human driver, but once you factor in chaos, emotion or irrationality – it all breaks down. Windows is a good AI (you hear? Now be nice and stop rebooting), but it can’t account for creativity (which sometimes means using more than mere logic to create something new), nor can it interpret the emotional state of the average human. Empathy is a bit out of its reach. Strong AI is kind of far away, in a strange land. But what the bloody hell is a strong AI, you ask?
The point I’m making is this: we already have AI; it runs everywhere, from smartphones to the car-building automatons used by companies to replace humans. Each one has its environment defined and measured, and its behavior mapped. They all have specific functions – remember that word, because it’s the key to everything. The AI we ought to fear has general functions; it can perform everything at least at human level, if not better – which is to say, if we combined everybody living on earth and created one being capable of doing everything each of us excels at individually, we’d have made a weak version of a strong AI. One being better than all of us combined, able to feel, to influence and to display emotions, to thrive in a chaotic, illogical and irrational environment, able to reinvent itself using creativity and improve itself to perfectly adapt to any environment. We’re sort of quite far from that. Also, there’s a little pee coming out of me right now.
The point is, look at every known discipline we’ve charted – there’s at least one individual performing at peak efficiency in that discipline. Drawing, music, martial arts, math, physics, fencing, balancing on one’s toes, hiding, shooting, imagining, writing, remembering, detecting – every single one of those has at least one individual who excels at it, but the number of disciplines a single individual is able to master is quite limited, either by the limits of one’s body or by time. A strong AI won’t have those limits, and being software-based (or hardware, whatever, stop nitpicking), it can improve itself faster and better than we humans can improve ourselves. There’s no rule anywhere saying somebody good at math can’t dance well (though there may be some correlation, methinks). But unless one can attach new, fully functional limbs or change one’s body like that bloody liquid Terminator I had nightmares about decades ago, chances are nobody will ever be good at ballet and at surviving a long time without food in the same body at the same time. But we are emotional, right? That’s a good thing? Well… Intellect is, sort of, the part of intelligence without all the emotions.
However, there’s something most forget when it comes to intellect – you can’t bloody measure it using standard tests. Intellect in humans, general intelligence or whatever you want to call it, is beyond our grasp at this time. If it weren’t, we’d already have that strong AI running around calling somebody mommy or daddy. Intellect means the ability to understand, and to apply that understanding in optimal quantities at the optimal time – which is to say, there’s a reason most geniuses aren’t good in social scenarios. A standard test may measure the ability to add things up – while a permanent resident of the Sahara would just laugh at its lack of water-finding abilities. This is the truth, really. We only measure what we deem necessary to survive in the current reality, and that’s a bad thing. Why? Because we don’t measure (not that I know of, hint: disclaimer) the ability to understand, the flexibility of the human mind and the efficiency of our learning processes.

A kid somewhere read a few books from the local library and built a windmill using wood, bicycle parts and scraps, because he needed electricity to power whatever small devices he had. Then he went and built a solar-powered water pump for drinking water, and more windmills, because why not. And he only did that because a famine (yeah, that’s a real thing somewhere on earth right now, imagine that) actually had him drop out of school, which in turn made him visit the local library, because he loved to learn. And he did learn, if that’s not really obvious by now. That, mon ami, is intellect (thus the boy becomes, oh, the horror!, an intellectual) – something that can’t be measured yet, because it’s something everybody can learn but nobody teaches, and very few actually have the discipline of mind to create it for themselves (and even if they may share the know-how, we ain’t listening). All the while we’re worried about “feelings” and making sure everybody’s integrated into whatever social conventions we have.
Yeah, there’s a difference between the best minds in the Western world and that kid, and the minute we know what it is and make it public, we’ll have revolutionized the education system. Why?
Well, the problem we face is that people have different mental abilities, and the current PC ideology states we all deserve not to feel offended – a worthy goal, but once you step over the logical boundary you rewrite the education system’s goals. I mean, if one could feel offended because of poor learning abilities, then it’s the job of PC man to make sure the education system helps that person, because we’re all winners. But wait, individual, targeted education for that person costs too much, therefore… we lower the expectations. Whoops! We don’t help anybody improve their learning (and general cognitive) abilities, and we don’t teach anybody how to learn; we just make them winners because they showed up. If we keep it up, it won’t be that difficult to reach “genius” level. Because we won’t have any geniuses left. Such a classification could be offensive to somebody, somewhere. Possibly. Innit?
Furthermore, just to seem a stuck-up and superior (which I am) idiot (a question I refuse to answer, by the power of the US Fifth Amendment), I’ll point out the obvious: a weak AI can help us improve our lives by making things easier for us, while allowing our freedom and independence and whatever else you’re thinking of while rubbing one out; a strong AI can basically rule over us (since everything we can dream of, it can dream also, only better and bigger and with bigger cannons). So why look for strong AI? Dunno. Warp drives, maybe?
If we actually make an AI that thinks like us, like we think now, like we are now, we’re good – because the spooky one we should fear won’t worry and won’t care about us, while this one will probably become depressed and/or alcoholic within 30 milliseconds of becoming sentient. A Bender clone, if you will. Because who’s to say making a human mind bigger won’t make its flaws bigger too? Imagine Siri with movie-like Tourette’s. Gesundheit!
And if you didn’t exist
Tell me how I would exist
I could pretend to be me
But I wouldn’t be real
And if you didn’t exist
I believe I would have found it,
The secret of life, the why:
Simply to create you
And to watch you
Lyrics (translated from the French) from here.
Various points of view, unsorted and randomly chosen: