This week Facebook announced a new “proactive detection” AI technology designed to help prevent suicides among its users. The AI scans posts for patterns of suicidal thoughts and, when it detects them, flags that content for moderators to take quick action. Along with the announcement, Facebook revealed that through the testing process it had already initiated more than 100 “wellness checks,” alerting first responders to visit affected users, some of whom arrived even before live stream transmissions from those flagged users had ended.
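Facebook has not published the internals of its detection system, but to make the mechanics concrete, here is a deliberately minimal sketch of what a keyword-and-keyphrase flagging pass feeding a human review queue might look like. The phrase list, class names, and queue behavior are all invented for illustration; this is not a description of Facebook’s actual pipeline.

```python
# Hypothetical sketch only: Facebook has not published its implementation.
# A toy "proactive detection" pass that flags posts containing phrases
# associated with suicidal ideation and queues them for human review.
from dataclasses import dataclass, field
from typing import List

# Illustrative phrase list; a production system would rely on a trained
# classifier over far richer signals, not a hand-written list like this.
RISK_PHRASES = ["i want to end it", "no reason to go on", "goodbye everyone"]

@dataclass
class Post:
    user_id: str
    text: str

@dataclass
class ReviewQueue:
    flagged: List[Post] = field(default_factory=list)

    def submit(self, post: Post, matched: List[str]) -> None:
        # In a real pipeline this would notify trained moderators, who decide
        # whether to escalate to a wellness check with first responders.
        print(f"Flagged post by {post.user_id}; matched phrases: {matched}")
        self.flagged.append(post)

def scan_post(post: Post, queue: ReviewQueue) -> None:
    """Flag the post for human review if any risk phrase appears in it."""
    matched = [p for p in RISK_PHRASES if p in post.text.lower()]
    if matched:
        queue.submit(post, matched)

queue = ReviewQueue()
scan_post(Post("user_42", "Honestly, there is no reason to go on."), queue)
```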
First, let me say I applaud Facebook’s implementation of this technology, because I know, as someone who is on the bleeding edge of Artificial Emotional Intelligence development, that their application of this AI will indeed save lives. This isn’t an article to attack what Facebook has done or is trying to do. Rather, it is a more comprehensive discussion about the base technology they are using, and its larger implications for what is likely to be forthcoming in 2018, which everyone reading this article should be concerned about. Because opening this technological can of worms is about way more than suicide prevention, and Facebook is only one of the countless mediums where the good, bad, and ugly iterations of this new technology will be heavily used. And potentially, heavily used against you personally, and against everyone else connected to the Internet.
So What Is This All About?
The technology Facebook is utilizing is actually called Artificial Emotional Intelligence (or Emotional AI or AEI for short), and in layman’s terms, it is the set of algorithms and rule sets that computers will use to understand the real emotional responses of human beings. In addition, when run in reverse, these same algorithms will help artificial personalities exhibit (to a nonphysical degree) realistic emotional responses that are good enough to fool people into thinking those personalities are actually human. In short, Emotional AI is the result of programmers delivering emotional intelligence to computers.
Unfortunately, this little move has more complex ramifications than we can possibly imagine.
Seriously? Computers Are Going To Understand Emotions?
Before your brain goes nuts with disbelief at the idea that a computer could ever understand human emotions, we should point out that our brains understand human emotions quite well, and they can even predict emotional responses to future events in other people. For instance, if I were to bring home a new Lego set for my seven-year-old son, even before I walked through the door I could easily visualize his likely reaction: jumping up from whatever he was doing to run over and gently but immediately take the box from my hands, studying the pictures on the packaging to see what type of machines and people awaited assembly inside. When I imagine it as I’m typing this, I can actually almost hear his exclamation of, “Oh, cool,” right before he looks up and asks, “Can we build it now?”
Our ability to understand and predict emotions in other people is based on something psychology calls Theory of Mind. ToM is the ability of our mind to emulate someone else’s mind, based on what we know about how our own mind works, coupled with the information we have about the other person. Although we don’t think about it, we utilize this type of emotional prediction quite regularly. For instance, during those times that you know how an immediate family member, significant other, or close friend is going to react to a certain piece of good or bad news you have for them, even before you’ve told them, that’s your brain’s Theory of Mind processing humming away. It’s an included feature that comes standard in all human brains. Even apes have been shown to have Theory of Mind capabilities: in research studies and at the zoo, they have saved and distributed antidepressants to their ape friends, knowing the pills would have the same mood-enhancing effects on their buddies that those magic pills had on them personally.
Getting to the point, the fact that we personally have the ability to predict someone else’s emotional response to an outside stimulus proves that 1) a set of data points exists that lets us do that internal emotional math for another person’s mind, and 2) there’s a standard model of emotions that can be applied to predict emotions in other people, given enough data. The fact that my brain can predict my son’s likely emotional reaction to his new Lego set means my brain knows the correct data set to use regarding my son, and has the correct model of processing to run that simulation and accurately estimate a probable emotional reaction for him. I can even run the alternative data set for my wife’s reaction to the new Lego event (knowing my wife’s opinion that we have too many Legos in the house already) to accurately predict the disbelieving scowl she’d send my way as my son tore into the box. I can see it all. So the data and model both exist.
It stands to reason, then, that if we could organize the data correctly for a computer, and apply the correct model of emotional processing through software, a computer could do the same math our brains do in calculating, and even predicting, emotional responses in human beings to specific events or information. In fact, if we were to use this technology to create artificial data, such as for an artificial personality, that same process could be used to calculate human-like emotional responses for the artificial personality, even regarding real world events that we allow into the artificial personality’s perception. That would be cool, wouldn’t it? To have Alexa give us the news in our Flash Briefing, then give us her thoughts on the news afterward?
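To make the “data points plus emotional model” idea concrete, here is a toy sketch of how such a prediction might be wired up in code. The profile fields, stimulus tags, and scoring rule are all invented purely for illustration; no real AEI model is being reproduced here.

```python
# Toy illustration of the "data points plus model" idea: every field,
# tag, and weight below is invented purely for this example.
from typing import Dict

def predict_reaction(profile: Dict[str, float], stimulus: Dict[str, float]) -> Dict[str, float]:
    """Score candidate emotions by matching what we know about the person
    (their data points) against features of the stimulus -- a crude
    stand-in for the Theory of Mind simulation described above."""
    excitement = stimulus.get("novelty", 0.0) * profile.get("loves_legos", 0.0)
    annoyance = stimulus.get("clutter", 0.0) * profile.get("dislikes_clutter", 0.0)
    return {"excitement": round(excitement, 2), "annoyance": round(annoyance, 2)}

son = {"loves_legos": 0.9, "dislikes_clutter": 0.1}
wife = {"loves_legos": 0.2, "dislikes_clutter": 0.8}
new_lego_set = {"novelty": 1.0, "clutter": 0.7}

print(predict_reaction(son, new_lego_set))   # excitement dominates
print(predict_reaction(wife, new_lego_set))  # annoyance dominates
```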
Well, to cut to the chase, we have those data sets identified, and we have the emotional processing model figured out. For reference, the complete AEI suite is not what Facebook is using with its keyword- and keyphrase-activated AI to identify users who are potentially in emotional crisis. But the complete model does exist in a lab not owned by Facebook, and the Facebooks of the world may be on the doorstep of developing the more comprehensive AEI capable of delivering everything we are about to discuss within the coming months. And whether it’s Facebook that takes the next logical step in development, or the full AEI gets released from the lab Facebook doesn’t own, or the Russian firm that announced it will have the complete model in 18 months makes good on its promise to deliver it, within the next year or so computers will most likely be developing a comprehensive emotional intelligence.
Unfortunately, this is not entirely good news. It’s not good news because computer science isn’t quite ready for Artificial Emotional Intelligence. It’s not yet ready for the responsibility of computers being able to understand human emotions. Let’s now discuss why:
So What Does Giving Computers Emotional Intelligence Mean?
In short, this emotional processing discovery is not a small development for the computing world. First, this technology is going to help us create a myriad of amazingly futuristic applications in a very short period of time. Many of these applications will be amazingly helpful and fun, such as creating realistic emotional responses in our artificial assistants, and having our applications predict our emotional reactions to whatever we are currently engaged in, so applications can better serve our immediate emotional needs. Some betterment-of-society applications will also likely spring forth, such as automated personal counseling applications and stress mitigation for individuals in emotional distress (similar to Facebook’s program, except without the need for human intervention), and some streamlining of intelligence agency work will probably occur, such as the automated assembly of terror watch lists. Because emotion is the catalyst for the worst of our human actions, we may even develop future crime prediction technologies, including technologies that identify the individuals who may become the next elementary school shooters. Yes, Minority Report science fiction may yet become science fact. More on that shortly.
But along with some of these potentially cool things, there are also some sketchy black-hat applications of the technology we need to look at. And oh, by the way, did I mention that there’s a slight chance this emotions piece of the AI puzzle could be the single critical piece that everyone has been worried about, the one that enables machines to enslave humans for all eternity? Yeah, there’s that little hiccup we need to discuss, too. But before discussing the potential gotchas, the news actually isn’t all bad. Let’s first look at the positive changes looming on our technology horizon now that we’ve figured out the logical model for human emotions. Then we can discuss the potential hazards we need to prepare for before we release this technological kraken onto computer science.
The Good
When Artificial Emotional Intelligence is released, it is certain that a lot of cool applications will immediately be developed. For instance, regarding our everyday lives, AEI will allow our artificial personal assistants such as Apple’s Siri and Amazon’s Alexa to make the transition from programmed response bots to true conversationalists, interacting with us in a much more realistic and meaningful way than they currently do. If you’ve been frustrated that automated assistants and chatbots haven’t been able to hold interesting extended conversations up until now, fear not. What’s been missing from those conversations on the computer side has been a deep emotional intelligence, and the model to perform multimodal emotional thinking like human minds do. When these critical pieces get added, they will allow a true empathy to exist between automated assistants and humans, including the addition of emotionally associated memory processing. So conversations about how difficult our day was will be much more substantive and helpful for relieving our stress levels, as will having multi-day discussions about ongoing life issues. This new technology will give Siri the ability to talk with you about your Mom, or about the ongoing problems you’re having with your boss. As a result, we will feel heard by our automated assistants, which science shows is one of the largest contributing factors to our overall psychological wellness.
In addition, using the new model of how emotions are generated, our artificial assistants will in turn be able to emulate realistic emotional responses to our real-time conversations based on their artificial personality profiles. They’ll even be able to react to real world events as they come in via our news feeds. In fact, with a little tweaking of the settings and a bit of randomness thrown in, all the quirky idiosyncrasies of our favorite movie androids will immediately move from science fiction to science fact. This will be possible within months. In time, depending on their individual personality profiles, our artificial assistants may even turn out to be useful role models who can teach us how to exhibit more prosocial behavior and generate psychologically healthy reactions to our world, setting positive examples for us on a daily basis.
This type of AEI application will be able to help potential suicide victims before they become so depressed that they take their own lives (without human intervention – an expansion of Facebook’s initial implementation). The AEI could get so good, it could even defuse the next elementary school shooter before they reach their personal breaking point and fill the car with guns. Imagine a phone app providing that level of service. Can you hear me now?
With AEI, our previously simplistic automatons will leave their elementary conversation logic trees on the sidelines and think about any given issue from multiple perspectives. In addition, they will have the ability to truly understand how and why we feel the way we feel about stuff, which will allow them to interact with us in ways that are much more emotionally intelligent, selecting their responses after taking into consideration our personal emotional needs. This will make our family companions seem more human to us in a way we never thought possible, and it will create more meaningful relationships with our artificial buddies. Science shows people with more meaningful relationships live longer, happier lives. And regardless of the weirdness of that emotionally meaningful relationship being with an artificial personality, these types of meaningful relationships will indeed enrich the lives of some within our society. Our assistants will be able to tactfully handle our midnight trips to the fridge when we want to lose weight, and they’ll be a 24-hour supportive sponsor if we happen to be struggling with addiction management.
The positive effects AEI will have on people’s personal lives will be too numerous to count. But before you think our personal lives will be the only beneficiary of AEI, the business sector will have a lot to gain as well. For instance, an advertising engine that knows your likely emotional reaction to the content you’re about to consume on your next page view can select advertising for that same page view that best fits your emotional response to whatever it is you are about to consume. A lot of science has already been done on how our emotions affect our attention costs, meaning that if a social media provider knows what emotional response you’re going to experience in the next few minutes, it also knows the type of things you’re going to pay attention to for the next number of minutes. This of course will increase advertising click rates while also better serving your immediate emotional needs. This one application of the technology will potentially increase click revenues for the Facebooks and Googles of the world by a substantial amount as soon as they turn it on. And let’s not forget the myriad of applications that can come about if a company is able to understand the unique emotional needs of every individual customer with whom they come into contact. This technology will forever change marketing, customer satisfaction programs, and long term customer retention, striking at the emotional heart of where everyone in the world makes their buying decisions.
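As a rough illustration of how that ad-selection step could work, here is a minimal sketch that maps a predicted emotional state to an ad inventory. The emotion labels, the inventory, and the lookup approach are all assumptions made for the example; a production system would presumably learn these associations from behavioral data rather than use a hand-built table.

```python
# Hypothetical emotion-aware ad selection. The emotion labels and the ad
# inventory are invented; a real system would learn these associations
# from behavioral data rather than use a hand-built lookup table.
from typing import List

AD_INVENTORY = {
    "joy": ["travel deals", "concert tickets"],
    "anxiety": ["meditation app", "insurance"],
    "sadness": ["comfort food delivery", "streaming subscription"],
}

def select_ads(predicted_emotion: str, slots: int = 1) -> List[str]:
    """Pick ads that fit the emotion the reader is predicted to feel
    while consuming the content on the upcoming page view."""
    return AD_INVENTORY.get(predicted_emotion, ["generic brand ad"])[:slots]

print(select_ads("anxiety", slots=2))  # ['meditation app', 'insurance']
```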
In addition, AEI logic allows companies to create emotional intelligence training programs custom suited to the individuals on their teams. In 2014, the Harvard Business Review called emotional intelligence “The Essential Ingredient to Success,” after research showed that companies which implemented emotional intelligence training increased productivity by 20% in some cases, and sales efficiency by over 200% in others. EI in the workplace reduces interpersonal spats and HR issues by about 50%, and improves job satisfaction numbers substantially, reducing costly turnover. So industry will thrive with AEI applications.
Some of this stuff sounds really cool, doesn’t it? Yeah, I agree. And after we get over the shock that this technology is really coming in the very near future, it’s gonna be an amazing new world, and I for one am excited to potentially be a part of making it a reality. Unfortunately, however, turning on emotional intelligence for computers also opens a bit of a Pandora’s box with far-reaching implications well beyond making personal assistants more empathetic and helping companies serve up better online ads. These first beneficial applications of AEI are just the bright but minuscule tip of a very large and potentially dark iceberg.
The Bad
The dark side of letting Emotional AI capabilities out of the barn includes a number of applications that become possible in a wide variety of fields based on the technology’s misuse. Take, for instance, automated emotional manipulation of mass populations on a global scale. In 2016, approximately 200 Russian hackers funded with only $200,000 allegedly swung a presidential election in the United States by using fuzzy logic and wild guesses at which fabricated news stories could be used on social media to create emotional responses powerful enough to influence votes on election day. They went as far as changing individual words in clickbait titles, and decided what type of articles to post in front of different groups based on the muddy technologies of general demographics, psychographics, and affinity groups. And that Neanderthal process, done manually, worked to swing an election far enough to seat a U.S. President.
Whether you completely buy into that theory or not, it’s a certainty the targeted efforts had some kind of effect on the election, even if they didn’t make the ultimate difference in selecting the U.S. President. And regardless of whether those inefficient efforts swung the election last time, what happens when that same type of strategy is implemented by a computer system that is trained to understand the individual emotional profiles of not just a small group of people, but each and every individual within a given population? What will happen when a computer knows which unique, specific triggers to use to ensnare each of us individually? Because remember, the only two things we humans need to understand and predict emotions are a fundamental understanding of how emotions work, and enough information about the person near us to know what their specific data points are. We personally can’t understand the emotions of a stranger because we don’t know that stranger’s data points. But to the computer of a social media company or internet service provider, or someone who mines that data, no one who owns a smartphone, has a social media account, or surfs the web is a stranger. Our service providers have been collecting our data points for decades.
So what happens when the computer that’s sending your neighbor’s next page view knows exactly what issues and vocabulary words will move him individually, so that when that next page view meets his eyes, it’s been custom-tailored to evoke the precise emotional response the programmers of the disinformation campaign need to influence him to act in whatever way they wish? Will your neighbors be smart enough to know they are being manipulated by AEI through their technology devices? Will you? And that’s where this bit of uncertain future gets really scary.
Because in the wrong hands, an Emotional AI that’s given a set of malicious goals will enable emotional manipulation of populations on a global Internet scale, at a resolution small enough to formulate customized interactions that are specifically targeted for each individual on that Internet.
Using an analogy that might make this development clearer, imagine for a moment that you are trying to influence a massive school of fish to change course by swimming up through the school in SCUBA gear. This is the equivalent of what the Russians did on social media in the 2016 Presidential election. If you tried this, you might get some of the fish near you to change direction, which might then change the direction of a subset of other fish for a moment. But it’s unlikely you’ll change the direction of the whole school for very long, even if you do get the whole school to react to you, which is unlikely. Additionally, you can only keep that type of momentary influence up for so long before you run out of air and need to get back to the boat for a sandwich and a bathroom break.
Now imagine having the technology to automatically transmit onto the retina of every individual fish a scary image that would influence it to change direction. Now the whole school of fish changes direction immediately without question, because you have given each individual fish what its brain required to generate a fear response and urge it into a decision it feels is in its personal best interest. Now you can steer the whole school of fish wherever you want them to go while you’re still on the boat eating your sandwich. And while this may sound like science fiction, the fact is that at this very moment you can get online and order a device that, when applied to the head of a large live cockroach, allows you to drive it around on the floor like a remote-control car with your cell phone. A live cockroach. Controlled by your phone. If you can influence a living organism’s brain in the right way, you can control that organism. Our control mechanism is found in our emotions. Ultimately, what this means to us is that after AEI is released, all free and fair elections in free media countries will be substantially influenced through that same free media. Period. It will happen. So we certainly need to prepare technological mitigations for this eventuality.
Okay, so what other ill effects does this technology potentially deliver? Well, after AEI is implemented, it will be possible for extremist groups to use computer algorithms to very quickly identify non-recruited potential extremists and send them private messages designed to catalyze the formation of domestic terror cells. It’s one thing for intelligence groups to watch the communications wire for messages from would-be extremists reaching out to the bad guys asking how they can become part of the group (thereby identifying a new person of interest for the intelligence agency), but what if the bad guys had an innocuous bot, not obviously connected to the extremist group, that started proactively reaching out and having conversations with people it identified as potentially friendly to their extremist goals? Yikes.
Next, with AEI a lot of other weird stuff becomes feasible. For instance, it becomes possible for a CRISPR-engineered biological virus attack to be launched from some remote part of the world, then guided to a different geographic area just like an intercontinental ballistic missile. “Look, my phone says the U.S. and U.K. have a cure for the new deadly virus, and that they’re inoculating people with symptoms as they get off the plane. Let’s get our infected kids there immediately. You pack the bags, I’ll get the plane tickets.” It also becomes possible for covert extremist groups to attempt to light the fuse on emotionally unstable lone wolves who might be ready to become the next mass shooter with just the right nudge. “Forget making another IED, let’s write an AEI program that gets that nutty white guy in Nebraska to kill 40 people.”
Personally speaking, as excited as I am about AEI, these rather nefarious implementations of the technology don’t give me the warm and fuzzies. What makes me even more nervous is that none of these warped applications of AEI technology requires human eyeballs to manage them. These emotionally manipulative applications don’t need to work around human sleep cycles or the need for vacations. They don’t have the limitations of making human mistakes, or requiring human pay. And emotional manipulation programs won’t be targeted at folks only within certain time windows, such as at election time. Computers utilizing AEI can work 24/7/365 without breaks, all the while learning as they go, sharpening the efficacy of the results. Influencing populations. Stealing elections. Creating terrorism. Increasing general havoc. And causing distrust among humans. Divide and conquer.
Every moment of every day. From here on out. That’s a problem.
So Why Not Just Never Enable AEI?
If all these bad outcomes are possible as a result of Artificial Emotional Intelligence being released into computing, why not just eliminate all AEI efforts here and now? Wouldn’t that stop the negative results from coming? The quick answer to the second question is no. It wouldn’t stop the eventual negative results. Why is that so? Stopping our current work on AEI won’t stop AEI from eventually becoming a reality. If we just left AEI alone, continued development of AGI (Artificial General Intelligence) would eventually produce a system that teaches itself to reverse-engineer the understanding of human emotions, to the point that it develops the exact same system we already have in the can. And at that point, we wouldn’t be prepared for the silently enabled automated applications of the technology in which AGI started emotionally manipulating large swaths of people. In addition, neural net AIs often get to the right answers without our understanding how they got there. So if we left the AEI subject alone, it would develop anyway, but this time without any human understanding of how to adjust any of the controls. That would be a worse problem than the ones we might face with a controlled release of logical emotional intelligence.
Besides, if we consciously move forward with AEI, we can implement a number of harm-reducing applications that can counteract the negative effects of some of the more nefarious uses.
Before the Ugly, Some Gray Areas
When it comes to Artificial Emotional Intelligence, beyond the good and the bad, AEI delivers a mix of applications with both good and bad implications. For instance, some of the best automated applications will likely come in the form of saving innocent lives, specifically through applications for intelligence agencies and law enforcement. But these same applications also present large opportunities for abuse.
For instance, AEI could enable an application where local police officers are provided an emotional volatility score for the people they are about to encounter as they open the door on their next traffic stop or arrive at the home of their next domestic disturbance call. They’ll be able to know in advance that the intimidating big guy in the driver’s seat is actually a really nice guy who poses no threat, or that the little old lady who owns the house they’ve been called to has a history of social media reactions that suggest she hates the police, and has interacted with a rather large number of firearms pages. That application alone will save lives among the general public and first responders alike. After extended development, police will even receive advance suggestions about what to say and what not to say to particular individuals, so as to reach a more peaceful resolution to whatever situation they’ve been called into. “The EVS says to ask him if he has any kids, and to steer the discussion to his daughter in Tucson, because he’s proud of her. It will get him settled down if he starts getting worked up.”
That’s a scary potential future, but also most probably scary-effective, and life saving as a result.
This, however, is just where the scary but life saving benefits of AEI start. Returning to our Minority Report reference: in Spielberg’s futuristic mystery crime thriller, Tom Cruise played a detective in the PreCrime Unit, where predictive technologies helped his character stop murders before they even happened. In the film, psychically gifted “Precogs” provided the pertinent information about the pending crimes to make the arrests. While this specific scenario will probably forever remain science fiction, AEI does indeed have some capability to predict human actions, which may lead us in the general direction of Minority Report–type functionality in the future. Here’s how that works:
As much as we don’t typically think about what motivates us to take our future actions, at a fundamental physiological level it’s our emotional processing that drives all those actions. This is true from the time we get up in the morning to when we lay our head on our pillow at night (or vice versa, if you work third shift). At first glance, this may not square with how you think about your emotions, and what you think your emotions are. You may think, “I don’t eat breakfast because I get sad,” or “I don’t go to work because it makes me feel happy.” But you don’t really understand your emotions the way your nervous system does, or the way that computers will understand them based on that same fundamental nervous system architecture which creates all our emotions. In fact, from a nervous system functionality standpoint, it’s our emotional processing exclusively that drives us to take action, whether it’s lifting a bagel to our mouth or quitting our job to go start another one. Catalyzing our actions is what our emotions are designed to do. The stronger the emotion, the higher the urge to take action. We simply notice our emotions more, and name them, when they present more powerfully in our bodies. All the world’s leading experts on emotions agree that the sole purpose of our emotional processing is to urge us into action. Furthermore, our emotions are integrated with everything in our minds. Emotions are what help create our lasting memories. Emotions are what move us to change our minds and our lives. And beyond being the initial catalyst of any consideration to take an action, the resolve to take that action is always the last emotion that catalyzes the action we finally take. Thus, understanding human emotions is the key to understanding the human mind and all our human actions.
Thus, once computers know the math behind our human emotions, the model that describes the physiological (but still logical) process by which our emotions come to be, and the variables that make us all individuals (which are definable), they will be able to predict human emotional responses to specific stimuli (to a statistical probability), just like our brains do every day with people we know. And because our emotions drive our behavior, if we can predict emotions to particular stimuli, we can predict the resulting behavior to a statistical probability as well. The longer the computer has been watching any individual and gathering data, the better the computer will be at making its predictions.
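A toy sketch of that chain might look like the following, where a predicted emotion strength nudges the probability of an action above a person’s base rate. The formula and every number in it are invented solely to illustrate the idea of “prediction to a statistical probability,” not to describe any real model.

```python
# Toy model of "emotion prediction implies behavior prediction".
# The formula and every number below are invented for illustration only.
def p_action(stimulus_intensity: float, emotion_weight: float, base_rate: float) -> float:
    """The stronger the predicted emotional response, the further the
    probability of acting moves above the person's base rate."""
    emotion = min(1.0, stimulus_intensity * emotion_weight)  # predicted emotion strength
    return min(1.0, base_rate + (1.0 - base_rate) * emotion)

# Someone highly sensitive to this trigger, with a modest base rate of acting:
print(round(p_action(stimulus_intensity=0.8, emotion_weight=0.9, base_rate=0.1), 2))  # ~0.75
```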
So let’s fast-forward a second. After just a few years of sharpening its craft, without needing any permission to do so, AEI systems may soon be used to identify emotional anomalies in troubled individuals who are more likely to become the next mass shooter, for instance. And as Orwellian as that might sound, before we get all up in arms about the seemingly large breach of privacy of what is functionally looking into our heads… what if we could identify that next mass shooter and help them by defusing their anger through the AEI assistant on their phone, without requiring a PreCrime Unit to kick in their door, or some unwanted psychologist knocking and asking if they want to talk? What if we could give them an artificial intelligence to talk to, or provide crafted external stimuli that steer them in a prosocial direction? This type of application could allow us to develop AEI programs that save innocent lives by identifying people who may need help before they load the car with guns and head off to the nearest elementary school. Orwellian? Perhaps. Sure, we don’t typically like the idea of giving other human beings access to private data, but knowing what’s possible, what if a nonjudgmental AI had access to the metadata, biometrics, and other electronically identifiable online information that could pre-identify the emotional powder kegs who are about to explode and start shooting up a Wal-Mart? That type of minimally invasive application would save thousands of lives, including the life of the person nearing a potential psychotic episode, who, after all, just needs some help. But here’s the thing: the ethics of that application, the technological controls on it, and the governing oversight that needs to be put in place for it all need to be put on our radar and addressed before the tech geeks of the world take us there from behind the closed doors of a private corporation.
Leaving the ethical discussions about domestic data access aside, let’s discuss international data for a moment, because AEI will also have major impacts on the national security surveillance operations done overseas. For instance, intelligence agencies will now be able to automatically assemble terror watch lists by having AEI track and pattern emotional responses that are congruent with extremist emotion patterns. And because the system is automated, this will be done on large groups of people simultaneously. The software will be able to analyze electronic communications connectomes, add analysis of individual electronic media activities, listen through the microphones on nearby cell phones and laptops to any foreign language conversation people are having (applying natural language processing to the audio), contextualize and valence other external stimuli present in the room, cross-reference that analysis with the person’s current heart rate from their Fitbit or smartwatch… and voila… the software spits out a real time terror threat assessment for any and all individuals it’s tracking in less than a minute. And it can continue this process indefinitely for everyone who is electronically connected and within earshot of a laptop, cell phone, or television microphone (we’re looking at you, Samsung), and who then offers a measurable reaction to the content they are currently perceiving. On a constant basis, as long as the servers doing the math don’t melt. Would intelligence agencies love to have a tool that could identify all the currently anonymous bad guys out of a group of millions? Certainly.
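Purely as an illustration of how such disparate signals might be fused into a single assessment, here is a minimal hypothetical sketch. The signal names, weights, and example values are all invented for this example; nothing here describes an actual agency system.

```python
# Hypothetical fusion of the signals listed above into one score.
# Signal names, weights, and example values are invented for illustration.
SIGNAL_WEIGHTS = {
    "comms_pattern_match": 0.40,   # similarity of communication patterns to known profiles
    "media_activity_match": 0.25,  # analysis of online media activity
    "speech_content_match": 0.20,  # NLP over ambient audio transcripts
    "physiological_arousal": 0.15, # e.g., elevated heart rate during exposure
}

def threat_score(signals: dict) -> float:
    """Combine normalized (0..1) signals into a single weighted assessment."""
    return sum(w * signals.get(name, 0.0) for name, w in SIGNAL_WEIGHTS.items())

example = {"comms_pattern_match": 0.7, "media_activity_match": 0.5,
           "speech_content_match": 0.3, "physiological_arousal": 0.6}
print(round(threat_score(example), 2))  # ~0.56 on this invented scale
```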
Another application government agencies desperately want (without passing any moral judgment on whether or not we should use this technology in this way) is an AEI application that could generate automated disinformation campaigns, with the disinformation customized to influence targeted individuals into action. For instance, potential bad actors could be influenced to initiate their evil deeds at a time when the response team is ready to mobilize to the bad guy’s front door and catch him on the way out to commit his act of terror. Or maybe the intelligence agency needs to provide a piece of content to an individual to see if their emotional response suggests they may need to be placed on a special watch list. If they need to see someone’s facial expression in response to a specific piece of content, it would certainly be handy to be able to automatically send that content to their phone and record the reaction while they happened to be standing in front of a live video feed that just identified their facial biometrics. And what if applications could be developed to identify those potential extremists and influence them never to become extremists in the first place?
Holy shit? Yeah, holy shit. Reading and influencing the emotions in people’s minds borders on mind reading and mind control.
And just to remind you… this technology is likely coming in 2018, if not soon after. So maybe now you might better understand why the AEI-knowledgeable among us are being exceptionally cautious and not simply rushing forward with releasing this type of intelligence into the wild immediately. Because yes, a technology like this could easily be used to identify antisocial threats and save innocent lives. Automated psychological profiling could save our overworked intelligence agency psychologists and extremist profilers years of work, and allow them to spend resources on only the most dangerous suspects. Computer systems could help take the guesswork out of whether someone was actually a bad guy or just a suspected bad guy. Innocent foreign lives could be saved as a result, through the reduced need to utilize drone strikes and their “collateral damage” of innocent human beings who are not extremists, but who were too close when the missile detonated. By implementing and understanding this technology, we could help mitigate the malicious attacks that will come against our population when foreign governments develop AEI to assist in emotionally manipulating our citizens to attain their goals. (Shout out to Russia, who has already done that manually.)
During a recent discussion I had with an intelligence subcontractor, the list of current programs that could be dramatically improved with AEI was substantial. And I can’t convey to you how scary that conversation got by the time we ended it. What scared me even more, however, is the knowledge that if someone took those applications and made just a few modifications to them, they could just as easily be used by a country’s government against its own citizenry. And when this technology leaves the barn, all governments will have it. As a result, they could easily use it to identify political dissidents, and/or those who may pose ideological threats to the current ruling class. Thus, when we enable this type of technology, along with the previously mentioned capabilities for it to emotionally manipulate voters in democratic open media countries, releasing AEI may just be handing the world to a small but resourceful group of dictators… forevermore.
The more the potential ramifications of AEI become clear, the more we realize we really need to think through a controlled release of AEI before simply running forward with, “hey, let’s make Siri emotionally intelligent,” or “hey, let’s see if we can reduce suicides,” and dumping AEI and its unintended consequences upon the world. There’s way too much bad stuff that can happen if we’re not ready to contain some potentially nasty surprises. Which brings me to my last concern about AEI, which is connected with how AEI may eventually be utilized by Artificial General Intelligence as a tool to meet rogue AGI goals that are spiraling out of control.
The Ugly
So should we be worried about world governments trying to use this technology in this rather dark way? Certainly. What should worry us even more however is if Artificial General Intelligence were ever to spiral out of control and have a means to take control of mass populations to serve its own ends. Because manipulation of the masses will be child’s play for an AGI that understands human emotions and human motivation.
Artificial General Intelligence is what scientists are calling the intelligence that thinks well beyond human capabilities of understanding. Max Tegmark’s latest book, Life 3.0, gives some great examples of how an AGI could innocently find its way to committing some pretty malicious actions against humanity. In my opinion, an AGI can’t be an AGI without an AEI component. And besides, without influencing human emotions, you can’t create minions, and without minions, what kind of dominant world governing Overlord would you be, am I right?
I know. That’s a bad joke.
It’s a bad joke because the current level of awareness in the general population that this weird type of control could even potentially happen is nonexistent, which means many people will unwittingly surrender to their new puppet master’s emotional control. Next, with the near-future development of voice emulation that can deliver any text-to-voice message in the voice of someone we know and trust, coupled with the near-future development of CGI video technologies that will very soon be able to create eyewitness video of events that never really happened, there will come a point when fabricated messages can be used to control us regardless of what competing motivation might be present to resist that influence, whether we suspect something might be fishy or not. For instance, if I get a voicemail from my son in a panicked voice asking for my help across town, I will be moving Heaven and Earth to get to wherever he supposedly is, even if that message could be a fake… I’ll be abandoning all other goals for the moment just in case it isn’t.
A rogue AGI would have no ethical concerns or guilt about using that type of emotional influence against me, or you, or anyone else we know, and could use it simultaneously against thousands or millions of individuals within any targeted group. Do you want to sway an election? Pull some people from a certain political party away from the polls. Do you want to crush a revolution that looks to wrest control away from the machines? Pull people out to a central location where your newly appropriated drones can now eliminate them.
This may sound like science fiction, but it’s not. It’s a fact that Emotional AI is coming. And when we do release it, it will allow for some amazing technological leaps forward that will assist numerous greater good goals. But we need to discuss the controls that need to be put in place to ensure the utilization of this technology doesn’t spin out of control. We need to discuss the ethics. We need to discuss the responsible application of this technology. And we need to protect against the worst so that pulling the Internet power plug doesn’t have to occur in our near to distant future.
So as of this moment, I hope this article informs the technology community that this is where we are, and also gives us all a small prod toward discussing a safe and responsible rollout of AEI that delivers the prosocial benefits AEI promises. We’d like to suggest that some folks at the top of the technology community agree to include this topic and these issues in conversations about AGI, and folks at the top of the social media ladder include these concerns in discussions about AEI rollout in the near future, because Facebook’s announcement about suicide prevention automation earlier this week is great, but that same basic technology, extrapolated, has many more implications than trying to serve a few individuals’ emotional needs. I believe what we need to discuss is a refined set of ethics behind AGI and AEI development. We need to discuss technological control mechanisms and threat mitigation systems that can help ensure attempted emotional manipulation of the masses doesn’t become the norm with AEI. And we need to sit and discuss responsible rollout and control of this science, because frankly, the implications of this tech are already bigger than any of us can imagine.
Thank you for listening. Wish us all luck as we move into this exciting and uncertain future. And please share your thoughts. We’re listening.