The 3 most alarming issues of modern AI
Why mention all this now?
Recently I came across an interesting article on Medium, through my LinkedIn feed. Although I rarely pay much attention to what I encounter on conventional social media platforms, since most of it serves a particular company's agenda, this article was different. In fact, not only did I read it, but I also shared it with my network and on a couple of other social media platforms, including beBee. The reason is that it touched on a very important point, one I had not delved into before, partly because I was biased on the matter, so my perception of it was bound to be unreliable. However, seeing that other people, individuals who have done their due diligence, have expressed concern about it made me realize that the world needed to be reminded of this problem: namely, the issues with modern A.I. that may threaten our well-being as a whole.
I'm not going to talk about the danger of A.I. systems becoming self-aware and taking over the world (there are plenty of sci-fi movies that cover this possibility extensively), nor am I going to raise an alarm about the possibility of an AGI emerging and manipulating us into doing what it deems best for us (again, there are sci-fi books exploring this possibility in depth, including an ebook of my own). What I am going to talk about are the 3 most alarming issues with this technology right here, right now: complacency due to A.I., lack of transparency regarding A.I. systems, and the R&D of A.I. being carried out by amoral entities.
Complacency due to A.I.
This is the most obvious issue with A.I. and one that I've covered in my blog (link). In a nutshell, it involves us outsourcing our thinking and decision-making to A.I. systems, trusting them with our personal information, our property, and to some extent our lives. It doesn't even take a conscious A.I. for this situation to spiral out of control, since the culprit here is our own complacency and blind faith in technology, our own lack of consciousness. This may sound like a Black Mirror episode, but the chances of it happening may be greater than you think. After all, with so many activities taking up our time (sometimes for good reason), isn't it efficient and meaningful to optimize our day-to-day lives using A.I. technology, even if that means automating large aspects of them? Perhaps yes, at least to some extent, but beyond a certain point, we may want to have a say regarding what happens in our lives. Finding this exact point, however, may not be as easy as it seems, and it's doubtful that the A.I. will find it for us and alert us accordingly before it's too late.
Lack of transparency regarding A.I. systems
This is a serious issue that many researchers are already aware of. Still, despite the myriad resources available to them, they haven't figured out a solution, which is why A.I. systems remain black boxes. In many cases that's fine, since these systems are accurate enough, but what about the cases when they are not? What if a false positive or false negative from such a system causes great harm to an individual or a group of people? Will the enterprise owning that A.I. system take responsibility for its actions? The transparency problem is hard, but is it really so hard that no one in the world, including all these A.I. experts (who are paid more per month than some people make in a year), can tackle it? Or is the problem solved already but no one is aware of it, just as few people are aware of Deep Learning frameworks other than TensorFlow? The lack of transparency in A.I. systems may not be a life-threatening issue right now, but what will happen when life-or-death decisions are outsourced to these black-box systems?
R&D of A.I. being carried out by amoral entities
The final issue in this list is the one most aligned with the article that started this whole investigation. Although that article focused on one particular entity that doesn't seem to have any moral code whatsoever, other than the juvenile "don't be evil" motto, other corporations involved in the A.I. arena are also potential liabilities when it comes to this technology. Naturally, it would be immature to think of these or any other companies as "evil" or as having the mindset of a movie villain; more often than not, they are simply not aware of the issues the technologies they develop may bring about. Take FB, for instance. Does its CEO look like someone who has a solid grasp of morality or ethics? Is he even capable of thinking on this level? Are the CEOs of the other Silicon Valley companies much better? What about the researchers involved in bringing these A.I. systems to life? Perhaps in academia, A.I. researchers can think in broader terms about this technology, since they have a different set of values, but do you honestly think that a researcher with strict deadlines, earning a 6-figure salary, is going to ponder the morality of it all if the company he works for doesn't?
What are your thoughts on this matter? What other alarming issues with A.I. do you consider equally or more important than the ones I mentioned? Feel free to share your thoughts in the comments below. Thanks!