Zacharias 🐝 Voulgaris

2 years ago · 3 min. read



The 3 most alarming issues of modern AI

Why mention all this now?

Recently I came across an interesting article on Medium, through my LinkedIn feed. I rarely pay much attention to what I find on conventional social media platforms, since most of it serves a particular company's agenda, but this one was different. In fact, not only did I read the article, but I also shared it with my network on a couple of other platforms, including beBee. The reason is that it touched on a very important point, one I had not delved into before, partly because I was biased on the subject, so my perception of it was bound to be unreliable. However, seeing that other people, individuals who have done their due diligence, have expressed concern made me realize that the world needed to be reminded of this problem: namely, the issues with modern A.I. that may threaten our well-being as a whole.

I'm not going to talk about the danger of A.I. systems becoming self-aware and taking over the world (there are plenty of sci-fi movies that cover this possibility extensively), nor am I going to raise an alarm about the possibility of an AGI emerging and manipulating us into doing what it deems best for us (again, there are sci-fi books exploring this possibility in depth, including an ebook of my own). What I am going to talk about are the three most alarming issues with this technology right here, right now: complacency due to A.I., lack of transparency regarding A.I. systems, and the R&D of A.I. being carried out by amoral entities.

Complacency due to A.I.

This is the most obvious issue with A.I., and one that I've covered in my blog (link). In a nutshell, it involves outsourcing our thinking and decision-making to A.I. systems, trusting them with our personal information, our property, and to some extent our lives. It doesn't even take a conscious A.I. for this situation to spiral out of control, since the culprit here is our own complacency and blind faith in technology, our own lack of awareness. This may sound like a Black Mirror episode, but the chances of it happening may be greater than you think. After all, with so many activities taking up our time (sometimes for good reason), isn't it efficient, even meaningful, to optimize our day-to-day lives with A.I. technology, even if that means automating large parts of them? Perhaps yes, to some extent, but beyond a certain point we may want to have a say in what happens in our lives. Finding that exact point, however, may not be as easy as it seems, and it's doubtful that the A.I. will find it for us and alert us before it's too late.

Lack of transparency regarding A.I. systems

This is a serious issue that many researchers are already aware of. Still, with the myriad resources available to them, they haven't figured out a solution, which is why A.I. systems remain black boxes. In many cases that's fine, since these systems are accurate enough, but what about the cases where they are not? What if a false positive or false negative from such a system causes great harm to an individual or a group of people? Will the enterprise owning that A.I. system take responsibility for its actions? The transparency problem is hard, but is it really so hard that no one in the world, including all these A.I. experts (who are paid more per month than some people make in a year), can tackle it? Or is it solved already and no one is aware of it, just as few people are aware of Deep Learning frameworks other than TensorFlow? The lack of transparency in A.I. systems may not be a life-threatening issue right now, but what happens when life-or-death decisions are outsourced to these black-box systems?
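To make the transparency point a bit more concrete, here is a minimal, hypothetical sketch of one common way to probe a black-box model from the outside: permutation importance. The model, data, and features below are all invented for illustration (a toy classifier that secretly relies on a single input); the idea is simply that shuffling one input at a time and measuring the accuracy drop reveals which features an opaque system actually depends on, without ever opening the box.

```python
import random

# A toy "black-box" model: we may only call predict(), not inspect it.
def predict(row):
    # Hidden rule: the model keys entirely on feature 0.
    return 1 if row[0] > 0.5 else 0

# Labelled data generated to match the hidden rule.
random.seed(0)
data = [[random.random() for _ in range(3)] for _ in range(200)]
labels = [1 if row[0] > 0.5 else 0 for row in data]

def accuracy(rows):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(labels)

baseline = accuracy(data)

# Permutation importance: shuffle one feature at a time and
# record how much the model's accuracy drops as a result.
importances = []
for j in range(3):
    shuffled_col = [row[j] for row in data]
    random.shuffle(shuffled_col)
    perturbed = [row[:j] + [v] + row[j + 1:] for row, v in zip(data, shuffled_col)]
    importances.append(baseline - accuracy(perturbed))

print(importances)  # feature 0 dominates; features 1 and 2 contribute nothing
```

Techniques like this only approximate what the model is doing, of course, which is part of why the transparency problem remains open: they describe behavior, not reasoning.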

R&D of A.I. being carried out by amoral entities

The final issue in this list is the one most aligned with the article that started this whole investigation. Although that article focused on one particular entity that doesn't seem to have any moral code whatsoever, other than the juvenile "don't be evil" motto, other corporations involved in the A.I. arena are also potential liabilities regarding this technology. Naturally, it would be immature to think of these or any other companies as "evil" or as having the mindset of a movie villain; more often than not, they are simply not aware of the issues the technologies they develop may bring about. Take FB, for instance. Does its CEO look like someone who has a solid grasp of morality or ethics? Is he even capable of thinking on this level? Are the CEOs of the other Silicon Valley companies much better? What about the researchers involved in bringing these A.I. systems to life? Perhaps in academia, A.I. researchers can think in broader terms about this technology, since they have a different set of values, but do you honestly think that a researcher with strict deadlines, earning a 6-figure salary, is going to ponder the morality of it all if the company he works for doesn't?

What are your thoughts on this matter? What other alarming issues of A.I. do you consider equally or more important than the ones I mentioned? Feel free to share your thoughts in the comments below. Thanks!

Zacharias 🐝 Voulgaris

I agree. Perhaps that's why transparent A.I. is something that requires immediate attention, in order to mitigate the risk of unknown biases in the A.I. system.

Franci 🐝Eugenia Hoffman, beBee Brand Ambassador

Undeniably, A.I. has advantages; however, it seems there'll be issues with stereotyping, putting all one's eggs in one basket, and if one iota of info is incorrect, it will spread like wildfire.

Zacharias 🐝 Voulgaris

Thank you all for your comments so far! Ali 🐝 Anani, Brand Ambassador @beBee, A.I. definitely has applications in the Insurance industry. However, the black-box nature of most modern A.I. systems makes justifying the corresponding decisions next to impossible. Personalization is probably one of the few applications that remains benign, though. Good point. @Phil Friedman, indeed, laziness is an issue that makes things worse in the A.I. world. Funny that you mention religion, albeit as a metaphor. There are people nowadays who look at the next step in A.I. with religious fervor, going so far as to worship it as a deity. @Bill Stankiewicz, A.I. definitely helps with logistics and warehouse management through robots. I wonder what would happen if these bots got hacked, however, or if there were an error in the A.I. system driving them...

Bill Stankiewicz, 🐝 Brand Ambassador

I have seen some great uses of AI in warehouses installing the new bots. They reduce walking for the pickers; check out GreyOrange out of Roswell, GA. They are installing hundreds in XPO facilities.

Manuel Chinchilla da Silva

Thanks for the info!

Phil Friedman

2 years ago #3

My primary concern is a variation on your "black box" alarm. It is that Ai is way, way over-represented and over-sold. And the scam of that is hidden behind the cloak of the black box, and abetted by the religious mantra that "If data says it's so, it must be so." Most companies currently relying on Ai to make people-related judgments are doing so out of laziness... the search for the quickest way to parse clients, customers, job applicants, employees, etc., even if the results are far less than accurate. It's always a matter of forcing a square peg of quantitative and pseudo-quantitative data into the round hole of Intelligent Judgment. Cheers!

Ali 🐝 Anani, Brand Ambassador @beBee

Well-presented thoughts on A.I., Zacharias 🐝 Voulgaris, and a very balanced approach yours is. Only this morning I read a report on A.I. and how it helps insurance companies gauge customers to tailor offers for them. But what guarantee do we have that they would stop there? I don't know.

Zacharias 🐝 Voulgaris

Thank you for the share, Pascal!
