When Playing It Safe Is the Best Strategy: The Case of A.I. Safety
Just like many other people involved in applied science today, I enjoy taking risks. In fact, I’m a big supporter of the idea that not taking risks is the riskiest thing of all. Nevertheless, when it comes to A.I., taking risks is not only dangerous for the individual but also potentially catastrophic for the whole.
Still, chances are that A.I. is not going to become self-aware one day and decide to kill us all, Terminator-style! The safety concerns around it are far more subtle and, because of that, easier to dismiss as flukes. After all, when it comes to safety, we are not the most cautious species, as history has shown. What’s worse, when there is potential profit involved, we often let temperance go out the window, since we focus too much on what our competitors are doing.
Perhaps that’s why we have chat-bots running wild in conversations we cannot comprehend. Granted, this is not a serious safety issue, but the underlying problem is quite real, namely that we are not fully aware of how A.I. works and how it could evolve once we set it on auto-pilot. And if you think I’m speculating, just take a look at Google’s self-reproducing A.I. systems. Naturally, these are still weak AIs, focused on very specific tasks, but given how fond big tech companies are of automation, this whole endeavor is more of a slippery slope than it looks.
With the relentless propaganda on this matter by people who have an unhealthy fixation on the wonders of this tech, many are sold on the idea that everything is going to be fine because some super A.I. (aka ASI) will evolve once a certain level of A.I. is attained (AGI). Then, out of some strange urge to help us, this A.I. is going to solve all of our problems, including those caused by A.I. tech itself. I don’t know about you, but I find this scenario quite implausible. As an engineer, I’ve grown quite confident in the empirical fact that “if something can go wrong, it will!” (aka Murphy’s Law).
But not all is doom and gloom. Despite the variety of potential issues A.I. brings about, there are many potential solutions that can help us avert any major mishap. Many A.I. researchers today are studying this topic seriously, while several A.I. professionals raise awareness of the fact that A.I. Safety concerns all of us and deserves close consideration. In my latest video on Safari I talk about all this, along with the idea of a safe advanced A.I., as it’s good to put some thought into what a positive A.I. could be like, instead of just dwelling on imagery of malicious robots.
What are your thoughts on this topic? Will A.I. be a useful power tool, or will it create more problems than it solves? Feel free to let me know in the comments.
Source: pixabay.com (after some processing work)