The Dangers of A.I. and Their Relevance to Data Science
Lately there has been a lot of talk about the dangers of A.I. and how, if left unchecked, it could be catastrophic for the world. Even high-profile people with a solid understanding of technology, such as Bill Gates and Elon Musk, have taken a public stance on the matter, warning of these risks. But how does all this affect us as data science professionals? Do we need to be wary of A.I. in our work?
When people talk about this matter, they usually refer to A.I. getting out of control and wreaking havoc in our world, either by taking over the world or by destabilizing the economy. Although the former scenario is quite unlikely (plenty of movies and TV shows explore this possibility, keeping it at the front of our collective attention!), the latter is more plausible. However, these concerns apply mainly to one of two scenarios:
1. A.I. becomes super-human (aka Artificial Super-Intelligence, or ASI) and general-purpose (aka Artificial General Intelligence, or AGI), and therefore beyond our control, and
2. We outsource a lot of tasks to narrow AIs (specialized A.I. systems that are very good at a particular task only), thereby driving lots of people out of work.
Of these two scenarios, the second is the more imminent danger, though the first may also materialize in the years to come (assuming continuous progress in the A.I. field). As narrow AIs are much easier and cheaper to develop, they are likely to become abundant as more people become aware of the benefits of A.I. However, this doesn't have to destabilize our economy, or any other aspect of our society. People can be retrained to undertake other tasks, such as maintaining and coordinating these AIs, so that they can still add value to their communities even after their previous tasks are automated. This transition requires deliberate effort, though, as no real development comes about automatically. If something happens effortlessly, you can be certain that it is following the path of entropy, leading to some systemic issue, even if it benefits certain parts of that system.
In data science, A.I. has never been more relevant than it is today. Using a well-established deep learning framework, such as TensorFlow, you can build systems that make predictive analytics and other parts of data science easier and, in many cases, more robust. Although these systems are by definition narrow AIs, they still require attention. Because artificial neural networks (ANNs) are black boxes by nature, any biases present in the training data are bound to creep into a model's weights (its key parameters). So it would be rash, if not unethical, to use these systems haphazardly. It is more prudent to have professionals manage them and communicate their assumptions and limitations to all stakeholders. Such an approach may not resolve every issue with these AIs, but it can definitely keep their risks in check.
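As a concrete illustration of how data bias can creep into a trained model, here is a minimal sketch. It uses synthetic data and a toy logistic model (a stand-in for a network's output layer); all variable names, group labels, and numbers are illustrative, not from any real dataset. The audit at the end compares positive-prediction rates between two groups, a simple fairness check:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a feature that happens to correlate with a
# sensitive group attribute (illustrative only).
n = 1000
group = rng.integers(0, 2, n)             # sensitive attribute: 0 or 1
x = rng.normal(size=n) + 0.8 * group      # feature shifted by group membership
y = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

# Minimal logistic model trained by gradient descent.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(w * x + b)))    # predicted probabilities
    w -= 0.1 * np.mean((p - y) * x)       # gradient step on the weight
    b -= 0.1 * np.mean(p - y)             # gradient step on the bias

# Audit: compare positive-prediction rates across groups
# (the "demographic parity" gap).
pred = 1 / (1 + np.exp(-(w * x + b))) > 0.5
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_0 - rate_1):.2f}")
```

Because the feature is correlated with the group, the model ends up favoring one group over the other, even though the group attribute was never given to it directly. This is exactly the kind of effect that makes it prudent to audit such models before deployment.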
The danger of A.I. left on its own is quite real, even if it is not catastrophic in data science. However, it is much more subtle than its portrayal in sci-fi films, which are, after all, works of art rather than actual science. The real science behind A.I. is more nuanced, and it renders these systems a liability when it comes to critical processes. So it may be a good idea to keep humans in the loop, especially when someone needs to be held accountable for the outputs of these systems.
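One simple way to keep humans in the loop is a confidence-based deferral policy: the model acts autonomously only when it is sufficiently confident, and every borderline case is routed to a human reviewer who can be held accountable. The threshold and action labels below are hypothetical, just to sketch the idea:

```python
def route_prediction(probability: float, threshold: float = 0.9) -> str:
    """Act on the model's output only when it is confident enough;
    otherwise defer to a human reviewer (hypothetical policy)."""
    if probability >= threshold:
        return "auto_approve"        # model is confident it's a positive
    if probability <= 1 - threshold:
        return "auto_reject"         # model is confident it's a negative
    return "human_review"            # too uncertain: a person decides

# Confident outputs are automated; uncertain ones go to a person.
print(route_prediction(0.97))  # → auto_approve
print(route_prediction(0.55))  # → human_review
```

Tuning the threshold trades off automation volume against the number of cases humans must review, which is a policy decision as much as a technical one.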