Why A.I. Transparency Is Something That Concerns Us All
Recently I had a good chat with my former adviser who mentored me throughout my PhD in ML and A.I. applications. Among the things we discussed was the need for transparency in A.I. systems, something that still leaves a lot to be desired, even if it has attracted a lot of research interest lately. However, the topic of transparency in A.I. is something that concerns all of us, not just researchers in this field.
A.I. has found applications in organizations across a variety of domains. From telecommunications to robotics to retail, A.I. has added value to all kinds of companies, and this trend doesn’t show any signs of stopping any time soon. A big part of the reason A.I. has become so popular is Deep Learning, a methodology that enables large-scale artificial neural networks (ANNs) to be deployed and utilized for analyzing all kinds of data without requiring much work from the end user. Of course, these models still need some fine-tuning, but you don’t need to be a domain expert to figure out how to prepare the data before you feed it into a model based on this technology.
All that is great, but ANNs are black boxes, and that’s something we may have accepted as a side-effect of a technology that otherwise exhibits exceptional performance. It’s like a genie we’ve let out of its bottle: it has started granting us wishes, but we have no idea how it manages to manifest them. What’s worse is that many of us don’t even care. As long as that Deep Learning ANN spits out predictions that are reasonably accurate, we don’t worry about the process it used to come up with them. This way, things like hidden biases in the data can become crystallized in a predictive analytics model and no one is the wiser. Moreover, more sophisticated A.I. systems such as chatbots can start communicating in ways we cannot follow, using a language of their own which, although it shares letters and words with English, uses them in ways that don’t make sense to us (though they may make sense to the A.I. systems themselves).
We may reap the fruits of A.I. today while remaining blissfully ignorant of the hidden processes under the hood, but whether this is a sustainable strategy is a matter of debate. Of course, we don’t really care how the recommendation engine behind an online shop came up with its suggestions, but what if A.I. were used for a more critical application? What if a relative’s diagnosis of a terminal disease were based on an A.I. system that is a black box? Or what if the approval of a loan depended on the output of an equally opaque A.I. system?
It is clear that A.I. has a long way to go before it can be fully accepted as a decision-making mechanism, one that can be integrated into an organization’s processes without creating issues for the stakeholders those processes affect. Transparency has a lot to offer in that respect, even if it seems like a challenging obstacle for A.I. research at the moment. However, if there is one thing this field has shown us, it is that the unexpected is something we can expect in the future, as both we and the A.I. systems we create become more intelligent.