Why A.I. Transparency Is Something That Concerns Us All

Recently I had a good chat with my former adviser, who mentored me throughout my PhD in ML and A.I. applications. Among the things we discussed was the need for transparency in A.I. systems, something that still leaves a lot to be desired, even though it has attracted a lot of research interest lately. Transparency in A.I., however, concerns all of us, not just researchers in this field.
A.I. has found applications in organizations across a wide variety of domains. From telecommunications to robotics to retail, A.I. has added value in all kinds of companies, and this trend doesn't show any signs of stopping any time soon. A big part of the reason A.I. has become so popular is Deep Learning, a methodology that enables large-scale artificial neural networks (ANNs) to be deployed and used for analyzing all kinds of data without much work from the end user. Of course, these models still need some fine-tuning, but you don't need to be a domain expert to figure out how to prepare the data before feeding it into a model based on this technology.
All that is great, but ANNs are black boxes, something we may have accepted as a side effect of a technology that otherwise exhibits exceptional performance. It's like a genie we've let out of its bottle: it has started granting us wishes, but we have no idea how it manages to manifest them. What's worse is that many of us don't even care. As long as that Deep Learning ANN spits out predictions that are quite accurate, we don't worry about the process it used to come up with them. This way, things like hidden biases in the data can be crystallized in a predictive analytics model and no one is the wiser. Also, more sophisticated A.I. systems such as chatbots can start communicating in ways we cannot follow, using a language of their own which, although it shares letters and words with English, uses them in ways that make no sense to us (though they may make sense to the A.I. systems themselves).
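To make the "crystallized bias" point concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is invented for the example: the synthetic loan data, the "group" feature (standing in for a proxy attribute such as a postcode), and the toy threshold "model". It shows how a model fit on biased historical decisions silently reproduces the bias, and how inspecting what it learned, a basic form of transparency, exposes it.

```python
import random

random.seed(0)

# Hypothetical synthetic loan data. "group" stands in for a proxy
# feature (e.g. a postcode). The historical labels are biased: group B
# applicants needed a noticeably higher income to be approved.
def make_applicant():
    group = random.choice(["A", "B"])
    income = random.gauss(60, 10)
    approved = income > (55 if group == "A" else 65)
    return group, income, approved

data = [make_applicant() for _ in range(10_000)]

# A naive "black-box" model: memorize the lowest income that was ever
# approved for each value of the proxy feature.
def fit_threshold(rows):
    return min(income for _, income, ok in rows if ok)

model = {g: fit_threshold([r for r in data if r[0] == g]) for g in "AB"}

# Transparency step: instead of only checking aggregate accuracy,
# inspect what the model actually learned for each group.
for group, threshold in sorted(model.items()):
    print(f"group {group}: learned income threshold ~{threshold:.1f}")
```

Running this shows group B's learned threshold sitting well above group A's, even though nothing told the model to discriminate: it simply absorbed the bias already present in the historical labels, which is exactly what stays invisible when we only look at predictive accuracy.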
We may reap the fruits of A.I. today while remaining blissfully ignorant of the hidden processes under the hood, but whether this is a sustainable strategy is a matter of debate. Of course we don't really care how the recommendation engine behind the online shop we use came up with its suggestions, but what about more critical applications of A.I.? What if a relative's diagnosis of a terminal disease were based on an A.I. system that is a black box? Or what if the approval of a loan depended on the output of an equally opaque A.I. system?
It is clear that A.I. has a long way to go before it can be fully accepted as a decision-making mechanism, one that can be integrated into an organization's processes without creating issues for the stakeholders of the projects those processes affect. Transparency has a lot to offer in that respect, even if it remains a challenging obstacle in A.I. research at the moment. However, if there is one thing this field has shown us, it is that we can expect the unexpected as both we and the A.I. systems we create become more intelligent.
Articles by Zacharias 🐝 Voulgaris