So, How Does the Novel Intelligence Paradigm Apply to Artificial Intelligence?
Over the past month, I introduced a new framework I devised for modeling intelligence. This taxonomy, which grew from three intelligences to five, looks at intelligence from a practical perspective: how it tackles problems and how it learns. I didn't talk much about the learning part, but it's something you can infer, as it's inherently linked to asking questions like "why?", "how?", and "what?" These questions constitute the backbone of the paradigm, as they correspond to three of the intelligences that comprise it: the philosophical, the mechanical, and the mathematical, respectively (corresponding article).
Augmenting these three are the communicative and collaborative intelligences, which enable the harmonious coordination of the aforementioned three and the ability to join forces with other individuals in a mutually beneficial way, usually towards a common goal. I also talked about the role of ethics and morality in all this (corresponding article). In this article, we'll look at how all of this applies to Artificial Intelligence (AI) and how it can mitigate the inherent risks this technology exhibits.
This article is not my first (nor my last) attempt to write about AI, a topic I'm very passionate about. I even co-authored a book on AI and how it applies to data science, my field of expertise. My co-author currently works as an AI researcher at a research institute in Germany, and we were recently invited to a panel on the business aspects of AI, part of the Customer Technology World conference, which took place online. But enough about all that; even all this activity barely scratches the surface of a subject like this. The field of AI is more like an ocean, and so far we have merely expanded the coastline a bit. To venture further, we may need a bigger boat!
Enter a different paradigm of intelligence, one that can help us first and foremost understand intelligence in ourselves before we start training machines to develop and use it. Otherwise, AI can be quite dangerous, as thought leaders in the field (e.g., Elon Musk) have warned us repeatedly. Whether these people are right or wrong, I don't know, but it doesn't take a superintelligence to see that we are treading on thin ice here. I recently had a very insightful chat with a Canadian engineer/scientist who works in Quantum Computing and is one of the few people with access to state-of-the-art AI. When I asked him why not make this technology available to others via an API, for example, he responded that it was too dangerous. So, when I talk about AI (and Artificial General Intelligence, the logical next step beyond the state of the art), I have all this in mind.
AI currently excels at mechanical and mathematical intelligence. It has little to do with philosophical intelligence, although it is bound to get there too. Communicative intelligence is also improving, though it's doubtful it will reach the high-level communication we see in sci-fi films anytime soon. There isn't enough business value for this yet, and the performance trade-off may not be justified in many use cases (e.g., AI systems handling back-end tasks, where no one asks how the results come about). Collaborative intelligence is fairly developed, at least when it comes to other machines. Collaboration with humans is possible but not sufficiently developed, due to the lack of a common framework and the prowess gap between man and machine. However, in particular niche applications it is already a reality (e.g., chess teams comprising a human and a chess program), and there is potential for other applications too.
So, how can AI be developed and guided to embrace the other aspects of the five-fold framework of intelligence and bridge the gap of understanding between intelligent machines and us? First and foremost, we could educate everyone on the topic of intelligence and the limitations of AI, so that we are all on the same page. At the very least, we could then hold reasonable expectations of AI and not be swayed by the marketing of the futurist movement, which is, in my humble opinion, overly optimistic about technology in general.
What's more, we can design AI systems that are well-rounded in terms of intelligence, instead of super-experts in one kind of intelligence and hopeless idiots in the rest. This becomes significant when we look at the objective functions (aka fitness functions) that such systems try to optimize. To ensure a sustainable and safe evolution of an AI's functionality, we need to take baby steps instead of allowing it to accelerate uncontrollably, as it would if left untethered to reality.
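As a toy illustration of this idea (every score, name, and weight below is hypothetical, not taken from any real AI system), a "well-rounded" objective can be encoded by combining per-intelligence scores with a geometric mean rather than a plain sum, so that near-zero competence on any one axis drags the whole fitness value down:

```python
# Toy sketch: a composite fitness function that rewards balanced
# competence instead of one narrow super-skill. All names, scores,
# and weights here are hypothetical, purely for illustration.

def composite_objective(scores, weights=None):
    """Combine per-intelligence scores (each in [0, 1]) into one fitness value.

    A weighted geometric mean (rather than a weighted sum) penalizes
    systems that are superhuman on one axis but near zero on another:
    if any score approaches 0, overall fitness collapses toward 0 too.
    """
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total_weight = sum(weights.values())
    fitness = 1.0
    for name, score in scores.items():
        # Clamp away from zero to keep the exponentiation well-defined.
        fitness *= max(score, 1e-9) ** (weights[name] / total_weight)
    return fitness

# A narrow specialist vs. a well-rounded generalist:
specialist = {"mathematical": 0.99, "mechanical": 0.95,
              "philosophical": 0.05, "communicative": 0.10,
              "collaborative": 0.10}
generalist = {"mathematical": 0.70, "mechanical": 0.70,
              "philosophical": 0.60, "communicative": 0.65,
              "collaborative": 0.65}

print(composite_objective(specialist))  # noticeably lower than the generalist
print(composite_objective(generalist))
```

The design choice here is the point: a sum-based objective happily trades philosophical or collaborative competence for more raw mathematical skill, whereas a product-based one cannot, which is one simple way to bake "baby steps across all fronts" into what the system optimizes.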
Finally, we can detach AI development from organizations with vested interests that don't represent the whole. An advanced AI is inherently dangerous, but in the hands of someone who doesn't care about the rest of the world, it's even more so. We may not be able to control how an AI system thinks, but we can control whether it is exposed to objective functions that reflect goals of questionable morality.
Perhaps keeping a human in the loop of the whole AI development and maintenance process is a good rule-of-thumb solution, at least for the time being. However, we need to think ahead, intelligently and as holistically as possible, as we work on this technology. Nature may forgive our mistakes, but it's doubtful that an AI would be as forgiving if we fail to instill the right values in it…
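The human-in-the-loop rule of thumb can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names and the update format are my own invention, not a real AI framework): no change is applied unless a human reviewer explicitly approves it.

```python
# Toy sketch of a human-in-the-loop gate: no proposed change to an AI
# system is applied unless a human reviewer explicitly approves it.
# The update format and function names are hypothetical placeholders.

def human_in_the_loop_update(proposed_update, reviewer):
    """Apply a proposed change only if the human reviewer approves it.

    `reviewer` is any callable that inspects the update and returns
    True (approve) or False (reject) -- in practice, a prompt shown
    to a person rather than a hard-coded policy.
    """
    if reviewer(proposed_update):
        return ("applied", proposed_update)
    return ("rejected", None)

# Example policy: automatically reject any update that touches the
# objective function itself -- the kind of change that most needs
# careful human scrutiny.
def cautious_reviewer(update):
    return "objective" not in update.get("changes", [])

status, _ = human_in_the_loop_update(
    {"changes": ["objective", "weights"]}, cautious_reviewer)
print(status)  # rejected
```

The point of the sketch is the default: the system cannot self-modify past the gate, so the pace of change stays bounded by human judgment rather than by the machine's own optimization loop.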
If you enjoy articles like this but have a penchant for the technical side of things, you are probably going to like my technical blog, where I write about artificial intelligence and other data-related topics. Cheers!