Zacharias 🐝 Voulgaris


So, How Does the Novel Intelligence Paradigm Apply to Artificial Intelligence?


Over the past month, I introduced a new framework I devised for modeling intelligence. This taxonomy, which started out three-fold and grew to five-fold, looks at intelligence from a practical perspective: how it tackles problems and how it learns. I didn't say much about the learning part, but it can be inferred, as learning is inherently linked to asking questions like "why?", "how?", and "what?" These questions constitute the backbone of the paradigm, as they correspond to three of the intelligences that comprise it: the philosophical, the mechanical, and the mathematical intelligences, respectively (corresponding article).

Augmenting these three intelligences are the communicative and collaborative intelligences, which enable the harmonious coordination of the other three and the ability to join forces with other individuals in a mutually beneficial way, usually toward a common goal. I also talked about the role of Ethics and Morality in all this (corresponding article). In this article, we'll look at how all of this applies to Artificial Intelligence (AI) and how this paradigm can mitigate the risks inherent in the technology.

This article is not my first (or even my last) attempt to write about AI, as I'm very passionate about the topic. I even co-authored a book on AI and how it applies to data science, my field of expertise. My co-author currently works as an AI researcher at a research institute in Germany, and recently we were invited to a panel on the business aspects of AI, part of the Customer Technology World conference, which took place online. But enough about all that; even all the traction in the world can only scratch the surface of a subject like this. The field of AI is more like an ocean, and we have only expanded the coastline a bit. To proceed further, we may need a bigger boat!

Enter a different paradigm of intelligence, one that can help us first and foremost understand intelligence in ourselves, before we start training machines to develop and use it. Otherwise, AI can be quite dangerous, as thought leaders in this field (e.g., Elon Musk) have warned us repeatedly. Whether these people are right or wrong, I don't know, but it doesn't take a superintelligence to see that we are treading on thin ice here. I recently had a very insightful chat with a Canadian engineer/scientist who works in Quantum Computing and is one of the few people with access to state-of-the-art AI. When I asked him why not give other people access to this technology, via an API for example, he responded that it would be too dangerous. So, when I talk about AI (and Artificial General Intelligence, the logical next step beyond the state of the art), I have all this in mind.

AI currently excels at mechanical and mathematical intelligence. It has little to do with philosophical intelligence, though it is bound to get there too. Communicative intelligence is also improving, although it is doubtful it will reach the high-level communication we see in sci-fi films anytime soon. There isn't enough business value in this yet, and the performance trade-off may not be justified in many use cases (e.g., AI systems that handle back-end tasks, where no one asks how the results come about). Collaborative intelligence is fairly developed, at least when it comes to other machines. Collaboration with humans is possible but not sufficiently developed, due to the lack of a common framework and the prowess gap between man and machine. However, in particular niche applications it is already a reality (e.g., chess teams comprising a human and a chess program), and there is potential for other applications too.

So, how can AI be developed and guided to embrace the other aspects of the five-fold framework of intelligence, bridging the gap of understanding between intelligent machines and us? Well, first and foremost, we could educate everyone on the topic of intelligence and the limitations of AI, so that we are all on the same page. At the very least, we could then hold reasonable expectations of AI and not be swayed by the marketing of the futurist movement, which is overly optimistic about technology in general, IMHO.

What's more, we can design AI systems that are well-rounded in terms of intelligence, instead of super-experts in one particular kind of intelligence and hopeless idiots in every other aspect. This potential solution becomes significant when we look at the objective functions (aka fitness functions) that such systems try to optimize. To ensure the sustainable and safe evolution of an AI's functionality, we need to take baby steps instead of allowing it to accelerate uncontrollably, as it would if left untethered to reality. A minimal sketch of what such a well-roundedness incentive could look like follows below.
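To make this concrete, here is a small Python sketch of a fitness function that rewards balance across several capability scores instead of letting one dominate. Everything in it (the capability names, the scores, the balance penalty and its weight) is a hypothetical illustration of the idea, not a description of any existing AI system.

from statistics import mean, pstdev

def composite_fitness(scores: dict[str, float], balance_weight: float = 0.5) -> float:
    """Combine per-capability scores (each in [0, 1]) into one fitness value.

    The penalty term shrinks the fitness of lopsided systems: a model that
    scores 1.0 on one axis and 0.0 on the rest fares worse than a modest
    all-rounder.
    """
    values = list(scores.values())
    avg = mean(values)
    imbalance = pstdev(values)  # 0 when all capabilities are equal
    return avg - balance_weight * imbalance

# A specialist vs. a generalist under this metric:
specialist = {"mechanical": 0.95, "mathematical": 0.9, "philosophical": 0.1,
              "communicative": 0.2, "collaborative": 0.2}
generalist = {"mechanical": 0.6, "mathematical": 0.6, "philosophical": 0.5,
              "communicative": 0.55, "collaborative": 0.5}

print(composite_fitness(specialist))  # ~0.28: peak skills, heavily penalized
print(composite_fitness(generalist))  # ~0.53: modest but balanced wins

The design choice is simply that the standard-deviation term punishes lopsided capability profiles, so an optimizer chasing this score has no incentive to become a narrow savant.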

Finally, we can detach AI development from organizations with vested interests that don't represent the whole. An advanced AI is inherently dangerous, but in the hands of someone who doesn't care about the rest of the world, it's even more dangerous. We may not be able to control how an AI system thinks, but we can control whether it is exposed to objective functions reflecting goals of questionable morality.

Perhaps keeping a human in the loop of the whole AI development and maintenance process is a good rule-of-thumb solution, at least for the time being. However, we need to think ahead, intelligently and as holistically as possible, as we work on this technology. Nature may be forgiving of our mistakes, but it's doubtful that AI would be if we fail to instill the right values in it…
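As a closing illustration of that human-in-the-loop idea, here is a toy Python sketch in which no model update ships without explicit human sign-off. All the names, and the console-based review step, are hypothetical and stand in for whatever real review process a team would use.

from dataclasses import dataclass

@dataclass
class ProposedUpdate:
    description: str
    objective: str  # what the retrained model will optimize

def human_approves(update: ProposedUpdate) -> bool:
    """Block until a person reviews the update (here, via the console)."""
    print(f"Proposed update: {update.description}")
    print(f"New objective:   {update.objective}")
    return input("Deploy this update? [y/N] ").strip().lower() == "y"

def deploy(update: ProposedUpdate) -> None:
    if human_approves(update):
        print("Deploying...")  # hand off to the real deployment pipeline
    else:
        print("Rejected; the model keeps its current objective.")

deploy(ProposedUpdate("Retrain recommender", "maximize engagement"))

The point of the gate is not the code itself but where it sits: between any change to an AI's objective and that change taking effect.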


If you enjoy articles like this but have a penchant for the technical side of things, you are probably going to like my technical blog, where I write about artificial intelligence and other data-related topics. Cheers!

Comments

#5
The proof is out there if you look for it. Most people don't bother.

#4
Unfortunately, there is no clear-cut answer to this question, since we'd usually say something like "it depends," and an AI may not be able to use that as an acceptable response. In the world of tech, ambiguity isn't viewed in a positive light, and as AI is a technology, it's confined by these rules. Perhaps these lines of life and death are better drawn on a case-by-case basis, as an experienced and fair judge would do. However, how well they are drawn depends on the data used to train the AI, and that's a very slippery slope. Perhaps this topic deserves its own article. Cheers

#3
I'm not saying it hasn't. Perhaps the general public isn't aware of it though. Also, since I have no proof it has happened, I'd rather maintain a speculative stance on the matter :-)

Harvey Lloyd

10 months ago #4

My fear with AI, if one could call it that (maybe "concern" is more where I am), is about the things that make us human. Truth is something that will have to be coded into any intelligence. Where does the truth claim come from, and who picks it? Currently, we are experiencing a crisis of truth within humanity. We can reduce the truth divide down to the individual vs. the group. Where in AI do you mathematically cut off the point at which the death of the individual is needed for the good of the group? Who (which programmer) makes that choice? Within that mathematical equation, how does forgiveness fit in, when we are forgiving something against a known truth? https://www.yalescientific.org/2020/05/an-algorithmic-jury-using-artificial-intelligence-to-predict-recidivism-rates/

One would think that, historically, humans could have figured out this question. Yet we tend to repeat history as a matter of learning within generations. The bell curve of human development is not smooth. AI, mathematically, would seem only able to stay on a smooth and calculated trajectory without the "right" truths and a backstop of mercy. Thank you for your posts; you bring forward some very interesting questions about things I know very little about.


You're welcome. I was trying to formulate a response. "An advanced AI is inherently dangerous, but in the hands of someone who doesn't care about the rest of the world, it's even more dangerous. We may not be able to control how an AI system thinks, but we can control whether it is exposed to objective functions reflecting goals of questionable morality." I'm pretty sure the above has already happened.

Thank you for the share, Joyce 🐝 Bowen, Brand Ambassador @ beBee
