AI – Déjà vu, jamais vu or presque vu?

Since I retired and became an emeritus almost a year ago, I have faced both matters that look oddly familiar but are actually entirely new, and matters that should be familiar but still feel novel. Moreover, some things that I should remember well from the past and be able to carry out as simple routines have required extra effort. These have reminded me of the déjà vu, jamais vu, and presque vu phenomena.

Déjà vu is a feeling that one has lived through the present situation before, although that should be impossible. Jamais vu, often described as the opposite of déjà vu, means experiencing a situation that one recognizes but that still seems unfamiliar. Presque vu, also known as the tip-of-the-tongue phenomenon, means failing to remember something while having partial recall and a strong feeling that retrieval is about to happen.

The three phenomena also seem to fit perfectly with the present emergence of Artificial Intelligence, or AI. Personally, I am closest to presque vu in my understanding of AI, and there is a simple reason for it. I am a well-seasoned AI researcher myself, from the early era of knowledge-based systems, as they were called at the time. In the following I will, however, discuss the once again booming AI by making use of all three phenomena.

Why do we believe we have already experienced the present AI, although that is impossible?

There have already been several waves of AI, dating back at least to the sixties. Strangely enough, all of these have emerged globally, albeit with many regional and contextual developments, too. Although this is common for many other innovations, AI has always gained interest and raised hopes of radical changes for the future. In contrast, one could claim that many of the enormously important computing, communication, and data management developments have been rather well understood and forecast in advance, Moore's law being only one example.

In other words, it seems that intelligence in computing, as opposed to mere effectiveness, is one of the familiar-looking elements in AI. In fact, the very phrase Artificial Intelligence states exactly this. As in the case of déjà vu, one should however realize that what intelligence in computing has meant, means now, and will mean in the future are completely different things. One may have a strong feeling that the present AI is the same as what was experienced earlier, and that we already know what happens next. This is simply false.

One could ponder whether this feeling is due to the envisioned effects of AI rather than the technology itself. In fact, many of the promises of the former AI waves have not been met; there have been remarkable setbacks and even total failures. As just one example, most if not all of the past, extremely expensive dedicated AI computing platforms are now gone with the wind, and some of the biggest AI development programs were cancelled very early on. However, although knowledge-based medical assistants did not initially make any breakthroughs, many intelligent industrial systems have emerged, and the solutions on which they are based have become widely adopted. Therefore, one can also claim that there is some "truth" in the déjà vu of AI.

What is there in the present AI that we should recognize and remember?

When I was busily gathering, organizing, and applying knowledge in the late eighties for the intelligent embedded systems development environment called Spade, it was obvious that "knowledge" was at the heart of everything. In other words, the whole idea of intelligence was based on the availability of data and its organization and "artificial" use in a manner that would positively affect embedded systems development work.

That data was mainly put to use in the form of justified suggestions for human system designers, who would still carry out their expert work as before. Moreover, the data was gathered manually from individual persons, documents, and the like, and then organized and formulated into a "machine-readable" format.

Later on, however, intelligent industrial systems were developed by automatically collecting and analyzing data from the systems' target environments and feeding it in real time to the system, in order to manage and explain the current situation, to recover from problems, and to make educated guesses about future needs.

In both cases the intelligent systems were basically only as good or as bad as their data and the means to make use of it. I believe that at present we should also keep in mind that without the very same ingredients, AI will not work and can even cause severe problems. This question has become even more essential because data is now acquired and used in enormous volumes. It has become difficult to trace and qualify data, and the algorithms that make use of it have become complex and are invisible to most of us.

In terms of jamais vu, this implies that we must look outside the AI domain to understand and support its emergence. As an example, analytics concepts, methods, and means must also be developed and adopted for business needs when AI-based solutions are taken into use and applied. One false intelligent decision may cause a business catastrophe. As a related remark, it is no surprise that the ethics of AI and the responsibility of AI-based decision making have become important topics among both researchers and practitioners.

How can we push the present AI off the tips of our tongues for the future?

My favorite of the three phenomena for AI is, as said, presque vu. In addition to my past as an AI scientist, I like the promise of presque vu: remembering, understanding, and making use of what is already on the tip of the tongue.

What I would especially like to push forward is generative white-box AI. Along with the increase in the amount of digital data, the number of individuals involved with data has also dramatically increased. Mainly two people, the young engineering student Marko Heikkinen and myself, were involved in the design of the knowledge-based Spade environment. The related data gathering included perhaps only a dozen other people.

Marko basically implemented and test-used the whole system by himself in a few weeks. He was therefore the one who generated the AI-based solution for use and knew its secrets. A few years later, somewhat bigger teams, but typically only three to five people, developed intelligent industrial systems. Even these systems required human operators to make the final process control and recovery decisions. Perhaps twenty or thirty people knew what was inside the systems and how to develop them further, as required.

If we want to get AI off the tips of our tongues, we must allow far more people, if not everyone, to understand and be involved in AI. This can mean creating and providing data, as is mainly the case today, but in an open manner, or being involved in generating AI-based services, solutions, and new knowledge. To be precise, I do not mean only the wide use of ChatGPT, or all of us taking elementary courses on AI.

Veikko Seppänen