By akademiotoelektronik, 30/04/2022
2016, the year of artificial intelligence?
The development of artificial intelligence will probably be one of the key elements of the famous “fourth industrial revolution” that the Davos Economic Forum has just celebrated. It is little known, but the consecration of this still young science, despite the fantasies it already conveys, is the culmination of a long and far from linear process. If 2016 is the year of artificial intelligence, as Microsoft at least has declared, it will also mark the sixtieth anniversary of the Dartmouth seminar which, for many specialists, marked the birth of a long journey of research and experimentation in which scientists sought to endow the computer with capabilities similar to those of the human brain. On August 31, 1955, four pioneers of the then-nascent field of computing, John McCarthy of Dartmouth College, Marvin Minsky of Harvard University, Nathaniel Rochester of IBM and Claude Shannon of Bell Telephone Laboratories, decided to organize a meeting of an entirely new kind. They proposed that, for two months during the summer of 1956, they gather at Dartmouth with ten of their fellow researchers, and set out the premise of the meeting: that every aspect of learning, and of the other manifestations of human intelligence, could be described so precisely that a machine could be made to simulate them.
Proposed remuneration: $1,200 plus expenses for the academic researchers; the others were supported by their employers (IBM, Bell, Hughes Aircraft, Rand Corp)... John McCarthy (1927-2011), born in Boston to immigrant parents, an Irish father and a Lithuanian mother, is considered today a pioneer of artificial intelligence. He was gifted in mathematics, which he first learned on his own before being admitted to Caltech, where his knowledge allowed him to enter directly into the third year. In 1962, he created the first artificial intelligence laboratory at Stanford University, where he taught until his retirement in 2000. Another great figure of the discipline also took part in this seminar: Herbert Simon (1915-2001), winner of the Nobel Prize in Economics in 1978, born in Milwaukee, the son of an engineer who had emigrated from Germany in 1903. Unlike McCarthy, he was not a pure mathematician but an economist and a specialist in political science and organizations. From 1943, he became interested in modeling decision-making in organizations (a subject in which the American military then took the greatest interest). His research on this subject would naturally lead him to computer science and artificial intelligence. The two men also claimed the heritage of Alan Turing, the English mathematician who had become famous during the Second World War by designing an “intelligent machine” capable of deciphering German codes, and who was convinced that, like a child, a computer should be able to learn.
Deep Blue, the computer stronger than Kasparov
It would take many years for the intuitions of the founding fathers to translate into concrete progress. In the 1960s and 1970s, research funding priorities in the United States went to the development of the nuclear strike force and of increasingly powerful computers. Researchers were then much more interested in the speed of calculation than in the intelligence of the machine. Then, in the 1980s, it was the Internet that mobilized most of the research effort, with colossal investments in data transport networks. It was not until the early 2000s that artificial intelligence returned to the forefront, with the help of Hollywood scriptwriters who realized that robots with superior intelligence had become characters capable of drawing crowds to movie theaters.
For many experts, the second birth of artificial intelligence dates back to 1997, when the Deep Blue computer, designed by IBM, defeated world chess champion Garry Kasparov. The machine then weighed 1.4 tonnes and required the presence of around twenty computer scientists. The second major breakthrough was that of Watson, also designed by IBM, an intelligent computer which, in February 2011, beat the best human contestants on the American quiz show Jeopardy!. This experiment showed that a machine could understand complex questions asked in natural language, avoid traps, give answers in a few seconds and compute a reliability index for each answer.
There is no single definition of human intelligence. But Yves Coppens has dated its first manifestation: three million years ago, a hominid had the idea of grabbing a pebble, then another, and striking the first with the second in order to transform it. For Coppens, this event, the first sign of intelligence, changed the history of humanity. Nor is there a very precise definition of artificial intelligence, but we know that it consists in endowing software with a certain number of skills and know-how, of efficiency comparable or even superior to that of human intelligence, all based on mathematics, algorithms and semantics.
Process big data and learn endlessly
Beyond the fantasies created by the still very distant prospect of machine intelligence replacing that of man, the development of artificial intelligence today relies on the conviction of researchers, but also of the companies that implement it, that artificial intelligence software will help solve two essential problems: understanding data, and establishing a “natural” language between humans and machines. These are two of the main avenues of research aiming to move from the Internet of questions to the Internet of answers (the Semantic Web).
In terms of data production, we are now entering the era of the zettabyte, a volume so extravagant that it is inaccessible to human processing or to conventional computers. Artificial intelligence will allow machines to draw from this data the elements they need to provide a clear and rapid answer to the question asked. This is naturally a huge undertaking, since the machine needs extraordinary information processing power and sufficiently efficient algorithms to bring information together, find correlations and build relevant data structures, before providing one or more answers with a good probability of being correct. It is an elaborate stochastic process, based on mathematics. IBM's Watson artificial intelligence software is thus capable, in a given field (for example, the treatment of certain types of cancer), of “reading” unstructured data drawn from scientific and medical publications, case notes drawn up by doctors on their patients, pharmaceutical company research and the documents describing experimental treatment protocols in a number of hospitals, in order to answer a doctor's question about which treatment to apply to a patient. In this case, the “intelligence” of the machine lies in its ability to embrace a considerable volume of information, to extract from it what is meaningful in relation to the question asked, to provide relevant and rapid answers (which human intelligence would be unable to do), but also to deepen its knowledge as questions are put to it. This does, however, imply determining a relatively precise reference domain. This ability of the machine to “learn” opens the way to “deep learning”, a process by which the machine becomes increasingly competent in a given field and thus constitutes a decision-making tool based on in-depth analysis of a very large amount of information.
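The idea of answers delivered with “a good probability of being correct” can be illustrated with a deliberately toy sketch: candidate answers gather supporting evidence across documents, and the evidence counts are normalized into a rough reliability index. This is purely illustrative and in no way Watson's actual pipeline, which is far more elaborate.

```python
# Toy answer-scoring sketch: each document "votes" for the candidate
# answer it best supports; the vote shares serve as a crude reliability
# index. Illustrative only -- not IBM Watson's actual method.
from collections import Counter

def score_answers(candidates: list[str]) -> dict[str, float]:
    """Map each candidate answer to its share of the supporting evidence."""
    counts = Counter(candidates)
    total = sum(counts.values())
    return {answer: n / total for answer, n in counts.items()}

# Hypothetical votes extracted from four medical documents.
votes = ["treatment A", "treatment A", "treatment B", "treatment A"]
print(score_answers(votes))
# -> {'treatment A': 0.75, 'treatment B': 0.25}
```

A real system would weight each document's vote by retrieval relevance rather than counting them equally, but the normalization step is the same in spirit.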
We can see the interest of this type of artificial intelligence for companies that have to process large amounts of data, in health, insurance, banking and financial services. It is no coincidence that a number of investment funds and banks (Bridgewater, BlackRock, Two Sigma, Deutsche Bank, etc.) are snapping up the best artificial intelligence specialists, at exorbitant prices, from IBM, Google and elsewhere, to develop autonomous quantitative management algorithms capable of searching the immensity of big financial data for the combinations of information (“patterns”) that will form the basis of unbeatable investment strategies.
Eliminate the risk of human error
Similarly, IBM, building on the technologies implemented in Watson, has developed artificial intelligence software, M & A Pro, whose objective is to eliminate the risk of human error in mergers and acquisitions processes. The machine analyzes thousands of pieces of information on the target companies, drawing on a repository of around a hundred completed acquisitions, and calculates the probability that the planned acquisition will produce the expected results. The other major area opening up to artificial intelligence is that of language.
The machine does not merely treat words as data: it understands their meaning. This implies that it has a dictionary of words and that it is able to analyze their structure (root, suffixes, prefixes, declensions, conjugation...), in a process of lemmatization (lexical analysis), which consists of grouping together the different forms that a word can take (masculine or feminine, singular or plural, mood, etc.). In short, to be able to tell the difference between two very similar requests, such as “a loan for a car” and “the loan of a car”. This requires combining mathematical skills with a functional understanding of language.
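The grouping step described above, lemmatization, can be sketched in a few lines. Real systems derive lemmas from morphological rules (roots, suffixes, prefixes); the lookup table below is a hypothetical, purely illustrative stand-in for such a dictionary.

```python
# Minimal lemmatization sketch: map inflected forms of a word back to a
# single dictionary entry (lemma). The table is illustrative only; real
# lexical analyzers use morphological rules and far larger dictionaries.
LEMMAS = {
    "loans": "loan", "loaned": "loan", "loaning": "loan",
    "cars": "car",
    "is": "be", "are": "be", "was": "be", "were": "be",
}

def lemmatize(token: str) -> str:
    """Return the lemma for a token, falling back to the token itself."""
    return LEMMAS.get(token.lower(), token.lower())

def lemmatize_sentence(sentence: str) -> list[str]:
    """Lemmatize each whitespace-separated token in a sentence."""
    return [lemmatize(tok) for tok in sentence.split()]

print(lemmatize_sentence("The cars were loaned"))
# -> ['the', 'car', 'be', 'loan']
```

Once every surface form is reduced to its lemma, “a loan for a car” and “the loan of a car” share the same content words, and distinguishing them becomes a matter of analyzing the function words and syntax around those lemmas.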
Scalable dictionary and "digital assistants"
The French company Davi is a pioneer of the Semantic Web. It has built a self-learning dictionary of 1.3 million entries, updated every 48 hours from the Internet, and the artificial intelligence with which it is equipped allows it to understand context and meaning.
Davi has thus developed “digital assistants” capable of supporting a call center or the help desk of a website. The digital assistant understands the question put to it and answers in natural language. This requires extremely high-performance algorithms and very large processing capacities. But these virtual call centers allow the companies that install them to control the information given by digital assistants and to improve customer service while reserving human intervention for the most complex questions. We can clearly see the potential of natural language to enrich the dialogue between man and the many connected objects he uses every day: computers and telephones, but also connected objects in the home or office.
The next step will be the emotional intelligence of machines. For now, a digital assistant cannot tell the difference between a satisfied and a dissatisfied person. Through vocal and morphological analysis, the machine will soon be able to detect the personality of its interlocutor, to label their emotions and adapt its response to them, and also to match the level of language of whoever it is “talking” to. The machine will therefore have to integrate a library of the different expressions a human face can present, in order to recognize them and take them into account in the language it uses. Similarly, the machine will have to decipher the workings of the human voice (speed, spectrum, etc.). These technologies are currently under development. They will probably pave the way for the “affective intelligence” of machines: capable, through the choice of words and tone of voice, of creating on their own initiative a “positive” climate in their dialogue with human beings, or even of showing a sense of humor. All this while waiting for machines capable of “lying”, a capability that for the time being remains out of developers' reach.
In a recent interview with La Tribune, Jean-Gabriel Ganascia, one of the main French specialists in the field, explained that “artificial intelligence is present everywhere in our lives”. It is certain that in the years to come it will play an increasingly important role in all of a company's processes, whether in decision-making mechanisms, in the intelligence of products and services themselves, or in relations with customers. But we are still only at the beginning of an evolution that some experts believe will radically change the nature and functions of human intelligence. It is, in any case, the first time in the history of humanity that the question of competition between man and machine has been posed so clearly.
_______
François Roche