
LaMDA, the machine that is like “a 7-year-old kid”: can a computer have consciousness?


If we gave Isaac Newton a smartphone, he would be completely captivated. He would have no idea how it worked, and one of the most brilliant scientific minds in history might well start talking about witchcraft. He might even believe he was in the presence of a sentient being, if he discovered the voice assistant function. Something similar is happening today with certain advances in artificial intelligence (AI), which have reached a level of sophistication capable, at times, of shaking the foundations of what we understand as conscious thought.

Blake Lemoine, an engineer in Google’s Responsible AI division, seems to have fallen into this trap. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine told The Washington Post in an interview published last week. The 41-year-old engineer was talking about LaMDA, Google’s chatbot generator (a chatbot is a program that carries out automated tasks online as if it were human). Last fall, Lemoine took on the task of talking to the program to check whether it used discriminatory language or hate speech. The conclusions he drew shook the scientific world: he believed that Google had succeeded in creating a sentient program, one with the capacity for independent thought.

Is that possible? “Anyone who makes this kind of claim shows that they have never written a single line of code in their life,” said Ramón López de Mántaras, director of the Artificial Intelligence Research Institute at the Spanish National Research Council. “Given the current state of this technology, it is completely impossible to develop self-aware artificial intelligence.”

That does not make LaMDA any less sophisticated. The program uses a neural network architecture based on Transformer technology, loosely modeled on the functioning of the human brain, to complete written conversations. It was trained on billions of texts. As Google vice president Blaise Agüera y Arcas told The Economist in a recent essay, LaMDA weighs in at 137 billion parameters, which it uses to decide which response has the highest probability of being the most appropriate to the question asked. That allows it to produce sentences that could pass as written by a human.
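To get a sense of what “choosing the answer with the highest probability” means in practice, here is a minimal Python sketch. The candidate replies and their scores are invented for illustration; this shows the general principle behind response ranking, not Google’s actual LaMDA code.

import math

def softmax(scores):
    # Turn raw model scores into a probability distribution that sums to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a trained language model might assign to candidate
# replies to the prompt "How are you today?" (numbers are made up).
candidates = {
    "I'm doing well, thank you!": 3.1,
    "Fine, and you?": 2.4,
    "The mitochondria is the powerhouse of the cell.": -1.2,
}

probs = softmax(list(candidates.values()))
reply, p = max(zip(candidates, probs), key=lambda pair: pair[1])
print(f"Chosen reply: {reply!r} (probability {p:.2f})")

A real model scores sequences word by word over a vocabulary of tens of thousands of tokens, but the selection step reduces to the same idea: convert scores to probabilities and pick the most likely continuation.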

But as López de Mántaras points out, even if it can write as if it were a human being, LaMDA does not know what it is saying: “None of these systems have semantic understanding. They do not understand what the conversation is about. They are like digital parrots. It is we who give meaning to the text.”

Agüera y Arcas’ essay, published just days before Lemoine’s interview appeared in The Washington Post, also emphasizes the remarkable fluency with which LaMDA generates responses, but the Google executive offers a different interpretation. “AI is entering a new era. When I began having such exchanges with the latest generation of neural net-based language models last year, I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent. That said, these models are far from the infallible, hyper-rational robots science fiction has led us to anticipate,” he wrote. LaMDA is a system that has made impressive advances, he argued, but there is a world of difference between that and talking about consciousness. “Real brains are vastly more complex than these highly simplified model neurons, but perhaps in the same way a bird’s wing is vastly more complex than the wing of the Wright brothers’ first plane.”
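As a rough illustration of what those “model neurons” are, the Python sketch below (with arbitrary example numbers) shows everything a single artificial neuron does: a weighted sum of its inputs passed through a squashing function. Systems like LaMDA wire together billions of these simple units; this is a textbook simplification, not code from any real model.

import math

def model_neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term...
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed by a sigmoid into a value between 0 and 1.
    return 1 / (1 + math.exp(-activation))

# Three inputs and arbitrary example weights: numbers in, one number out.
print(model_neuron([0.5, -1.0, 0.25], [0.8, 0.2, -0.5], bias=0.1))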

Researchers Timnit Gebru and Margaret Mitchell, who led Google’s Ethical AI team, warned in 2020 that something like Lemoine’s experience would happen, and co-signed an internal report that led to their dismissal. Recalling that report in an opinion piece in The Washington Post last week, they noted that it highlighted the risk that people would “impute communicative intent” to human-like systems, and that these programs can lead us to perceive a “mind” where what is really at work is pattern matching and string prediction. In Gebru and Mitchell’s view, the underlying problem is that, because these tools are fed millions of unfiltered texts taken from the internet, they can reproduce the sexist, racist or otherwise discriminatory language found there.
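The “digital parrot” behavior, the pattern matching and string prediction Gebru and Mitchell describe, can be made concrete with a toy example. The Python sketch below trains a bigram model on a made-up corpus (vastly simpler than any modern language model) and predicts each next word purely from which words followed which in its training text; it reproduces whatever that text contains, biases included, with no understanding at all.

import random
from collections import defaultdict

def train_bigrams(text):
    # Record which word follows which: pure pattern matching, no meaning.
    follows = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=8):
    # Predict each next word by sampling what followed it in training.
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the parrot repeats the words the parrot heard before"
model = train_bigrams(corpus)
print(generate(model, "the"))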

Has AI become sentient?

What was it about LaMDA that drew Lemoine in? How did he come to the conclusion that the chatbot he was talking to was sentient? “There are three layers to Blake’s story: his observations, his religious beliefs and his mental state,” a renowned Google engineer who worked closely with Lemoine told EL PAÍS on condition of anonymity. “I thought Blake was a smart guy, but it’s true he didn’t have any machine learning training. He didn’t understand how LaMDA works. I think he got carried away by his ideas.”

Lemoine, who was placed on paid administrative leave by Google for violating the company’s confidentiality policy after he went public, describes himself as an “agnostic Christian” and is a member of the Church of the SubGenius, a post-modern parody religion. “You could say Blake is a bit of a character. This is not the first time he has attracted attention within the company. Honestly, I would say that another company would have fired him a long time ago,” said his colleague, who was nonetheless unhappy with the media pillorying of Lemoine. “Craziness aside, I’m glad the debate has been opened. Of course LaMDA has no consciousness, but it’s obvious that AI is only going to become more capable, and we need to rethink our relationship with it.”

Part of the controversy surrounding the debate has to do with the ambiguity of the terminology used. “We’re talking about things we have not yet agreed on. We don’t know what exactly constitutes intelligence, consciousness and feelings, or whether all three elements must be present for an entity to be self-aware. We know how to differentiate between them, but we don’t know how to define them accurately,” said Lorena Jaume-Palasí, an expert in the ethics and philosophy of law applied to technology who advises the Spanish government and the European Parliament on matters relating to AI.

Anthropomorphizing computers is a human trait. “We do it all the time with everything. We even see faces in clouds or mountains,” said Jaume-Palasí. When it comes to computers, we also draw on the European rationalist heritage. “In keeping with the Cartesian tradition, we tend to think that we can transfer mind or rationality to machines. We believe that the rational individual stands above nature and can dominate it,” said the philosopher. “I think the debate over whether an AI system has consciousness is steeped in a tradition of thought in which we try to extrapolate to technologies characteristics that they do not have and cannot have.”

The Turing test is now considered outdated. Devised in 1950 by the famous mathematician Alan Turing, the test consists of posing a series of questions to a machine and a person. The test is passed if the interrogator cannot determine whether it is the person or the computer responding. Other tests have been put forward more recently, such as the 2014 Winograd schema challenge, which relies on common-sense reasoning and knowledge of the world to answer questions; a typical item asks what “it” refers to in a sentence like “The trophy doesn’t fit in the suitcase because it is too big.” To date, no one has created a system that can beat it. “There may be AI systems capable of tricking the judges who ask the questions. But that doesn’t prove that a machine is intelligent, only that it is well programmed to deceive,” López de Mántaras said.

Will we one day witness artificial general intelligence? That is, AI that equals or surpasses the human mind: one that understands context, can connect disparate elements and anticipate situations the way humans do. The question is a classic field of speculation within the industry. The consensus in the scientific community is that if such intelligence ever arrives, it is very unlikely to do so before the end of the 21st century. In the meantime, the steady progress of AI development is likely to provoke more reactions like Blake Lemoine’s (though not necessarily as histrionic). “We have to be prepared to have debates that will often be uncomfortable,” said his former Google colleague.




