
Why we talk about computers having brains (and why the metaphor is all wrong)


It is a fact, recognised by pretty much everyone, that machines are taking over. It's less clear whether the machines know that. Recent claims by a Google engineer that the LaMDA AI chatbot might be sentient made international headlines and sent philosophers into a tizz. Neuroscientists and linguists were less enthusiastic.

While AI has notched up many victories, the debate about the technology has moved from the hypothetical to the concrete, and from the future to the present. This means a wider cross-section of people – not only philosophers, linguists and computer scientists but also policymakers, politicians, judges, lawyers and legal academics – needs to form a more sophisticated view of AI.

After all, how policymakers talk about AI already shapes decisions about how to regulate the technology.

Consider, for example, the case of Thaler v Commissioner of Patents, which came before the Federal Court of Australia after the Commissioner of Patents rejected a patent application naming an AI system as the inventor. When Justice Beach disagreed and overturned that decision, he made two key findings.

First, he found that the word “inventor” simply describes a function, and that the function could be performed by a person or by a thing. Think of the word “dishwasher”: it could describe a person, a kitchen appliance, or even an enthusiastic dog.

Nor does the word “dishwasher” necessarily mean that the agent is good at its job …

Second, Justice Beach used the metaphor of the brain to explain what AI is and how it works. Reasoning by analogy with human neurons, he found that the AI system in question could be regarded as autonomous, and therefore could meet the requirements of an inventor.

The case raises an important question: where does the idea that AI is like the brain come from? And why is it so popular?

AI for the mathematically challenged

It is understandable that people without technical training might rely on metaphors to understand complex technology. But we might hope that policymakers would develop a somewhat more sophisticated understanding of AI than the one we get from RoboCop.

My research considers how legal academics talk about AI. One significant challenge for this group is that they are frequently maths-phobic. As legal scholar Richard Posner put it, the law

provides a haven for bright young people with a “math block”, though this usually means they avoided maths and science courses because they could get higher grades with less work in verbal fields.

Following Posner’s insight, I examined all uses of the term “neural network” – the usual label for a common type of AI system – in Australian law journals published between 2015 and 2021.

Most of the papers made some attempt to explain what a neural network is. But only three of the nearly 50 papers tried to engage with the underlying mathematics beyond a broad reference to statistics. Only two papers used visual aids in their explanations, and none at all made use of the computer code or mathematical formulae that are central to neural networks.

In contrast, two-thirds of the explanations referred to the “mind” or to biological neurons. And most of these drew a direct analogy: that is, they suggested AI systems actually replicate the functioning of human minds or brains. The metaphor of the mind is clearly more appealing than engaging with the underlying mathematics.
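To give a sense of what that mathematics actually involves – and how little biology it needs – here is a minimal sketch (my own illustration, not drawn from any of the surveyed papers) of the computation at the heart of a neural network: a weighted sum of inputs passed through a simple squashing function.

```python
import math

def neuron(inputs, weights, bias):
    """One 'neuron': a weighted sum of inputs squashed by a logistic function."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))  # a value between 0 and 1

# A tiny "network" is just these weighted sums chained together.
hidden = [neuron([0.5, 0.2], [1.0, -2.0], 0.1),
          neuron([0.5, 0.2], [0.5, 0.5], -0.3)]
output = neuron(hidden, [2.0, -1.0], 0.0)
print(output)
```

Nothing in that calculation requires, or even particularly resembles, a biological brain.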

It is not surprising, then, that our policymakers and judges – like the general public – make such heavy use of these metaphors. But the metaphors can mislead them.

Where does the idea that AI is like the brain come from?

Understanding what produces intelligence is an ancient philosophical problem that was eventually taken up by the science of psychology. An influential statement of the problem came in William James’ 1890 book Principles of Psychology, which set early scientific psychologists the task of identifying a one-to-one correlation between mental states and physiological states in the brain.

Working in the 1920s, neurophysiologist Warren McCulloch tried to solve this “mind/body problem” by proposing a “psychological theory of mental atoms”. In the 1940s he joined Nicholas Rashevsky’s influential biophysics group, which was attempting to bring the mathematical techniques used in physics to bear on problems in neuroscience.

Key to these efforts were attempts to build simplified models of how biological neurons might work, which could later be refined into more sophisticated, mathematically rigorous explanations.



Read more: We’re told AI neural networks “learn” the way humans do. A neuroscientist explains why that’s not the case


If you have vague memories of a high school physics teacher trying to explain the motion of particles by analogy with billiard balls or long metal slinkies, you’ve got the general picture. Start with a few simple assumptions, get the basic relationships right, and work out the complexities later. In other words: assume a spherical cow.

In 1943, McCulloch and the logician Walter Pitts proposed a simple model of neurons aimed at explaining the “heat illusion” phenomenon. While it was ultimately an unsuccessful picture of how neurons work – McCulloch and Pitts later abandoned it – it proved a very useful tool for designing logic circuits. Early computer scientists adapted their work into what is now known as logic design, where the naming conventions – “neural network”, for example – have persisted to this day.
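It is easy to see why the model appealed to logic designers. Stripped down (and ignoring the inhibitory inputs of the original 1943 formulation), a McCulloch–Pitts unit simply fires when enough of its inputs are active – and choosing the firing threshold turns it into a familiar logic gate. A rough sketch:

```python
def mcculloch_pitts(inputs, threshold):
    """Fire (output 1) if enough binary inputs are active, otherwise output 0."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, a threshold of 2 behaves like an AND gate,
# while a threshold of 1 behaves like an OR gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", mcculloch_pitts([a, b], 2), "OR:", mcculloch_pitts([a, b], 1))
```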

That computer scientists still use terms like these seems to have fuelled the popular misconception that there is an intrinsic link between certain kinds of computer program and the human brain. It is as if the simplifying assumption of the spherical cow had turned out to be a useful way of designing ball bearings, and left us all believing there is some necessary connection between children’s play equipment and dairy farming.

This would be nothing more than a curiosity of intellectual history, were it not for the fact that these misconceptions shape our policy responses to AI.

Is the solution to require lawyers, judges and policymakers to pass high school calculus before they start talking about AI? They would certainly object to any such suggestion. But in the absence of better mathematical literacy, we need to use better analogies.

While the Full Federal Court has since overturned Justice Beach’s decision in Thaler, it specifically noted the need for policy development in this area. Unless we give non-specialists better ways to understand and talk about AI, we are likely to keep running into the same challenges.


