
Why are AI researchers teaching computers to recognise irony?

What was your first reaction when you heard about Blake Lemoine, the Google engineer who announced last month that the AI program he was working on had developed consciousness?

If, like me, you are naturally suspicious, it might be something like: Is this guy serious? Does he believe what he says? Or is it an elaborate hoax?

Put the answers to those questions to one side. Focus instead on the questions themselves. Don't the questions alone presuppose something significant about Blake Lemoine: namely, that he might not mean what he says?

In other words, we can all imagine that Blake Lemoine might be deceiving us – saying one thing while believing another.

And we can do this because we believe that there is a difference between his inner conviction – what he really believes – and his outer expressions: what he claims to believe.

Isn’t that difference the mark of consciousness? Could we say the same of a computer?

Read more: A Google software engineer believes that an AI has become sentient. If he is right, how do we know?

Consciousness: ‘the hard problem’

It is not for nothing that philosophers have taken to calling consciousness “the hard problem”. It is very difficult to explain.

But for now, let’s say that a conscious being is one that has the ability to have a thought and not reveal it.

This means that consciousness is a precondition for irony – for saying one thing while meaning the opposite. I know you’re being ironic when I notice that your words do not correspond to your thoughts.

That most of us have this capacity – and most of us often express our unspoken meanings in this way – is something that, I think, should surprise us more often than it does.

It seems almost distinctively human.

Animals can be funny – but not on purpose.

What about machines? Can they deceive? Can they keep secrets? Can they be ironic?


AI and irony

It’s a widely recognised fact (among academics at least) that any research question you can cook up with the letters “AI” in it has already been studied somewhere by an army of computer scientists – often, if not always, funded by the US military.

This is certainly the case with the question of AI and irony, which has recently attracted a lot of research interest.

Of course, since irony involves saying one thing while meaning the opposite, making a machine that can detect it, let alone create it, is not a simple task.

But if we could create such a machine, it would have many practical applications, some more sinister than others.

In the era of online reviews, for example, retailers have become very interested in so-called “opinion mining” and “sentiment analysis”, which use AI to map not only the content, but the mood of reviewers’ comments.

Knowing whether your product is being praised or being made the butt of a joke is valuable information.

Or consider moderating social media content. If we want to limit online abuse while protecting free speech, wouldn’t it help to know when someone is serious and when they’re joking?

Or what if someone tweets that they’ve just joined their local terrorist cell or that they’re packing a bomb in their suitcase and heading to the airport? (Don’t ever tweet that, by the way.) Imagine if we could tell right away if they were being serious, or if they were just being “ironic”.

In fact, given irony’s proximity to lying, it is not difficult to imagine how the entire machinery of government and corporate surveillance that has grown up around new communication technologies would find an irony detector extremely interesting.

And that goes a long way to explaining the growing literature on the subject.

Read more: ‘Weaponized irony’: after fictionalizing Elizabeth Macarthur’s life, Kate Grenville edits her letters

AI, from Clippy to facial recognition

To understand the current state of AI research and irony, it helps to know a little about the history of AI more generally.

That history is usually divided into two periods.

Until the 1990s, researchers sought to program computers with a set of hand-crafted formal rules for how to behave in predetermined situations.

If you used Microsoft Word in the 1990s, you may remember the annoying office assistant Clippy, who endlessly popped up to offer unsolicited advice.

Since the turn of the century, that model has been replaced by data-driven machine learning and neural networks.

Here, large caches of examples of a given phenomenon are translated into numerical values, on which computers can perform complex mathematical operations to detect patterns no human could discover.

Furthermore, the computer doesn’t just apply a rule. Instead, it learns from experience, and develops new operations independent of human intervention.

The difference between the two methods is the difference between Clippy and, say, facial recognition technology.

Read more: Facial recognition is on the rise – but the law is a long way off

Researching sarcasm

To build a neural network capable of detecting irony, researchers first focused on what some consider its simplest form: sarcasm.

The researchers started with data taken from social media.

For example, they might collect all tweets with the hashtag #sarcasm, or Reddit posts tagged /s, a shorthand Reddit users employ to indicate they are not being serious.

The point is not to teach the computer to recognize two different meanings in any given sarcastic post. In fact, meaning has nothing to do with it.

Instead, the computer is instructed to look for recurring patterns, or what one researcher calls “syntactical fingerprints” – words, phrases, emojis, punctuation, errors, context, and more.

On top of that, the data set is augmented with further streams of examples – other posts in the same thread, for example, or from the same account.

Each new example is run through a battery of calculations until the machine arrives at a determination: sarcastic or not sarcastic.

Finally, a bot can be programmed to respond to each original poster and ask if they are joking. Any answer can add to the growing mountain of computing experience.
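The pipeline described above – posts weakly labelled by their own authors, surface-level “fingerprints”, and a classifier trained on them – can be sketched in a few lines. This is a toy illustration only: it substitutes a simple Naive Bayes model for the neural networks actual researchers use, and the posts, features, and labels are invented for the example.

```python
from collections import Counter
import math
import re

# Toy stand-in for a scraped corpus: posts whose authors self-labelled
# them as sarcastic (1, e.g. via #sarcasm or /s) or not (0).
# Real systems train on millions of such posts.
POSTS = [
    ("oh great, another monday !!!", 1),
    ("sure, because waiting in line is SO much fun", 1),
    ("wow, i just LOVE slow wifi", 1),
    ("yeah right, best meeting ever !!!", 1),
    ("the new library opens on monday", 0),
    ("i enjoyed the concert last night", 0),
    ("the wifi in the cafe is fast", 0),
    ("our meeting is scheduled for tuesday", 0),
]

def fingerprint(text):
    """Surface features only: words plus crude punctuation and casing
    cues. The classifier never models what the post actually *means*."""
    tokens = re.findall(r"[a-zA-Z']+|!+|\?+", text)
    feats = [t.lower() for t in tokens]
    if "!!!" in text:
        feats.append("<multi-bang>")
    if any(w.isupper() and len(w) > 1 for w in text.split()):
        feats.append("<shouty-caps>")
    return feats

def train(posts):
    """Count how often each fingerprint feature occurs per label."""
    counts = {0: Counter(), 1: Counter()}
    totals = Counter()
    for text, label in posts:
        for f in fingerprint(text):
            counts[label][f] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with add-one smoothing over the fingerprint features."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for label in (0, 1):
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for f in fingerprint(text):
            score += math.log((counts[label][f] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

counts, totals = train(POSTS)
print(classify("oh wow, i LOVE mondays !!!", counts, totals))      # 1
print(classify("the library is open on tuesday", counts, totals))  # 0
```

The point the article makes survives in the code: the model only matches recurring surface patterns (exclamation runs, shouty capitals, telltale words) against labelled examples; at no point does it represent the two opposed meanings an ironic utterance carries.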

The success rate of the latest sarcasm detectors is approaching an astonishing 90% – greater, I suspect, than most humans could manage.

So, if AI continues to advance at the rate that took us from Clippy to facial recognition technology in less than two decades, will ironic androids be far behind?

What is irony?

But isn’t there a qualitative difference between sorting through the “syntactical fingerprints” of irony and actually understanding it?

Some would suggest not. If a computer can be taught to behave exactly like a human, then it is immaterial whether a rich inner world of meaning lurks beneath its behaviour.

But irony may be a special case: it relies on the very difference between external behaviour and internal belief.

In the 1994 film Reality Bites, Ethan Hawke’s character famously defines irony (in very simple terms).

Here it may be worth noting that, while computational scientists have recently become interested in irony, philosophers and literary critics have been thinking about it for a long time.

And perhaps exploring that tradition will shed light, as it were, on a new problem.

Of the many names one might use in this context, two stand out: the German Romantic philosopher Friedrich Schlegel; and the post-structuralist literary theorist Paul de Man.

For Schlegel, irony does not only consist of a false, external meaning and a true, internal one. Rather, in irony, two opposing meanings are presented as equally true. And the resulting indeterminacy has devastating implications for logic, especially the law of non-contradiction, which states that a statement cannot be both true and false.

De Man followed Schlegel on this score and, in a sense, universalised his insight. He noted that every effort to define a concept of irony is bound to be infected by the phenomenon it purports to explain.

Indeed, de Man held that all language is infected by irony, and involves what he calls “permanent parabasis”. Because people have the power to conceal their thoughts from one another, it is always possible – permanently possible – that they do not mean what they say.

Irony, in other words, is not one use of language among others. It structures – or rather, haunts – every use of language and every interaction.

And in this sense, it exceeds the order of proof and calculation. The question is whether the same might not be true of humans as well.

