
Quanta Magazine

Kanan has been toying with machine intelligence nearly all his life. As a kid in rural Oklahoma who just wanted to have fun with machines, he taught bots to play early multi-player computer games. That got him wondering about the possibility of artificial general intelligence — a machine with the ability to think like a human in every way. This made him interested in how minds work, and he majored in philosophy and computer science at Oklahoma State University before his graduate studies took him to the University of California, San Diego.

Now Kanan finds inspiration not just in video games, but also in watching his nearly 2-year-old daughter learn about the world, with each new learning experience building on the last. Because of his and others’ work, catastrophic forgetting is no longer quite as catastrophic.

Quanta spoke with Kanan about machine memories, breaking the rules of training neural networks, and whether AI will ever achieve human-level learning. The interview has been condensed and edited for clarity.

How does your training in philosophy impact the way you think about your work?

It has served me very well as an academic. Philosophy teaches you how to make reasoned arguments and how to analyze the arguments of others. That’s a lot of what you do in science. I still have essays from way back then on the failings of the Turing test, and things like that. And so those things I still think about a lot.

My lab has been inspired by asking the question: Well, if we can’t do X, how are we going to be able to do Y? We learn over time, but neural networks, in general, don’t. You train them once. It’s a fixed entity after that. And that’s a fundamental thing that you’d have to solve if you want to make artificial general intelligence one day. If it can’t learn without scrambling its brain and restarting from scratch, you’re not really going to get there, right? That’s a prerequisite capability to me.

How have researchers dealt with catastrophic forgetting so far?

The most successful method, called replay, stores past experiences and then replays them during training with new examples, so they are not lost. It’s inspired by memory consolidation in our brain, where during sleep the high-level encodings of the day’s activities are “replayed” as the neurons reactivate.

In other words, for the algorithms, new learning can’t completely eradicate past learning since we are mixing in stored past experiences.

There are three styles for doing this. The most common style is “veridical replay,” where researchers store a subset of the raw inputs — for example, the original images for an object recognition task — and then mix those stored images from the past in with new images to be learned. The second approach replays compressed representations of the images. A third, far less common method is “generative replay.” Here, an artificial neural network actually generates a synthetic version of a past experience and then mixes that synthetic example with new examples. My lab has focused on the latter two methods.
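The veridical style described above can be sketched in a few lines. This is a minimal illustration, not Kanan's actual implementation: it assumes a fixed-capacity buffer of raw past examples (here filled by reservoir sampling, one common choice) and a helper that mixes stored examples into each new batch before training.

```python
import random

class ReplayBuffer:
    """Fixed-size store of raw past (input, label) pairs for veridical replay."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0                    # total examples ever offered to the buffer
        self.rng = random.Random(seed)

    def add(self, example):
        # Reservoir sampling: every example seen so far has an equal
        # chance of being retained, regardless of when it arrived.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        # Draw up to k stored past examples without replacement.
        k = min(k, len(self.buffer))
        return self.rng.sample(self.buffer, k)

def make_training_batch(new_examples, buffer, replay_ratio=1):
    """Mix stored past examples in with the new ones, then store the new ones.

    Because every gradient step sees old examples alongside new ones,
    new learning cannot completely overwrite past learning.
    """
    replayed = buffer.sample(replay_ratio * len(new_examples))
    for example in new_examples:
        buffer.add(example)
    return new_examples + replayed
```

On the first task the buffer is empty, so the batch is just the new data; once a second task arrives, each batch is padded with retained examples from the first, which is the whole trick. Generative replay has the same training loop, except `buffer.sample` is replaced by drawing synthetic examples from a trained generative model.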

Unfortunately, though, replay isn’t a very satisfying solution.

