Abstract

In a recent appraisal of deep learning (Marcus, 2018) I outlined ten challenges for deep learning, and suggested that deep learning by itself, although useful, was unlikely to lead on its own to artificial general intelligence. I suggested instead that deep learning be viewed “not as a universal solvent, but simply as one tool among many.” In place of pure deep learning, I called for hybrid models that would incorporate not just supervised forms of deep learning but also other techniques, such as symbol-manipulation and unsupervised learning (itself possibly reconceptualized). I also urged the community to consider incorporating more innate structure into AI systems.

Keywords

Deep Learning, General Intelligence

Abstract

Eric Horvitz, managing director and distinguished scientist at Microsoft Research, and Josh Tenenbaum, a professor at Massachusetts Institute of Technology, discuss putting a computational lens on scientific theories of rationality, human cognition, and the future of artificial intelligence.

Keywords

Computational Rationality, Artificial Intelligence

Abstract

Earlier this month, I had the exciting opportunity to moderate a discussion between Professors Yann LeCun and Christopher Manning, titled “What innate priors should we build into the architecture of deep learning systems?” The event was a special installment of AI Salon, a discussion series held within the Stanford AI Lab that often features expert guests. This discussion topic – about the structural design decisions we build into our neural architectures, and how those correspond to certain assumptions and inductive biases – is an important one in AI right now. In fact, last year I highlighted “the return of linguistic structure” as one of the top four NLP Deep Learning research trends of 2017.

Keywords

Deep Learning, Natural Language Processing, Innate Priors

Abstract

We investigate neural circuits in the exacting setting that (i) the acquisition of a piece of knowledge can occur from a single interaction, (ii) the result of each such interaction is a rapidly evaluatable subcircuit, (iii) hundreds of thousands of such subcircuits can be acquired in sequence without substantially degrading the earlier ones, and (iv) recall can be in the form of a rapid evaluation of a composition of subcircuits that have been so acquired at arbitrary different earlier times.

Keywords

Neural Networks, Lifelong Learning

Abstract

Deep Learning waves have lapped at the shores of computational linguistics for several years now, but 2015 seems like the year when the full force of the tsunami hit the major Natural Language Processing (NLP) conferences. However, some pundits are predicting that the final damage will be even worse. Accompanying ICML 2015 in Lille, France, there was another, almost as big, event: the 2015 Deep Learning Workshop. The workshop ended with a panel discussion, and at it, Neil Lawrence said, “NLP is kind of like a rabbit in the headlights of the Deep Learning machine, waiting to be flattened.” Now that is a remark that the computational linguistics community has to take seriously! Is it the end of the road for us? Where are these predictions of steamrollering coming from?

Keywords

Natural Language Processing, Linguistics

