On The Measure of Intelligence

François Chollet is a French software engineer and artificial intelligence researcher currently working at Google. He is the creator of the Keras deep-learning library, released in 2015, and a major contributor to the TensorFlow machine learning framework. His research focuses on computer vision, the application of machine learning to formal reasoning, abstraction, and achieving greater generality in artificial intelligence.

Chollet graduated with a Master of Engineering from ENSTA Paris in 2012 and joined Google in 2015. His papers have been published at major conferences in the field, including the Conference on Computer Vision and Pattern Recognition (CVPR), the Conference on Neural Information Processing Systems (NeurIPS), and the International Conference on Learning Representations (ICLR). He is the author of Xception: Deep Learning with Depthwise Separable Convolutions, which is among the ten most cited papers in CVPR proceedings.

Chollet is the author of Deep Learning with Python, the co-author, with Joseph J. Allaire, of Deep Learning with R, and the creator of the Abstraction and Reasoning Corpus (ARC) challenge.

Publication

On the Measure of Intelligence by François Chollet, a widely influential paper critiquing prevailing models of computational learning and intelligence.

https://arxiv.org/pdf/1911.01547.pdf

To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to “buy” arbitrary levels of skills for a system, in a way that masks the system’s own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
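The phrase "skill-acquisition efficiency" can be made concrete with a simplified schematic of the paper's definition. This is a paraphrase for intuition only, not the exact formula from the paper (which additionally weights tasks by curricula and skill thresholds): a system's intelligence over a scope of tasks grows with the generalization difficulty of the tasks it masters, and shrinks with the priors and experience it spends mastering them.

```latex
% Schematic (simplified) rendering of the paper's definition:
% intelligence is high when a system reaches a given skill level on
% hard-to-generalize tasks while using few priors and little experience.
I_{\text{system}} \;\approx\; \operatorname*{avg}_{T \,\in\, \text{scope}}
  \frac{\text{GeneralizationDifficulty}(T)}{\text{Priors}(T) + \text{Experience}(T)}
```

This is why the abstract argues that unlimited priors or unlimited training data can "buy" skill without demonstrating intelligence: both terms in the denominator can be inflated to reach any skill level on a fixed task.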
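For concreteness, here is a minimal sketch of how a task is structured in the publicly released ARC dataset: each task is a JSON object with a few "train" input/output grid pairs demonstrating a transformation and one or more "test" pairs to solve, where every grid is a small 2-D array of integers 0-9 encoding colors. The file name below is hypothetical.

```python
import json

# Minimal sketch of the ARC task format (file name is hypothetical).
# A task holds "train" demonstration pairs and "test" pairs to solve;
# each grid is a list of rows of integers 0-9 (color codes).
with open("sample_arc_task.json") as f:
    task = json.load(f)

for pair in task["train"]:
    inp, out = pair["input"], pair["output"]
    print(f"demo: {len(inp)}x{len(inp[0])} grid -> {len(out)}x{len(out[0])} grid")

# A solver must infer the transformation from the few demonstrations and
# apply it to each test input: skill acquisition from minimal experience.
for pair in task["test"]:
    print(f"to solve: {len(pair['input'])}x{len(pair['input'][0])} grid")
```

The deliberately tiny number of demonstrations per task is the point: the benchmark rewards generalization from minimal experience, given only priors close to innate human priors, rather than skill bought with large training sets.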

Podcast

François Chollet: Measures of Intelligence | Lex Fridman Podcast #120