TLDR Paper: Compositional Inductive Biases for Function Learning

November 14, 2020

One-liner

  • Through human experiments on data generated by Gaussian process regression, this paper studies and confirms the effectiveness of compositionality, supporting its case that human function learning is inherently compositional.

TLDR

  • This paper uses eight carefully designed experiments to probe whether an inductive bias for compositionality exists in human learning
  • Two classes of theories around function learning exist: rule-based (parametric, strong inductive bias) vs. similarity-based (non-parametric, weak inductive bias)
    • rule-based theories are limited to linear combinations of a fixed set of parametric functions
    • similarity-based theories are theoretically unconstrained
  • Gaussian processes are used to model both learning theories and to test the effectiveness of compositionality in human experiments.
  • Data is generated for specific tasks using non-compositional structured kernels that are flexible in their parameters, and compositional kernels built by adding and multiplying a primitive set of kernels
    • Question: Is this a linear model of nonlinear functions?
  • Experiments include function prediction, change detection, predictability judgments, and short-term memory tasks

Bonus Tangents/Papers

  • Comparisons to functional programming languages/category theory!
  • Inferring the human kernel: a case for studying rationality, forecasting, and decision-making. This paper offers a systematic framework for doing so, which is exciting.
  • Note: This paper is over 80 pages long, but so interesting I barely noticed. I will write a more detailed, spoon-fed follow-up analysis

Paper on Biorxiv