Learning human-interpretable concepts from information lattices

Can AI learn music theory from music in a human-interpretable form, like a textbook? Extracting interpretable rules and concepts from data, as humans do, is key to knowledge discovery and problem solving in creative domains like art and science. We develop a new white-box learning paradigm that is both self-explanatory and self-exploratory: we build information lattices and perform lattice learning to mimic human conceptualization processes. The core idea is an iterative discovery algorithm with a student-teacher architecture that operates on a generalization of Claude Shannon’s information lattice, which encodes a hierarchy of abstractions and is grown algorithmically from universal priors (symmetries, basic arithmetic, partial orders) built on group-theoretic foundations. The framework efficiently recovers music theory from sheet music and chemical laws from molecular databases, and further discovers undocumented rules and bridges knowledge between disciplines.
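To make the lattice idea concrete, here is a toy sketch (not the authors' implementation): a node of an information lattice can be modeled as a partition of a finite domain, with coarser partitions serving as higher abstractions, ordered by refinement. The domain (pitch classes mod 12), the symmetry priors, and all function names below are illustrative assumptions.

```python
import math

DOMAIN = range(12)  # pitch classes modulo 12 (illustrative domain)

def partition_from_map(f):
    """Partition the domain into the fibers of f (one 'concept' per fiber)."""
    cells = {}
    for x in DOMAIN:
        cells.setdefault(f(x), set()).add(x)
    return frozenset(frozenset(c) for c in cells.values())

def refines(p, q):
    """Partial order of the lattice: p refines q iff every cell of p
    is contained in some cell of q."""
    return all(any(cell <= qcell for qcell in q) for cell in p)

def entropy(partition, dist):
    """Shannon entropy of a distribution over the domain, viewed at the
    abstraction level given by the partition."""
    probs = [sum(dist[x] for x in cell) for cell in partition]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Partitions induced by simple priors: the identity map (finest partition),
# parity, inversion symmetry x -> 12 - x, and the constant map (coarsest).
finest  = partition_from_map(lambda x: x)
parity  = partition_from_map(lambda x: x % 2)
ic      = partition_from_map(lambda x: min(x, 12 - x))  # interval classes
trivial = partition_from_map(lambda x: 0)

uniform = {x: 1 / 12 for x in DOMAIN}
print(refines(finest, ic), refines(ic, parity), refines(parity, trivial))
print(round(entropy(ic, uniform), 3))
```

Refinement gives the chain finest → interval classes → parity → trivial, and entropy decreases monotonically along it: each coarsening discards information, which is the sense in which climbing the lattice yields progressively simpler candidate rules.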