Although simple individually, artificial neurons provide state-of-the-art
performance when interconnected in deep networks. Unknown to many, there exists
an arguably even simpler and more versatile learning mechanism, namely, the
Tsetlin Automaton. Using merely a single integer as memory, it learns the
optimal action in stochastic environments through increment and decrement
operations (sketched in code below). In this paper, we introduce the Tsetlin
Machine, which solves
complex pattern recognition problems with easy-to-interpret propositional
formulas, composed by a collective of Tsetlin Automata. To eliminate the
longstanding problem of vanishing signal-to-noise ratio, the Tsetlin Machine
orchestrates the automata using a novel game. Our theoretical analysis
establishes that the Nash equilibria of the game align with the propositional
formulas that provide optimal pattern recognition accuracy. This translates to
learning without local optima, only global ones. We argue that the Tsetlin
Machine finds the propositional formula that provides optimal accuracy, with
probability arbitrarily close to unity. Empirically, these properties manifest
as monotonically increasing mean training and test accuracy. In five
benchmarks, the Tsetlin Machine provides competitive accuracy compared with
SVMs, Decision Trees, Random Forests, Naive Bayes classifiers, Logistic
Regression, and Neural Networks. The Tsetlin Machine has an inherent
computational performance advantage since inputs, patterns, and outputs alike
are expressed as bits, while both recognition and learning rely on bit
manipulation (see the bitwise clause-evaluation sketch below). The combination
of accuracy, interpretability, and computational
simplicity makes the Tsetlin Machine a promising tool for a wide range of
domains. Being the first of its kind, the Tsetlin Machine will, we believe,
kick-start new paths of research, with a potentially significant impact on the
AI field and the applications of AI.
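
To make the increment/decrement mechanism concrete, here is a minimal sketch of a two-action Tsetlin Automaton in Python. The names (`TsetlinAutomaton`, `n_states_per_action`) are our own illustrative choices, not an API from the paper; only the state-update scheme follows the description above: a single integer encodes the state, a reward pushes the state deeper into the current action's half, and a penalty pushes it toward, and eventually across, the boundary to the other action.

```python
import random

class TsetlinAutomaton:
    """Two-action Tsetlin Automaton with 2 * n states.

    States 1..n select action 0; states n+1..2n select action 1.
    The automaton's entire memory is one integer: self.state.
    """

    def __init__(self, n_states_per_action=100):
        self.n = n_states_per_action
        # Start at the decision boundary, on a randomly chosen side.
        self.state = random.choice([self.n, self.n + 1])

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        # Strengthen the current action by moving away from the
        # boundary, saturating at the extreme states 1 and 2n.
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # Weaken the current action by moving toward the boundary;
        # crossing it switches the automaton to the other action.
        if self.action() == 0:
            self.state += 1
        else:
            self.state -= 1


# Example: a stochastic two-armed bandit where action 1 is rewarded
# with probability 0.9 and action 0 with probability 0.4.
ta = TsetlinAutomaton()
reward_probs = [0.4, 0.9]
for _ in range(10_000):
    a = ta.action()
    if random.random() < reward_probs[a]:
        ta.reward()
    else:
        ta.penalize()
print("Converged action:", ta.action())  # almost always 1
```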
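The bitwise representation behind the claimed performance advantage can be illustrated with a single conjunctive clause, the building block of the Tsetlin Machine's propositional formulas. The sketch below is ours (the function name and mask arguments are assumptions for illustration): a clause ANDs together a learned subset of the input bits and their negations, so evaluating it on a bit-packed input reduces to two AND operations and two comparisons.

```python
def clause_output(x_bits, include_pos, include_neg):
    """Evaluate one conjunctive clause over a bit-packed input.

    x_bits      -- input features packed into an int
    include_pos -- bitmask of positions whose literal x_k is included
    include_neg -- bitmask of positions whose negated literal
                   NOT x_k is included

    Outputs 1 exactly when every included positive literal is 1
    in x_bits and every included negated literal is 0 in x_bits.
    """
    positive_ok = (x_bits & include_pos) == include_pos
    negative_ok = (x_bits & include_neg) == 0
    return 1 if (positive_ok and negative_ok) else 0


# Example: the clause "x0 AND NOT x2" over a 3-bit input.
assert clause_output(0b001, include_pos=0b001, include_neg=0b100) == 1
assert clause_output(0b101, include_pos=0b001, include_neg=0b100) == 0
```

In the full machine, the include masks are precisely what the collective of Tsetlin Automata learns, and many such clauses vote to produce the output.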
