What automaton model captures decision making?

A call for finding a behavioral taxonomy of complexity

Authors

  • Stephan Schosser
  • Bodo Vogt

DOI:

https://doi.org/10.24352/UB.OVGU-2018-539

Abstract

When investigating bounded rationality, economists favor finite-state automata - for example the Mealy machine - and state complexity over other concepts as a model of human decision making. Finite-state automata are a machine model especially suited to (repetitions of) decision problems with limited strategy sets. In this paper, we argue that finite-state automata do not suffice to capture human decision making when it comes to problems with infinite strategy sets, such as choice rules. To prove our arguments, we apply the concept of Turing machines to choice rules and show that rational choice has minimal complexity if choices are rationalizable, while the complexity of rational choice increases dramatically once choices are no longer rationalizable. We conclude that modeling decision making via space and time complexity best captures human behavior and suggest introducing a behavioral taxonomy of complexity that describes adequate boundaries for human capabilities.
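The intuition behind the complexity gap can be illustrated with a small sketch. This is not the authors' Turing-machine construction; it is a hypothetical Python illustration in which the alternatives X, the preference ranking, and the functions rationalizable_choice and table_choice are all assumptions introduced here for exposition. A rationalizable choice rule can be computed from a fixed preference relation whose description grows only linearly in the number of alternatives, whereas an arbitrary (non-rationalizable) rule may, in the worst case, require an explicit lookup table with one entry per menu, i.e. exponentially many entries.

from itertools import chain, combinations

X = ["a", "b", "c"]                     # hypothetical set of alternatives
ranking = {"a": 2, "b": 1, "c": 0}      # hypothetical preference order a > b > c

def rationalizable_choice(menu):
    """Rationalizable rule: maximize a fixed preference relation.
    The 'program' is just the ranking (size linear in |X|),
    and evaluation is a single pass over the menu."""
    return max(menu, key=ranking.__getitem__)

def menus(universe):
    """All non-empty menus (subsets) of the universe."""
    return chain.from_iterable(combinations(universe, r)
                               for r in range(1, len(universe) + 1))

# A non-rationalizable rule generally has no compact generating relation:
# in the worst case it must be stored as a lookup table over all menus,
# which has exponentially many entries in |X|.
lookup_table = {frozenset(m): rationalizable_choice(m) for m in menus(X)}
lookup_table[frozenset({"a", "b", "c"})] = "b"   # deviation breaks rationalizability

def table_choice(menu):
    return lookup_table[frozenset(menu)]

print(rationalizable_choice({"a", "b"}))   # 'a', recovered from the ranking alone
print(table_choice({"a", "b", "c"}))       # 'b', available only via the table
print(len(lookup_table))                   # 2^3 - 1 = 7 entries already for |X| = 3

In this sketch the rationalizable rule is described by a single ranking, while the modified rule is consistent with no ranking (it picks "a" from {a, b} but "b" from {a, b, c}), so its shortest description is essentially the table itself; this mirrors, informally, the paper's claim that complexity increases sharply once choices are no longer rationalizable.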

Published

2018-09-04

Section

Article