To illustrate the statistical nature of the second law of thermodynamics, Maxwell conceived of a being, later dubbed "Maxwell's demon", which could observe molecules in an isolated container with two chambers of equal pressure and temperature. The demon could control a trap door in the divider, using detailed information about the gas molecules to "sort" them into hotter and colder, whereby a heat gradient, and thus a work potential, could be created from the demon's knowledge: "no work has been done, only the intelligence of a very observant and neat-fingered being has been employed".

Information engines, originating in Szilard's proposal to automate the function (but not the intelligence) of the demon, are theoretical constructs useful for understanding the physical nature of information, as they convert between information and work. In recent years, control over small systems has improved to the point that theoretical predictions are being tested experimentally. As more and more experiments implement efficient schemes to turn information into work, in systems ranging from colloidal particles to quantum systems, questions about real-world observer costs and limitations have become important. There are fundamental bounds governing how efficiently, how quickly, and how reliably observers can function. Knowing these physical bounds allows us to ask for design principles that ensure the limits can be reached, and thus to automate the demon's "intelligence".

To find minimally dissipative observer memories, we need the generalized framework of partially observable information engines. Minimally dissipative observers have to infer the pertinent information that can be converted into work, without wasting memory on irrelevant information. We will use a simple example to examine the characteristics of minimally dissipative observer memories and the resulting engine performance.
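The conversion rate between information and work referred to above is set by the Szilard/Landauer bound: at most k_B T ln 2 joules of work per bit of information, at temperature T. As a quick illustration (the function name and numbers here are for illustration only, not from the source), this bound can be evaluated at room temperature:

```python
import math

def szilard_work_bound(T, bits=1.0):
    """Maximum work (in joules) extractable from `bits` of information
    at temperature T (in kelvin): W = bits * k_B * T * ln(2)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)
    return bits * k_B * T * math.log(2)

# One bit at room temperature (300 K):
print(szilard_work_bound(300.0))  # ≈ 2.87e-21 J
```

The tiny magnitude of this number is why such predictions could only be tested once experimental control over colloidal and quantum systems reached the single-k_B T scale.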
This framework provides a basis for studying the thermodynamics of signal processing and inference, tasks at the heart of machine learning and artificial intelligence, adding one step towards a physics-based understanding of AI.
Description