AGI-17 has three tutorials lined up:

AGI and Consciousness

Naotsugu Tsuchiya (Monash), Ryota Kanai (Araya Co., Japan), Jakob Hohwy (Monash)

The topics of the tutorial are:

  1) artificial general intelligence and consciousness
  2) theories of consciousness and how the brain works
     2a) integrated information theory
     2b) free energy principle
  3) how to incorporate/implement consciousness in artificial systems

Cross-Paradigm AGI: How Cognitive, Deep, Probabilistic and Universal AI can Contribute to Each Other

Alexey Potapov (St. Petersburg U)

The tutorial will cover the following topics:

  • Cognitive architectures
  • Deep learning
  • Probabilistic models
  • Universal algorithmic intelligence

For each topic, the basic ideas will first be briefly introduced, and then their connections with the other topics will be traced. In particular, the role of probabilistic models in deep learning and cognitive architectures will be reviewed. Probabilistic programming will be considered as a framework for the empirical study of universal induction. Algorithmic representations and learning in deep neural networks, probabilistic programming, and cognitive architectures will be described, with applications to meta-learning and transfer learning. Finally, the complementarity and convergence of all these approaches will be analyzed.
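As a toy illustration (not from the tutorial itself), the idea of studying universal induction through probabilistic programming can be sketched by enumerating programs in a tiny invented language, weighting each by a 2^-length prior in the Solomonoff spirit, and predicting with the highest-posterior program consistent with the observed data. All names and the instruction set below are hypothetical, chosen only to keep the example small:

```python
import itertools

# Tiny instruction set: a "program" is a sequence of unary ops applied
# cyclically to generate an integer sequence from a seed value.
OPS = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "sq":  lambda x: x * x,
}

def run(program, seed, n):
    """Generate n values by cycling through the program's ops."""
    out, x = [], seed
    for i in range(n):
        out.append(x)
        x = OPS[program[i % len(program)]](x)
    return out

def posterior_predict(data, max_len=3):
    """Weight programs by a 2^-length prior, keep those that reproduce
    the data (first element taken as the seed), and return the weight,
    program, and next-element prediction of the best one."""
    best = None
    for length in range(1, max_len + 1):
        for prog in itertools.product(OPS, repeat=length):
            if run(prog, data[0], len(data)) == data:
                weight = 2.0 ** (-length)
                nxt = run(prog, data[0], len(data) + 1)[-1]
                if best is None or weight > best[0]:
                    best = (weight, prog, nxt)
    return best

# The sequence 1, 2, 4, 8 is best explained by the shortest matching
# program ("dbl",), which predicts 16 as the next element.
print(posterior_predict([1, 2, 4, 8]))
```

A full treatment would sum posterior mass over all consistent programs rather than taking the single best one, but even this sketch shows the key ingredient: shorter programs receive exponentially more prior weight.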

Toward a Grand Unified Theory of AGI: A Survey of Approaches

Ben Goertzel (OpenCog Foundation, Hanson Robotics)

AGI is broad by nature, and so it’s not obvious in what sense there can be a general theory of general intelligence – or a general theory of engineered general intelligence – that has anything useful to say about practical issues encountered in designing, building and teaching AGI systems.

However, a number of approaches are currently being pursued toward the creation of a general theory of AGI. This tutorial presents a relatively high-level survey of several of these:

  • Unified Cognitive Architecture. The various cognitive architectures proposed throughout the history of AI and cognitive modeling share many common elements, rooted largely in well-known findings of human cognitive science. However, different cognitive architectures tend to emphasize different aspects. Attempts have been made to align the “cognitive architecture diagrams” associated with different systems and theories, highlighting their overlap as well as their distinctive aspects.
  • Algorithmic Information Based General Learning Theory. Extending the idea of Solomonoff induction to reinforcement learning and related settings, one obtains what is in a sense a general theory of computable general intelligence, associated with elegant but not-so-practical AGI designs such as AIXI and the Gödel Machine. Open questions include the extent to which these approaches can be “scaled down” to deal with real-world problems, and the extent to which theoretical approaches that ignore resource constraints are ignoring the core of the AGI problem.
  • Complex Systems Theory. General Systems Theory and cybernetics, cross-disciplinary pursuits dating to the middle of the previous century, provide conceptual and formal tools for describing complex systems of all sorts, including societies, biological organisms and minds. Complexity science (e.g. dynamical systems theory, cellular automata, etc.) provides a more modern set of mathematical and computational tools in a similar spirit. To what extent can general principles of complex systems shed light on general principles of cognitive systems?
  • Energy Minimization. Neuroscientist Read Montague has proposed energy minimization as a key concept for understanding the diverse architecture of the brain. Karl Friston has proposed a “free energy minimization” theory of neuroscience, though many researchers have found it problematic. To what extent can physical dynamics and constraints of this nature shed light on cognitive architectures and dynamics?
  • Category-Theoretic Formalization of Cognitive Synergy. One motivation for pursuing integrated, multi-algorithm/multi-representation AGI systems is the hypothesis that achieving a high degree of general intelligence under realistic resource constraints requires interconnecting multiple AI approaches, each especially appropriate for particular types of problems or data. To an extent this is a “pragmatic tinkering” approach, but it can also be formalized mathematically by modeling different domains (e.g. perception, action, language, inference) as categories and studying formal mappings between these categories, and between the state spaces of algorithms adapted to each of these domains.
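For reference, the core objects of the algorithmic-information approach surveyed above can be stated compactly (standard formulations, not material from the tutorial abstracts). Solomonoff's universal prior assigns a string x the total weight of all programs p that lead a universal monotone machine U to output a string beginning with x, and AIXI extends this prior to the reinforcement-learning setting:

```latex
% Solomonoff's universal prior over strings x
% (U a universal monotone machine, \ell(p) the length of program p in bits):
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% AIXI: at cycle k, given the interaction history
% a_1 o_1 r_1 \dots a_{k-1} o_{k-1} r_{k-1} and horizon m, the agent chooses
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl(r_k + \cdots + r_m\bigr)
  \sum_{q \,:\, U(q,\, a_1 \dots a_m) = o_1 r_1 \dots o_m r_m} 2^{-\ell(q)}
```

The inner sum over programs q is exactly the universal prior applied to environment models, which is what makes AIXI elegant and, since M is incomputable, also what makes it impractical without approximation.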