Workshops

AGI-17 Workshop: Understanding Understanding

August 18, 2017

While the AGI community has focused on numerous aspects of intelligence, such as autonomy, domain-independence, ampliative reasoning, and generality – aspects that seem central to general intelligence yet have received little attention from the larger AI community – discussion of understanding, including its relevance, role, and importance for generality, has been largely absent from both AI and AGI. Although AI has seen continued advances and successes over the decades, the ability of current methods, ideas, and approaches to bring forth machines that really understand has yet to be demonstrated: whether existing programming languages, methodologies, architectural principles, and theoretical insights will suffice for building machines with the capacity to understand is, at this point, an open question.

Understanding seems central to the human ability to assess our own capacity for effecting change in particular contexts on particular tasks. Most humans not trained in mountain climbing will turn down an offer to climb Mount Everest; they can easily explain why they turn it down, and can probably sketch a rough outline of the kind of training that might make them change their mind. We call it “a lack of understanding” when someone trying to solve a problem blissfully ignores issues central to it, and consider them “a hopeless case” when they ignore repeated explanations that they lack sufficient understanding of the subject to make any important decisions about it.

Historically, the use of the term “understanding” in AI has mostly been confined to natural language understanding, concerned with the parsing and manipulation of linguistic tokens, and “scene” or “image” understanding, likewise concerned with parsing and largely semantics-free processing; discussion of understanding proper has been a rare occurrence. For the field of AGI, in which generality is central, this state of affairs seems far from ideal. Investigating the phenomenon of understanding, comparing systems with respect to their potential for understanding, and getting to the crux of what understanding really is seem important enough to merit closer scrutiny.

Understanding Understanding will be a one-day workshop addressing recent work relevant to the question of machine understanding, aiming for a wide range of topics and methods to be presented and discussed. Drawing on the fields of AI, AGI, philosophy, cognitive science, and psychology, the workshop will explore the natural questions inherent in this concept. It will present a diverse set of methods, assumptions, approaches, and systems under development, from people with equally diverse backgrounds, providing an ideal introduction to the topic of understanding in AGI.

We are interested in submissions from the fields of AGI, AI, psychology, and philosophy that focus on the concept of understanding, especially in relation to the goal of building machines with the capacity to understand.

Questions and topics the workshop will address include (but are not limited to) the following:

  • How should we define understanding?
  • How can we test for understanding?
  • Is understanding an emergent property of intelligent systems?
  • Is understanding a central property of intelligent systems?
  • What are the typologies or gradations of understanding?
  • How can we create systems that exhibit understanding?
  • What is required in order to achieve understanding in machines?
  • Can understanding be achieved through hand-crafted architectures or must it emerge through self-organizing (constructivist) principles?
  • How can mainstream techniques be used to develop machines that exhibit understanding?
  • Do we need radically different approaches than those in use today to build systems with understanding?
  • Does building artificially intelligent machines with understanding depend on the same underlying principles as building them without it, or are these orthogonal approaches?
  • Do we need special programming languages to implement understanding in intelligent systems?
  • Is general intelligence necessary and/or sufficient to achieve understanding in an artificial system?
  • What differentiates systems that do and do not have understanding?
  • How can current state-of-the-art methods in AGI address the need for understanding in machines?

We welcome technical papers as well as overviews, demonstrations, and position papers on a range of topics related to understanding, including:

  • Design proposals for cognitive architectures targeting understanding
  • New programming languages relevant to understanding
  • New methods relevant to understanding
  • New architectural principles relevant to understanding
  • New theoretical insights relevant to understanding
  • Synergies between various approaches to understanding (theoretically, within AGI, etc.)
  • Machine education/learning needed to achieve understanding
  • Analysis of the potential and limitations of existing approaches

 

Primary contact: David Kremelberg (david.kremelberg@gmail.com)

 

Workshop Program Committee

The program committee will include a mixture of veterans and newcomers, assembled through the Organizing Committee’s professional networks. A tentative list includes:

  • Joscha Bach, Harvard University, USA
  • Tarek Richard Besold, University of Bremen, Germany
  • Jordi Bieger, Reykjavik University, Iceland
  • Antonio Chella, University of Palermo, Italy
  • Abram Demski, University of Southern California, USA
  • Haris Dindo, Yewno & University of Palermo, Italy
  • Glenn Gunzelmann, Air Force Research Laboratory, USA
  • Helgi P. Helgason, Activity Stream, Iceland
  • Benjamin Johnston, University of Technology Sydney, Australia
  • Jan Koutnik, IDSIA, Switzerland
  • David Kremelberg, IIIM, Iceland
  • Shane Legg, DeepMind, UK
  • Xiang Li, Temple University, USA
  • Tony Lofthouse, Evolving Solutions Ltd., Switzerland
  • Laurent Orseau, DeepMind, UK
  • Ricardo Sanz, Universidad Politécnica de Madrid, Spain
  • Javier Snaider, Google Inc., USA
  • Bas Steunebrink, IDSIA, Switzerland
  • Claes Strannegård, University of Gothenburg, Sweden
  • Kristinn R. Thórisson, Reykjavik University & IIIM, Iceland
  • Pei Wang, Temple University, USA
  • David Weinbaum (Weaver), Vrije Universiteit Brussel, Belgium