2nd ESSENCE Summer School - Topics and Speakers

Speakers


Aaron Sloman

Aaron Sloman (University of Birmingham)

Evolved construction-kits for building minds

Abstract:
Reading Turing's 1952 paper on The Chemical Basis of Morphogenesis inspired the question: what might he have done if he had lived 20 or 40 more years instead of only two? Tentative answer: The Meta-Morphogenesis project, identifying transitions in information processing since the earliest (proto-) life forms. One aspect of this seems to be the production of a huge variety of layered construction-kits, all derived from a fundamental construction-kit (FCK) provided by physics (and chemistry). The tutorial will present, and invite discussion of, a provisional ontology for derived construction kits (DCKs) in terms of their types (e.g. concrete, abstract, and hybrid construction kits), their dependency relationships, their biological functions, their explanatory power (including explaining possibilities), their mathematical properties, and the gaps between current AI systems and the products of sophisticated biological DCKs.
More details on the tutorial can be found at http://www.cs.bham.ac.uk/research/projects/cogaff/misc/essence-2015.html

Aaron's Bio:
Aaron Sloman is a philosopher and researcher on artificial intelligence and cognitive science. He is the author of several papers on philosophy, epistemology and artificial intelligence. He held the Chair in Artificial Intelligence and Cognitive Science at the School of Computer Science at the University of Birmingham, and before that a chair with the same title at the University of Sussex. He is Honorary Professor of Artificial Intelligence and Cognitive Science at Birmingham.
He has also been involved in design and development of software tools for teaching and research in AI, including the Poplog system, and SimAgent toolkit.

More about Aaron Sloman: http://www.cs.bham.ac.uk/~axs/

Slides (HTML, external link) (PDF, external link)


Benjamin Kuipers

Benjamin Kuipers (University of Michigan)

The Foundations of Spatial Knowledge: Representation and Learning

Abstract:
Spatial knowledge is one of the foundations for common sense. A computational theory of how spatial knowledge can be grounded in perception and action is useful both for understanding the human cognitive map, and for implementing robots that move purposefully and interact with humans in typical human environments.
In this tutorial, we will describe the Spatial Semantic Hierarchy (SSH), which shows how several different ontologies can be used together to represent knowledge of large-scale and small-scale space. The basic SSH requires only very limited prior knowledge of the agent's sensors and effectors, just enough to implement hill-climbing and trajectory-following control laws. The Hybrid SSH (HSSH) exploits prior knowledge of the sensors to build local metrical maps of small-scale space. These can be abstracted to capture the qualitative decision structure of local space, making it possible to build a global topological map, which can be used as a skeleton for building a global metrical map when resources permit. By using multiple ontologies for spatial knowledge, the SSH naturally supports robust environmental learning and human-robot interaction.
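To illustrate the topological level of the hierarchy, a map of places and connecting path segments can be treated as a plain graph, so that route finding is pure graph search with no metric information. This is a minimal sketch; the place names and map below are invented for illustration, not taken from the tutorial:

```python
from collections import deque

# A toy topological map in the spirit of the SSH: places are nodes,
# path segments are edges; routes are found by search alone.
topological_map = {
    "lobby": ["hall", "office"],
    "hall": ["lobby", "lab"],
    "office": ["lobby"],
    "lab": ["hall", "kitchen"],
    "kitchen": ["lab"],
}

def find_route(start, goal):
    """Breadth-first search for a shortest place-to-place route."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        route = frontier.popleft()
        if route[-1] == goal:
            return route
        for nxt in topological_map[route[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(route + [nxt])
    return None

print(find_route("office", "kitchen"))  # ['office', 'lobby', 'hall', 'lab', 'kitchen']
```

A global metrical map, when resources permit, can then be built along this skeleton rather than from scratch.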
The structure of the SSH naturally leads us to ask whether and how these representations can be learned. We describe statistical learning methods that allow higher-level concepts such as places, paths, objects, actions, goals, and plans to be learned from unguided low-level sensorimotor experience (the "pixel level"). In practical terms, this capability helps robots autonomously adapt to new sensors and new environments. In theoretical terms, this helps us understand whether artificial intelligence requires intelligent design, or whether autonomous learning is possible.

Benjamin's Bio:
Benjamin Kuipers is a Professor of Computer Science and Engineering at the University of Michigan. He previously held an endowed Professorship in Computer Sciences at the University of Texas at Austin. He received his B.A. from Swarthmore College, and his Ph.D. from MIT. He investigates the representation of commonsense and expert knowledge, with particular emphasis on the effective use of incomplete knowledge. His research accomplishments include developing the TOUR model of spatial knowledge in the cognitive map, the QSIM algorithm for qualitative simulation, the Algernon system for knowledge representation, and the Spatial Semantic Hierarchy models of knowledge for robot exploration and mapping. He has served as Department Chair at UT Austin, and is a Fellow of AAAI, IEEE, and AAAS.

More about Benjamin Kuipers: http://web.eecs.umich.edu/~kuipers/

Slides (PDF): part 1, part 2
Public lecture slides (PDF)


Fausto Giunchiglia

Fausto Giunchiglia (University of Trento)

Adaptive Data Integration

Abstract:
We will discuss a new approach to data integration where new input data can cause a run time modification of the integrative schema. This result is implemented in four layers. In the first, knowledge representation layer, it must be possible to represent the various forms of diversity (in the language, in the schema) which may appear in the input data. In the second, operational layer, a set of operations are defined which allow for the modification and adaptation of the schema, still preserving a notion of consistency, suitably defined. In the third layer, a tool for data import exploits the adaptive capabilities of the layer below in order to produce the desired, adapted schema. In the last layer, the data are imported to a knowledge graph which adapts and evolves according to the data imported.

Fausto's Bio:
Fausto Giunchiglia is a professor of Computer Science at the Faculty for Information Engineering and Computer Science, University of Trento (Italy). He was a PhD student and Visiting Fellow at Stanford University (USA) and a Research Fellow at the University of Edinburgh (UK). His main research field is Semantics, covering a wide range of topics, including knowledge representation, knowledge management, and knowledge diversity. Within this general field, his main focus is on diversity.

More about Fausto Giunchiglia: http://disi.unitn.it/~fausto/

Slides (PDF)


Georgiana Dinu

Georgiana Dinu (IBM T. J. Watson Research Center, Yorktown)

Vector space models of meaning in natural language processing

Abstract:
Distributional, vector-based, meaning representations extracted from large amounts of running text have a long tradition in natural language processing (NLP) and are based on the observation that words occurring in similar context have similar meaning. In the distributional paradigm, words are represented as vectors encoding co-occurrence patterns and vector similarity measures become a proxy for similarity in meaning. The first part of the tutorial will overview these methods ranging from the traditional “count”-based vector space models to the more recent language modeling-inspired approaches which formulate word vector induction as a learning task where the objective is that of predicting words in their context.
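To make the "count"-based paradigm concrete, here is a minimal sketch: words are represented by co-occurrence counts over a toy corpus (invented for illustration; real models use very large text collections), and cosine similarity serves as the proxy for similarity in meaning:

```python
from collections import Counter
from math import sqrt

# Toy corpus; in practice counts come from corpora of millions of words.
corpus = [
    "dogs chase cats", "cats chase mice", "dogs eat food",
    "cats eat food", "cars burn fuel", "trucks burn fuel",
]

def cooccurrence_vector(word, sentences):
    """Represent a word by the counts of words co-occurring with it."""
    counts = Counter()
    for s in sentences:
        tokens = s.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

dog, cat, car = (cooccurrence_vector(w, corpus) for w in ("dogs", "cats", "cars"))
# "dogs" and "cats" share contexts (chase, eat, food); "cars" does not.
print(cosine(dog, cat) > cosine(dog, car))  # True
```

The "predict"-based approaches mentioned above replace the raw counts with dense vectors learned by optimizing a word-in-context prediction objective, but the same similarity computation applies.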
Traditionally, distributional models have been developed for modeling the meaning of words, or of other atomic units. However, modeling meaning in isolation has limited applicability and for this reason recent work has focused on methods to compositionally construct meaning representations for phrases or sentences as a function of the vector representations of their composing words. The second part of the tutorial will address such extensions of distributional methods beyond the word level, to larger units of text.
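The simplest such compositional method is the additive model, in which a phrase vector is just the sum of its word vectors. A minimal sketch, with 3-dimensional vectors invented for illustration rather than induced from a corpus:

```python
from math import sqrt

# Hypothetical 3-d word vectors; real ones are induced from corpora.
vec = {
    "black": [0.9, 0.1, 0.0],
    "cat":   [0.1, 0.8, 0.3],
    "dark":  [0.8, 0.2, 0.1],
    "dog":   [0.1, 0.7, 0.4],
}

def compose_additive(words):
    """Additive composition: the phrase vector is the sum of word vectors."""
    return [sum(d) for d in zip(*(vec[w] for w in words))]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

black_cat = compose_additive(["black", "cat"])
dark_dog = compose_additive(["dark", "dog"])
print(cosine(black_cat, dark_dog) > 0.9)  # similar words yield similar phrases
```

More sophisticated compositional models replace the sum with learned functions (e.g. matrices for adjectives), but addition remains a surprisingly strong baseline.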
Finally, applications of these methods will be discussed throughout, as continuous, vector-based, representations are becoming the standard approach to most NLP tasks. These go beyond direct applications such as assessing word/sentence similarity to a much wider variety of tasks, such as part of speech tagging or named entity recognition, in which symbolic word features are replaced with their continuous counterparts in order to exploit correlations learned from unlabeled data and mitigate the data sparseness problem.

Georgiana's Bio:
Georgiana is a researcher in Natural Language Processing at the IBM T. J. Watson Research Center, Yorktown. Prior to this she was a postdoctoral researcher in COMPOSES (in the CLIC lab of the University of Trento's Center for Mind/Brain Sciences (CIMeC)). She received her PhD from the Computational Linguistics and Phonetics department of Saarland University. Georgiana's research interests revolve around distributional methods for semantics and applications to language technology tasks.

More about Georgiana Dinu: http://clic.cimec.unitn.it/~georgiana.dinu/index.html

Slides (PDF)


Jerome Euzenat

Jérôme Euzenat (INRIA Grenoble)

Dynamic interoperability: from ontology matching to cultural knowledge evolution

Abstract:
Representation of knowledge by human beings or machines is by nature heterogeneous, because agents act in different contexts, and dynamic, because both their knowledge and the world evolve over time. In this tutorial, we consider knowledge as expressed in the semantic web through ontologies related by alignments.
The first part of the tutorial focusses on classical approaches to reducing heterogeneity in distributed knowledge representations. We thus recall the basics of ontology semantics: model theory. We then consider alignments between ontologies as a way to reduce heterogeneity. Finally, we will discuss how to deal with inconsistency in networks made of ontologies and alignments through alignment repair and network revision. This way of dealing with the dynamics and heterogeneity of knowledge representation can be characterised as an engineering approach: it attempts to define first the exact or nominal behaviour of a system. Systems are designed to work correctly, but experience shows that unexpected situations always happen and agents must overcome them.
Hence, the second part of the tutorial concentrates on a more fluid approach to evolving knowledge representations. We introduce cultural knowledge evolution, taking inspiration from the cultural language evolution framework introduced by Luc Steels and colleagues. This approach has the advantage of not assuming that everything should be set correctly before trying to communicate, and of being able to overcome failures. We show how agents holding ontologies, attempting to communicate using alignments, take appropriate actions when communication fails. We present experiments in which agents react to mistakes by altering alignments. Agents only know about their own ontologies and their alignments with others, and they correct alignments in a fully decentralised way. We show that such an approach converges towards successful communication by improving the objective correctness of alignments.
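A minimal, hypothetical sketch of the repair idea: two agents with private concept inventories communicate through an alignment, and replace a correspondence whenever it leads to a failure. The ontologies and the repair rule below are invented for illustration and are far simpler than those in the actual experiments:

```python
# Objects that both agents can observe, each agent with its own concepts.
objects = ["ball", "box", "coin"]
concepts_a = {"ball": "toy", "box": "container", "coin": "money"}
concepts_b = {"ball": "sphere", "box": "cube", "coin": "disc"}

# Start from a partly wrong alignment (A's concept -> B's concept);
# agents repair it locally whenever communication fails.
alignment = {"toy": "cube", "container": "cube", "money": "disc"}

def interact(obj):
    """A names obj with its concept; B checks the mapped concept
    against its own classification and repairs the alignment on failure."""
    a_concept = concepts_a[obj]
    actual_b = concepts_b[obj]
    if alignment.get(a_concept) != actual_b:
        alignment[a_concept] = actual_b   # replace the failed correspondence
        return False
    return True

for _ in range(3):                        # a few rounds of interaction
    for obj in objects:
        interact(obj)

print(all(interact(obj) for obj in objects))  # True: communication succeeds
```

The point the experiments make is that such purely local, decentralised repairs are enough to drive the alignments towards objective correctness.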

Jérôme's Bio:
Jérôme Euzenat is senior research scientist at INRIA and University of Grenoble. He is mostly interested in concurrent representations of the same situation and the relationships among them. This is thus closely related to semantics (the interpretation of representations). Dr Euzenat has published in related knowledge representation areas such as truth maintenance systems, temporal and spatial granularity, collaborative knowledge base construction. He has developed extensive research on ontology matching on which he and Pavel Shvaiko wrote the reference book. He now experimentally investigates how agents can communicate using evolving and heterogeneous knowledge representation.

More about Jérôme Euzenat: http://exmo.inria.fr/~euzenat/

Slides (PDF)


Michael Spranger

Michael Spranger (Sony CSL Tokyo)

Evolutionary Semantics: Case Studies with Robots

Abstract:
Natural language interaction between humans and robots (or, more broadly, autonomous intelligent systems such as self-driving cars) remains one of the biggest challenges of AI, mainly because it requires integration of highly sophisticated components for vision and motor control, speech, parsing and production of language, interaction through dialog, and grounded semantics. All these components should ideally acquire content through machine learning and have to remain adaptive to changing contexts, goals and interlocutors. This tutorial focuses on how to achieve evolutionary grounded semantics.
The tutorial starts by introducing basic notions of (cultural) evolution and how they could be used to make semantics self-generated and adaptive. The tutorial then zooms in on procedural semantics. Procedural semantics sees meaning in terms of procedures operating over sensori-motor states and world models. The tutorial examines possible representational languages for procedural semantics, how the process of conceptualization can be seen as a planning problem, how concepts required for conceptualization get learned, and how lexicons and construction grammars are able to express procedural semantics. To illustrate the main points at a technical level, the tutorial uses case studies in a number of different domains, in particular, reference to objects based on their properties, producing and understanding action commands, and spatial and temporal description of situations.

Michael's Bio:
Michael Spranger received his Diploma from the Humboldt-Universität zu Berlin (Germany) in 2008 and a PhD from the Vrije Universiteit in Brussels (Belgium) in 2011 (both in Computer Science). For his PhD he was a researcher at Sony CSL. He then worked in the R&D department of Sony Corporation in Tokyo (Japan) for almost 2 years. He currently holds positions at Sony CSL and Sony Corporation. He is a roboticist by training with extensive experience in research on and construction of autonomous systems, including research on robot perception, world modeling and behavior control. After his diploma he fell in love with the study of language and has since worked on different language domains, from action language and posture verbs to time, tense, determination and spatial language. His work focusses on artificial language evolution, computational cognitive semantics and robotics.

More about Michael Spranger: http://www.sonycsl.co.jp/en/lab/tokyo/michael-spranger.html

Slides (PDF)
For Summer School participants only. Please write to essence-info@inf.ed.ac.uk for the password.


Nick Hawes

Nick Hawes (University of Birmingham)

Structured Representations for Robot Behaviour

Abstract:
This tutorial will cover a range of approaches for generating behaviour in intelligent autonomous robots, focussing particularly on mobile service robots. In general we will look at approaches which rely on explicit representations of the kinds of knowledge essential for robots operating in human environments, e.g. knowledge of space and time, of how the world changes with and without the robot's input, and how multiple robots can work together.

Nick's Bio:
Dr. Nick Hawes is a Reader in Autonomous Intelligent Robotics in the School of Computer Science at the University of Birmingham. His research is focussed on applying techniques from artificial intelligence to allow robots to perform useful tasks in everyday environments, with a particular interest in long-term autonomy and mobile service robots. He is the coordinator of the EU STRANDS project which aims to produce intelligent mobile robots that are able to run for months in dynamic human environments, and use these long run times to learn and exploit novel spatio-temporal structures.

More about Nick Hawes: http://www.cs.bham.ac.uk/~nah/

Slides (PDF)


Nicolas Maudet

Nicolas Maudet (Univ. Pierre et Marie Curie, Paris)

Negotiation Amongst Agents — Principles and Techniques

Abstract:
Agents with conflicting preferences over possible outcomes may use negotiation to come to an agreement. In this lecture, I will give an overview of the field. In the first part of the course I will cover in particular the underlying principles of negotiation (and their relevance for software agents), an introduction to game-theoretical analysis, and examples of classical protocols and strategies. In the second part of the course I will address more advanced topics, for instance settings involving many agents, agents on networks, or agents with limited knowledge. I will conclude by discussing some works seeing agreement on meaning as a negotiation process.
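As a taste of the classical protocols covered, here is a sketch of a toy monotonic-concession protocol over splitting 100 units of a resource; the initial demands and the concession step are invented for illustration:

```python
def negotiate(demand_a=90, demand_b=90, concession=5, max_rounds=50):
    """Toy monotonic-concession protocol over splitting 100 units.

    Each agent demands a share for itself and concedes a fixed step per
    round; agreement is reached once the demands are jointly feasible.
    """
    for t in range(max_rounds):
        if demand_a + demand_b <= 100:    # demands compatible: agreement
            return t, demand_a, demand_b
        demand_a -= concession            # both agents concede
        demand_b -= concession
    return None                           # conflict deal: no agreement

print(negotiate())  # (8, 50, 50): both concede until the demands meet
```

Game-theoretic analysis then asks whether rational agents would actually follow such a protocol, and what strategies (e.g. conceding more slowly than the opponent) it invites.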

Nicolas' Bio:
Nicolas is a Professor in Computer Science at Univ. Pierre et Marie Curie (aka Paris-6). He was previously an Assistant Professor (Lecturer, MCF) in Computer Science at Univ. Paris-Dauphine, LAMSADE Lab; before that he spent a year as a postdoctoral Research Fellow at Imperial College and City University. He holds a PhD from Univ. Paul Sabatier (Toulouse) and a habilitation from Univ. Paris-Dauphine.
His main research interests are artificial intelligence and multiagent systems, and concern various aspects of collective (and often distributed) decision making.
He is an Associate Editor of the Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS), an Editorial Board member of the Journal of Artificial Intelligence Research (JAIR), and a member of the board of the French AI association (AFIA).

More about Nicolas Maudet: http://www-poleia.lip6.fr/~maudetn/index.html

Slides (PDF)


Nicu Sebe

Nicu Sebe (University of Trento)

From Concepts to Events: A Progressive Process for Multimedia Content Analysis

Abstract:
Images and videos depict semantic content in different degrees of richness. Generally speaking, people tend to use images to record static concepts such as objects, scenes or moments of human activities. Videos, in contrast, are used to record dynamic events that are more complicated than static concepts. For example, we can capture a flower with an image, but a wedding ceremony needs a long-lasting video. Images and videos therefore constitute the main forms of multimedia data, and it is important to develop effective analysis techniques for both. In this tutorial I will address the problem of image and video understanding based on work presented in the state of the art and by my own group. Specifically, I will answer the following questions:

  • Is it possible to obtain a compact image representation? Would analysis accuracy improve as a result?
  • Is there any way to attain reasonable analysis performance when only a few labeled images and videos are available?
  • Can we skip the explicit concept detection step and instead learn an intermediate representation for complicated events from available multimedia archives related to various concepts?
  • How can we guarantee reasonable multimedia event detection accuracy when only a few positive exemplars are provided?

Nicu's Bio:
Nicu Sebe is a professor at the University of Trento, Italy, where he leads research in multimedia information retrieval and human-computer interaction in computer vision applications. He has been involved in organizing major conferences and workshops addressing the computer vision and human-centered aspects of multimedia information retrieval, serving as General Co-Chair of the IEEE Automatic Face and Gesture Recognition Conference (FG 2008) and of the ACM International Conference on Image and Video Retrieval (CIVR) in 2007 and 2010. He was a general chair of ACM Multimedia 2013 and a program chair of ACM Multimedia 2011 and 2007. He will be a program chair of ECCV 2016 and ICCV 2017. Currently he is the ACM SIGMM Director of Conferences. He has been a visiting professor at the Beckman Institute, University of Illinois at Urbana-Champaign, and in the Electrical Engineering Department of Darmstadt University of Technology, Germany. He is a co-chair of the IEEE Computer Society Task Force on Human-centered Computing, and an associate editor of IEEE Transactions on Multimedia, Computer Vision and Image Understanding, Machine Vision and Applications, Image and Vision Computing, the International Journal of Human-Computer Studies, and the Journal of Multimedia.

More about Nicu Sebe: http://disi.unitn.it/~sebe/

Slides (PDF)


Peter Gärdenfors

Peter Gärdenfors (Lund University)

The Geometry of Meaning: Semantics Based on Conceptual Spaces

Abstract:
The tutorial will present the geometric approach to modeling the semantics of natural language. First the geometric models will be contrasted with symbolic and connectionist models and their advantages will be highlighted. Then the theory of conceptual spaces will be presented. The main part of the time will be devoted to showing how the semantics of major word classes (nouns, adjectives, verbs, prepositions) can be modelled in geometric/topological terms. For verbs, formal accounts of actions and events based on vector models will be presented. As an application, it will be shown how the geometric approach to events is utilized in developing a robot-human natural language communication system.
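A minimal sketch of categorization in a conceptual space: concepts are represented by prototype points, and a stimulus is assigned to the nearest prototype, which induces a Voronoi tessellation of the space into convex regions. The two-dimensional "taste space" below is invented for illustration:

```python
from math import dist  # Euclidean distance, Python 3.8+

# A toy 2-d conceptual space with dimensions (sweetness, sourness);
# each concept is represented by a prototype point.
prototypes = {
    "sweet": (0.9, 0.1),
    "sour":  (0.1, 0.9),
    "bland": (0.1, 0.1),
}

def categorize(point):
    """Assign a point to the concept with the nearest prototype."""
    return min(prototypes, key=lambda c: dist(point, prototypes[c]))

print(categorize((0.8, 0.2)))  # sweet
print(categorize((0.2, 0.7)))  # sour
```

The convexity of the resulting regions is what, on this account, makes them natural properties rather than arbitrary sets of points.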

Peter's Bio:
Professor Peter Gärdenfors is Head of Cognitive Science at Lund University, Sweden. His works on belief revision and conceptual spaces have been widely recognized within computer science. He received his doctorate from Lund University in 1974 with a thesis titled "Group Decision Theory". He is a member of several academies, including the Royal Swedish Academy of Letters, History and Antiquities, and has been a member of the Prize Committee for the Prize in Economic Sciences in Memory of Alfred Nobel since 2011.

More about Peter Gärdenfors: http://www.fil.lu.se/person/PeterGardenfors

Slides (PDF) (PPT)


Remi van Trijp

Remi van Trijp (SONY CSL Paris)

Computational construction grammar and constructional change

Abstract:
After several decades in scientific purgatory, language evolution has reclaimed its place as one of the most important branches of linguistics, and it is increasingly recognized as one of the most crucial sources of evidence for understanding human cognition. This renewed interest is accompanied by exciting breakthroughs in the science of language. At the same time, construction grammar is increasingly being embraced in all areas of linguistics as a fruitful way of making sense of all these empirical observations. Construction grammar has also enthused formal and computational linguists, who have developed sophisticated tools for exploring issues in language processing and learning, and how new forms of grammar may emerge in speech populations. This tutorial will familiarize the participants with Fluid Construction Grammar (FCG), a state-of-the-art grammar formalism for investigating how new forms of language may emerge and evolve in populations of language users.

Remi's Bio:
Remi is a researcher at Sony CSL Paris. His research is dedicated to the origins and evolution of language. In his work, he tries to piece together this puzzle by combining techniques from computational linguistics and artificial intelligence that allow him to 'bring back alive' language systems that have disappeared or changed beyond recognition over time, and to investigate how a new language can develop from scratch. Within this broad research context, he spends his time on four concrete research topics: The Evolution of German Case, The Origins of Case Systems, Fluid Construction Grammar, and Robust Language Processing and Learning.

More about Remi van Trijp: http://www.remivantrijp.be/


Robert van Rooij

Robert van Rooij (University of Amsterdam)

Games and language interpretation

Abstract:
It is well established that a speaker normally communicates more by the use of a sentence than just its conventional meaning. These meanings can be enriched because we assume that the speaker and hearer conform to some pragmatic rules of conversation proposed by Grice (1957, 1967). Grice's discussion of speaker meaning and conversational implicatures makes use of patterns of iterated reasoning characteristic of game theoretic analyses. Indeed, Gricean ideas naturally suggest a game theoretic treatment.
In this mini-course we will look at language interpretation from two different, though related, points of view: (i) the negotiation of language meaning in particular discourses using standard game theory, and (ii) the evolution of meaning using evolutionary game theory. As for (i) we will motivate Gricean principles of language use, and for (ii) we will investigate how some particular features of natural languages might have evolved.
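To illustrate the iterated reasoning behind Gricean implicatures, here is a sketch of a scalar implicature ("some" vs. "all") derived by iterated best response. The two-world model is a standard textbook simplification, not material from the course itself:

```python
# Two worlds and two messages with their literal meanings:
# "some" is literally true in both worlds, "all" only in one.
WORLDS = ["some-not-all", "all"]
MESSAGES = {"some": {"some-not-all", "all"},
            "all": {"all"}}

def literal_listener(msg):
    """Uniform belief over the worlds where the message is literally true."""
    true_in = MESSAGES[msg]
    return {w: (1 / len(true_in) if w in true_in else 0.0) for w in WORLDS}

def speaker(world):
    """Best response: pick the message making the literal listener most accurate."""
    return max(MESSAGES, key=lambda m: literal_listener(m)[world])

def pragmatic_listener(msg):
    """Infer which worlds a best-responding speaker would use msg in."""
    return [w for w in WORLDS if speaker(w) == msg]

# Hearing "some", the pragmatic listener concludes "some but not all":
# had the speaker observed "all", she would have said "all".
print(pragmatic_listener("some"))  # ['some-not-all']
```

One round of best response already yields the implicature; the evolutionary perspective in part (ii) asks how the underlying conventional meanings could have stabilized in the first place.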

Robert's Bio:
Robert is a professor of Logic and Cognition at the ILLC (Faculty of Science, the University of Amsterdam). He used to work mostly on the formal semantics and pragmatics of natural language (e.g. conversational implicatures) and philosophy of language. More recently, he worked also on topics in philosophical logic (e.g. vagueness, truth, conditionals) and metaphysics (e.g. universalia). From 2005 until 2010 Robert worked on his NWO funded VIDI research project in Amsterdam called `The Economics of Language. Language Use and the Evolution of Linguistic Convention'. Before that, he was a KNAW-fellow working on the project `Games, Relevance, and Meaning'. He did his PhD in Stuttgart (1997).

More about Robert van Rooij: http://www.uva.nl/over-de-uva/organisatie/medewerkers/content/r/o/r.a.m.vanrooij/r.a.m.van-rooij.html


Stephen Muggleton

Stephen Muggleton (Imperial College London)

Logic-based and Probabilistic Symbolic Learning

Abstract:
Symbolic learning is an area of Machine Learning that aims at learning rule-based knowledge, called hypotheses, from observations (positive and negative), using existing background knowledge and integrity constraints. Learned hypotheses should be able to explain the positive observations, distinguish them from the negative observations and, because they generalize, accurately predict unseen observations. A key characteristic of symbolic learning is that logic is used as the underlying unifying representation language for observations, background knowledge and hypotheses. Various approaches have been developed in AI since the 1980s, and recent advances have also seen increased application to real-world problems in domains such as bioinformatics, privacy and security, and software engineering in general. In application domains where observations are noisy and the concepts to be learned can be fuzzy, probabilistic inference can be combined with symbolic learning. The integration of these two different worlds has recently inspired new research directions and broadened the applicability of symbolic learning. This course aims to provide an in-depth presentation of the current state of the art of symbolic learning, starting from its key foundational concepts and principles and moving to the most recent advances, with a particular emphasis on successful applications, available systems, and a description of the ways in which probabilistic inference and parameter learning can be integrated with symbolic inference and symbolic learning.
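A toy sketch of this setting: a brute-force search for the shortest rule body, built from background predicates, that covers all positive examples of daughter(X, Y) and none of the negatives. The family facts and candidate literals below are invented for illustration; real ILP systems search vastly larger hypothesis spaces with much cleverer strategies:

```python
from itertools import combinations

# Background knowledge as ground facts.
background = {
    "parent": {("ann", "mary"), ("ann", "tom"), ("tom", "eve")},
    "female": {("mary",), ("ann",), ("eve",)},
}
positives = {("mary", "ann"), ("eve", "tom")}   # daughter(X, Y) holds
negatives = {("tom", "ann"), ("ann", "mary")}   # daughter(X, Y) fails

# Candidate body literals, evaluated over the rule variables (X, Y).
literals = {
    "parent(Y,X)": lambda x, y: (y, x) in background["parent"],
    "parent(X,Y)": lambda x, y: (x, y) in background["parent"],
    "female(X)":   lambda x, y: (x,) in background["female"],
    "female(Y)":   lambda x, y: (y,) in background["female"],
}

def covers(body, example):
    return all(literals[lit](*example) for lit in body)

def learn():
    """Return the shortest body covering all positives and no negatives."""
    for size in range(1, len(literals) + 1):
        for body in combinations(literals, size):
            if all(covers(body, p) for p in positives) and \
               not any(covers(body, n) for n in negatives):
                return body
    return None

print(learn())  # ('parent(Y,X)', 'female(X)')
```

The learned body corresponds to the clause daughter(X,Y) :- parent(Y,X), female(X); probabilistic extensions would additionally attach weights or probabilities to such clauses.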

Stephen's Bio:
Stephen's career has concentrated on the development of theory, implementations and applications of Machine Learning, particularly in the field of Inductive Logic Programming. Over the last decade he has collaborated increasingly with biological colleagues, in particular Prof Mike Sternberg, on applications of Machine Learning to Biological prediction tasks. These tasks have included the determination of protein structure, the activity of drugs and toxins and the assignment of gene function.
Stephen is Director of the Syngenta University Innovation Centre at Imperial College and holds a Royal Academy of Engineering/Syngenta Research Chair. He received his BSc in Computer Science at the University of Edinburgh in 1982. His PhD research, on the topic Inductive Acquisition of Expert Knowledge was carried out at Edinburgh University. He was awarded his PhD in 1986.
He is/has been an editorial board member of the Machine Learning Journal, the Journal of Logic Programming, the Journal of Artificial Intelligence Research, the AI Journal, the ACM Transactions on Computational Logic, Cognitive Science and Theory and Practice of Logic Programming. He was Executive Editor of the Oxford University Press series Machine Intelligence from 1992 and has been Editor-in-Chief of the series since 2000.

More about Stephen Muggleton: http://wp.doc.ic.ac.uk/shm/

Slides (PDF): part 1, part 2, part 3


Evolution of Shared SEmaNtics in Computational Environments – A Marie Curie Initial Training Network