International Conference on Computational Approaches to Diversity in Interaction and Meaning

An Interdisciplinary Conference Organised by the ESSENCE Network

7th to 9th October, 2017, San Servolo/Venice, Italy


Conference venue


Join us for three exciting days of research presentations, invited talks, and discussions in an informal and interdisciplinary environment to develop new perspectives for AI research that capitalise on advances in technologies that bridge diverse conceptualisations of knowledge through interaction!

Jump to Speakers and Abstracts of Talks
Detailed Programme


Over the past four years, the ESSENCE (Evolution of Shared Semantics in Computational Environments) Network has developed a new vision of intelligent systems that emphasises diversity among the building blocks of these systems, and addresses issues of bridging their heterogeneous notions of meaning through interaction.

This stands in contrast to the mostly "monolithic" approaches currently prominent in mainstream AI research, which rely on a single designer developing a single system that solves a well-defined problem by processing large amounts of data that capture the regularities in the problem space, relying on a strong correspondence between data and solution. While acknowledging the strengths of existing approaches, as demonstrated by many recent systems that excel at solving single AI problems previously considered unsolvable for machines, we believe that future breakthroughs in AI will emanate from the integration of multiple human and machine contributions, each of which may have different views of the world and different objectives. Emerging problems related to the explainability (the gap between the semantics of human users and machine models of meaning), reusability (the semantic gap between original and novel domain conceptualisations), and compositionality (the semantic gap between different components) of state-of-the-art AI systems are early indications of the significance of this perspective.

Diversity-awareness has been researched by many communities across several disciplines in the past, many of which have inspired work done in ESSENCE, including:

  • Semantic technologies that address semantic interoperability and alignment of divergent domain conceptualisations
  • Multiagent systems that provide methods to aggregate objectives, preferences, and activities of heterogeneous agents
  • Knowledge representation and reasoning techniques that integrate heterogeneous reasoning systems or viewpoints
  • Natural language processing and computational linguistics that study semantic agreement, grounding, and language evolution
  • Human-centric computing methods that develop ways of bridging human and machine views of an application problem

Building on the success of the initial workshop we organised on this topic at ECAI 2016, this conference will provide a forum for the ESSENCE community and leading experts from the above (and other) areas to synthesise novel ideas that will help take this agenda further, exchange insights from different disciplines, and engage in intensive discussions on the key research issues surrounding diversity-aware AI. The conference is also the final event organised by the ESSENCE consortium, and will be used to showcase results from its research and explore follow-up projects with colleagues from various research communities.

The conference will take place on the island of San Servolo in Venice from Saturday 7th to Monday 9th October, 2017 (arrival on Friday 6th October, departure on Monday afternoon) at the campus of Venice International University. The island of San Servolo is an oasis in a unique urban setting, 10 minutes by boat from Piazza San Marco, with a peaceful park spread across 12 scenic acres and a panoramic view of Venice.

The event is deliberately planned as a small-scale conference with up to 60 participants that emphasises discussion, debate, and synthesis of existing work. Participants will not be required to submit original research papers, and no formal proceedings are planned, though selected participants may be invited to contribute to an edited volume or journal special issue after the event. Please note this conference is by invitation only and there is limited accommodation availability on the Island of San Servolo.

If you are interested in participating or have any queries, please contact us. We will consider expressions of interest to participate based on available places.

The detailed conference programme can be accessed here.





Andrea Baronchelli  City University of London
The Spontaneous Emergence of Consensus: From Social Conventions to Shared


How does consensus emerge in complex decentralised social systems? This question engages fields as diverse as sociology, linguistics, cognitive science and network science. Various attempts to solve this puzzle presuppose that formal or informal institutions, such as incentives for global agreement, coordinated leadership, or aggregated information about the population, are needed to facilitate a solution. The complex systems approach, by contrast, hypothesises that such institutions are not necessary in order for social consensus to form. Adopting this perspective, I will start by presenting experimental results that demonstrate the spontaneous creation of universally adopted social conventions. In doing so, I will also show how a population’s network structure controls the dynamics of norm formation, as captured by the simple naming game model. Then, I will discuss the case of category systems. Here, individuals can coordinate their language in order to attain common goals, but they remain unable to access the internal representations of their peers, thus leaving space for an intrinsic (and ideally small) possibility of misunderstanding. I will show that a simple multi-agent model quantitatively reproduces many statistical properties of the empirical data.
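The naming game model mentioned above is simple enough to sketch in a few lines. The following is a minimal mean-field version (random speaker/hearer pairs, inventories of candidate names, and the standard success/failure update rule); the parameter values are illustrative:

```python
import random

def naming_game(n_agents=30, max_steps=50000, seed=0):
    """Minimal (mean-field) naming game: random speaker/hearer pairs
    negotiate a name for a single object until the whole population
    shares one convention."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_name = 0
    for step in range(max_steps):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:        # speaker invents a new name
            inventories[speaker].add(next_name)
            next_name += 1
        name = rng.choice(sorted(inventories[speaker]))
        if name in inventories[hearer]:     # success: both collapse to it
            inventories[speaker] = {name}
            inventories[hearer] = {name}
        else:                               # failure: hearer records it
            inventories[hearer].add(name)
        if all(inv == inventories[0] for inv in inventories):
            return step + 1                 # games played until consensus
    return max_steps

steps = naming_game()
```

In this fully connected population convergence is fast; restricting hearers to a speaker's neighbours on a sparse network is what exposes the effect of network structure on norm formation that the abstract refers to.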

Eva Blomqvist  Linköpings Universitet
Understanding the world through ontology patterns

Ontologies are a key technology for interpreting and using diverse information, on the web and in other contexts. On the one hand, ontologies are intended for sharing agreed, e.g. 'standardised', conceptualisations of the world. However, I would like to argue that this only works well in some specific domains, where such an agreement and standardisation can actually be reached in a reasonable manner. On the other hand, at the core of the Semantic Web idea is decentralisation and diversity, i.e. the principle that anyone can publish anything, including their own ontologies. This is a very important principle, but it has also proven to be an obstacle on the path towards realising the Semantic Web vision itself, since matching and aligning ontologies is hard. However, if we take a step back and consider how humans communicate, we note that any two humans will certainly disagree on the meaning of many concepts, yet we are usually able to communicate quite well. How is this possible? It is because we agree on a few basic notions, at a level of abstraction and detail that is relevant to those involved. For instance, we may not at all agree about the basic nature of an object, nor about its full set of detailed attributes. But does that matter, if we can still agree on a few basic principles and attributes that are enough to share information in a specific use case scenario? The answer to this problem for the Semantic Web may lie in the notion of Ontology Design Patterns (ODPs). ODPs can provide a shared understanding without having to agree on a complete theory of the world, nor on all the more specific categories or attributes. In this talk I will explain what ODPs are, give some examples, and describe the current research front in this area. I will also go through a few use cases for ODPs, some of which are case studies from our research projects where ODPs have been used in the context of decision support systems, and finally I will discuss some open issues and problems to address next.

Claudia d'Amato   Università degli Studi di Bari
Machine Learning for the Semantic Web: an Ontology Mining perspective

In the Semantic Web view, ontologies play a key role. They act as shared vocabularies for semantically annotating Web resources, and they support deductive reasoning that makes explicit the knowledge implicitly contained within them. However, because the Web is a shared and distributed environment, noisy and inconsistent ontological knowledge bases may occur, making deductive reasoning no longer straightforwardly applicable. Machine learning techniques, and specifically inductive learning methods, can be fruitfully exploited in this case. Additionally, machine learning methods, jointly with standard reasoning procedures, can be usefully employed for discovering knowledge from an ontological knowledge base that is not logically derivable. The focus of the talk will be on various ontology mining problems and on how machine learning methods can be exploited to cope with them. By ontology mining we mean all those activities that discover hidden knowledge from ontological knowledge bases, possibly using only a sample of the data. Specifically, by exploiting the volume of information within an ontology, machine learning methods can be of great help for (semi-)automatically enriching and refining existing ontologies, for detecting concept drift and novelties within ontologies, and for discovering hidden knowledge patterns and/or disjointness axioms. If on the one hand this means abandoning sound and complete reasoning procedures in favour of uncertain conclusions, on the other hand it allows reasoning at large scale and dealing with the intrinsic uncertainty characterising the Web, which by its nature may contain incomplete and/or contradictory information.
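As a toy illustration of one of the mining tasks mentioned above, disjointness-axiom discovery, the sketch below proposes a disjointness axiom when the known instances of two classes barely overlap. The heuristic and threshold are illustrative assumptions, not the talk's actual method:

```python
def suggest_disjointness(instances_a, instances_b, threshold=0.05):
    """Propose Disjoint(A, B) when the known instances of two classes
    (almost) never overlap -- an illustrative instance-based heuristic."""
    if not instances_a or not instances_b:
        return False                        # no evidence either way
    overlap = len(instances_a & instances_b)
    return overlap / min(len(instances_a), len(instances_b)) < threshold

person = {"alice", "bob", "carol"}
building = {"colosseum", "rialto_bridge"}
student = {"alice", "bob"}                  # every student is also a person

disjoint_pb = suggest_disjointness(person, building)   # no shared instances
disjoint_ps = suggest_disjointness(person, student)    # full overlap
```

The conclusion is inductive and uncertain in exactly the sense the abstract describes: absence of shared instances in the sample is evidence for, not proof of, disjointness.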

Anca Dumitrache   VU University Amsterdam
Harnessing the Diversity in Crowdsourcing with CrowdTruth

Diversity of opinion is omnipresent in crowdsourcing, yet the majority of research in machine interpretation of signals such as language, images, video or audio tends to ignore it. This is evidenced by the fact that metrics for the quality of machine understanding rely on a ground truth in which each instance (such as a sentence, a photo or a sound clip) is assigned a discrete label. This type of ground truth does not handle well the ambiguous cases in which binary labels cannot be easily applied to the data. CrowdTruth is a form of collective intelligence based on a vector representation that accommodates diverse interpretations and gives human annotators the possibility to disagree with each other, in order to expose latent elements such as ambiguity and worker quality. In other words, CrowdTruth assumes that when annotators disagree on how to label an example, it is because the example is ambiguous, the worker isn’t doing the task properly, or the task itself is not clear. In this talk, I will discuss lessons learned from applying CrowdTruth to a variety of crowdsourced annotation tasks, with the goal of understanding diversity in human interpretation, as well as capturing it as ground truth that will enable machines to deal with such diversity.
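The vector representation can be sketched as follows: each worker's annotation of a unit is a sparse label vector, per-unit vectors aggregate the crowd's choices without majority voting, and a worker's average cosine agreement with the rest of the crowd serves as a rough quality signal. This is a simplified sketch of the CrowdTruth idea, not the project's exact metrics:

```python
from collections import defaultdict
import math

def cosine(u, v):
    """Cosine similarity between two sparse label vectors (dicts)."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def unit_vectors(annotations):
    """Aggregate per-unit label counts; disagreement stays visible
    instead of being voted away."""
    units = defaultdict(lambda: defaultdict(int))
    for (worker, unit), labels in annotations.items():
        for label in labels:
            units[unit][label] += 1
    return units

def worker_agreement(annotations, units):
    """Mean cosine between each worker's labels and the rest of the
    crowd on the same unit -- a rough worker-quality signal."""
    scores = defaultdict(list)
    for (worker, unit), labels in annotations.items():
        mine = {l: 1 for l in labels}
        rest = {l: c - mine.get(l, 0) for l, c in units[unit].items()}
        scores[worker].append(cosine(mine, rest))
    return {w: sum(s) / len(s) for w, s in scores.items()}

# Three workers label two sentences with candidate relations.
anns = {
    ("w1", "s1"): {"cause"}, ("w2", "s1"): {"cause"},
    ("w3", "s1"): {"treat"},              # dissenting interpretation
    ("w1", "s2"): {"treat"}, ("w2", "s2"): {"treat"},
    ("w3", "s2"): {"treat"},
}
units = unit_vectors(anns)
quality = worker_agreement(anns, units)
```

Note that the disagreement on "s1" is preserved in its unit vector rather than collapsed to a single winning label, which is precisely what lets ambiguity and worker quality be separated downstream.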

Jérôme Euzenat   INRIA Grenoble Rhône-Alpes
Knowledge diversity under socio-environmental pressure

Experimental cultural evolution has been convincingly applied to the evolution of natural language and we aim at applying it to knowledge. Indeed, knowledge can be thought of as a shared artefact among a population influenced through communication with others. It can be seen as resulting from contradictory forces: internal consistency, i.e., pressure exerted by logical constraints, against environmental and social pressure, i.e., the pressure exerted by the world and the society agents live in. However, adapting to environmental and social pressure may lead agents to adopt the same knowledge. From an ecological perspective, this is not particularly appealing: species can resist changes in their environment because of the diversity of the solutions that they can offer. This problem may be approached by involving diversity as an internal constraint resisting external pressure towards uniformity. We will discuss strategies to implement this approach and how it can be beneficial.

Kobi Gal   Ben-Gurion University of the Negev
Solving the disengagement problem by reasoning about users' diversity: predictions, interventions and experiments

Many online systems depend critically on maintaining the engagement of participants. Notable examples include peer production sites (e.g., Wikipedia, crowdsourcing) and e-learning platforms (e.g., MOOCs). The vast majority of users in such systems exhibit casual and non-committed participation patterns, making very few contributions before dropping out and never returning to the system. We present a methodology for extending engagement and productivity in such systems by combining machine learning with intervention strategies. We show that adopting different intervention strategies is key to accounting for diversity and interpersonal differences between users. We demonstrate the efficacy of this approach on two real-world problems: how to support student group learning in the classroom, and how to increase the contributions of thousands of volunteers in one of the largest citizen science platforms on the web.

Fausto Giunchiglia  Università degli Studi di Trento
Understanding and Exploiting Language Diversity

The main goal of this presentation is to describe a general approach to the problem of understanding linguistic phenomena, as they appear in lexical semantics, through the analysis of large-scale resources, while exploiting these results to improve the quality of the resources themselves. The main contributions are: the approach itself; a formal quantitative measure of language diversity; a set of formal quantitative measures of resource incompleteness; and a large-scale resource, called the Universal Knowledge Core (UKC), built following the proposed methodology. As a concrete example of an application, we provide an algorithm for distinguishing polysemes from homonyms, as stored in the UKC. (Joint work with Khuyagbaatar Batsuren and Gabor Bella.)

Oliver Kutz  Free University of Bozen-Bolzano
From Conceptual Blending to Computational Concept Invention

In cognitive science, the theory of conceptual blending provides an explanation of the human ability to invent concepts. This cognitive theory provides an inspiration for computational concept invention theory, which has the goal of building creative systems that generate new concepts automatically. In this talk, we will summarise and discuss logical, ontological, and cognitive aspects of a computational theory for conceptual blending as it was developed within the FP7 project COINVENT.
One critical question for the development of such a system is the choice of an appropriate representation language. For this purpose we use the Distributed Ontology, Model and Specification Language (DOL). DOL is a metalanguage that enables the reuse of existing ontologies as building blocks for new ontologies and, further, allows the specification of intended relationships between ontologies and the abstract specification of blending diagrams. A second critical question is how to evaluate the generated concepts and generally steer the invention process. In cognitive linguistics, image schemas are understood as conceptual building blocks that are learned in early infancy and which shape not only language but conceptualisation as a whole. We will discuss the role that image schemas play in concept invention, and will motivate and outline a formalisation approach to image schemas representing them as interlinked families of theories.

Nicolas Maudet   Université Pierre et Marie Curie (Paris VI)
Explanation in decision-aiding: old questions and new challenges

Providing explanations along with recommendations is a key feature for many decision-aiding systems. Even though it has a long history in AI, some recent developments in our field put the focus back on this topic. Indeed, there is a growing societal demand for accountable algorithmic decisions or recommendations: systems should be equipped so as to be able to respond to ‘why’ questions. But what does it mean exactly? In this talk I will present various types of explanations, and survey some recent results. One extremely challenging aspect is that explanations must be carefully adapted to the diversity of users. For instance, they may partly involve knowledge inferred during the interaction with the user. Another difficulty lies in the fact that the underlying decision model may be highly complex. I will illustrate these questions using, in particular, decision-aiding in settings where multiple criteria are involved.

Diana Maynard  University of Sheffield
Adapting NLP tools to diverse data: challenges and solutions

Traditional NLP tools have focused on analysing formal, high-quality unstructured text such as news, company reports, academic publications and so on. These days, however, there is a wealth of less formal and more diverse kinds of text, such as social media, whose properties impose significant challenges on such tools. The 3Vs of big data - volume, variety and velocity - are well known; for social media one can also add veracity. In this talk, I will focus mainly on the variety element of social media, discussing some of the challenges and solutions of adapting NLP tools to deal with the short, less well-formed sentences typically found, including issues of slang, incorrect spelling, grammar and capitalisation, semantic drift of entities, mixed languages and code-switching. This impacts not only linguistic pre-processing components, but also fundamentally affects the way we approach tasks such as sentiment analysis, entity finding and linking, relation extraction and summarisation. I will present case studies based on a toolkit for social media analysis developed in the GATE framework.
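As a small illustration of the kind of pre-processing this implies, the sketch below normalises elongated spellings and a few slang tokens before standard components see the text. The slang lexicon is purely illustrative; a real system would use learned or curated resources:

```python
import re

# Illustrative slang lexicon (an assumption, not a real resource).
SLANG = {"u": "you", "r": "are", "gr8": "great", "2moro": "tomorrow"}

def normalise(text):
    """Squash character elongation ('sooooo' -> 'soo') and expand
    known slang tokens."""
    out = []
    for tok in text.split():
        tok = tok.lower()
        tok = re.sub(r"(.)\1{2,}", r"\1\1", tok)   # cap repeats at two
        out.append(SLANG.get(tok, tok))
    return " ".join(out)

cleaned = normalise("u r sooooo gr8")
```

Even this toy step shows why downstream components cannot be left untouched: lowercasing, for example, destroys the capitalisation cues that entity finders in formal text rely on.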

Roberto Navigli   Sapienza University of Rome
Multilinguality for free, or why you should care about linking to (BabelNet) synsets

Multilinguality is a key feature of today’s Web and a pervasive one in an increasingly interconnected world. However, many semantic representations, such as word (and often sense) embeddings, are grounded in the language from which they are obtained. In this talk I will argue that there is a pressing need to link our meaning representations to large-scale multilingual semantic networks such as BabelNet, and will show you several tasks and applications where multilingual representations of meaning provide a big boost.

Chris Reed  University of Dundee
Arguing with Machines

Recent foundational advances have started to unpack the mechanisms by which dialogues and inferences are interconnected, with Inference Anchoring Theory providing an account of how it is that particular patterns of dialogical interaction yield structures we recognise as arguments. This starting point can be used to operationalise the definitions of games of dialogue that describe how humans interact in different contexts. The result is 'mixed initiative argument' whereby human and software agents engage in argument dialogues on a level playing field, with software both presenting apposite data from very large knowledge bases of argument and also critiquing new arguments presented by humans. As techniques for automatically harvesting arguments from the wild ('argument mining') start to mature, such dialogue games hold enormous potential as a new way of interacting with complex information spaces.

Robert van Rooij   University of Amsterdam
Generics: non-monotonic logic or valuable associations?

Generic sentences like ‘Birds fly’ play an important role in AI: they are important for knowledge representation and they motivated the whole field of non-monotonic logic. The idea is that even though ‘Birds fly’ can be true without it being the case that every bird flies, still most of them, or the normal ones, do so. Unfortunately, there are many other accepted generics where a majority- or normality-based approach seems much less natural: ‘Birds lay eggs’, ‘Lions have manes’, ‘Frenchmen eat horsemeat’, ‘Ticks carry Lyme disease’ and ‘Wolves attack people’. In this talk I will propose that generics of the form ‘Gs are f’ are good not because most, or normal, Gs have feature f, but because feature f is valuably associated with G. A link will be made with how we learn such associations, and it will be suggested that a similar analysis can be given to various other types of examples.
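The contrast between majority-based and association-based readings can be made concrete with two toy measures: the prevalence of f among Gs (what a majority account tracks) versus a likelihood-ratio contrast between Gs and non-Gs (one simple, illustrative way to cash out 'valuable association'; the probabilities below are made up):

```python
def prevalence(p_f_given_g, p_f_given_not_g):
    """What majority/normality accounts track: how common f is among Gs."""
    return p_f_given_g

def contrast(p_f_given_g, p_f_given_not_g):
    """Likelihood-ratio contrast: how much more common f is among Gs
    than among non-Gs."""
    return p_f_given_g / p_f_given_not_g

# 'Birds fly': most birds fly, and flying is not rare outside birds.
birds = (0.90, 0.10)
# 'Ticks carry Lyme disease': few ticks do, almost nothing else does.
ticks = (0.10, 0.0001)
```

On these numbers, prevalence ranks ‘Birds fly’ far above the tick case, yet both are accepted generics; the contrast measure ranks the tick case higher, matching the intuition that the association, not the majority, is what makes the generic good.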

Valentina Tamma  University of Liverpool
New models of knowledge sharing: opportunistic ontology negotiation

A fundamental issue in modern information systems is integration and interoperation. Even inside organisations with strong governance and internal communication, it is quite common that similar or overlapping information is modelled in diverse ways. These differences in modelling become apparent when, as is invariably the case, these systems must be combined (integrated) or made to work together (interoperate). Semantic data integration is often treated primarily as a high-level cognitive task, even if the associated computational artefacts are low level. Ontologies are computational representations of the cognitive level, and are the underlying basis for automating semantic data integration. Ontology alignment is the process of determining correspondences between semantically related entities (classes, relationships and instances) in the ontologies. Traditionally, ontology alignment approaches are "upfront", greedy, global, and task-agnostic: one can exploit all the information in both systems, and once the alignment is generated, all alignment activities cease (assuming the alignment is complete and correct). However, the costs of integration are incurred before it can be exploited, and it is not typically the case that an upfront integration can be truly task-agnostic: different tasks may require different correspondences due to subtle differences in the understanding of the domain. Finally, with upfront integration the effort is speculative: much of the work may never be used over the lifetime of the integrated system. In this talk I will argue the merits of "opportunistic", lazy, local, task-oriented knowledge integration, where mappings are collaboratively determined, through negotiation, on a per-task basis, and are thus appropriately fit for purpose. I will argue that negotiating ontology alignments can save overall effort by only attempting the alignment process where there is an actual demand. Finally, while total information may be helpful in some cases, it is very common that task-specific context provides more effective information for the alignment. This talk explores dialogical mechanisms supporting alignment negotiation under diverse assumptions (partial vs total knowledge disclosure, task dependence, etc.), illustrates challenges, and explores future directions.
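A minimal sketch of the "opportunistic" style, assuming a hypothetical label-similarity matcher: correspondences are computed and cached only when a task actually queries an entity, rather than aligning both ontologies up front:

```python
from difflib import SequenceMatcher

def label_similarity(a, b):
    """String similarity over normalised labels -- a stand-in for a
    real ontology matcher."""
    norm = lambda s: s.lower().replace("_", " ")
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

class LazyAligner:
    """Correspondences are computed (and cached) only on demand,
    so alignment effort is spent exactly where a task needs it."""

    def __init__(self, target_entities, threshold=0.8):
        self.target = target_entities
        self.threshold = threshold
        self.cache = {}

    def align(self, entity):
        if entity not in self.cache:
            best = max(self.target, key=lambda t: label_similarity(entity, t))
            good = label_similarity(entity, best) >= self.threshold
            self.cache[entity] = best if good else None
        return self.cache[entity]

aligner = LazyAligner(["Person", "Research_Paper", "Conference_Venue"])
match = aligner.align("ResearchPaper")
```

In the negotiated setting the talk describes, the `align` step would be replaced by a dialogue between the two parties over candidate correspondences; the lazy, per-task triggering is the point being illustrated here.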


Marie Curie Initial Training Network (2013-2017)