Peter Simons

Representing Representing: the Ontology of Aboutness
Peter Simons
Trinity College Dublin and University of Salzburg

Perceptions, thoughts, pictures and expressions are all typically about something. In mental cases the relation is called intentionality, in pictures depiction, in expressions denotation. The nature of this aboutness has long been a topic of philosophical puzzlement and controversy. Whether it requires the existence of a kind of thing, quality or relation not found in inanimate nature, whether it is analysable, and whether it comes in one or many forms are all matters of dispute. The ontology of aboutness has to be at least plausibly conjectured if its features are to be represented within information systems sophisticated and capable enough themselves to represent representation. It falls therefore to the ontologist to investigate the entities and factors required and suitable to capture the form and matter of representation. This is no straightforward task, and there are many pitfalls. But it is a task that must be taken up if ontologies and the information systems that employ them are to advance to a stage where conjecture, diversity of opinion, (mis)information, uncertainty, falsehood, error, revision, contradiction and correction can be smoothly represented, reconciled, and linked, as they must be, to action and decision, whether natural or artificial. Looking for help past and present, this paper sets about addressing that difficult task.

Peter Simons is Emeritus Professor of Philosophy at Trinity College Dublin, having previously taught in Bolton, Salzburg and Leeds. He specialises in metaphysics and ontology, pure and applied, with a sideline in the history of philosophy and logic in Central Europe. His interest in applications led to collaboration with software designers and engineers, among others. The author of the definitive treatise _Parts_, four other books, and some 300 papers, he is a member of the British, European, Irish and Polish Academies.

Arianna Betti (University of Amsterdam, Netherlands)

Arianna Betti is Professor and Chair of Philosophy of Language at the University of Amsterdam, Institute of Logic, Language and Computation. After studying historical and systematic aspects of ideas such as _axiom_, _truth_ and _fact_ (_Against Facts_, MIT Press, 2015), she now endeavours to trace the development of ideas such as these with computational techniques in a strongly interdisciplinary setting. She did research at the universities of Krakow, Salzburg, Graz, Leiden, Warsaw, Melbourne, Lund and Gothenburg and held research grants from, among others, the European Research Council, the Italian CNR, the Dutch NWO, and CLARIN-NL. She has been a member of the Young Academy of the Dutch Royal Academy KNAW, of the Global Young Academy, and of other international organisations dealing with research policy and topics such as science and society, open access and sustainability of research.


Alessandro Oltramari

Ontologies for artificial minds, by Alessandro Oltramari, Bosch Research and Technology Center

Ontologists build formal models to understand the structure of reality. The fun starts – and I had a lot of it back in the PhD days! – when Formal Ontology is applied to understand the structure of what we indisputably use to understand reality itself: the mind. Philosophers have spent lifetimes hovering over this conundrum; I stopped more than a decade ago. But, fast-forward to the present, I’ve been busy with a not-so-distant, yet more mundane, problem: building ontologies for AI.
My work focuses on engineering ontologies that can be integrated with the “substrata” of artificial minds, i.e. deep and shallow neural networks, and with the processes these bring about, all of which can pretty much be boiled down to pattern recognition.
In this talk I will describe how ontologies can be effectively used in data-driven AI frameworks: I will argue that, in order to progress towards Explainable AI, it is necessary to design hybrid systems that integrate human-accessible machine representations with neural machines.
Rather than concocting a philosophical theory, I will build my argument by illustrating core results from some of the projects I’ve been involved in, at Carnegie Mellon first and, more recently, at Bosch.

Alessandro Oltramari is a Research Scientist and Project Lead at the Bosch Research and Technology Center in Pittsburgh (USA), working on hybrid AI systems in the context of Internet of Things.
Prior to this position, he was a Research Associate at Carnegie Mellon University (2010-2016), where he specialized in the integration of knowledge-based systems and cognitive architectures. His work at CMU – funded by DARPA, NSF and ARL, among others – spanned from machine vision to robot navigation, occasionally arousing interest in the press (CNET, Forbes).

Alessandro received his Ph.D. in Cognitive Science from the University of Trento (Italy), in co-tutorship with the Institute for Cognitive Science and Technology of the Italian National Research Council (ISTC-CNR). His interest in ontologies stems from a decade-long collaboration (2000-2010) with the Laboratory for Applied Ontology (LOA), headed by Nicola Guarino. Alessandro was a Visiting Research Associate at Princeton University in 2005 and 2006, where he worked with Christiane Fellbaum and George A. Miller on restructuring the computational lexicon WordNet using formal ontological analysis. The author of about 70 scientific articles and 10 book chapters, and editor of 3 books, he is a member of AAAI and IAOA and regularly serves on the program committees of international conferences such as ISWC, ESWC, LREC and ACL.

Alessandro has been living in Pittsburgh since 2010, with his wife Laura and his rescued dog Lady.