COURSE SUMMARIES
LECTURES
Limits and Metalimits of Visual Attention
Daniel Levin
A long tradition of research in visual attention has explored how focusing attention on one thing lessens awareness of other things. Although this seems like a commonsense description of visual experience, the pattern of awareness induced by attention often conflicts dramatically with intuition, necessitating empirical research that can apply general models of visual attention in specific situations. This issue is particularly acute in HCI where designers often rely on ad hoc intuitions about the relationship between available visual information and users’ awareness of action affordances in visually rich (and often cluttered) displays.
Part 1: VISUAL ATTENTION AND METACOGNITION
In the first session, I describe basic research in visual attention and visual metacognition that documents limits to awareness and people's failures to appreciate those limits. This section will focus on research documenting phenomena such as change blindness and inattentional blindness, on some of the traditions in visual attention research that preceded this work, and on research demonstrating failures of visual awareness in HCI. We will end by asking how this basic research provokes important questions about visual attention in an HCI context, and by considering what needs to be added to this research to make it useful in answering those questions.
Part 2: VISUAL ATTENTION IN NATURALISTIC CONTEXTS (HCI TO CINEMA)
In the second section we will discuss how research can meet the challenges posed in the first session. Some of this discussion will introduce more recent work that explores visual attention in naturalistic contexts ranging from HCI to cinema. This work makes clear the need to develop theories that describe how meaningful events dynamically structure attention not only spatially but also temporally. This work takes two main approaches. First, it describes how broadly applicable meaningful structures constrain visual attention. For example, much of this work explores how attention and awareness change at the boundaries between events. The other main approach is to explore how more specific forms of knowledge affect attention. The key to this work is that some forms of knowledge provide specific guidance for visual attention, but that this guidance applies to broad classes of situations. The key example of this form of attentional guidance derives from people’s understanding of intentional agents. I will introduce basic research on these topics, and hope to use these examples to guide creative brainstorming of new research ideas to explore the interrelationship between HCI and visual attention. One of my key goals is to encourage a two-way interaction between basic attention research and HCI such that work on attention can inform HCI and that work on HCI can inform theories of visual attention.
Related Publications
Durlach, P.J. (2004). Change blindness and implications for complex monitoring and control systems design and operator training. Human-Computer Interaction, 19, 423-451.
Hymel, A.M., Levin, D.T., & Baker, L.J. (2016). Default processing of event sequences. Journal of Experimental Psychology: Human Perception and Performance, 42, 235-246.
Haines, R. F. (1991). A breakdown in simultaneous information processing. In G. Obrecht & L. Stark (Eds.), Presbyopia research: From molecular biology to visual adaptation (pp. 171–175). New York: Plenum.
Levin, D.T., & Baker, L.J. (2015). Change blindness and inattentional blindness. In Fawcett, J., Risko, E.F., & Kingstone, A. (Eds.), The Handbook of Attention (pp. 199-232). Cambridge, MA: MIT Press.
Levin, D.T. (2012). Concepts about agency constrain beliefs about visual experience. Consciousness and Cognition, 21(2), 875-888.
Vallieres, B.R., Hodgetts, H.M., Vachon, F., & Tremblay, S. (2012). Supporting change detection in complex dynamic situations: Does the CHEX serve its purpose? Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting, 1708-1712.
Varakin, D.A., Levin, D.T., & Fidler, R. (2004). Unseen and unaware: Applications of recent research on failures of visual awareness for human-computer interface design. Human-Computer Interaction, 19, 389-421.
Artificial Cognitive Systems
David Vernon
This lecture provides an overview of the emerging fields of artificial cognitive systems and cognitive robotics. Drawing inspiration from artificial intelligence, developmental psychology, and cognitive neuroscience, the aim is to build systems that can act on their own to achieve goals: perceiving their environment, anticipating the need to act, anticipating the actions and intentions of people, interacting with them effectively, learning from experience, and adapting to changing circumstances.
Part 1: FOUNDATIONS OF COGNITIVE SYSTEMS
The term cognition is understood in many different ways, so we begin by walking through a definition of a cognitive system. This definition strikes a balance between being broad enough to do justice to the many views that people have of cognition and deep enough to help in the formulation of theories and models. We then survey the different paradigms of cognitive science to establish the full scope of the subject, taking in cognitivism and artificial intelligence, emergent systems, connectionism, dynamical systems, and enaction. We follow this with a brief discussion of two key issues: autonomy and embodiment. Like cognition, both terms can be interpreted in several ways depending on the paradigm we adopt. With these foundations in place, we proceed in Part 2 to look at how people design and build cognitive systems.
Part 2: COGNITIVE ARCHITECTURES
We begin this part by explaining what is meant by a cognitive architecture and what it entails from the perspective of the different paradigms of cognitive science. We consider the general features of a cognitive architecture, highlighting those associated with systems that are capable of development. We then look at a few example cognitive architectures before proceeding to consider some of the key components of a typical architecture. We address the various types of memory, focusing in particular on episodic memory and the role it plays in providing a capacity for prospection, one of the hallmarks of cognition, through internal simulation and mental imagery. We then look briefly at social cognition: how cognitive systems interact with people. To do this we introduce the issues of intentionality, theory of mind, instrumental helping, collaboration, joint action, shared intention, shared goals, and joint attention.
Finally, we finish by looking again at the importance of development and consider some of the many interesting challenges that we face in modeling, designing, and building cognitive systems.
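To make the idea of prospection through internal simulation slightly more tangible, here is a deliberately minimal Python caricature of a perceive-remember-prospect-act cycle. Every class and method name is invented for exposition; this is a sketch of the general idea, not any of the architectures covered in the lecture.

```python
# A deliberately minimal caricature of a perceive-remember-prospect-act
# cycle with episodic memory. All names here are invented for
# illustration; real cognitive architectures differ substantially.
import random

class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # (state, action, outcome-score) triples

    def store(self, state, action, outcome):
        self.episodes.append((state, action, outcome))

    def simulate(self, state, action):
        """Prospection: predict the outcome of an action by internal
        simulation, here crudely approximated by replaying the outcomes
        of similar past episodes."""
        similar = [o for s, a, o in self.episodes
                   if s == state and a == action]
        return sum(similar) / len(similar) if similar else None

class Agent:
    def __init__(self, actions):
        self.memory = EpisodicMemory()
        self.actions = actions

    def act(self, state):
        # Prefer the action whose simulated outcome scores best;
        # explore at random when experience is lacking.
        scored = [(self.memory.simulate(state, a), a) for a in self.actions]
        known = [(o, a) for o, a in scored if o is not None]
        return max(known)[1] if known else random.choice(self.actions)

# Toy usage: the agent learns that waving works better than idling.
agent = Agent(actions=["wave", "idle"])
agent.memory.store("greeting", "wave", outcome=1.0)
agent.memory.store("greeting", "idle", outcome=0.0)
print(agent.act("greeting"))  # -> 'wave'
```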
Related Publications
D. Vernon, Artificial Cognitive Systems - A Primer, MIT Press, 2014.
D. Vernon, "Cognitive System", Computer Vision: A Reference Guide, pp. 100-106, Springer, 2014.
D. Vernon, C. von Hofsten, and L. Fadiga. "A Roadmap for Cognitive Development in Humanoid Robots", Cognitive Systems Monographs (COSMOS), Vol. 11, Springer, 2010.
D. Vernon, G. Metta, and G. Sandini, "A Survey of Artificial Cognitive Systems: Implications for the Autonomous Development of Mental Capabilities in Computational Agents", IEEE Transactions on Evolutionary Computation, special issue on Autonomous Mental Development, Vol. 11, No. 2, pp. 151-180, 2007.
Spatial Cognition and Computation
Mehul Bhatt
This lecture will focus on the foundational significance of visuo-spatial cognition and computation for the design and implementation of computational cognitive systems, multimodal interaction, and assistive technologies where people-centred perceptual sensemaking and interaction with cognitively founded conceptualisations of space, events, actions, and change are crucial.
PART 1: VISUO-SPATIAL THINKING
We explore the nature of analytical and creative visuo-spatial thinking and forms of human-centred cognitive assistance applicable in a wide range of domains where perceptual sensemaking (e.g., abstraction, reasoning, learning) with dynamic visuo-spatial imagery is a central concern. The lecture presents use-cases from ongoing projects at the HCC Lab in Bremen concerned with the processing and interpretation of (potentially large volumes of) highly dynamic visuo-spatial imagery; the domains explored are: (a) architecture design cognition; (b) cognitive film studies; (c) geoinformatics; (d) cognitive vision and robotics.
PART 2: DEEP SEMANTICS FOR SPACE, DYNAMICS, AND COGNITION
Against the backdrop of the domains introduced in Part 1, we present the concept of deep (visuo-spatial) semantics, denoting "the existence of declarative models (e.g., for spatio-temporal knowledge) and systematic formalisation that can be used to perform reasoning and query answering, relational learning, embodied grounding and simulation, etc." The broader agenda of deep visuo-spatial semantics encompasses methods for declarative reasoning about space, events, action, and change within frameworks such as constraint logic programming, answer-set programming, and other specialised forms of commonsense reasoning based on expressive action description languages for modelling dynamic spatial systems. Deep semantics, founded on declarative representation, inference, and learning in KR, serves as a basis for externalising explicit and inferred knowledge, e.g., using modalities such as diagrammatic representations, natural language, and complex (dynamic) data visualisation.
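As a small taste of what declarative spatio-temporal query answering looks like, consider the following Python toy. The frameworks named above, such as CLP(QS) and ASPMT(QS), are logic-programming systems; this sketch, with its invented track data, only mimics their flavour at the Python level: qualitative Allen-style relations are computed between object-track intervals and then queried.

```python
# Illustrative only: a toy, Python-level stand-in for declarative
# spatio-temporal query answering. The actual frameworks discussed
# (CLP(QS), ASPMT(QS)) are logic-programming systems, not Python.

def allen_relation(a, b):
    """A few of Allen's 13 qualitative relations between intervals (s, e)."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:  return "before"
    if e1 == s2: return "meets"
    if s1 < s2 < e1 < e2: return "overlaps"
    if s2 < s1 and e1 < e2: return "during"
    if (s1, e1) == (s2, e2): return "equal"
    return "other"

# Toy "narrative": the frame interval in which each object was visible.
tracks = {"hand": (10, 60), "cup": (40, 120), "spoon": (130, 150)}

# Query: which object appearances temporally overlap the hand's appearance?
hand = tracks["hand"]
print([name for name, iv in tracks.items()
       if name != "hand" and allen_relation(hand, iv) == "overlaps"])
# -> ['cup']
```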
With an equal emphasis on applications of basic research, the lecture will also showcase methods and tools developed to perform perceptual narrativisation or sensemaking with multi-modal, dynamic human-behaviour data (e.g., visuo-spatial imagery such as video and eye-tracking) in the chosen application areas.
Related Publications
Bhatt, M., Loke, S. (2008). Modelling Dynamic Spatial Systems in the Situation Calculus. Journal of Spatial Cognition and Computation: Special Issue on Spatio-Temporal Reasoning (Eds. Guesgen, H., Renz, J.), 8(1), 86-130, Taylor & Francis.
Bhatt, M., Lee, J. H., Schultz, C. (2011). CLP(QS): A Declarative Spatial Reasoning Framework. Proceedings of the 10th International Conference on Spatial Information Theory (COSIT 2011), Belfast, Maine, USA.
Bhatt, M. (2012). Reasoning about Space, Actions and Change: A Paradigm for Applications of Spatial Reasoning. In: Hazarika, S. (Ed.), Qualitative Spatio-Temporal Representation and Reasoning: Trends and Future Directions. IGI Global (PA, USA).
Bhatt, M., Schultz, C., Huang, M. (2012). The Shape of Empty Space: Human-Centred Cognitive Foundations in Computing for Spatial Design. IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2012), Innsbruck, Austria.
Bhatt, M., Schultz, C., Freksa, C. (2013). The 'Space' in Spatial Assistance Systems: Conception, Formalisation, and Computation. In: Tenbrink, T., Wiener, J., Claramunt, C. (Eds.), Representing Space in Cognition: Interrelations of Behavior, Language, and Formal Models. Explorations in Language and Space. Oxford University Press.
Bhatt, M., Suchan, J., Schultz, C. (2013). Cognitive Interpretation of Everyday Activities: Toward Perceptual Narrative Based Visuo-Spatial Scene Interpretation. Computational Models of Narrative (CMN 2013), a satellite workshop of CogSci 2013, OASIcs (OpenAccess Series in Informatics), Dagstuhl, Germany.
Bhatt, M., Wallgruen, J. O. (2014). Geospatial Narratives and their Spatio-Temporal Dynamics: Commonsense Reasoning for High-level Analyses in Geographic Information Systems. ISPRS International Journal of Geo-Information, Special Issue on Geospatial Monitoring and Modelling of Environmental Change, 3(1), 166-205.
Wałęga, P., Bhatt, M., Schultz, C. (2015). ASPMT(QS): Non-Monotonic Spatial Reasoning with Answer Set Programming Modulo Theories. LPNMR 2015: 13th International Conference on Logic Programming and Nonmonotonic Reasoning, Lexington, KY, USA. http://lpnmr2015.mat.unical.it
Dubba, K., Cohn, A., Hogg, D., Bhatt, M., Dylla, F. (2015). Learning Relational Event Models from Video. Journal of Artificial Intelligence Research (JAIR), 53, 41-90. http://dx.doi.org/10.1613/jair.4395
Suchan, J., Bhatt, M. (2016). The Geometry of a Scene: On Deep Semantics for Visual Perception Driven Cognitive Film Studies. WACV 2016: IEEE Winter Conference on Applications of Computer Vision, Lake Placid, NY, USA, IEEE.
Suchan, J., Bhatt, M. (2016). Semantic Question-Answering with Video and Eye-Tracking Data: AI Foundations for Human Visual Perception Driven Cognitive Film Studies. IJCAI 2016: 25th International Joint Conference on Artificial Intelligence, New York City, USA. (to appear)
Image Segmentation by Learning and Integrating Local Relations with Spectral Graph Theory
Stella Yu
In this lecture, we will study an approach to image segmentation that works not by classifying the appearance of a patch with respect to training data and labels, but by learning pixel-centric pairwise local relations and integrating these relations in a spectral graph-theoretic framework. That is, we take the view that object segments emerge not from the training instances they resemble, but from the field of pairwise interactions on feature similarity, contrast, and ordering relations among visual elements in the entire image. We will start with the seminal normalized cuts approach to image segmentation, extend it to pairwise repulsion cues and regularization, give a concrete application to finding dots and textons in images, generalize to a new spectral embedding criterion called angular embedding, and conclude with its modern fast-solver version integrated with deep learning.
Part 1: IMAGE SEGMENTATION AND GRAPH PARTITIONING
We consider image segmentation as a graph partitioning problem, which aims at extracting the global impression of an image from pairwise local grouping cues, rather than as a sliding-window classification problem that focuses on local features and their consistencies in the image data. The normalized cut criterion measures both the total dissimilarity between the different groups and the total similarity within the groups, and it admits an efficient solution through generalized eigendecomposition. To model perceptual pop-out, we identify feature similarity and local contrast as two independent grouping forces, and we generalize normalized cuts to multi-way partitioning with these dual measures. We demonstrate its application to segmenting dots of a wide variety as well as detecting textons in natural scenes.
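To make this concrete: relaxing the two-way normalized cut leads to the generalized eigenproblem (D - W) y = λ D y for affinity matrix W and degree matrix D, whose second-smallest eigenvector yields the partition. Below is a minimal dense-matrix sketch in Python; the Gaussian affinity, the median threshold, and all names are illustrative simplifications, and practical implementations use sparse affinities and sweep thresholds to minimize the Ncut value.

```python
import numpy as np

def ncut_bipartition(W):
    """Two-way normalized cut on a symmetric, nonnegative affinity matrix W.

    Solves the relaxed problem (D - W) y = lambda * D y via the
    symmetrically normalized Laplacian, then thresholds the second
    eigenvector to split the graph into two groups.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d + 1e-12)
    # L_sym = I - D^{-1/2} W D^{-1/2}
    L_sym = np.eye(len(d)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)          # eigenvalues in ascending order
    y = d_inv_sqrt * vecs[:, 1]                  # map back: y = D^{-1/2} z
    return y > np.median(y)  # simple split; sweep thresholds in practice

def gaussian_affinity(features, sigma=1.0):
    """Toy affinity: Gaussian on feature distance. For pixels, features
    might stack intensity with (x, y) position; note the O(n^2) memory."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))
```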
Part 2: ROBUSTNESS AND APPLICATIONS
Given the size and confidence of pairwise local orderings, angular embedding (AE) finds a global ordering with a near-global optimal eigensolution. AE advances spectral clustering methods by covering the entire size-confidence measurement space and providing an ordered cluster organization. As a quadratic criterion in the complex domain, AE is remarkably robust to outliers, unlike its real domain counterpart, the least squares embedding. We show that AE's robustness is due not to the particular choice of the criterion, but to the choice of representation in the complex domain. We show its application to figure-ground organization with a modern fast solver and a deep learning component which learns to predict pairwise relations directly from images, without manual design of features or grouping cues.
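In schematic form (a simplification of the cited AE criterion, whose normalization differs in detail), angular embedding represents each element p as a point z(p) = e^{iθ(p)} on the unit circle and asks the embedding angles to respect the measured pairwise orderings Θ(p,q), weighted by their confidences C(p,q):

```latex
% Simplified angular embedding criterion: z(p) = e^{i\theta(p)} places
% element p on the unit circle; \Theta(p,q) is the measured relative
% ordering and C(p,q) its confidence.
\varepsilon \;=\; \sum_{p,q} C(p,q)\,\bigl|\, z(p) - z(q)\, e^{i\Theta(p,q)} \bigr|^{2},
\qquad |z(p)| = 1,\quad \Theta(q,p) = -\Theta(p,q).
```

Expanding the square under |z(p)| = 1 reduces the problem to maximizing z*Wz for the Hermitian matrix W(p,q) = C(p,q) e^{iΘ(p,q)}, so the relaxed solution is the leading eigenvector of W, and the global ordering is read off from the angles of its entries; this is where the robustness of the complex-domain representation enters.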
Related Publications
Shi, J., Malik, J. (2000). Normalized Cuts and Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), 888-905.
Yu, S. X., Shi, J. (2001). Understanding Popout through Repulsion. IEEE Conference on Computer Vision and Pattern Recognition.
Yu, S. X., Shi, J. (2003). Multiclass Spectral Clustering. International Conference on Computer Vision.
Bernardis, E., Yu, S. X. (2010). Finding Dots: Segmentation as Popping out Regions from Boundaries. IEEE Conference on Computer Vision and Pattern Recognition.
Maire, M., Yu, S. X., Perona, P. (2011). Object Detection and Segmentation from Joint Embedding of Parts and Pixels. International Conference on Computer Vision.
Yu, S. X. (2012). Angular Embedding: A Robust Quadratic Criterion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(1), 158-173.
Maire, M., Yu, S. X. (2013). Progressive Multigrid Eigensolvers for Multiscale Spectral Segmentation. International Conference on Computer Vision.
Maire, M., Narihira, T., Yu, S. X. (2016). Affinity CNN: Learning Pixel-Centric Pairwise Relations for Figure/Ground Embedding. IEEE Conference on Computer Vision and Pattern Recognition.
Automated Reasoning and Cognitive Computing
Ulrich Furbach
In this lecture we discuss the use of first-order automated reasoning in question answering and cognitive computing. We will depict the state of the art in automated reasoning and the special constraints on its use within cognitive computing systems. Furthermore, some attempts to model commonsense and human reasoning are presented.
Part 1: AUTOMATED REASONING AND QUESTION ANSWERING
In this part we will discuss the state of the art in first-order theorem proving and give a very coarse description of the calculus used in the Hyper reasoning system. Based on this, we will discuss the use of Hyper within the deep question answering system LogAnswer. We will demonstrate that various AI techniques have to be combined to tackle natural language question answering, including the treatment of query relaxation, web services, large knowledge bases, and cooperative answering.
Part 2: COMMONSENSE REASONING BENCHMARKS AND HUMAN REASONING
In recent years, various sets of benchmark problems for commonsense reasoning have been proposed. There are the Winograd Schema Challenge and the Choice Of Plausible Alternatives (COPA) Challenge, both based on natural language processing; a typical Winograd schema asks, for example, what "it" refers to in "The trophy doesn't fit in the suitcase because it is too big (or: too small)". Another benchmark set, the TriangleCOPA problems, is already given in first-order logic. All of these benchmark problems have in common that they can only be tackled with the help of background knowledge. We will discuss the use of general knowledge bases like Cyc and WordNet within a reasoning system to tackle these benchmarks. In the last part, a bridge to human reasoning as investigated in cognitive psychology is constructed using standard deontic logic.
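For reference, standard deontic logic is the normal modal logic KD: obligation O distributes over implication, permission P is the dual of O, and the axiom D excludes conflicting obligations:

```latex
\begin{aligned}
\text{(K)}\;\;  & O(\varphi \rightarrow \psi) \rightarrow (O\varphi \rightarrow O\psi) \\
\text{(D)}\;\;  & O\varphi \rightarrow P\varphi,
                  \qquad P\varphi \;\equiv\; \neg O \neg \varphi \\
\text{(Nec)}\;\;& \text{if } \vdash \varphi \text{ then } \vdash O\varphi
\end{aligned}
```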
Related Publications
Furbach, Ulrich, Björn Pelzer, and Claudia Schon. "Automated Reasoning in the Wild." Automated Deduction-CADE-25. Springer International Publishing, 2015. 55-72.
Furbach, Ulrich, and Claudia Schon. "Deontic logic for human reasoning." Advances in Knowledge Representation, Logic Programming, and Abstract Argumentation. Springer International Publishing, 2015. 63-80.
Furbach, Ulrich, Andrew S. Gordon, and Claudia Schon. "Tackling Benchmark Problems of Commonsense Reasoning." Bridging the Gap between Human and Automated Reasoning (2015): 47.
Maslan, Nicole, Melissa Roemmele, and Andrew S. Gordon. "One hundred challenge problems for logical formalizations of commonsense psychology." Twelfth International Symposium on Logical Formalizations of Commonsense Reasoning, Stanford, CA. 2015.
Levesque, Hector J., Ernest Davis, and Leora Morgenstern. "The Winograd schema challenge." AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning. 2011.
Furbach, Ulrich, Ingo Glöckner, and Björn Pelzer. "An application of automated reasoning in natural language question answering." AI Communications 23.2-3 (2010): 241-265.
KEYNOTES
Socially Sensitive Technologies for Human-Centered Computing
Elisabeth Andre
Recent years have seen a paradigm shift from purely task-based human-machine interfaces towards socially sensitive interaction. In addition to what users explicitly say or gesture, socially sensitive interfaces are able to sense more subtle human cues, such as head postures and movements, to infer psychological user states, such as attention and affect, and to enrich system responses with social signals. However, most approaches focus on offline analysis of previously recorded data, limiting the investigation to prototypical behaviors in laboratory-like settings. In my presentation, I will focus on challenges that arise when integrating social signal processing techniques into interactive systems designed for real-world applications. From a technical perspective, this requires effective tools to synchronize, process, and analyze relevant signals online. From a user perspective, appropriate strategies need to be defined to respond to social signals at the right moment without disturbing the flow of interaction. The talk will be illustrated with applications enabled by socially sensitive technologies, such as robotic companions for the elderly, computer-enhanced social and emotional learning, and socially augmented interfaces for people with disabilities.
From Physical to Cognitive Interaction
Helge Ritter
A grand challenge at the heart of human-centered computing is the realization of cognitive interaction between a human and a technical agent. In contrast to now familiar concepts of interaction between physical constituents of matter -- which were central to the successes of physics in the previous centuries -- cognitive interaction involves constituents that are much more complex by being active agents, endowed with perception, a rich embodiment, and capabilities such as memory and learning. Compared to physics, an analogous, deep analysis of the resulting, extremely rich spectrum of cognitive interaction patterns and their replication in technical artefacts is still a rather young scientific endeavor that connects human-centered computing and robotics with disciplines such as cognitive psychology, the brain sciences, social science and linguistics. The talk will point out pertinent research questions and challenges in this emerging field and describe current approaches with examples from work carried out at the Bielefeld CoE "Cognitive Interaction Technology".
Lifted Machine Learning
Kristian Kersting
Our minds make inferences that appear to go far beyond machine learning. Whereas people can learn richer representations and use them for a wider range of functions, machine learning has mainly been employed in a stand-alone context, constructing a single function from a table of training examples. In this talk, I shall touch upon computational models that capture these aspects of human learning by combining relational logic and statistical learning.
However, as we tackle larger and larger relational learning problems, the cost of inference comes to dominate learning time and makes performance very slow. Hence, we need to find ways to reduce the cost of inference both at learning and at run time. One promising direction to speed up inference is to exploit symmetries in the computational models. I shall illustrate this for probabilistic inference, linear programs, and convex quadratic programs.
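To give a flavour of symmetry exploitation (a textbook-style toy, not the specific lifted algorithms of the talk): if the unnormalized weight of a world depends only on how many of n exchangeable ground atoms are true, inference can sum over the n+1 counts rather than the 2^n truth assignments.

```python
from math import comb

# Toy lifted inference: n exchangeable Boolean atoms, each true with prior
# probability p, coupled by a potential phi(k) that depends only on the
# number k of true atoms. Exploiting this symmetry, the partition function
# and marginals need O(n) terms instead of O(2^n).

def partition(n, p, phi):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * phi(k)
               for k in range(n + 1))

def marginal_true(n, p, phi):
    """P(atom_1 = True): fix one atom true, count over the remaining n-1."""
    num = sum(comb(n - 1, k) * p**(k + 1) * (1 - p)**(n - 1 - k) * phi(k + 1)
              for k in range(n))
    return num / partition(n, p, phi)

# Example: a "peer pressure" potential rewarding unanimous groups.
phi = lambda k, n=20: 2.0 if k in (0, n) else 1.0
print(marginal_true(20, 0.5, phi))  # exact, yet touches only 21 counts
```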
This talk is based on joint work with Martin Mladenov, Amir Globerson, Martin Grohe, Sriraam Natarajan, Aziz Erkal Selman, and many others.
Cloud-Based Autonomous Intelligent Robots
Michael Beetz
Recently, we have witnessed the first robotic agents performing everyday manipulation activities such as loading a dishwasher and setting a table. While these agents successfully accomplish specific instances of such tasks, they only perform them within the narrow range of conditions for which they have been carefully designed. They are still far from the human ability to autonomously perform a wide range of everyday tasks reliably in a wide range of contexts; in other words, they are far from mastering everyday activities. Making the transition from performing everyday activities to mastering them requires us to equip robots with comprehensive knowledge bases and reasoning mechanisms. Robots that master everyday activities have to carry out natural language instructions such as "flip the pancake" or "push the spatula under the pancake". To perform such tasks adequately, robots must, for instance, be able to infer the appropriate tool to use, how to grasp it, and how to operate it. In particular, they must not push the whole spatula under the pancake; that is, they must not interpret instructions literally but rather recover the intended meaning.
In this talk, I will present some of our ongoing research into how such knowledge can be collected and provided using a cloud-based knowledge service. We propose openEASE, a remote knowledge representation and processing service that provides its users with unprecedented access to the knowledge of leading-edge autonomous robotic agents. It also provides the representational infrastructure to make inhomogeneous experience data from robot and human manipulation episodes semantically accessible, as well as a suite of software tools that enable researchers and robots to interpret, analyze, visualize, and learn from the experience data. Using openEASE, users can retrieve the memorized experiences of manipulation episodes and ask queries regarding what the robot saw, reasoned, and did, as well as how the robot did it, why, and what effects it caused.
TUTORIALS
COMPUTER VISION IN REASONING AND INTERACTION
Hannah Dee
This tutorial will look at computer vision techniques and toolkits for interaction: automatically extracting meaningful information from images and, more usefully, video. Three broad topics will be covered:
- Detection: Edges, lines, and other structures. What makes a good feature to find? What kind of real-world objects can we detect? (Practical illustrations: Canny, KLT, Viola-Jones).
- Tracking: You've found it; now how can you find it again? Motion and cues for motion. Tensions and synergies between tracking, learning, and detection. Why not just track every frame? (Practical illustrations: MOSSE tracker, CamShift tracker)
- Conceptual knowledge: Deriving higher-order concepts from visual information: left-of, right-of, trajectories, bounding boxes, segmentations and connectivity relations.
The tutorial will provide a practical illustration of combining Viola-Jones with MOSSE to work out simple hand/face relations from a webcam; a rough sketch of this pipeline appears below.
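The following sketch assumes opencv-contrib-python (for cv2.legacy.TrackerMOSSE_create) and a webcam at index 0; all other details are illustrative choices, not the tutorial's actual code. It detects a face with a Viola-Jones cascade, hands the box to a MOSSE tracker, and shows how a qualitative relation such as left-of could be read off tracked boxes.

```python
import cv2

# Viola-Jones face detector; the cascade file ships with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def left_of(box_a, box_b):
    # Conceptual-knowledge layer: a qualitative relation from box centres.
    return box_a[0] + box_a[2] / 2 < box_b[0] + box_b[2] / 2

cap = cv2.VideoCapture(0)
tracker = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if tracker is None:
        # Detection: the Viola-Jones cascade proposes face regions.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces):
            x, y, w, h = faces[0]
            tracker = cv2.legacy.TrackerMOSSE_create()
            tracker.init(frame, (int(x), int(y), int(w), int(h)))
    else:
        # Tracking: the MOSSE correlation filter follows the region,
        # far cheaper than re-running detection on every frame.
        ok, box = tracker.update(frame)
        if not ok:
            tracker = None  # target lost: fall back to detection
            continue
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detect + track", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```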
Related Publications
Hannah M. Dee, Anthony G. Cohn, David C. Hogg: Building semantic scene models from unconstrained video. Computer Vision and Image Understanding 116(3): 446-456 (2012)
Juan Cao, Frédéric Labrosse, Hannah M. Dee: An Evaluation of Image-Based Robot Orientation Estimation. TAROS 2013: 135-147
Lu Lou, Suzana Barreto, Rokas Zmuidzinavicius, Mark James Neal, Hannah M. Dee, Frédéric Labrosse: Vision-Aided IMU Estimation of Attitude and Orientation for a Driverless Car. TAROS 2012: 465-466
Statistical Relational AI: Logic, Probability, Computation
Kristian Kersting
This course will provide a gentle introduction into the foundations of statistical relational artificial intelligence, and will realize this by introducing the foundations of logic, of probability, of learning, and their respective combinations.
Both predicate logic and probability theory extend propositional logic, one by adding relations, individuals, and quantified variables, the other by allowing for measures over possible worlds and conditional queries. While logical and probabilistic approaches have often been studied and used independently within artificial intelligence, they are not in conflict with each other but synergistic. This explains why there has been a considerable body of research in combining first-order logic and probability over the last 25 years, evolving into what has come to be called Statistical Relational Artificial Intelligence (StarAI; see the related publications):
"the study and design of intelligent agents that act in worlds composed of individuals (objects, things), where there can be complex relations among the individuals, where the agents can be uncertain about what properties individuals have, what relations are true, what individuals exist, whether different terms denote the same individual, and the dynamics of the world.”
Relational probabilistic models — we use this term in the broad sense, meaning any models that combine relations and probabilities — form the basis of StarAI, and can be seen as combinations of probability and predicate calculus that allow for individuals and relations as well as probabilities. In building on top of relational models, StarAI goes far beyond reasoning, optimization, learning and acting optimally in terms of a fixed number of features or variables, as it is typically studied in machine learning, constraint satisfaction, probabilistic reasoning, and other areas of AI.
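A canonical instance of such a model is the Markov logic network, which attaches a weight w_i to each first-order formula F_i and defines a distribution over possible worlds x via the formulas' true groundings:

```latex
% Markov logic network: n_i(x) counts the true groundings of formula F_i
% in world x; Z normalizes over all possible worlds.
P(X = x) \;=\; \frac{1}{Z} \exp\Bigl( \sum_i w_i \, n_i(x) \Bigr),
\qquad
Z \;=\; \sum_{x'} \exp\Bigl( \sum_i w_i \, n_i(x') \Bigr).
```

Hard logical constraints are recovered as the weights tend to infinity, while finite weights turn the same relational rules into soft, probabilistic constraints.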
Since StarAI draws upon ideas developed within many different fields, however, it can also be quite challenging for newcomers to get started. What seems to be missing is a gentle introduction that helps newcomers to the field understand the state of the art and the current challenges. The present course (and the accompanying book) aims to fill this gap. It reviews the foundations of StarAI, motivates the issues, justifies some of the choices that have been made, and lists some open problems. Laying bare the foundations will hopefully inspire others to join us in exploring the frontiers and the yet unexplored areas of StarAI.
Related Publications
Luc De Raedt, Kristian Kersting, Sriraam Natarajan, David Poole. Statistical Relational Artificial Intelligence: Logic, Probability, and Computation. Morgan & Claypool Publishers, Synthesis Lectures on Artificial Intelligence and Machine Learning, ISBN: 9781627058414, 2016.