Machine Learning as a tool for positive impact : case studies from climate change

Abstract Brought to you with support from World Wide Neuro – Seminar listing. Climate change is one of our generation’s greatest challenges, with increasingly severe consequences for global ecosystems and populations. Machine learning has the potential to address many important challenges in climate change, on both the mitigation side (reducing its extent) and the adaptation side (preparing for unavoidable consequences). To illustrate the breadth of these opportunities, I will describe some of the projects I am involved in, spanning generative models, computer vision, and natural language processing.

An inference perspective on meta-learning

Abstract While meta-learning algorithms are often viewed as algorithms that learn to learn, an alternative viewpoint frames meta-learning as inferring a hidden task variable from experience consisting of observations and rewards. From this perspective, learning to learn is learning to infer. This viewpoint can be useful in solving problems in meta-RL, which I’ll demonstrate through two examples: (1) enabling off-policy meta-learning, and (2) performing efficient meta-RL from image observations.
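To make the "learning to learn is learning to infer" framing concrete, here is a minimal, hypothetical sketch (my own toy, not from the talk): in a two-armed Bernoulli bandit, the arm reward probabilities play the role of the hidden task variable, and adapting to a new task reduces to Beta-Bernoulli posterior inference combined with Thompson sampling, with no gradient-based learning at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a two-armed Bernoulli bandit whose arm
# probabilities are the hidden "task variable". Adapting to a new task
# is then posterior inference (Beta-Bernoulli updates), not training.
true_p = np.array([0.2, 0.8])          # hidden task variable
alpha = np.ones(2)                      # Beta(1, 1) prior per arm
beta = np.ones(2)

for _ in range(500):
    theta = rng.beta(alpha, beta)       # Thompson sampling: sample a task
    a = int(np.argmax(theta))           # act greedily under the sample
    r = rng.random() < true_p[a]        # observe a Bernoulli reward
    alpha[a] += r                       # infer: update the posterior
    beta[a] += 1 - r

post_mean = alpha / (alpha + beta)
print(post_mean.round(2))
```

After a few hundred interactions the posterior over the better arm concentrates near its true reward probability, which is the sense in which adaptation here is inference over the task variable.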

Multi-resolution Multi-task Gaussian Processes: London air pollution

Abstract Poor air quality in cities is a significant threat to health and life expectancy, with over 80% of people living in urban areas exposed to air quality levels that exceed World Health Organisation limits. In this session, I present a multi-resolution multi-task framework that handles evidence integration under varying spatio-temporal sampling resolution and noise levels.
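As a hypothetical illustration of the multi-resolution setting (the numbers and variable names are my own, not from the talk): one sensor observes a latent pollution field at fine temporal resolution, while a coarser instrument reports block averages of the same latent field. This is the kind of mismatched observation model that a multi-resolution framework must integrate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent pollution field on a fine grid (hypothetical daily cycle).
t_fine = np.arange(96) * 0.25               # 15-minute resolution, 24 h
f = 40 + 10 * np.sin(2 * np.pi * t_fine / 24)

# Sensor A: noisy point observations at the fine resolution.
y_fine = f + rng.normal(0, 2.0, size=f.shape)

# Sensor B: a coarse instrument reporting 6-hour block averages; its
# observation model is the mean of the latent field over each block.
blocks = f.reshape(4, 24)                   # 4 blocks of 6 hours each
y_coarse = blocks.mean(axis=1) + rng.normal(0, 0.5, size=4)

print(y_coarse.round(1))
```

The point of the sketch is the likelihood structure: the coarse observations constrain the latent field only through averages, so fusing the two sensors requires handling both resolutions and both noise levels jointly.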

Learning Theory for Continual and Meta-Learning

Abstract Biography From Prof Christoph Lampert’s lab webpage: Christoph Lampert received the PhD degree in mathematics from the University of Bonn in 2003. In 2010 he joined the Institute of Science and Technology Austria (IST Austria), first as an Assistant Professor and, since 2015, as a Professor. There, he leads the research group for Machine Learning and Computer Vision, and since 2019 he has also been the head of IST Austria’s ELLIS unit.

Neural circuit redundancy, stability, and variability in developmental brain disorders

Abstract Brought to you with support from World Wide Neuro – Sheffield ML Seminars. Despite the consistency of symptoms at the cognitive level, we now know that brain disorders like autism and schizophrenia can each arise from mutations in more than 100 different genes. Presumably there is a convergence of “symptoms” at the level of neural circuits in diagnosed individuals. In this talk I will argue that redundancy in neural circuit parameters implies that we should take a circuit-function rather than circuit-component approach to understanding these disorders.

The geometry of abstraction in artificial and biological neural networks

Abstract The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. We characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training.
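The operational definition can be sketched with synthetic data (a hypothetical toy of my own, not the monkey recordings): if a binary variable is encoded along a coding direction shared across contexts, a linear decoder trained in one context generalizes to a context it never saw, and that cross-condition accuracy is the measure of abstraction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 200

# Hypothetical population: a binary task variable is encoded along a
# fixed coding direction shared across two contexts (an abstract
# format), plus a context-specific offset and trial-by-trial noise.
coding_dir = rng.normal(size=n_neurons)
context_dir = rng.normal(size=n_neurons)

def simulate(context, value):
    base = value * coding_dir + context * context_dir
    return base + 0.5 * rng.normal(size=(n_trials, n_neurons))

# Train in context 0; test in context 1, which the decoder never saw.
Xtr = np.vstack([simulate(0, -1), simulate(0, +1)])
ytr = np.r_[-np.ones(n_trials), np.ones(n_trials)]
Xte = np.vstack([simulate(1, -1), simulate(1, +1)])
yte = np.r_[-np.ones(n_trials), np.ones(n_trials)]

# Nearest-class-mean linear decoder: w is the difference of class means.
w = Xtr[ytr > 0].mean(0) - Xtr[ytr < 0].mean(0)
b = -w @ (Xtr[ytr > 0].mean(0) + Xtr[ytr < 0].mean(0)) / 2
ccgp = np.mean(np.sign(Xte @ w + b) == yte)
print(f"cross-condition generalization accuracy: {ccgp:.2f}")
```

If the coding direction were instead different in every context, the same decoder would fall to chance on the held-out context, so this accuracy separates abstract from context-specific codes.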

Multilevel Causal Modeling

Abstract Complex systems can be modeled at various levels of granularity, e.g., we can model a person at the cognitive level, at the neuronal level, or down to the biochemical level. When multiple models represent the same system at different scales, we would like to be able to reason about the causal effects of interventions on each level in such a way that the models remain consistent across levels.
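A minimal sketch of the consistency requirement, using a hypothetical linear model of my own (not from the talk): a micro-level model with two causes and a macro-level model of their aggregate should assign the same effect to any pair of interventions that agree on the aggregate.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.5                                # shared causal coefficient

# Micro level: two variables X1, X2 jointly cause Y = a*(X1 + X2) + noise.
# Macro level: the aggregate X = X1 + X2 causes Y = a*X + noise.
def micro_do(x1, x2, n=100_000):
    # Average outcome under the micro intervention do(X1=x1, X2=x2).
    return (a * (x1 + x2) + rng.normal(0, 1, n)).mean()

def macro_do(x, n=100_000):
    # Average outcome under the macro intervention do(X = x).
    return (a * x + rng.normal(0, 1, n)).mean()

# The macro intervention do(X = 4) is consistent with any micro
# intervention do(X1 = x1, X2 = x2) satisfying x1 + x2 = 4.
print(micro_do(1, 3), macro_do(4))     # both ≈ 2.0
```

The interesting cases, of course, are those where no such aggregate exists exactly, and one must ask when and how an approximate macro-level causal model can remain consistent with the micro level.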

Causality for Neuroscience

Abstract Biography From Prof Konrad Kording’s website: Konrad is interested in the question of how the brain solves the credit assignment problem and, similarly, how we should assign credit in the real world (through causality). As an extension of this main thrust, he is interested in applications of causality in biomedical research. Konrad trained as a student at ETH Zurich with Peter König, as a postdoc at UCL London with Daniel Wolpert, and at MIT with Josh Tenenbaum.

Challenges on Mining and Learning from Music-related Data

Abstract In this seminar, we will address music data science, an application domain that has grown significantly in recent years. With the increasing maturity and availability of streaming platforms and technologies for music dissemination, computational tasks are increasingly needed in this domain. Examples of these tasks include genre and emotion classification, plagiarism identification, recommendation systems, visualization, and automatic playlist generation. From a data mining and machine learning perspective, this is a particularly challenging domain, as the related data is complex and heterogeneous: audio recordings, song lyrics, consumer comments and likes, album covers, artists’ pictures, and experts’ reviews, among others.

Continual Gaussian Processes

Abstract Gaussian processes (GPs) are powerful tools for non-linear regression and classification, with applications to a wide range of scenarios, many of them temporal. The main focus in the literature has been on reducing their computational cost, typically O(N^3) for training and O(N^2) for prediction. To sidestep this prohibitive complexity, sparse approximations based on inducing inputs emerged as the fundamental solution, replacing exact inference with variational methods.
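The inducing-input idea can be illustrated with a small numpy sketch (a toy of my own, not the talk's algorithm): M inducing inputs yield a rank-M Nyström approximation K_nn ≈ K_nm K_mm^{-1} K_mn, which is what brings the dominant training cost down from O(N^3) to O(N M^2) with M ≪ N.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
N, M = 200, 15                       # data points, inducing inputs
X = rng.uniform(-3, 3, size=(N, 1))
Z = np.linspace(-3, 3, M)[:, None]   # inducing inputs on a grid

Knn = rbf(X, X)                      # exact N x N kernel matrix
Knm = rbf(X, Z)
Kmm = rbf(Z, Z) + 1e-8 * np.eye(M)   # jitter for numerical stability

# Nystrom approximation: Knn ~ Knm Kmm^{-1} Kmn, built in O(N M^2).
Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)

err = np.linalg.norm(Knn - Qnn) / np.linalg.norm(Knn)
print(f"relative approximation error: {err:.4f}")
```

For a smooth kernel, a handful of inducing inputs already reproduces the full kernel matrix closely; the variational formulations mentioned above go further by treating the inducing points and the approximate posterior as quantities to be optimized.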