Seminar

Are Supervised Learning Algorithms the Key to a Paradigm Shift in the Way We Measure Air Pollution?

Abstract Low-cost chemical sensors may prove to be a disruptive technology for air pollution measurements. The potential of these technologies is huge, enabling measurements on previously unachievable spatial scales and providing affordable tools to help tackle one of the largest environmental health risks in the developing world. Recent academic scrutiny has highlighted several issues with the relatively simple analytical methods used in these sensors compared with traditional monitoring equipment, and methods to overcome these challenges need to be developed before the sensors can reach their full potential.

Scalable Unsupervised Phenotyping using Tensor Factorization

Abstract Originally intended to streamline the documentation of care, Electronic Health Records (EHRs) provide a massive amount of diverse and readily available data that can be used to tackle important healthcare problems. One of these is clinical phenotyping: identifying patient subgroups that share clinically meaningful characteristics. However, using EHR data to tackle this problem computationally raises significant challenges relating to algorithmic scalability, model interpretability and the longitudinal nature of patient data.
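To make the core technique concrete, below is a minimal sketch of a CP (PARAFAC) decomposition fitted by alternating least squares on a toy patient-by-diagnosis-by-medication count tensor. It is purely illustrative: the tensor shape, rank and update scheme are assumptions, and practical phenotyping methods add constraints such as non-negativity and sparsity to keep the factors interpretable.

```python
# Minimal CP (PARAFAC) decomposition via alternating least squares.
# Illustrative only: real phenotyping systems add constraints and
# scale far beyond this sketch.
import numpy as np

def cp_als(X, rank, n_iters=50, seed=0):
    """Factorize a 3-way tensor X ~ sum_r a_r (outer) b_r (outer) c_r."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iters):
        # Each factor update is a linear least-squares problem in closed form.
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Toy "patients x diagnoses x medications" count tensor.
X = np.random.default_rng(1).poisson(1.0, size=(30, 12, 8)).astype(float)
A, B, C = cp_als(X, rank=4)
# Each rank-one component is a candidate phenotype: a patient weighting (A),
# a diagnosis profile (B) and a medication profile (C).
```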

Advances in GANs based on the MMD

Abstract Generative adversarial networks have led to huge improvements in sample quality for image generation. But their success is hindered by both practical and theoretical problems, which have led to the proposal of a large number of alternative methods over the last few years. We study one of these alternatives, the MMD GAN, which uses an architecture similar to that of the original GAN but performs some of its optimization in closed form, in a Hilbert space.
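For context, the maximum mean discrepancy (MMD) is a distance between distributions computed from kernel evaluations alone, which is what allows part of the optimization to be done in closed form. Below is a minimal sketch of the standard unbiased MMD² estimate with a fixed Gaussian kernel; an MMD GAN additionally learns the kernel's feature map adversarially, which this sketch omits.

```python
# Unbiased estimate of squared MMD with a Gaussian RBF kernel.
# A sketch of the quantity an MMD GAN optimizes, with a fixed kernel.
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """X: (m, d) samples from P; Y: (n, d) samples from Q."""
    m, n = len(X), len(Y)
    Kxx = rbf_kernel(X, X, sigma)
    Kyy = rbf_kernel(Y, Y, sigma)
    Kxy = rbf_kernel(X, Y, sigma)
    # Drop diagonal terms for the unbiased U-statistic.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 2))
fake = rng.normal(0.5, 1.0, size=(500, 2))   # a shifted "generator"
print(mmd2_unbiased(real, fake))  # near zero iff the distributions match
```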

Bayesian Quadrature for Multiple Related Integrals

Abstract Bayesian probabilistic numerical methods are a set of tools that provide posterior distributions over the outputs of numerical procedures. Their use is usually motivated by the fact that they can represent our uncertainty due to the incomplete/finite information available about the continuous mathematical problem being approximated. In this talk, we demonstrate that this paradigm can provide additional advantages, such as the possibility of transferring information between several numerical methods.
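To illustrate what a posterior over the output of a numerical method looks like, here is the standard Bayesian quadrature construction for a single integral (standard background, not the talk's multi-integral extension): a Gaussian process prior on the integrand induces a Gaussian posterior on the value of the integral.

```latex
% Standard Bayesian quadrature: given f \sim \mathcal{GP}(0, k) and
% evaluations f(x_1), \dots, f(x_n), the integral is Gaussian:
\[
  \Pi[f] = \int f(x)\,\pi(x)\,\mathrm{d}x
  \;\Big|\; f(x_{1:n}) \sim \mathcal{N}\!\left(
      z^{\top} K^{-1} f(x_{1:n}),\;
      \Pi\Pi[k] - z^{\top} K^{-1} z
  \right),
\]
% where K_{ij} = k(x_i, x_j), z_i = \int k(x, x_i)\,\pi(x)\,\mathrm{d}x
% (the kernel mean), and \Pi\Pi[k] = \iint k(x, x')\,\pi(x)\,\pi(x')\,
% \mathrm{d}x\,\mathrm{d}x'. Several related integrals can share
% information through a joint kernel over the integrands.
```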

Less is more? Controlling swarms and Turing Learning

Abstract In this talk, we look at swarms of robots from two different perspectives. First, we consider the problem of designing behavioral rules of extreme simplicity. We show, among other results, how “computation-free” robots with only one bit or trit of sensory information can accomplish tasks such as self-organized aggregation [1] or collective choice [2]. Second, we consider the problem of inferring the behavioral rules or morphology of the individuals. For this we use Turing Learning [3], a generalization of Generative Adversarial Networks [4].

On Some Geometrical Aspects of Bayesian Inference

Abstract In this talk I will provide a geometric interpretation of Bayesian inference that allows me to introduce a natural measure of the level of agreement between priors, likelihoods, and posteriors. The starting point for the construction of this geometry is the observation that the marginal likelihood can be regarded as an inner product between the prior and the likelihood. A key concept in the geometry is compatibility, a measure based on the same construction principles as the Pearson correlation, which can be used to assess how much the prior agrees with the likelihood, to gauge the sensitivity of the posterior to the prior, and to quantify the coherency of the opinions of two experts.
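One way to spell out this observation (the notation here is my reading of the abstract, not taken from the talk) is the following.

```latex
% Treat the prior \pi and likelihood \ell as vectors with inner product
\[
  \langle \pi, \ell \rangle
  = \int_{\Theta} \pi(\theta)\,\ell(\theta)\,\mathrm{d}\theta
  = m(x),
\]
% so the marginal likelihood m(x) is literally the inner product of the
% prior and the likelihood. Normalizing, as in a Pearson correlation or
% cosine similarity, gives a compatibility measure in [0, 1]:
\[
  \kappa(\pi, \ell)
  = \frac{\langle \pi, \ell \rangle}{\lVert \pi \rVert\, \lVert \ell \rVert},
  \qquad
  \lVert f \rVert = \sqrt{\langle f, f \rangle},
\]
% with small \kappa signalling prior-likelihood disagreement. The same
% construction applies to prior/posterior pairs or to two experts' priors.
```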

Probability and Uncertainty in Deep Learning

Abstract In this talk, I will motivate the need for introducing a probabilistic and Bayesian flavour into “traditional” deep learning approaches. For example, a Bayesian treatment of neural network parameters is an elegant way of avoiding overfitting and optimization heuristics while providing a solid mathematical grounding. I will also highlight the deep Gaussian process family of approaches, which can be seen as non-parametric Bayesian neural networks. The Bayesian treatment of neural networks comes with mathematical intractabilities; I will therefore outline some of the approximate inference methods used to tackle them.
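As one concrete example of such an approximate inference method (chosen for illustration; the talk may cover different ones), Monte Carlo dropout keeps dropout active at prediction time and interprets the spread of repeated stochastic forward passes as predictive uncertainty.

```python
# Monte Carlo dropout: a common approximation to Bayesian inference in
# neural networks. Weights below are random/untrained for brevity.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((1, 64)), np.zeros(64)
W2, b2 = rng.standard_normal((64, 1)), np.zeros(1)

def forward(x, p_drop=0.5):
    h = np.tanh(x @ W1 + b1)
    # Dropout stays ON at prediction time: each pass samples a
    # different sub-network, i.e. a different draw of the weights.
    mask = rng.random(h.shape) > p_drop
    h = h * mask / (1 - p_drop)
    return h @ W2 + b2

x = np.linspace(-3, 3, 50)[:, None]
samples = np.stack([forward(x) for _ in range(100)])
mean, std = samples.mean(0), samples.std(0)  # predictive mean and uncertainty
```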

Neuroinformatics of Learning, Memory and Decision Making: from Model-based Analyses to Individualized Cognitive Neurotherapeutics

Abstract How we learn, recall our memories, and use them to make decisions depends on our genes as well as on environmental modulators such as stress, emotion and uncertainty. Cognitive performance is the outcome of several neurobiologically distinct mental processes, some of which are not easily amenable to direct observation. Their roles and interactions can, however, be dissociated with computational models. Using examples from animal learning under stress and from the imaging genetics of human memory, I will show how computational models can be used to discover the neural and genetic correlates of cognitive phenomena and to suggest computational explanations for them.
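For readers unfamiliar with model-based analyses: the idea is to fit a trial-by-trial learning model to behaviour and correlate its latent variables with neural or genetic measurements. The sketch below uses the classic Rescorla-Wagner rule as a hypothetical example; the abstract does not specify which models the talk employs.

```python
# A minimal trial-by-trial learning model (Rescorla-Wagner): the latent
# prediction errors it produces can be regressed against neural or
# genetic data. Purely illustrative.
import numpy as np

def rescorla_wagner(rewards, alpha=0.3):
    """Return value estimates and prediction errors across trials."""
    v, values, errors = 0.0, [], []
    for r in rewards:
        delta = r - v          # prediction error on this trial
        v += alpha * delta     # learning-rate-weighted update
        values.append(v)
        errors.append(delta)
    return np.array(values), np.array(errors)

rewards = np.random.default_rng(0).binomial(1, 0.7, size=40)
values, errors = rescorla_wagner(rewards)
```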

Learning Non-Stationary Data Streams With Gradually Evolved Classes

Abstract In machine learning, class evolution is the phenomenon of class emergence and disappearance. It is likely to occur in many data stream problems, i.e., problems in which additional training data become available over time. For example, when classifying tweets by topic, new topics may emerge over time and existing topics may become unpopular and no longer be discussed. Class evolution is therefore an important research topic in the area of learning from data streams.

Cortical Microcircuits as Gated-Recurrent Neural Networks

Abstract Cortical circuits exhibit intricate recurrent architectures that are remarkably similar across different brain areas. This stereotyped structure suggests the existence of common computational principles, which have nevertheless remained largely elusive. Inspired by gated memory networks, namely long short-term memory networks (LSTMs), I will describe a recurrent neural network in which information is gated through inhibitory cells that act subtractively (subLSTM). We propose a natural mapping of subLSTMs onto known canonical excitatory-inhibitory cortical microcircuits.
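To show what subtractive gating means in practice, here is a sketch of a single subLSTM step in which the input and output gates are subtracted rather than multiplied; the exact equations are my reading of the description above and may differ in detail from the talk's formulation.

```python
# One step of a subtractively gated LSTM cell (subLSTM), following the
# abstract's description: gates act by subtraction, as inhibitory cells
# would, rather than by multiplication.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sublstm_step(x, h, c, params):
    """x: input, h: hidden state, c: cell state, params: weight dict."""
    def gate(name):
        W, U, b = params[name]
        return sigmoid(W @ x + U @ h + b)
    z = gate('z')                  # candidate input, squashed to (0, 1)
    i, f, o = gate('i'), gate('f'), gate('o')
    c_new = f * c + (z - i)        # input gated by SUBTRACTION, not product
    h_new = sigmoid(c_new) - o     # output likewise inhibited subtractively
    return h_new, c_new

# Toy dimensions and random (untrained) parameters.
rng = np.random.default_rng(0)
d_in, d_h = 5, 8
params = {g: (rng.standard_normal((d_h, d_in)),
              rng.standard_normal((d_h, d_h)),
              np.zeros(d_h)) for g in ('z', 'i', 'f', 'o')}
h, c = sublstm_step(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h), params)
```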