Seminar

Challenges on Mining and Learning from Music-related Data

Abstract In this seminar, we will address music data science, an application domain that has grown significantly in recent years. With the increasing maturity and availability of streaming platforms and technologies for music dissemination, computational tasks are increasingly needed in this domain. Examples of these tasks include genre and emotion classification, plagiarism identification, recommendation systems, visualization, and automatic playlist generation. From a data mining and machine learning perspective, this is a particularly challenging domain, as the related data is complex and heterogeneous: audio recordings, song lyrics, consumer comments and likes, album covers, artists’ pictures, and experts’ reviews, among others.

Continual Gaussian Processes

Abstract Gaussian processes (GPs) are powerful tools for non-linear regression and classification with applications to a wide range of scenarios, many of them involving temporal problems. The main focus in the literature has been on reducing their computational cost, typically O(N^3) for training and O(N^2) for prediction. To sidestep that prohibitive complexity, sparse approximations based on inducing inputs have emerged as the principal solution, replacing exact inference with variational methods.
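
As a rough illustration of why inducing-input approximations help (a minimal sketch, not the continual-learning method of the talk), the snippet below computes a subset-of-regressors predictive mean with M inducing inputs, so only an M x M system is solved and the cost is O(N M^2) rather than O(N^3). The kernel, data and inducing locations are all made up for the example.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel matrix.
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_mean(X, y, Z, Xstar, noise=0.1):
    """Subset-of-regressors predictive mean with M inducing inputs Z.

    Only an M x M linear system is solved, so the cost is O(N M^2)
    instead of the O(N^3) of exact GP regression."""
    Kuu = rbf(Z, Z) + 1e-6 * np.eye(len(Z))   # M x M
    Kuf = rbf(Z, X)                           # M x N
    Kus = rbf(Z, Xstar)                       # M x N*
    A = Kuu + (Kuf @ Kuf.T) / noise**2
    return Kus.T @ np.linalg.solve(A, Kuf @ y) / noise**2

# Toy usage: 500 observations summarised by 20 inducing inputs.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
Z = np.linspace(-3, 3, 20)[:, None]
Xstar = np.linspace(-3, 3, 100)[:, None]
print(sparse_gp_mean(X, y, Z, Xstar)[:5])
```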

MUMBO: MUlti-task Max-value Bayesian Optimization

Abstract We propose MUMBO, the first high-performing yet computationally efficient acquisition function for multi-task Bayesian optimization. Here, the challenge is to optimize efficiently by evaluating low-cost functions related to the true target function, a broad class of problems that includes the popular task of multi-fidelity optimization. However, while information-theoretic acquisition functions are known to provide state-of-the-art Bayesian optimization, existing implementations for multi-task scenarios have prohibitive computational requirements. Previous acquisition functions have therefore been suitable only for problems with both low-dimensional parameter spaces and function query costs large enough to overshadow these substantial optimization overheads.
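
MUMBO itself is an information-theoretic, multi-task acquisition function, which is more involved than can be sketched here. For orientation only, the snippet below shows the single-task Bayesian optimization loop that such acquisition functions plug into, using the classical expected-improvement criterion on a toy objective; every function and setting is illustrative and not part of MUMBO.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Toy 1-D target that we pretend is expensive to evaluate.
    return np.sin(3 * x) + 0.5 * x

def expected_improvement(mu, std, best):
    # Classical EI acquisition for minimisation.
    std = np.maximum(std, 1e-9)
    z = (best - mu) / std
    return (best - mu) * norm.cdf(z) + std * norm.pdf(z)

grid = np.linspace(-2, 2, 400)[:, None]
X = np.array([[-1.5], [0.0], [1.5]])           # initial design
y = objective(X[:, 0])

for _ in range(10):
    # Fit the GP surrogate, then query the point maximising the acquisition.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, std = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, std, y.min()))]
    X = np.vstack([X, x_next[None, :]])
    y = np.append(y, objective(x_next[0]))

print("best x found:", X[np.argmin(y), 0])
```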

Generalized Variational Inference

Abstract In this talk, I introduce a generalized representation of Bayesian inference. It is derived axiomatically, recovering existing Bayesian methods as special cases. It is then used to prove that variational inference (VI) based on the Kullback-Leibler divergence with a variational family Q produces the optimal Q-constrained approximation to the exact Bayesian inference problem. Surprisingly, this implies that standard VI dominates any other Q-constrained approximation to the exact Bayesian inference problem.
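
To make the Q-constrained view concrete, here is a small, self-contained sketch (not from the talk) that fits a Gaussian variational family Q to a bimodal unnormalised target by maximising a Monte Carlo ELBO with the reparameterisation trick; maximising the ELBO within Q is exactly minimising the KL divergence to the target, i.e. the standard VI objective described above. The target and all settings are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def log_p(x):
    # Unnormalised log-target: a bimodal mixture standing in for an intractable posterior.
    return np.logaddexp(norm.logpdf(x, -2, 0.7), norm.logpdf(x, 2, 0.7)) - np.log(2)

# Fixed base samples for the reparameterisation x = mu + sigma * eps.
eps = np.random.default_rng(0).standard_normal(2000)

def neg_elbo(params):
    mu, log_sigma = params
    x = mu + np.exp(log_sigma) * eps
    # ELBO = E_q[log p(x)] + entropy of q; maximising it minimises KL(q || p).
    return -(np.mean(log_p(x)) + 0.5 * np.log(2 * np.pi * np.e) + log_sigma)

res = minimize(neg_elbo, x0=[0.0, 0.0])
print("fitted (mu, log_sigma):", res.x)
```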

Using Gaussian process models to infer pseudotime and identify gene-specific branching dynamics from single-cell data

Abstract We demonstrate how to develop and apply Gaussian process models for dimensionality reduction and inference of branching dynamics in single-cell transcriptomic data. We will discuss two models. The first, GrandPrix, is an efficient implementation of the Gaussian process latent variable model that allows the GP approach to scale to modern single-cell datasets. We apply our method to microarray, nCounter, RNA-seq, qPCR and droplet-based datasets from different organisms. The model converges an order of magnitude faster than existing methods whilst achieving similar levels of estimation accuracy.
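
GrandPrix itself is a sparse, variational implementation; as a much simpler illustration of the underlying idea, the sketch below fits a plain maximum-likelihood GP latent variable model with a one-dimensional latent "pseudotime" to a toy expression matrix, by maximising the GP marginal likelihood over the latent coordinates. The data and all settings are synthetic and only meant to show the shape of the objective.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(X1, X2, ls=1.0, var=1.0):
    d2 = (X1**2).sum(1)[:, None] + (X2**2).sum(1)[None, :] - 2 * X1 @ X2.T
    return var * np.exp(-0.5 * d2 / ls**2)

def neg_log_marglik(t, Y, noise=0.1):
    """Negative GP marginal likelihood of expression matrix Y (cells x genes)
    given 1-D latent pseudotimes t, with a shared kernel across genes."""
    T = t[:, None]
    K = rbf(T, T) + noise**2 * np.eye(len(t))
    alpha = np.linalg.solve(K, Y)
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (np.sum(Y * alpha) + Y.shape[1] * logdet)

# Synthetic data: 30 cells, 5 genes, each gene a smooth function of a true pseudotime.
rng = np.random.default_rng(1)
t_true = np.sort(rng.uniform(0, 1, 30))
Y = np.column_stack([np.sin(np.pi * t_true), np.cos(np.pi * t_true),
                     t_true**2, np.exp(-2 * t_true), t_true])
Y += 0.05 * rng.standard_normal(Y.shape)

res = minimize(neg_log_marglik, x0=rng.standard_normal(30), args=(Y,))
print("|correlation| with true pseudotime:",
      abs(np.corrcoef(res.x, t_true)[0, 1]))
```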

Learning invariances using the marginal likelihood

Abstract When learning mappings from data, knowledge about invariances of the function output with respect to changes in the input can strongly improve generalisation performance. Invariances are commonplace in many machine learning models under the guise of convolutional structure or data augmentation. However, the choice of which invariances are used is currently still made by humans in the learning loop, often through trial-and-error and cross-validation. In this talk, we will view data augmentation as an invariance that can be expressed in a Gaussian process model, and we will give a general method for learning useful invariances for a particular dataset.
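
A toy illustration of the idea (a sketch, not the method from the talk): if an invariance is encoded in the kernel by averaging over a transformation group, the GP marginal likelihood can be used to compare a model with the invariance against one without it. Here the "augmentation" group is just the sign flip x -> -x, and the data are symmetric, so the invariant kernel should attain higher evidence; the functions and data are made up for the example.

```python
import numpy as np

def rbf(X1, X2, ls=1.0, var=1.0):
    d2 = (X1**2).sum(1)[:, None] + (X2**2).sum(1)[None, :] - 2 * X1 @ X2.T
    return var * np.exp(-0.5 * d2 / ls**2)

def invariant_rbf(X1, X2):
    # Kernel averaged over the group {identity, sign flip}: functions drawn
    # from this GP satisfy f(x) = f(-x) by construction.
    return 0.25 * (rbf(X1, X2) + rbf(-X1, X2) + rbf(X1, -X2) + rbf(-X1, -X2))

def log_marglik(K, y, noise=0.1):
    K = K + noise**2 * np.eye(len(y))
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (y @ np.linalg.solve(K, y) + logdet + len(y) * np.log(2 * np.pi))

# Data from a symmetric target, f(x) = f(-x).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (40, 1))
y = np.cos(X[:, 0] ** 2 / 3) + 0.1 * rng.standard_normal(40)

print("standard kernel :", log_marglik(rbf(X, X), y))
print("invariant kernel:", log_marglik(invariant_rbf(X, X), y))
```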

Towards AI-Powered Healthcare: Automated Medical Image Analysis via Deep Learning

Abstract In modern healthcare, disease diagnosis, assessment and therapy depend significantly on the interpretation of medical images, e.g., CT, MRI, ultrasound, histology images and endoscopic surgical videos. The exploding amount of biomedical image data collected in today's clinical centers offers an unprecedented challenge, as well as enormous opportunities, to develop a new generation of data analytics techniques for improving patient care and even revolutionizing the healthcare industry. Meanwhile, the momentum in cutting-edge AI systems is towards representation learning and pattern recognition via data-driven approaches.

Correlation Transfer in Cortical Neurons

Abstract Correlated electrical activity in neurons is a prominent characteristic of cortical microcircuits. Despite a growing amount of evidence concerning both spike-count and subthreshold membrane potential pairwise correlations, little is known about how different types of cortical neurons convert correlated inputs into correlated outputs. In this talk, I will report on our new study of pyramidal neurons and two classes of GABAergic interneurons in layer 5 of the rat neocortex.

Weightless Particle Filters and Their Applications in Neuroscience

Abstract Nonlinear filtering is the Bayesian optimal solution to the problem of dynamically estimating a latent variable from a continuous stream of noisy observations. It is highly relevant in neuroscience, both for data analysis applications [1] and as a candidate for brain function at different levels, ranging from the single-synapse level [2] to the neural-network level [3]. A widely used solution to the nonlinear filtering problem is to rely on a set of samples (particles) to represent the filtering distribution.
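
The talk concerns "weightless" particle filters specifically; for context, the sketch below shows the standard weighted bootstrap particle filter that such methods relate to, on a toy one-dimensional linear-Gaussian model with made-up dynamics and observation noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D state-space model (all parameters illustrative):
#   x_t = 0.9 x_{t-1} + process noise,   y_t = x_t + observation noise.
T, N = 100, 500                       # time steps, particles
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + 0.5 * rng.standard_normal()
y_obs = x_true + 0.3 * rng.standard_normal(T)

particles = rng.standard_normal(N)
estimates = np.zeros(T)
for t in range(T):
    # Prediction: propagate particles through the dynamics.
    particles = 0.9 * particles + 0.5 * rng.standard_normal(N)
    # Update: weight particles by the observation likelihood.
    w = np.exp(-0.5 * ((y_obs[t] - particles) / 0.3) ** 2)
    w /= w.sum()
    estimates[t] = w @ particles
    # Resample to avoid weight degeneracy.
    particles = particles[rng.choice(N, size=N, p=w)]

print("posterior-mean RMSE:", np.sqrt(np.mean((estimates - x_true) ** 2)))
```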

Multi-task Dirichlet-multinomial regression for detecting global microbiome associations

Abstract There is evidence that the human gut bacterial microbiome influences diseases as disparate as inflammatory bowel disease, cardiovascular disease and schizophrenia. However, current statistical techniques for microbiome association studies either rely on distance measures, which can lead to differing estimates depending on which measure is used, or they detect associations with individual bacterial species or higher-level operational taxonomic units (OTUs). A method that extends the latter approach beyond individual species is the multi-task Dirichlet-multinomial model of Chen and Li (2013), which uses group-lasso penalization across taxonomic units to select non-zero coefficients for each measured covariate.
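
To make the model class concrete, the sketch below writes out a Dirichlet-multinomial log-likelihood in which each taxon's concentration parameter depends log-linearly on covariates, plus a group-lasso penalty that ties together all of a covariate's coefficients across taxa. This is only a schematic rendering of the kind of objective used by Chen and Li (2013), not their implementation; all names, shapes and data are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_loglik(counts, alpha):
    """Log-likelihood of one sample's taxon counts under DM(alpha)."""
    n, a0 = counts.sum(), alpha.sum()
    return (gammaln(a0) - gammaln(n + a0)
            + np.sum(gammaln(counts + alpha) - gammaln(alpha)))

def penalised_objective(beta0, B, counts, covariates, lam=1.0):
    """Negative penalised DM log-likelihood.

    counts:     samples x taxa matrix of read counts
    covariates: samples x covariates design matrix
    beta0:      taxa intercepts; B: covariates x taxa coefficient matrix
    The group-lasso term groups all taxa coefficients of one covariate,
    so a covariate is either associated with the microbiome as a whole or dropped.
    """
    alpha = np.exp(beta0[None, :] + covariates @ B)        # samples x taxa
    loglik = sum(dirichlet_multinomial_loglik(c, a)
                 for c, a in zip(counts, alpha))
    group_lasso = lam * np.sum(np.linalg.norm(B, axis=1))   # one group per covariate
    return -loglik + group_lasso

# Tiny synthetic check: 5 samples, 4 taxa, 2 covariates, all-zero coefficients.
rng = np.random.default_rng(0)
counts = rng.integers(0, 50, size=(5, 4))
covariates = rng.standard_normal((5, 2))
print(penalised_objective(np.zeros(4), np.zeros((2, 4)), counts, covariates))
```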