
Dr Christopher Russell

Lecturer in Computer Vision and Machine Learning
+44 (0)1483 683413
12 BA 00

Publications

A rating scale was developed to assess the contribution made by computer software towards the delivery of a quality consultation, with the purpose of informing the development of the next generation of systems. Two software programmes were compared, using this scale to test their ability to enable or inhibit the delivery of an ideal consultation with a patient with heart disease. The context was a general practice-based, nurse-run clinic for the secondary prevention of heart disease. One of the programmes was customized for this purpose; the other was a standard general practice programme. Consultations were video-recorded, and then assessed by an expert panel using the new assessment tool. Both software programmes were oriented towards the implementation of the evidence, rather than facilitating patient-centred practice. The rating scale showed, not surprisingly, significantly greater support from the customized software in five out of eight areas of the consultation. However, the scale's reliability, as measured by Cronbach's alpha, was sub-optimal. With further refinement, this rating scale may become a useful tool that will inform software developers of the effectiveness of their programmes in the consultation, and suggest where they need development.
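The reliability statistic used here, Cronbach's alpha, is simple to compute. As a minimal illustration of the statistic itself (not the study's analysis), the sketch below computes alpha for a hypothetical matrix of panel ratings, with eight rated areas per consultation as in the paper; all numbers are made up.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for an (observations x items) score matrix:
        alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals).
        """
        k = scores.shape[1]                         # number of items on the scale
        item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical panel ratings: 6 consultations rated on 8 areas (1-5 scale).
    ratings = np.array([
        [4, 3, 5, 4, 2, 3, 4, 5],
        [3, 3, 4, 4, 2, 2, 3, 4],
        [5, 4, 5, 5, 3, 4, 4, 5],
        [2, 2, 3, 3, 1, 2, 2, 3],
        [4, 4, 4, 5, 3, 3, 4, 4],
        [3, 2, 4, 3, 2, 2, 3, 4],
    ])
    print(f"Cronbach's alpha: {cronbach_alpha(ratings):.3f}")  # ~0.7+ is often deemed acceptable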
Tome D, Russell C, Agapito L (2017) Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image, Proceedings of CVPR 2017, pp. 2500-2509, IEEE
We propose a unified formulation for the problem of 3D human pose estimation from a single raw RGB image that reasons jointly about 2D joint estimation and 3D pose reconstruction to improve both tasks. We take an integrated approach that fuses probabilistic knowledge of 3D human pose with a multi-stage CNN architecture and uses the knowledge of plausible 3D landmark locations to refine the search for better 2D locations. The entire process is trained end-to-end, is extremely efficient and obtains state-of-the-art results on Human3.6M, outperforming previous approaches on both 2D and 3D errors.
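The paper's probabilistic fusion of 2D detections with a 3D pose model is richer than any short snippet can show, but the core lifting step, explaining 2D joint positions with a low-dimensional linear 3D pose basis under a known camera, can be sketched as follows. The shapes, the scaled-orthographic camera, and the random "learned" basis are illustrative assumptions, not the paper's model.

    import numpy as np

    # A minimal sketch of the "lifting" idea: explain noisy 2D joint detections
    # with a low-dimensional linear 3D pose model, then read off the 3D pose.
    J, D = 17, 10                      # joints, basis size (hypothetical)
    rng = np.random.default_rng(0)
    mu = rng.normal(size=(3 * J,))     # mean 3D pose (learned offline in practice)
    E = rng.normal(size=(3 * J, D))    # 3D pose basis (e.g. PCA on mocap data)

    P = np.array([[1.0, 0.0, 0.0],     # orthographic camera: drop the depth axis
                  [0.0, 1.0, 0.0]])
    Pi = np.kron(np.eye(J), P)         # (2J x 3J): projects every joint to 2D

    def lift(x2d: np.ndarray) -> np.ndarray:
        """Least-squares basis coefficients explaining the 2D detections,
        then the reconstructed 3D pose mu + E @ a, reshaped to (J, 3)."""
        A = Pi @ E                                         # (2J x D)
        a, *_ = np.linalg.lstsq(A, x2d - Pi @ mu, rcond=None)
        return (mu + E @ a).reshape(J, 3)

    x2d = Pi @ (mu + E @ rng.normal(size=D))   # synthetic 2D detections
    pose3d = lift(x2d)
    print(pose3d.shape)                        # (17, 3)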
Kusner MJ, Loftus J, Russell C, Silva R (2017) Counterfactual Fairness, Advances in Neural Information Processing Systems 30 (NIPS 2017) pre-proceedings, MIT Press
Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
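As a toy illustration of the abduction-action-prediction recipe behind this definition (not the authors' code), the sketch below computes counterfactual predictions in a one-equation linear structural causal model and measures the gap that a counterfactually fair predictor should drive to zero. All coefficients and variable names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    n, alpha = 1000, 2.0

    # Toy one-equation SCM (all coefficients hypothetical):
    #   A ~ Bernoulli(0.5)    protected attribute
    #   U ~ Normal(0, 1)      latent background factor
    #   X = alpha * A + U     observed feature
    A = rng.integers(0, 2, size=n).astype(float)
    U = rng.normal(size=n)
    X = alpha * A + U

    def predict(x):
        return 1.5 * x                  # a naive predictor using X directly

    # Abduction: recover U from the observed world. Action: flip A.
    # Prediction: recompute X, and hence the prediction, in that world.
    U_hat = X - alpha * A
    X_cf = alpha * (1 - A) + U_hat

    gap = np.abs(predict(X) - predict(X_cf)).mean()
    print(f"mean counterfactual gap, naive predictor: {gap:.2f}")  # nonzero: unfair

    # A predictor built only on the abducted U is counterfactually fair:
    # U is invariant under the intervention on A, so both worlds agree
    # exactly and the gap is zero by construction.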
Srivastava A, Valkov L, Russell C, Gutmann MU, Sutton C (2017) VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning, Advances in Neural Information Processing Systems 30 (NIPS 2017) pre-proceedings, MIT Press
Deep generative models provide powerful tools for distributions over complicated manifolds, such as those of natural images. But many of these methods, including generative adversarial networks (GANs), can be difficult to train, in part because they are prone to mode collapse, which means that they characterize only a few modes of the true distribution. To address this, we introduce VEEGAN, which features a reconstructor network, reversing the action of the generator by mapping from data to noise. Our training objective retains the original asymptotic consistency guarantee of GANs, and can be interpreted as a novel autoencoder loss over the noise. In sharp contrast to a traditional autoencoder over data points, VEEGAN does not require specifying a loss function over the data, but rather only over the representations, which are standard normal by assumption. On an extensive set of synthetic and real world image datasets, VEEGAN indeed resists mode collapsing to a far greater extent than other recent GAN variants, and produces more realistic samples.
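A minimal sketch of the central ingredient, assuming a toy setup rather than the paper's architecture: a reconstructor network F mapping data back to noise, trained jointly with the generator using an autoencoder loss in noise space alongside a discriminator over joint (noise, data) pairs.

    import torch
    import torch.nn as nn

    # Toy stand-ins for the three networks (not the paper's architecture):
    # a generator G: z -> x, a reconstructor F: x -> z reversing it, and a
    # discriminator D over joint (noise, data) pairs.
    z_dim, x_dim = 8, 32
    G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
    F = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
    D = nn.Sequential(nn.Linear(z_dim + x_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    z = torch.randn(128, z_dim)          # noise drawn from a standard normal
    x_fake = G(z)

    # The autoencoder loss lives in noise space: F should invert G on samples,
    # so no loss function over the data itself needs to be specified.
    recon_loss = ((z - F(x_fake)) ** 2).mean()

    # Generator and reconstructor are also trained to fool the discriminator
    # on (noise, data) pairs (non-saturating logistic loss shown here).
    logits_fake = D(torch.cat([z, x_fake], dim=1))
    gan_loss = nn.functional.softplus(-logits_fake).mean()

    loss_G_F = gan_loss + recon_loss     # minimized jointly w.r.t. G and F
    print(float(loss_G_F))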
Russell C, Kusner MJ, Loftus JR, Silva R (2017) When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness, Advances in Neural Information Processing Systems 30 (NIPS 2017) pre-proceedings, MIT Press
Machine learning is now being used to make crucial decisions about people's lives. For nearly all of these decisions there is a risk that individuals of a certain race, gender, sexual orientation, or any other subpopulation are unfairly discriminated against. Our recent method has demonstrated how to use techniques from counterfactual inference to make predictions fair across different subpopulations. This method requires that one provides the causal model that generated the data at hand. In general, validating all causal implications of the model is not possible without further assumptions. Hence, it is desirable to integrate competing causal models to provide counterfactually fair decisions, regardless of which causal "world" is the correct one. In this paper, we show how it is possible to make predictions that are approximately fair with respect to multiple possible causal models at once, thus mitigating the problem of exact causal specification. We frame the goal of learning a fair classifier as an optimization problem with fairness constraints entailed by competing causal explanations. We show how this optimization problem can be efficiently solved using gradient-based methods. We demonstrate the flexibility of our model on two real-world fair classification problems. We show that our model can seamlessly balance fairness in multiple worlds with prediction accuracy.
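A rough gradient-based sketch of this framing (with hypothetical worlds and weights, not the paper's experiments): fit a linear predictor whose loss adds a penalty on the counterfactual prediction gap under each of several competing causal models.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    A = rng.integers(0, 2, size=n).astype(float)   # protected attribute
    U = rng.normal(size=n)                         # latent background factor
    X = np.stack([1.0 * A + U, 0.5 * A + rng.normal(size=n)], axis=1)
    y = U + 0.1 * rng.normal(size=n)               # target

    # Two competing causal worlds, each positing different effects of A on
    # the features; flipping A yields counterfactual features X_cf per world.
    worlds = []
    for effects in ([1.0, 0.5], [2.0, 0.0]):
        X_cf = X + (1 - 2 * A)[:, None] * np.array(effects)
        worlds.append(X_cf)

    w = np.zeros(2)
    lam, lr = 1.0, 0.02                            # fairness weight, step size
    for _ in range(1000):
        grad = 2 * X.T @ (X @ w - y) / n           # squared-loss gradient
        for X_cf in worlds:                        # one penalty per world
            gap = (X - X_cf) @ w
            grad += lam * 2 * (X - X_cf).T @ gap / n
        w -= lr * grad

    gaps = [np.abs((X - X_cf) @ w).mean() for X_cf in worlds]
    print("weights:", w.round(3), "mean gap per world:", np.round(gaps, 3))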
Pansari P, Russell C, Kumar M (2018) Worst-case Optimal Submodular Extensions for Marginal Estimation, Proceedings of Machine Learning Research 84
Submodular extensions of an energy function can be used to efficiently compute approximate marginals via variational inference. The accuracy of the marginals depends crucially on the quality of the submodular extension. To identify the best possible extension, we show an equivalence between the submodular extensions of the energy and the objective functions of linear programming (LP) relaxations for the corresponding MAP estimation problem. This allows us to (i) establish the worst-case optimality of the submodular extension for the Potts model used in the literature; (ii) identify the worst-case optimal submodular extension for the more general class of metric labeling; and (iii) efficiently compute the marginals for the widely used dense CRF model with the help of a recently proposed Gaussian filtering method. Using synthetic and real data, we show that our approach provides comparable upper bounds on the log-partition function to those obtained using tree-reweighted message passing (TRW) in cases where the latter is computationally feasible. Importantly, unlike TRW, our approach provides the first practical algorithm to compute an upper bound on the dense CRF model.
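For scale: the quantity being bounded is the log-partition function, which is only tractable to compute exactly on very small models. The brute-force baseline below evaluates it for a toy Potts chain; the chain length, label count and potentials are arbitrary illustrative choices, not the paper's setup.

    import itertools
    import numpy as np

    # Brute-force log-partition function for a tiny Potts model, the quantity
    # the submodular-extension approach upper-bounds on large models.
    n_vars, n_labels = 4, 3                      # chain of 4 variables, 3 labels
    rng = np.random.default_rng(3)
    unary = rng.normal(size=(n_vars, n_labels))  # unary potentials theta_i(x_i)
    w = 0.8                                      # Potts pairwise weight

    def energy(x):
        e = sum(unary[i, x[i]] for i in range(n_vars))
        e += sum(w * (x[i] != x[i + 1]) for i in range(n_vars - 1))  # Potts term
        return e

    # Z = sum_x exp(-E(x)); feasible only because the state space is tiny (3^4).
    energies = [energy(x) for x in itertools.product(range(n_labels), repeat=n_vars)]
    log_Z = np.log(np.sum(np.exp(-np.array(energies))))
    print(f"exact log-partition: {log_Z:.4f}")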