June 2, 2020. Along with some great colleagues at DeepMind, we're releasing Acme, an RL framework that we've been working on and using for our own research for quite some time. You can check it out here or take a look at our whitepaper!
January 15, 2019. I have finally gotten around to moving and updating my website. At the moment much of the content here is quite out-of-date, but it's only a matter of time before the rest gets updated! (Thanks to Yannis for forcing me to do this!)
August 12, 2016. Our paper on Learning to learn by gradient descent by gradient descent was accepted at NIPS 2016. See you in Barcelona!
January 1, 2016. I have accepted a position as a research scientist at Google DeepMind and am excited to join this coming March!
September 17, 2015. I spoke at the Gaussian process summer school's workshop on global optimization; on the workshop site you can find videos of each talk presented. We also released an updated version of pybo, our code for modular Bayesian optimization.
December 13, 2014. Along with several colleagues I presented papers at the BayesOpt workshop: one on modular Bayesian optimization, a shortened version of our PES paper, one on PES with unknown constraints, and one on entropy-based approaches to portfolio construction.