Ian Gemp
Senior Research Scientist
DeepMind
London UK
imgemp at deepmind dot com
I am a Senior Research Scientist at DeepMind in London. My work focuses on multi-agent systems, equilibrium problems, and game theory. In particular, I research natural mechanisms for emergent cooperation, no-regret learning in non-stationary and adversarial environments, and efficient algorithms for identifying fixed points. Before DeepMind, I was a PhD student in the College of Information and Computer Sciences at UMass Amherst, where I worked with Sridhar Mahadevan as part of the Autonomous Learning Laboratory (ALL) on topics concerning optimization, equilibria, reinforcement learning, multi-agent learning, and dimensionality reduction. My thesis focused on identifying equilibria in differentiable games, specifically those that arise in machine learning applications.
Optimization: selecting the "best" option.
Consider the case where "best" means lowest altitude: find the latitude and longitude of the location with the lowest altitude on a contour map.
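As a minimal sketch of that "lowest altitude" problem (the grid and terrain below are made up purely for illustration), you can sample the map and take the point with the smallest value:

```python
import numpy as np

# A toy "contour map": altitude sampled on a latitude/longitude grid.
lats = np.linspace(50.0, 52.0, 200)
lons = np.linspace(-1.0, 1.0, 200)
lat_grid, lon_grid = np.meshgrid(lats, lons, indexing="ij")
altitude = (lat_grid - 51.3) ** 2 + (lon_grid - 0.4) ** 2  # bowl-shaped terrain

# "Best" = lowest altitude: take the argmin over the whole grid.
i, j = np.unravel_index(np.argmin(altitude), altitude.shape)
print(f"Lowest point: lat={lats[i]:.2f}, lon={lons[j]:.2f}, altitude={altitude[i, j]:.4f}")
```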
Equilibrium: a point of balance between competing forces.
This encompasses topics such as predator-prey models in biological systems, supply and demand in microeconomics, and even optimization (think of the force of gravity pulling a ball to the point of lowest altitude in a region).
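A minimal supply-and-demand sketch: the linear curves and step size below are assumptions chosen only to make the example concrete. The price adjusts in the direction of excess demand, much like gravity pulling the ball toward the lowest point, until the two forces balance.

```python
# Toy supply and demand curves (illustrative assumptions).
def demand(price):
    return 100.0 - 2.0 * price   # buyers want less as price rises

def supply(price):
    return 10.0 + 4.0 * price    # sellers offer more as price rises

# Simple price adjustment: excess demand pushes the price up, excess supply pushes it down.
price = 1.0
for _ in range(1000):
    excess = demand(price) - supply(price)
    price += 0.01 * excess

print(f"Equilibrium price ~ {price:.2f}, quantity ~ {supply(price):.2f}")  # price -> 15
```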
Reinforcement learning: learning a strategy to maximize a reward.
Imagine teaching a dog a trick or task by feeding it a treat each time it succeeds. Eventually, the dog learns what it needs to do to earn the treat. This area of machine learning draws inspiration from both biology and psychology.
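A tiny sketch of that "dog and treats" idea, with made-up treat probabilities and learning rate: the learner tries actions, observes whether a treat arrives, and gradually prefers the action that earns treats most often.

```python
import random

treat_prob = {"sit": 0.9, "bark": 0.2}   # chance of a treat for each action (assumed)
value = {"sit": 0.0, "bark": 0.0}        # learned estimate of each action's reward
epsilon, lr = 0.1, 0.1

for _ in range(2000):
    # Mostly pick the action currently believed best, occasionally explore.
    if random.random() < epsilon:
        action = random.choice(list(value))
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < treat_prob[action] else 0.0
    value[action] += lr * (reward - value[action])  # nudge estimate toward observed reward

print(value)  # "sit" should end up valued near 0.9, "bark" near 0.2
```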
Multi-agent learning: multiple agents and their environment.
This includes any situation where multiple humans, robots, etc. are engaged in a scenario, whether competitive, cooperative, or neither, so long as their actions affect each other either directly or through the shared environment. It is nearly impossible to think of a real-life situation where this is not the case. As an example, consider two people competing in a game of rock-paper-scissors.
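A minimal sketch of the rock-paper-scissors example: with the standard win/loss/tie payoffs, every pure move earns the same expected payoff (zero) against an opponent who plays each move a third of the time, so neither player can gain by deviating, which is what makes the uniform strategy an equilibrium.

```python
import numpy as np

# Rock-paper-scissors payoffs for the row player (+1 win, -1 loss, 0 tie).
#                   R   P   S
payoff = np.array([[ 0, -1,  1],   # Rock
                   [ 1,  0, -1],   # Paper
                   [-1,  1,  0]])  # Scissors

uniform = np.ones(3) / 3  # play each move one third of the time

print(payoff @ uniform)            # expected payoff of R, P, S vs. uniform -> [0, 0, 0]
print(uniform @ payoff @ uniform)  # expected value of the game -> 0.0
```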
Dimensionality reduction: reducing the number of dimensions.
Dimensionality refers to the number of dimensions, features, or attributes (think columns in an Excel sheet), so dimensionality reduction simply means reducing the number of columns such that the spreadsheet still contains enough information to complete your task, but processing it is much faster now that there is less data.
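A minimal sketch of shrinking a "spreadsheet" from 5 columns to 2 using principal component analysis (the data below is randomly generated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))          # 100 rows, 5 columns of made-up data

centered = data - data.mean(axis=0)       # PCA assumes zero-mean columns
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:2].T             # keep the 2 directions with the most variance

print(data.shape, "->", reduced.shape)    # (100, 5) -> (100, 2)
```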
Education
MS, PhD (CS) Univ of Massachusetts
Amherst MA 2016, 2018
MS (ESAM) Northwestern Univ
Evanston IL 2011
BS (ME/ESAM) Northwestern Univ
Evanston IL 2010
High School Kinkaid
Houston TX 2006
Experience
DeepMind
London UK
Research Scientist 2019 - Present
UMass Amherst
Amherst MA
Research Assistant 2013 - 2018
AWS AI Algorithms
New York NY
Applied Scientist Intern Summer 2018
Adobe Research
San Jose CA
Data Science Intern Summer+Fall 2016
Capgemini SA&I
Chicago IL
Consultant 2011 - 2013
Northwestern Univ
Evanston IL
Research Assistant 2007 - 2010