
Gates Cambridge Scholars are making an impact across multiple areas of new technology, from tackling misinformation to biotech, green tech and more.
We cannot just rely on the developers to build something that is ethical and of benefit to all of humanity. Developers are experts in their field, but we need to include way more people in the process and to build a more AI-literate society which understands the risks and limitations of AI as well as its possibilities.
Hannah Claus
Technology is embedded in every inch of our lives these days, but the increasing dominance of machine learning and Artificial Intelligence, including tools such as ChatGPT, raises many questions about fairness, about oversight, about the potential for abuse and about what makes us human in the digital age.
Gates Cambridge Scholars are active in every area of new technology and the debates it throws up – from academic research on how it works and what its implications are, to its application in fields including security, business innovation, health and climate change, to how we counter misinformation and abuse.
Here we look at some of the Gates Cambridge Scholars who are making an impact in the technology sphere.
Research
Many scholars are involved in research which pushes the boundaries of how technology works and how it can be more accessible to all.
Richard Diehl’s research looks at how to change the way AI learns so that it becomes more human-like, moving from big-data models that simply ingest and process mountains of data towards smaller, smarter models based on meta-learning. His end goal is to teach AI to learn languages for which there is little textual or spoken data, rather than being biased in favour of languages like English. To date, Richard [2021] has created an AI framework, Pico, which demystifies how language models learn by training and analysing models across different scales. The focus on smaller, smarter AI is increasingly important, given concerns about AI’s environmental footprint.
Pradipta Biswas [2006] is also doing innovative research to improve human-machine interaction, particularly for those with disabilities. At Cambridge, for instance, he explored visual and auditory perception, rapid aiming movement and problem-solving strategies in the context of human-machine interaction. He also invented new algorithms, for instance, for use in eye-gaze technology. Among the technologies he has patented is an eye-gaze and gesture-controlled interactive head-up display.
Since returning to India, he has built on his work on eye-tracking technology, working with the Indian Air Force. He has also led a project to design a virtual reality cockpit for India’s maiden human space flight mission and was one of five researchers in India selected to undertake a research study on human-machine interaction at the International Space Station during the Axiom 4 mission. He has also led a first-of-its-kind toy hackathon to help children with severe disabilities communicate through eye-controlled interfaces. Pradipta, now an Associate Professor at the Indian Institute of Science, Bangalore, was also Vice Chair of the International Telecommunication Union [ITU], the oldest UN agency, founded in 1865 to facilitate international connectivity in communications networks.
Several scholars are doing PhDs in the relatively new field of Digital Humanities, bridging the gap between technology and the arts, addressing some of the ethical questions that the digital age throws up and ensuring humans are at the centre of debates about technological innovation. They include incoming scholar Eryk Salvaggio [2025] who has worked for decades in the area of arts and technology. His interest was ignited back in the 1990s when he discovered the net.art movement and became involved in experimental online art and writing. His PhD will produce frameworks that examine assumptions about the use of generative AI in policy, pedagogy and design. He says: “I am interested in what this technology is capable of, what we want to say yes to and how we use the humanities lens to understand what patterns and what new things it may be creating. I am trying to figure out how we approach this newness, how we use it for the benefit of human problems so we can be proactive and not overtaken by it.”
Similarly, Alex Mentzel’s work explores the intersection of human and digital worlds with the aim of avoiding potential harms and risks that we might not foresee, such as discriminatory systems of surveillance and gender and racial bias, consequences that will be amplified in cross-reality environments. As a spin-off from this, Alex [2022] has created a public-facing performance research project, the Faust Shop, which investigates the Faustian bargains we make with technology in a mixed-reality pop-up experience.
Tristan Dot [2022] studies the millions of images circulated every day through global networks and automatically analysed by Artificial Intelligence (AI) models. Increasingly, the same AI models are used in art history to develop inventive digital methods for studying large-scale datasets outside the Western canon. Tristan’s work attempts to determine the place of computational formalism in the historiography of formalism in order to better understand AI-encoded digital images. He says: “I am convinced that art history will help us determine the issues of power, domination and invisibilisation at stake behind the circulation of digital images nowadays.”
Meanwhile, current Scholar Hannah Claus [2024] is interested in the development of responsible and inclusive AI systems, ensuring that language technologies are accessible, fair and culturally sensitive across diverse communities worldwide.
She says there is a need to reflect the vast richness of humanity in AI and to include information from as many different cultural and human perspectives as possible. “We cannot just rely on the developers to build something that is ethical and of benefit to all of humanity,” she says. “Developers are experts in their field, but we need to include way more people in the process and to build a more AI-literate society which understands the risks and limitations of AI as well as its possibilities.”
Security
Other scholars are more involved in how technology is being used in today’s world, for instance, in the field of security.
Christopher Kirchhoff is co-author with Raj M Shah of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War. The two men know more about Unit X than most as they co-founded it in 2016. Kirchhoff [2001] had already served as a strategist in President Obama’s National Security Council and as the civilian assistant to General Martin Dempsey, Chairman of the Joint Chiefs of Staff, when he was appointed by the Secretary of Defense to create and launch the Pentagon’s Silicon Valley Office. There he led 67 civilian and uniformed personnel overseeing over 100 projects piloting advanced commercial technology, from flying cars to microsatellites, in military missions. The unit tested the innovative technology used in the air operations software that prevented ISIS from committing genocide, as well as the space-based synthetic-aperture radar imaging used to detect North Korean and Russian missile launches.
Christopher’s book paints a picture of the role of innovation and technology in geopolitical power and details some of the innovations that came out of the unit. These range from flying cars that can land like helicopters and artificial intelligence-powered drones that can fly into buildings and map their interiors to microsatellites that can see through clouds and monitor rogue missile sites.
General David Petraeus, former Director of the CIA, called it “an illuminating, behind-the-scenes examination of the numerous, critical, and sometimes competing efforts inside the Pentagon to change the way it does business. An exceedingly important book!”
Joe Bonneau [2008] is a cryptography expert and co-author of Bitcoin and Cryptocurrency Technologies, a textbook on cryptocurrencies, which was runner-up for the 2017 PROSE Award in Computing and Information Sciences from the Association of American Publishers. Joe is an Associate Professor in the Computer Science Department of the Courant Institute at New York University. His research focuses on applied cryptography and security engineering, in particular, human authentication, secure messaging tools, blockchains and decentralised systems, and public randomness protocols.
Business innovation
Other scholars have used their technology knowledge in the business field.
Tim Kotin [2012] is currently Director of Data Analytics at AmaliTech, an IT service company, where he leads the strategic development and execution of the company’s Data Analytics and Generative AI (GenAI) capabilities. AmaliTech aims to empower IT talent from Ghana, Rwanda and other African countries, while expanding its impact across Europe and other key markets. It also has a focus on Corporate Social Responsibility, and Tim is looking at how its data and AI solutions can serve non-profits and local organisations in Africa, contributing to social impact and sustainable development.
Tim, who did an MPhil in Engineering for Sustainable Development, has spent much of his career working with or founding start-ups in the IT sector and describes himself as being “passionate about overcoming challenges of global scale and relevance through cutting-edge technologies, effective policy and innovative business models”.
Talia Gershon [2008] is a materials scientist by training and started working at IBM Research in 2012. After 4.5 years of developing next-generation solar cell materials, she became inspired to learn about quantum computing. She passionately believes that anyone can get started learning quantum computing, and her online tutorial, the Beginner’s Guide to the QX, has received over 8,000 likes on YouTube. She now leads the hybrid cloud infrastructure research team at IBM Research, a team of world-class research scientists and engineers “with the mission of inventing the future of cloud computing”. The team works on everything from systems innovation, software innovation, Linux and AI to DevOps/CI-CD, which seeks to streamline and accelerate the software development lifecycle. The team recently launched IBM Research’s first AI-optimised, cloud-native supercomputer.
Mohammad Ghassemi [2010] is also leading the way in tech entrepreneurship. He combines an academic career in Computer Science and Engineering at Michigan State University with being Founding Partner of Ghamut, which works with organisations including Thomson Reuters and the Gates Foundation to improve their strategic use of data and technology.
Mohammad, who did his MPhil in Information Engineering at Cambridge, worked on a number of smart medical devices after graduating. He is driven by a desire to use AI for the common good. For instance, during his PhD at MIT, he set up MIT Connect, an organisation which matched up graduate students for friendship rather than romance.
Another Scholar who is pushing the boundaries in technology research from a business perspective is Alex Davies [2010]. He is Lead of AI for Maths at Google DeepMind, the British–American artificial intelligence research laboratory. AI for Maths’ work on how machine learning can aid mathematicians in discovering new conjectures and theorems was featured on the front cover of Nature magazine. He has previously worked on making phone batteries more efficient by studying users’ habits to determine which apps they use and when, and allocating power accordingly. Before DeepMind, Alex worked at Google on planning, implementing and scaling machine learning solutions.
From green tech to health and biotech
Scholars have also been active in using technology for sustainable futures. For instance, Impact Prize winner Uche Ogechukwu [2024] co-founded Greenage Technologies, which aims to harness renewable energy to combat energy poverty and environmental degradation in Africa. Since it was founded, the company has put solar energy solutions into the homes of thousands of people, as well as hospitals and schools, in Nigeria. Uche, who is doing an MPhil in Technology Policy, says: “I am passionate about sustainable development in Africa’s energy sector. For that reason, I co-founded Greenage Technologies to combat energy poverty. With a background in technology from the private sector, I’m eager to explore technology’s policy aspects and collaborate with the Gates Cambridge community to drive positive change.”
Josh Weygant [2023] is among a number of Scholars who are working in the field of biotechnology. His research focuses on 3D bioprinting of human organs, a subject that is very personal to him since he lost half of his large intestine at the age of one. In the coming decades, it is hoped that a person’s living cells could be extracted and used in 3D bioprinting to create a functional organ adapted to their body. In the shorter term, Josh is working on 3D bioprinting that can help develop better treatments for common medical ailments because it allows scientists to create mini organs outside the body which behave as they would inside it. Currently, treatments are tested in environments that do not represent what is going on in the human body.
Several scholars work in the field of health tech. They include Alexandra Grigore [2012] and Toby Norman [2011], Co-Founders of Simprints, the world’s first open-source biometric ID platform with privacy at its core. The company has now reached over three million people across 17 countries and anticipates that its impact will increase tenfold in the coming years. In the 10 years since it started, Simprints has worked with partners including the International Committee of the Red Cross on safe digital ID cryptography and with Gavi on breakthrough biometric algorithms to ensure children receive vital vaccines. The impact of Simprints’ work can be seen, for example, in a 38% increase in maternal health visits in Bangladesh and a 62% boost in accurate HIV tracing in Malawi.
On the other side of the coin, current Scholar Christine Carpenter [2024] is interested in ensuring technology is not used to restrict women’s freedom. Her PhD centres on the industry known as FemTech (short for “Female Technology”) and data privacy protection law in the larger context of digital surveillance. She is interested in the risk that individuals who use fertility and period-tracking apps could have their data subpoenaed and used in court to show they have had an abortion.
Misinformation/ethics
A big issue in the field of technology is whether the science is racing ahead without its potential social and other implications being taken into account. With Nobel Laureate Geoffrey Hinton, often referred to as the Godfather of AI, warning about the potential dangers of the technology he pioneered, there is a lot of concern about technology’s power to do harm, whether intentionally or unintentionally.
A big area of interest is misinformation and bias, and many Scholars are tackling this in their work. One example is Melisa Basol [2018], who, as a PhD Scholar, worked with Professor Sander van der Linden on a ‘psychological vaccine’ against misinformation. She now heads Pulse, an innovation lab which crafts “evidence-based strategies that ensure resilient and responsible technological innovation across society”. It integrates psychological insights with technology to develop human-centred innovation which anticipates future challenges.
Jonathan Ong [2007], Associate Professor of Global Digital Media at the University of Massachusetts Amherst, looks at the issue of misinformation from the viewpoint of global media ethics, digital politics and the anthropology of humanitarianism. In his disinformation studies research, he uses ethnography to understand the social identities, work arrangements and moral justifications of “paid trolls” and political public relations strategists. His disinformation whistleblowers podcast, Catch Me If You Can, was ranked among the top 5% of most-followed podcasts globally by Spotify in 2022.
Meanwhile, Andrew Gruen [2008] runs planning and special initiatives functions at the Mozilla Foundation in addition to managing the organisation’s policy, advocacy and campaigns work. The Foundation develops technology and data to support better, human-centred technology futures. Andrew, who runs his own consultancy, is also a senior fellow at the Future of Privacy Forum, which is working to create more regulatory clarity on the best uses of Privacy Enhancing Technologies (PETs) to enable more data sharing and more and better-quality analytics, while also giving the people represented by that data stronger guarantees around their own rights.
Molly Crockett [2006], a cognitive scientist at Princeton University who studies human morality, altruism and decision-making, is also interested in issues around ethics and morality. Research in her lab investigates how people learn and make decisions in social situations, for instance, how people decide whether to help or harm, punish or forgive, trust or condemn. Recently, the lab has focused more on moral cognition in the digital age and across different types of social relationships, including how these themes connect with the psychology of the self and identity. She recently wrote an op-ed about the way some researchers are staging “competitions” in which AI technology appears to outperform humans in human areas such as empathy. She says the competitions are rigged against humans because they don’t ask machines to perform human tasks. She states: “It’s more accurate to say that they ask humans to behave in machine-like ways as they perform lifeless simulacra of human tasks.”
Other scholars who are researching in the area of ethics and technology include Kerry McInerney [2017], who leads a popular podcast, The Good Robot, and is co-author of The Good Robot: Why Technology Needs Feminism, which asks questions such as: what is good technology, is it possible, and how can feminism help us work towards it?
She is interested in ensuring we take a balanced view towards AI. “There’s so much hype and mystique around AI,” she says. “I am trying to make sure that we talk about AI in ways that are accurate, measured and not overhyping AI’s capabilities.”
*Picture credit: Kharsohtun and Wikimedia Commons.