AI Frontiers Lab: Collaborating to build technology responsibly

Microsoft Research is the research arm of Microsoft, and has been pushing the frontier of computer science and related fields for the last 33 years. Our research team, alongside our policy and engineering teams, informs our approach to Responsible AI. One of our leading researchers is Ece Kamar, who runs the AI Frontiers lab within Microsoft Research. Ece has worked in different labs within the Microsoft Research ecosystem for the past 14 years and has been working on Responsible AI since 2015.

What is the Microsoft Research lab, and what role does it play within Microsoft? 

Microsoft Research is a research organization inside Microsoft where we get to think freely about upcoming challenges and technologies. We evaluate how trends in technology, especially in computer science, relate to the bets that the company has made. As you can imagine, there has never been a time when this responsibility has been bigger than it is today, when AI is changing everything we do as a company and the technology landscape is shifting very rapidly.

As a company, we want to build the latest AI technologies that help people and enterprises do their work. In the AI Frontiers lab, we invest in the core technologies that push the frontier of what we can do with AI systems: how capable they are, how reliable they are, and how efficient we can be with respect to compute. We're not only interested in how well these systems work; we also want to ensure that we always understand the risks and build in sociotechnical solutions that make them work in a responsible way.

My team is always thinking about developing the next set of technologies that enable better, more capable systems, ensuring that we have the right controls over these systems, and investing in the way these systems interact with people.  

How did you first become interested in responsible AI? 

Right after finishing my PhD, in my early days at Microsoft Research, I was helping astronomers collect scalable, clean data about the images captured by the Hubble Space Telescope. It could see far into the cosmos, and the images were great, but we still needed people to make sense of them. At the time, there was a citizen science platform called Galaxy Zoo, where volunteers from all over the world, sometimes with no background in astronomy, could look at these images and label them.

We used AI to do an initial filtering of the images, to make sure only interesting images were being sent to the volunteers. I was building machine learning models that could make decisions about the classifications of these galaxies. Certain characteristics of the images, such as redshifts, were fooling people in interesting ways, and we were seeing machines replicate the same error patterns.

Initially we were really puzzled by this. Why did machines looking at one part of the universe have different error patterns than machines looking at another? And then we realized that this was happening because the machines were learning from human data. Humans had perception biases that were very specific to being human, and the same biases were being reflected by the machines. We knew back then that this was going to become a central problem, and that we would need to act on it.
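The mechanics of this are easy to reproduce in miniature. Here is a minimal sketch, using entirely synthetic data rather than the Galaxy Zoo pipeline, of how a model trained on human-provided labels inherits the labelers' systematic bias:

```python
# Minimal sketch with synthetic data (not the Galaxy Zoo pipeline): a model
# trained on human-provided labels reproduces the humans' systematic bias.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy "images": feature 0 is the true signal; feature 1 is a nuisance cue
# (think apparent redness) that fools human labelers when it is strong.
X = rng.normal(size=(5000, 2))
true_label = (X[:, 0] > 0).astype(int)

# Human labels: correct, except where the nuisance cue is strong and the
# labelers' perception bias systematically flips their answer.
fooled = X[:, 1] > 1.0
human_label = np.where(fooled, 1 - true_label, true_label)

# Train on the human labels, as any real labeling pipeline would.
model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, human_label)
pred = model.predict(X)

# The model mirrors the human error pattern: near-perfect where humans were
# right, and systematically wrong against the truth where humans were fooled.
print("error rate where humans were fooled:", round((pred[fooled] != true_label[fooled]).mean(), 2))
print("error rate elsewhere:               ", round((pred[~fooled] != true_label[~fooled]).mean(), 2))
```

Run as written, the model is nearly perfect on the unbiased portion of the data and nearly always wrong where the labelers were fooled: exactly the replicated error pattern described above.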

How do AI Frontiers and the Office of Responsible AI work together?    

The frontier of AI is changing rapidly, with new models coming out and new technologies being built on top of them. We're always seeking to understand how these changes shift the way we think about risks and the way we build these systems. Once we identify a new risk, that's a good place for us to collaborate. For example, when we see hallucinations, we realize that a system used for information retrieval tasks is not returning grounded, correct information. Then we ask: why is this happening, and what tools do we have in our arsenal to address it?
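As a toy illustration of one such tool (a crude heuristic for exposition, not Microsoft's actual tooling), a groundedness check can flag answer sentences whose content words never appear in the retrieved sources:

```python
# Toy groundedness check (a crude illustrative heuristic, not Microsoft's
# tooling): flag answer sentences whose content words rarely appear in the
# retrieved sources, a rough proxy for ungrounded, hallucinated claims.
def ungrounded_sentences(answer: str, sources: list[str], threshold: float = 0.5) -> list[str]:
    source_words = {w.lower().strip(".,!?") for s in sources for w in s.split()}
    flagged = []
    for sentence in answer.split("."):
        # Only count "content" words; very short words carry little signal.
        words = [w.lower().strip(",!?") for w in sentence.split() if len(w) > 3]
        if not words:
            continue
        support = sum(w in source_words for w in words) / len(words)
        if support < threshold:
            flagged.append(sentence.strip())
    return flagged

sources = ["The Hubble Space Telescope launched in 1990 aboard Space Shuttle Discovery."]
answer = "Hubble launched in 1990. It was later repaired by astronauts from Mars."
print(ungrounded_sentences(answer, sources))
# -> ['It was later repaired by astronauts from Mars']
```

Production systems use far more sophisticated checks, but the shape is the same: compare what the system says against what its sources actually support.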

It's so important for us to quantify and measure both how capabilities are changing and how the risk surface is changing. So we invest heavily in evaluating and understanding models, as well as creating new, dynamic benchmarks that can better evaluate how the core capabilities of AI models are changing over time. We always bring what we learn into our work with the Office of Responsible AI on creating requirements for models and other components of the AI tech stack.
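At its simplest, such tracking reduces to scoring models against a fixed task set, as in this minimal sketch (the two-item task set and stand-in model are hypothetical; real benchmarks are far larger and regenerated over time to stay dynamic):

```python
# Minimal benchmark-harness sketch (hypothetical tasks and model interface):
# score a model on a fixed task set so that capability changes can be
# tracked across model versions over time.
from typing import Callable

benchmark = [
    ("What is 12 * 7?", "84"),
    ("What is the capital of France?", "Paris"),
]

def evaluate(model: Callable[[str], str], tasks) -> float:
    """Fraction of tasks where the model's answer contains the expected string."""
    hits = sum(expected.lower() in model(prompt).lower() for prompt, expected in tasks)
    return hits / len(tasks)

# A stand-in "model" for demonstration; swap in a real model call.
def dummy_model(prompt: str) -> str:
    return {"What is 12 * 7?": "The answer is 84.",
            "What is the capital of France?": "Paris"}[prompt]

print(f"accuracy: {evaluate(dummy_model, benchmark):.2f}")  # accuracy: 1.00
```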

What potential implications of AI do you think are being overlooked by the general public?

When the public talks about AI risks, people tend either to dismiss the risks entirely or, at the other extreme, to focus only on catastrophic scenarios. I believe we need conversations in the middle, grounded in the facts of today. The reason I'm an AI researcher is that I very much believe in the prospect of these technologies solving many of today's big problems. That's why we invest in building out these applications.

But as we push for that future, we have to keep both opportunity and responsibility in mind, and lean into both equally. We also need to make sure that we're not thinking about these risks and opportunities as far off in the future. We need to start making progress today and take this responsibility seriously.

This is not a future problem. It is real today, and what we do right now is going to matter a lot. 

To keep up with the latest from Microsoft Research, follow them on LinkedIn.
