By Marylee Williams

AI tools have a hard time admitting what they don't know, but Carnegie Mellon University's Daniel Fried thinks learning to do so could make them better collaborators.
Fried, an assistant professor in the School of Computer Science's Language Technologies Institute, said AI tools have the potential to be trusted coworkers, but communication gaps need to be addressed first. His work at CMU focuses on how humans and AI can work together better.
What are the gaps you see in how we communicate with AI?
AI is bad at acting as a collaborator for people.
For the people building the systems, the default is to make agents carry out complex tasks without taking guidance from a person, even in situations where guidance is necessary. As an example, we've asked state-of-the-art agents to carry out tasks that might take a person around an hour, like data analysis or processing receipts. The agent will often make mistakes along the way or do something totally crazy, but it just keeps going. We had an agent processing receipts, but it couldn't read the images, so it made up information. It even said it was generating fake data, but that admission was buried in thousands of words of output as it kept trying to carry out the task.
Agents are also bad at determining the most informative thing to convey to a person and the best way to convey it. They're not good at communicating the important things. Instead, they communicate everything.
What does better collaboration mean?
An agent should ask for help when it gets stuck, when the person hasn't given all the information upfront or when something unexpected happens. It needs to come back and ask a question or say something went wrong. It should help determine the right thing to do, rather than deciding on its own without taking the person's input into account.
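A minimal sketch of that behavior might be a gate that routes low-confidence steps back to the person (the action names and confidence threshold here are hypothetical, not from Fried's systems):

```python
# Sketch of an "ask for help" gate: when the agent's confidence in its
# next action is low, it stops and asks instead of pressing on.
# The threshold and action strings are invented for illustration.

def next_step(action: str, confidence: float, threshold: float = 0.7) -> str:
    if confidence < threshold:
        return f"ASK USER: I'm not sure about '{action}'. How should I proceed?"
    return f"EXECUTE: {action}"

print(next_step("categorize receipt #14 as travel", 0.92))
# EXECUTE: categorize receipt #14 as travel
print(next_step("image unreadable; guess the receipt total", 0.20))
# ASK USER: I'm not sure about 'image unreadable; guess the receipt total'. ...
```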
When you talk about collaboration, are you mainly thinking about work or task settings?
We've started to think a lot about this. There are important issues that need to be addressed by natural language processing and machine learning researchers, human-computer interaction researchers, policymakers, and decision-makers in the workplace.
We want AI to help people do good work and work they enjoy. Code has been a big focus for us: it's a domain with a lot of ways to collaborate and a lot of ways for AI models to specialize.
Are we at a place where these tools can augment work?
In some domains, yes. The tools are having measurable impacts for good and for bad. It's definitely disruptive.
Microsoft has said about 30% of the code at the company was written by AI. Anthropic has said about 90% of its code is written by AI. Regardless of who reports the numbers or what kind of code is being written, AI is writing a lot of code.
We need to prepare for that reality and shape it in a good way. There are important questions about quality and long-term sustainability. Studies show people can be less productive with AI if they use it the wrong way. If developers don't review AI-generated code, they accumulate technical debt, which means they've potentially traded short-term speed for problems down the road.
In your research, how do you help autonomous agents better understand people?
We take inspiration from how people communicate and study communication as a phenomenon in its own right. We often formulate communication as games in which a speaker and a listener are trying to achieve a goal together. Language becomes a move in the game.
Games allow us to set clear objectives and analyze how people produce language to achieve them. Someone might want an effect in the world but can't achieve it alone, so they produce language like, "Can you pass me that object?" Asking a question is also a move in the communication game that reduces uncertainty about the world and helps achieve goals.
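As a concrete example, a reference game is one standard instantiation of this framing. In the sketch below (the objects, vocabulary and scoring are invented for illustration, not code from Fried's group), the speaker scores each candidate word by how well a listener who interprets it literally would recover the intended object:

```python
# A toy reference game: the speaker must name one of several objects so
# the listener picks the right one.

OBJECTS = ["blue circle", "blue square", "green square"]
UTTERANCES = ["blue", "green", "circle", "square"]

def literal_listener(utterance):
    """Uniform guess over the objects the utterance is literally true of."""
    consistent = [o for o in OBJECTS if utterance in o]
    return {o: 1 / len(consistent) for o in consistent}

def speaker(target):
    """Choose the word most likely to make the listener pick the target."""
    return max(UTTERANCES, key=lambda u: literal_listener(u).get(target, 0.0))

print(speaker("blue circle"))   # "circle": the only word that uniquely identifies it
print(speaker("green square"))  # "green": "square" alone would be ambiguous
```

The objective is explicit: the speaker wins exactly when the listener identifies the target, which makes utterance choice something you can analyze and optimize.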
We've also studied Diplomacy, a strategy-based board game that requires negotiation. It's a rich setting where people have partially conflicting goals but still need to cooperate. The game has complex social dynamics, but success is measurable, which allows us to analyze communication strategies and team formation.
Games make one part of analysis easier — understanding objectives — while still allowing us to study complex communication behavior.
We can also use games to improve models. If following instructions is framed as a game, a system can reason about how a person will interpret its instructions. In a navigation task, for example, the system gives directions and succeeds only if the person reaches the destination. That gives a clear success condition for training models.
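A toy version of that success condition might look like this (the grid, moves and instruction phrasing are all invented for illustration):

```python
# Toy instruction-following game: a simulated follower interprets the
# instruction literally, and reward is 1.0 only if it ends at the goal.
# That reward is a checkable success condition to train against.

MOVES = {"go left": (-1, 0), "go right": (1, 0), "go up": (0, 1), "go down": (0, -1)}

def follow(instruction, start):
    x, y = start
    for step in instruction.split(", then "):
        dx, dy = MOVES[step]
        x, y = x + dx, y + dy
    return (x, y)

def reward(instruction, start, goal):
    return 1.0 if follow(instruction, start) == goal else 0.0

print(reward("go right, then go up", (0, 0), (1, 1)))  # 1.0: both succeed
print(reward("go left", (0, 0), (1, 1)))               # 0.0: goal missed
```

Notice the reward says nothing about how the instruction is worded, only whether the follower ends up where the speaker intended, which is what makes it usable as a training signal.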
We use simulations and games to train systems to communicate, but ultimately we need to test them with real people because simulations aren't 100% accurate.
Where is your research going in the future?
We're working on agents that can adapt to people, including their communication styles, preferences and ways of doing tasks.
Several students are building agents that can take actions in web browsers and graphical user interface applications — software you visually navigate — to complete personal and work tasks beyond software development.
We've been working on having systems learn representations of subtasks that can be reused. So the system might encounter a new website and need a little help from a person to figure out how to complete tasks on it. But then it stores information about how to do those tasks and can perform them more efficiently and accurately on its own in the future.
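In sketch form (the class and step format are assumptions for illustration, not the lab's actual system), that reuse might look like a small skill cache:

```python
# Sketch of a reusable subtask ("skill") store for a web agent: the agent
# asks a person once, records the demonstrated steps, and replays them on
# later visits to the same site.

class SkillLibrary:
    def __init__(self):
        self.skills = {}  # (site, subtask) -> list of recorded UI steps

    def get_or_learn(self, site, subtask, ask_human):
        key = (site, subtask)
        if key not in self.skills:
            # Unknown subtask: ask the person for a demonstration once,
            # then remember it so future runs need no help.
            self.skills[key] = ask_human(site, subtask)
        return self.skills[key]

library = SkillLibrary()
demo = lambda site, task: ["click '#login'", "fill '#username'", "click '#submit'"]
library.get_or_learn("example.com", "log in", demo)  # asks the person
library.get_or_learn("example.com", "log in", demo)  # replays from memory
```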
We've also been developing methods to prompt systems to ask questions when they get stuck and learn to use the same words and style of communication as people, so it's easier for people to communicate with them.
What does success look like for you in your research area?
Success means people are always involved in guiding systems and systems are doing what people want. The tools and domains will change. We started working on coding and are now working more on agents for open-ended and social domains. But optimizing how people and systems work together will remain important.