"If you’re going to build agents that interact with people, you have to think about people’s cognition and the ways they behave. That doesn’t necessarily mean you have to do cognitive modeling — although that is an interesting approach — but you do need to care about how people process information and communicate."
"When I was working on speech understanding systems at SRI in the 1970s, other research team members were responsible for syntax and grammar — determining the structure and building a computer representation of the meaning of an individual sentence. Everyone involved in early speech understanding systems knew that wasn’t enough. When people talk, the context matters. They use pronouns and definite descriptions. They depend on each other to interpret those imprecise expressions appropriately in context. For example, depending on the setting, “the cup” might mean my coffee cup or the cup you received as a gift. We knew that if we were going to have a system that could carry on a dialogue and handle the way people actually spoke, we needed a computational model of dialogue that could track context. Many researchers thought that if they sat in a chair and thought really hard, they could figure it out. I expected that wouldn’t work and devised a way to capture dialogue about the same topic from many different pairs of people. This was actually the first “Wizard of Oz” experiment in dialogue systems, though that name came later. I placed two people in separate rooms and had one give the other instructions on how to put together a piece of equipment — an air compressor. My analysis of the way they talked led to the first computational model of discourse."
"These systems are major accomplishments, but they don’t come close to human dialogue capabilities. When Siri first came out, people said to me, ‘You have nothing left to do, right?’ So I borrowed a phone with Siri, and it took me two questions to break the system. I asked, “Where are the nearest gas stations?” and then I asked, “Which ones are open?” It replied, “Would you like me to search the web for ‘which ones are open?’” It had no context, no discourse. Siri has improved since then, but it’s still pretty easy to break the system with a question that depends on dialogue context. No current system is thinking to the extent Turing imagined computers might be by now."
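The gas-station exchange above is, at bottom, a context-tracking problem: a follow-up like “Which ones are open?” is only answerable if the system remembers what was just mentioned. As a toy illustration — all names and data here are hypothetical, and this is nothing like any real assistant’s internals — a dialogue state might carry the salient entities of the previous turn forward so an anaphor can be resolved against them:

```python
# Toy sketch of dialogue context tracking (all names hypothetical).
# The state remembers entities mentioned in earlier turns so that a
# follow-up anaphor ("ones", "them") can be resolved against them.

class DialogueContext:
    def __init__(self):
        self.salient_entities = []  # entities from the most recent turn

    def mention(self, entities):
        """Record the entities introduced by the latest utterance."""
        self.salient_entities = list(entities)

    def resolve(self, anaphor):
        """Resolve a plural anaphor to the most recently mentioned set.

        A real discourse model would score many candidates; this sketch
        just returns the latest salient set, or None if there is none.
        """
        if anaphor in ("ones", "them", "they") and self.salient_entities:
            return self.salient_entities
        return None


ctx = DialogueContext()
# Turn 1: "Where are the nearest gas stations?"
ctx.mention(["Shell on Main St", "Chevron on 5th Ave"])
# Turn 2: "Which ones are open?" -- "ones" now has a referent.
print(ctx.resolve("ones"))
```

Without the `mention` step, `resolve` returns `None` — which is roughly the failure mode the anecdote describes: the second question arrives with no discourse state to interpret it against.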
"To clarify: I suggested that the way we use computers had changed so much, as had our knowledge of human cognition, that Turing himself might ask a different question now. My new question is rooted in our now knowing that collaboration is essential to intelligent behavior and seems to play a fundamental role in the ways infants learn: can we design computer systems that are such good team members that we don’t notice they aren’t human? One big challenge, which my team is addressing in our research, is getting delegation to work well. Delegation of particular responsibilities to different team members is a hallmark of teamwork. To make teamwork work (or, as we might say in computer science, to make it tractable), team members have to share information without overwhelming each other with too much of it. An enormous challenge for systems is determining what information to share with whom, and when."
"For example, I’m making dinner with Bobby and Susie. Susie is assigned appetizers, Bobby is assigned the main dish and I’m assigned dessert. I don’t ask Bobby how he is making the main course because if he has to tell me everything he’s doing, it’s a huge cognitive load. That said, it’s still crucial to know certain things, such as if we both need the same pan."
"Right. We’re working with a pediatrician at Stanford University Hospital whose patients have complex diseases, many of them seeing 10 to 15 doctors. The cognitive load of coordinating care among 15 people (turning the group into a real team) is enormous — no caregiver needs to see everything everyone else is doing, but they may need to know something about each other’s work. A key question is: when one member of the team learns something new about a patient, who should get that information, and when? Our goal is to build the foundations for smart computer care coordination systems to help. To do that, we need to figure out how to effectively compute the information to be shared in the absence of detailed models of how people are carrying out their responsibilities. If we do this, we’ll also know how to build computer agents that are good teammates."
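The coordination problem described above — who should learn what, and when — can be caricatured in a few lines: given each teammate’s responsibilities, route a new piece of information only to those whose responsibilities it touches. This is purely a toy sketch under invented data (the role names, topics, and `who_to_notify` helper are all hypothetical, and far simpler than the models the interview describes), but it shows the shape of the computation:

```python
# Toy sketch (hypothetical names and data): when one caregiver records
# a new finding, notify only teammates whose responsibilities overlap
# the finding's topics -- share enough, without overwhelming the team.

responsibilities = {
    "cardiologist": {"heart", "medications"},
    "nephrologist": {"kidneys", "medications"},
    "physical_therapist": {"mobility"},
}

def who_to_notify(source, topics):
    """Return teammates whose responsibilities intersect the new finding."""
    return sorted(
        member
        for member, scope in responsibilities.items()
        if member != source and scope & topics
    )

# The cardiologist changes a medication: the nephrologist needs to
# know, the physical therapist does not.
print(who_to_notify("cardiologist", {"medications"}))  # → ['nephrologist']
```

The interesting (and hard) part, which this sketch sidesteps entirely, is that in real teams those responsibility sets are not given explicitly — the system has to decide what to share without a detailed model of how each person is doing their job.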
"One of the things I want students to learn is the importance of designing artifacts for the people who will use them. A computer system should make us feel smarter, not dumber, and work seamlessly with us, like a human partner. I tell students to look for limitations and cracks in a system and think about the unintended consequences of those limitations. If you’re only focused on what you’re building, you’re blind to what a system may do that you hadn’t thought about."
"The fear of AI systems running amok or taking over the world is greatly exaggerated. Some of the predictions are based on lack of understanding of the current state of AI (or even of what’s actually computable). Also, it’s important not to lose sight of who’s in charge: people design AI systems, and they can design any number of plugs to pull. If we design systems to work with people — which has always been my goal — then the probability of them running amok is greatly lowered."
"Even so, as the people who develop these systems, AI scientists and practitioners need to take responsibility for the uses to which AI capabilities are put. We should be clear about the limitations of the technology. Should we think – and talk – about negative or potential unintended consequences? Absolutely! Are these concerns reasons not to develop systems that are smart? Absolutely not."
Young though he was, his radiant energy produced such an impression of absolute reliability that Hedgewar made him the first sarkaryavah, or general secretary, of the RSS.
- Gopal Mukund Huddar
Largely because of the influence of communists in London, Huddar's conversion into an enthusiastic supporter of the fight against fascism was quick and smooth. The ease with which he crossed from one worldview to another betrays the fact that he had not properly understood the world he had grown up in.
Huddar would have been 101 now had he been alive. But then centenaries are not celebrated only to register how old so-and-so would have been. They are usually celebrated to explore how much poorer our lives are without the person. Maharashtrian public life is poorer without him. It is poorer for not having made the effort to recall an extraordinary life.
I regret I was not there to listen to Balaji Huddar's speech [...] No matter how many times you listen to him, his speeches are so delightful that you feel like listening to them again and again.
By the time he came out of Franco's prison, Huddar had relinquished many of his old ideas. He displayed a worldview completely different from that of the RSS, even though he continued to remain deferential to Hedgewar and maintained a personal relationship with him.