April 10, 2026
"When we design AI systems, we’re not just making technical choices. We’re making moral and philosophical decisions."
"The risks of harm posed by AI are real: research shows women, minorities, and marginalized groups tend to get fewer options or opportunities in systems driven by AI models."
"The problem is that a lot of this harm and unfairness is baked in: it’s in the training data, in the models we choose, and in the trade-offs we incorporate into the AI system. Existing ethical problems only get magnified by the model."
"…philosophers and computer scientists, and the goal is to help industry, meaning corporations as well as startups, and organizations like law enforcement or hospitals, to develop and deploy AI systems responsibly and ethically."
"A lot of the companies come to us and say, ‘Here’s a model that we are planning to use. Is this fair?’"
"The biggest risk that AI holds comes from the fact that it is systematic and efficient. So the risk is, if you create a bad system, it is systematically and efficiently bad."
"But the flip side of it is that if you create a good system that is better than now, it [will be] systematically and efficiently good, and will make things systematically and efficiently better than now."
"If we haven’t yet, we have to face the fact that the world is a massively unethical place, and we create a ton of suffering, needless suffering. Any shift is an opportunity to fix some of those [problems], and any shift holds the risk that we will just create more."