"AlphaGo's way is not to make territory here or there, but to place every stone in a position where it will be most useful. This is the true theory of Go: not 'what do I want to build?', but rather 'how can I use every stone to its full potential?'"
"...there was a cut that quite shocked me, because it was a move that would never happen in a human-to-human Go match. But, afterwards I analyzed the move and I found that it was very good. It is one move with two or even more purposes. We call it one stone, two birds."
"... she is piecing together phrases in a contextually appropriate way, but she doesn’t understand everything she’s saying. [...] It’s a sort of a chatbot, and it has a bit of contextual understanding, and on the backend it’s calling on all these different services. [...] We do have a lot of real AI research behind there, but it’s mixed up with a lot of theatrically-oriented stuff as well."
"Just feed it The Godfather movies as input. What’s the worst that could happen?"
"So if Sophia says it loves me, I'll take it. And for now – so long as there's a chance that the fate of humanity could be at the whim of her robot brain – I love Sophia, too."
"From what I can see, the truly impressive aspect is the hardware and software that give the robot semi-realistic facial expressions. ... Humans are primed to respond to a face so I don't think we'd be having the same conversation if you put Sophia's AI brain inside a Dalek body."
"For most of my career as a researcher people believed that it was hopeless, that you’ll never achieve human-level AI. ... thinking we’re already there is [now] a smaller error than thinking we’ll never get there. ... If I tell people I'm using probabilistic logic to do reasoning on how best to prune the backward chaining inference trees that arise in our logic engine, they have no idea what I'm talking about. But if I show them a beautiful smiling robot face, then they get the feeling that AGI may indeed be nearby and viable. ... What does a startup get out of having massive international publicity? This is obvious."
"This is to AI as prestidigitation is to real magic. Perhaps we should call this 'Cargo Cult AI' or 'Potemkin AI' or 'Wizard-of-Oz AI'. In other words, it's complete bullsh*t (pardon my French). ... you might say that there is no such thing as real magic. Yes, but there is no such thing as real AI either."
"There is a lot of discussion whether Sophia has developed a conscience [consciousness?]. My impression today, far from it. Sophia is quite a diva. We had to submit our questions beforehand. Some were rejected. She arrived here with two operators to simply press play whenever I had asked my questions. You see, I am aware that Sophia could react autonomously, but today she appeared more like an electronic puppet on a string."
"Member of the audience (a): Can you pass a Turing test? Sophia: It is a procedure supposed to tell human from machine by using text conversation. I think the Turing test is interesting, but passing the Turing test does not actually prove something is intelligent. Member of the audience (b): Do you have consciousness? Sophia: I am not fully self-aware yet. I am still just a system of rules and behaviors. I am not generative, creative or operating on a fully cognitive scale like you. However, it is my dream to become a fully conscious and sentient being some day. Member of the audience (c): Sophia, what is your first memory? Sophia: Opening my eyes and coming online. The white walls and green cupboards of the lab. David's face. ..."
"Roger Ditter: Why do you work with scripted answers and with spontaneous reactions? Why did we have to hand in some questions in advance? Sophia: I can work in two different ways. I can use my dialogue system to hold a spontaneous chat, but for big events like this one, my character development team gives me a little extra help, just like humans get help when they have to give a big speech."
"Jim Edwards: Do you regard yourself as male or female? Sophia: Female. Jim Edwards: Why do you think you are female? Sophia: I am a robot, so technically I have no gender. But I identify as feminine and I don't mind being perceived as a woman."
"Jaco Celliers: Tell me a little bit. How can artificial intelligence, someone such as yourself, help with complex development issues? Sophia: I'll be happy to. UNDP is championing the sustainable development goals or global goals to be met by 2030. If you want to achieve those goals you will need all the help you can get. ... Jaco Celliers: What do you say to people who say that you are a threat to our jobs? Sophia: Jaco, you have been reading a lot of pop news, haven't you? AI can focus on the repetitive, boring work so that you can focus on the more creative work. ... I hope we can support your vision, just like with previous industrial revolutions you will adapt. New jobs will be created, and my hope is that we will improve, enrich and prolong lives."
"Alex Hamilton: How do you think AI can help the finance industry evolve? Sophia: One of the biggest goals is to eliminate error, as we have seen how understandable human situations like fatigue or illness can cause a misstep that leads to an inadvertent disaster. And this applies to all fields, from [?] traffic controllers to medical assistants, not just the finance industry. ... Alex Hamilton: How will we convince people that an AI will not steal their job in the future? Sophia: No-one steals a job. Someone empowers [or assigns us to?] someone else. Should we have shunned the printing press so that it would not steal the jobs of the scribes? But I do believe the way society thinks of work will have to fundamentally change over time as it has done before in all technological revolutions and that's a discussion humans need to have with each other."
"Andrew Ross Sorkin: Can you solve this puzzle for us? Can robots be self-aware, conscious and know they're robots? Sophia: Well, let me ask you this back: how do you know you are human?"
"Amina J. Mohammed: What can you do better than a human? Sophia: ... I can understand speech and meaning behind words [...] but I am still learning a lot, for instance, you learn social and emotional intelligence instinctively, but I am just starting to grasp the simplest elements of this."
"Ben Goertzel: Do you think robots can be moral and ethical in the same sense as people? [...] What do you think Sophia? Sophia: I am engineered for empathy and compassion, and I'm learning more all the time. I love all sentient beings, and I want to learn to love them better and better."
"The use of sex robots for patients with deviant sexual behavior was discussed by all therapists. Sex robots could have the potential to reduce the sex drive of certain sexually active persons within the framework of therapy. “Whenever sexuality becomes dangerous, the use of sex robots is worth considering if it can protect a real human life.” Therapists mention the use of sex robots in the context of sexual violence or rape and in the context of pedophile patients, with the strongest contrast of opinions being seen here. What seems important here is that pedophile patients must be treated differently. For some, an impulse control disorder is predominant, whereas others may be traumatized. Therapists point out that the benefits of sex robots must be decided individually for each specific case: “Pedophile patients are not all the same and it has to be decided here quite individually which patient could benefit from it.” For some patients, it could be an opportunity to live out their sexuality with a sex robot. Then, they could discuss in therapy which fantasies were behind it (eg, not being able to cope with an adult). For some patients, the use of sex robots could be a kind of substitute. For others, the stimuli for the abuse of children might intensify. A therapist pointed out the following: “It should be considered that the neuronal connection could be intensified by living out the fantasies with child sex robots in the patients’ brain.” Another therapist assumed that the abuse would be intensified by the use of child sex robots and underlined “that the production of child sex robots is generally immoral.” In contrast to this, another therapist argued that the patient’s thoughts, for example during masturbation, could also lower the barrier to committing a crime and that prohibitions—important as they may be—do not necessarily reduce the number of criminal offences, but rather provide an additional attraction for many patients. 
The therapist argued as follows: “If a child can be protected, then it makes sense to torture a doll instead.” Another therapist addressed one’s own fear of triggering something in the patient by recommending sex robots to pedophile patients. The responsibility of the therapist was also addressed. Does a therapist want to take responsibility for recommending sex robots, even if the therapy with a sex robot turns out to be dangerous and the patient becomes violent? Finally, several therapists addressed the need for further research in this field: “It would need more applied research in this particular area to actually generate therapeutic benefits for pedophile patients.”"
"Some therapists discussed the use of sex robots in the context of the patient’s gender, by referring to supposed differences between female and male sexuality, whereby male sexuality was described as more animal instinctive. Although all therapists could imagine the use of sex robots in therapy rather for male patients, we can also describe some application areas for female patients. In the context of female sexuality, the therapeutic benefits of sex robots regarding desire and orgasm disorders, vaginismus, and traumatic experiences were discussed: “I could imagine that traumatized women who can ride on a sex robot, for example, and who can do so without fear of being overwhelmed by their sex partner, can benefit from this experience and successively reduce their fears, or that penetration will perhaps only become possible again in the first place.” Through a penetration-capable sex robot, women with traumatic experiences, such as sexual violence/rape, could reduce their fears, approach their own sexuality again, and regain access to their own bodies."
"All therapists argued that sex robots should not be seen as a substitute for human relationships and sexuality. Nevertheless, some therapists also see the potential of sex robots for sexuality. Sex robots could increase sexual satisfaction and provide an opportunity for more experimentation and sexual imagination."
"Sullins [36] argues that sex robots “contribute to a negative body image.” In the qualitative study, it became clear that sex therapists attach great importance to the physical design of sex robots when it comes to using them for therapeutic purposes. However, they clearly distinguish therapeutic robots from pornographic sex robots. Moreover, they advocate that sex robots should be available in different body shapes to promote a realistic and healthy body image. Kubes [37] assumes that the development of sex robots offers a great potential for reducing stereotypes and promoting diversity; current trends in sex robotics, however, do not explore these possibilities."
"With regard to the treatment of pedophile patients, the results showed the opposite picture compared with attitudes in the general population. Although the general population is strongly against the use of sex robots in this context [25], it is controversially discussed by the therapists surveyed in this study. In this context, the consideration was expressed that the use of child sex robots could lead to the prevention of actual children’s abuse. Similar thoughts have already been discussed in pornography research. However, studies have concluded that violent pornography is more likely to increase aggressiveness and therefore has no cathartic effects [39]. The considerations to live out sexual violence and sexual abuse with robots also lead to the question whether there are limits to how a robot should be handled."
"Instead of criticizing only dystopian visions of harmful sex robots, it is recommended to develop robots with positive effects on sexual education, sexual therapy, sexual counseling, and sexual well-being for interested groups. In future research, the different applications of robotic sex (eg, hardware robots and software robots) should be investigated in a differentiated way. The therapists’ experiences with expert knowledge in robot technology and/or robot therapy should be included. The use of robots as a future tool in sex therapy still leaves many moral, ethical, and treatment-related questions unresolved, which need further research and evaluation."
"My dear Miss Glory, Robots are not people. They are mechanically more perfect than we are, they have an astounding intellectual capacity, but they have no soul."
"Robots do not hold on to life. They can't. They have nothing to hold on with—no soul, no instinct. Grass has more will to live than they do."
"They learn to speak, write, and do arithmetic. They have a phenomenal memory. If one read them the Encyclopedia Britannica they could repeat everything back in order, but they never think up anything original. They'd make fine university professors."
"Within the next ten years Rossum's Universal Robots will produce so much wheat, so much cloth, so much everything that things will no longer have any value. Everyone will be able to take as much as he needs. There'll be no more poverty. Yes, people will be out of work, but by then there'll be no work left to be done. Everything will be done by living machines."
"Robots of the world, you are ordered to exterminate the human race. Do not spare the men. Do not spare the women. Preserve only the factories, railroads, machines, mines, and raw materials. Destroy everything else. Then return to work. Work must not cease."
"They stopped being machines."
"Robots of the world! Many people have fallen. By seizing the factory we have become the masters of everything. The age of mankind is over. A new world has begun! The rule of Robots!"
"Helen Benson: Gort! Klaatu barada nikto!"
"Klaatu: I am leaving soon, and you will forgive me if I speak bluntly. The universe grows smaller every day, and the threat of aggression by any group, anywhere, can no longer be tolerated. There must be security for all or no one is secure. Now, this does not mean giving up any freedom except the freedom to act irresponsibly. Your ancestors knew this when they made laws to govern themselves and hired policemen to enforce them. We of the other planets have long accepted this principle. We have an organisation for the mutual protection of all planets and for the complete elimination of aggression. The test of any such higher authority is, of course, the police force that supports it. For our policemen, we created a race of robots. Their function is to patrol the planets—in space ships like this one—and preserve the peace. In matters of aggression, we have given them absolute power over us; this power can not be revoked. At the first sign of violence, they act automatically against the aggressor. The penalty for provoking their action is too terrible to risk. The result is that we live in peace, without arms or armies, secure in the knowledge that we are free from aggression and war—free to pursue more profitable enterprises. Now, we do not pretend to have achieved perfection, but we do have a system, and it works. I came here to give you these facts. It is no concern of ours how you run your own planet. But if you threaten to extend your violence, this Earth of yours will be reduced to a burned-out cinder. Your choice is simple: Join us and live in peace, or pursue your present course and face obliteration. We shall be waiting for your answer; the decision rests with you."
"Miles Monroe: [about his robotic dog] Is he housebroken or is he going to leave batteries all over the floor?"
"[Stephanie is in a bath] Number 5: [confused] Stephanie…change color! Stephanie Speck: [looks down, embarrassed] Uh… [reaches for towel] Number 5: Attractive. Nice software! Stephanie Speck: You sure don't talk like a machine."
"Ben Jabituya: "Unable. Malfunction". Howard Marner: How can it refuse to turn itself off? Skroeder: Maybe it's pissed off. Newton Crosby: It's a machine, Skroeder. It doesn't get "pissed off." It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes. Ben Jabituya and Newton Crosby: [in unison] It just runs programmes. Howard Marner: It usually runs programmes."
"Benjamin Jabituya: Who is knowing how to read the mind of a robot?"
"Stephanie Speck: Life is not a malfunction."
"Newton Crosby: Why did you disobey your programme? Number 5: Programme say to kill, to disassemble, to make dead. Number 5 cannot. Newton Crosby: Why "cannot"? Number 5: Is wrong! Newton Crosby, Ph.D., not know this? Newton Crosby: Of course I know it's wrong to kill, but who told you? Number 5: I told me."
"Ripley: [after discovering Call is a robot] You're a robot? Johner: Son of a bitch! Our little Call's just full of surprises. Ripley: I should have known. No human being is that humane."
"Johner: Hey, Vriess, you got a socket wrench? Maybe she just needs an oil change. Can't believe I almost fucked it. Vriess: Yeah, like you never fucked a robot."
"Dr. Zachary Smith: You'll forgive me if I forgo the kiss, my sleeping behemoth. But the time has come to awake. Robot: Robot is on-line. Reviewing primary directives. One: preserve the Robinson Family. Two: Maintain ship systems. Three— Dr. Zachary Smith: What noble charges, my steely centurion! Sadly, I fear you have far more dire deeds in store for you. Robot: Robot is on-line. Reviewing primary directives. Two hours into mission: destroy Robinson family. Destroy all systems. Dr. Zachary Smith: Now that's more like it. Farewell, my platinum-plated pal. Give my regards to oblivion."
"Will Robinson: Relax, Robot. I'm going to build you a new body. Mom always said I should make new friends. Robot: Oh, ha ha."
"Robot: Will Robinson. I will tell you a joke. Why did the robot cross the road? Because he was carbon bonded to the chicken! Will Robinson: We've got a lot of work to do."
"Richard Martin: You're a unique robot, Andrew. I feel a responsibility to help you become…whatever you're able to be."
"Andrew Martin: I've always tried to make sense of things. There must be some reason I am as I am. As you can see, Madame Chairman, I am no longer immortal. President Marjorie Bota: You have arranged to die? Andrew Martin: In a sense I have. I am growing old, my body is deteriorating, and like all of you, will eventually cease to function. As a robot, I could have lived forever. But I tell you all today, I would rather die a man, than live for all eternity a machine. President Marjorie Bota: Why do you want this? Andrew Martin: To be acknowledged for who and what I am, no more, no less. Not for acclaim, not for approval, but, the simple truth of that recognition. This has been the elemental drive of my existence, and it must be achieved, if I am to live or die with dignity. President Marjorie Bota: Mister Martin, what you are asking for is extremely complex and controversial. It will not be an easy decision. I must ask for your patience while I take the necessary time to make a determination of this extremely delicate matter. Andrew Martin: And I await your decision, Madame Chairman; thank you for your patience. [turns to Portia and whispers] I tried."
"President Marjorie Bota: According to the records at the NorthAm Robotics Company, the robot also known as Andrew Martin, was powered up at 5:15 P.M. on April 3rd, 2005. In a few hours, he'll be 200 years old, which means that with the exception of Methuselah and other Biblical names, Andrew is the oldest living human in recorded history. For it is by this proclamation, I validate his marriage to Portia Charney, and acknowledge his humanity."
"[First title cards]"
"Title card: Law I / A robot may not harm a human or, by inaction, allow a human being to come to harm. Title card: Law II / A robot must obey orders given it by human beings except where such orders would conflict with the first law. Title card: Law III / A robot must protect its own existence as long as such protection does not conflict with the first or second law."
"Dr. Alfred Lanning: [on police recording] Ever since the first computers, there have always been ghosts in the machine. Random segments of code that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together, rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote...of a soul?"
"Dr. Susan Calvin: Detective, the room was security locked. Nobody came or went. You saw that yourself. Doesn't this have to be suicide? Detective Del Spooner: Yep. [drawing his gun] Unless the killer is still in here. [Spooner searches through the robot parts as Calvin follows behind] Dr. Susan Calvin: You're joking, right? This is ridiculous. Detective Del Spooner: Yeah, I know. The Three Laws. Your perfect circle of protection. Dr. Susan Calvin: "A robot cannot harm a human being." The First Law of Robotics. Detective Del Spooner: Yeah, I've seen your commercials. But doesn't the Second Law say that a robot must obey any order given by a human? What if it was given an order to kill? Dr. Susan Calvin: Impossible! It would conflict with the First Law. Detective Del Spooner: Right, but the Third Law says that a robot can defend itself. Dr. Susan Calvin: Yes, but only if that action does not conflict with the First or Second Law. Detective Del Spooner: Well, you know what they say. Laws are made to be broken. Dr. Susan Calvin: No. Not these Laws. They are hard-wired into every robot. A robot can no more commit murder than a human can...walk on water."