First Quote Added
April 10, 2026
"Those who work in computer vision, those who write facial recognition software, know almost nothing about the technical developments that concern, say, automatic reasoning systems: they know almost nothing about that sub-area of artificial intelligence. Furthermore, still using the example of computer vision, another important aspect is the fact that, in the vast majority of cases, this approach no longer draws on what we know from studies regarding the vision of other biological entities, such as humans or even animals."
"In the community of artificial intelligence scholars, this type of approach, which brings together expertise from different fields to study the same phenomenon in both biological and artificial systems, is extremely rare. Today, hyper technicality clearly prevails."
"A future where human dignity, justice, peace, kindness, care, and rights and freedoms for all serve as the north stars that guide AI development and use is possible, but that future canât appear out of thin air and without intentional tireless work, continual dialogues, and most importantly, confronting ugly realities including big tech and government use of AI and infrastructure to power genocide and enable mass surveillance. The Declaration we produced and presented to Pope Leo XIV is an important move towards this future."
"And this applies not only to vision but to all areas perception, action, reasoning, and even natural language, as is evident today with ChatGPT and other generative models of this type. It's important to emphasize that the way these systems respond has nothing to do with the way we humans generate responses using our linguistic competence."
"let's start with the approach. I'd say first of all that my approach is the historical one, that of early artificial intelligence. Artificial intelligence, born at the 1956 Dartmouth conference, is a technical discipline that brings together philosophers, computer scientists, engineers, neuroscientists, and psychologists of the time."
"In the long term, the tables may turn on humans, and the problem may not be what we could do to harm AIs, but what AI might do to harm us."
"For instance, if AI cannot be conscious, then if you substituted a microchip for the parts of the brain responsible for consciousness, you would end your life as a conscious being"
"Anything she says will be ok with him -- this she feels instinctively. She looks up and meets his eyes: eagle against the sky, his eyes boring into her. He leans over and kisses her, first lightly, then his arms circles her waist and his hand grasps a shoulder blade, pulling her up and closer. Inside her a diamond, the glittering spot where her feelings have solidified into the hardest substance on earth, catches fire and melts"
"The development of AI is driven by market forces and the defense industryâbillions of dollars are now pouring into constructing smart household assistants, robot supersoldiers, and supercomputers that mimic the workings of the human brain."
"And if an AI is a conscious being, forcing it to serve us would be akin to slavery"
"According to a recent survey, for instance, the most-cited AI researchers expect AI to âcarry out most human professions at least as well as a typical humanâ within a 50 percent probability by 2050, and within a 90 percent probability by 2070.â"
"Nature is medicine for the soul, Miss Rolston"
"Kurzweil and other transhumanists contend that we are fast approaching a âtechnological singularity,â a point at which AI far surpasses human intelligence and is capable of solving problems we werenât able to solve before, with unpredictable consequences for civilization and human nature."
"Mentally tease apart the threads that keep you connected to your mother. See that those threads, those feelings, that you experience with her are what find the two of you -- but they do not have to weave the tapestry of your entire life."
"The mother needs mothering too"
"If superintelligent machines are not conscious, either because itâs impossible or because they arenât designed to be, we could be in trouble."
"For if we are not careful, we may experience one or more perverse realizations of AI technologyâsituations in which AI fails to make life easier but instead leads to our own suffering or demise, or to the exploitation of other conscious beings"
"Toward developing a more fine-grained and more comprehensive framework, I will adopt the more generic but less loaded terms of implicit and explicit processes (Reber, 1989, Sun, 2002) and present a more nuanced view of these processes, centered on a computational âcognitive architectureâ."
"It is assumed in this work that cognitive processes are carried out in two distinct âlevelsâ with qualitatively different mechanisms. Each level encodes a fairly complete set of knowledge for its processing, and the coverage of the two sets of knowledge encoded by the two levels overlaps substantially."
"It thus seems necessary that we come up with more nuanced and more detailed characterizations of the two systems (the two types of processes) in order to avoid painting the picture in too broad strokes."
"The distinction between âintuitiveâ and âreflectiveâ thinking has been, arguably, one of the most important distinctions in cognitive science. There are currently many dual-process theories (two-system views) out there."
"It was my privilege to be among those who participated in this event in the 'coming of age' of cybernetics."
"I started writing about it right away in my Substack and I said immediately that itâs going to excite people but it has limits."
"Iâve never been fully optimistic about it. I donât think weâve made as much progress as Iâd hoped that we would. I would have guessed that 40 years after I was looking at this, that a lot of A.I. would have been solved problems."
"âWe donât really believe that computers can have emotions, but we see that emotions have a certain function in human practical reasoning,â says Mehdi Dastani, an artificial-intelligence researcher at Utrecht University, in the Netherlands. By bestowing intelligent agents with similar emotions, researchers hope that robots can then emulate this human-like reasoning."
"Dastaniâs emotional functions have been derived from a psychological model known as the OCC model, devised in 1988 by a trio of psychologists: Andrew Ortony and Allan Collins, of Northwestern University, and Gerald Clore, of the University of Virginia. âDifferent psychologists have come up with different sets of emotions,â says Dastani. But his group decided to use this particular model because it specified emotions in terms of objects, actions, and events.â"
"We have a lot of tools now that are very useful. But I think the bigger questions of, like, how do you represent and acquire knowledge? You still donât have really good answers to those. So Iâve never reached a point of being really satisfied. I always want it to go better."
"I didnât know too much about, but they thought itâd be fun to have a human interest story. They nominated me to do the explanation, you know, cute kid explaining what a computer is. And I saw it on TV that night. Thereâs no archival footage that we can find, but I did actually see it on my fatherâs little black and white TV later that night."
"Itâs a very specific moment. I learned how to program on basically a simulation of a computer that was on paper. It was part of my after-school program for gifted kids, and that night I explained to the media how it worked. So, my media career and my A.I. career began that day."
"Scientists in the Netherlands are endowing a robotic cat with a set of logical rules for emotions. They believe that by introducing emotional variables to the decision-making process, they should be able to create more-natural human and computer interactions."
"In individual emotional development the precursor of the mirror is the mother's face."
"When girls and boys in their secondary narcissism look in order to see beauty and to fall in love, there is already evidence that doubt has crept in about their mother's continued love and care. So the man who falls in love with beauty is quite different from the man who loves a girl and feels she is beautiful and can see what is beautiful about her."
"Biologists, or rather botanists and zoologists, studied flora and fauna in exhaustive detail, in niches, in situ, penetrating the mysteries of their local habitations, measuring them, counting them, tracking cycles, writing all this down in the equivalent of field guides, and developing the ability to predict many natural phenomena, including phenomena of change: if frost falls, the bud is harmed; if the soil is enriched, growth improves, and so on. The world of life forms was a text whose meaning the biologist interpreted. But these interpretations did not explain and were not meant to explain the biological processes according to which these species could exist in the first place, or descend, or develop, or differ. To explain these more basic issues required the theory of evolution, which, once it was available, became an indispensable instrument in the professional study of local, narrowly coordinated, in situ life forms and the niches they inhabit."
"I think political systems will use it to terrorize people."
"If you or I learn something and want to transfer that knowledge to someone else, we canât just send them a copy. But I can have 10,000 neural networks, each having their own experiences, and any of them can share what they learn instantly. Thatâs a huge difference. Itâs as if there were 10,000 of us, and as soon as one person learns something, all of us know it. Itâs a completely different form of intelligence. A new and better form of intelligence."
"The reason hidden units in neural nets are called hidden units is that Peter Brown told me about hidden Markov models. I decided "hidden" was a good name for those extra units, so that's where the name "hidden" comes from."
"I'm much more interested in how the brain does it. I'm only interested in applications just to prove that this is interesting stuff to keep the funding flowing. To do an application really well, you have to put your whole heart into it; you need to spend a year immersing yourself in what the application' s all about. I guess I've never really been prepared to do that."
"Then we got very excited because now there was this very simple local-learning rule. On paper it looked just great. I mean, you could take this great big network, and you could train up all the weights to do just the right thing, just with a simple local learning rule. It felt like we'd solved the problem . That must be how the brain works. I guess if it hadn't been for computer simulations, I'd still believe that, but the problem was the noise. It was just a very very slow learning rule. It got swamped by the noise because in the learning rule you take the difference between two noisy variables--two sampled correlations, both of which have sampling noise. The noise in the difference is terrible. I still think that's the nicest piece of theory I'll ever do. It worked out like a question in an exam where you put it all together and a beautiful answer pops out."
"I first of all explained to him why it wouldn't work, based on an argument in Rosenblatt's book, which showed that essentially it was an algorithm that couldn't break symmetry... The next argument I gave him was that it would get stuck in local minima... We programmed a backpropagation net, and we tried to get this fast relearning. It didn't give fast relearning, so I made one of these crazy inferences that people make--which was, that backpropagation is not very interesting... [One year of trying and failing to scale up Boltzmann machines later] "Well, maybe, why don't I just program up that old idea of Rumelhart's, and see how well that works on some of the problems we've been trying?"... We had all the arguments: It's assuming that neurons can send real numbers to each other; of course they can only send bits to each other ; you have to have stochastic binary neurons; these real-valued neurons are totally unrealistic. It's ridiculous." So they just refused to work on it, not even to write a program, so I had to do it myself."
"... as soon as I got backpropagation working, I realized--because of what we'd been doing with Boltzmann machines--that you could use autoencoders to do unsupervised learning. You just get the output layer to reproduce the input layer, and then you don't need a separate teaching signal. Then the hidden units are representing some code for the input."
"It seems very likely to a large number of people that we will get massive unemployment caused by Ai."
"I got Christianity at school and Stalinism at home. I think that was a very good preparation for being a scientist because I got used to the idea that at least half the people are completely wrong."
"We got quite a few applications, and one of these applications I couldn't decide if the guy was a total flake or not... He wrote a spiel about the machine code of the brain and how it was stochastic, and so the brain had this stochastic machine code. It looked like rubbish to me, but the guy obviously had some decent publications and was in a serious place, so I didn't know what to make of him... David Marr said, "Oh yes, I've met him." I said, "So what did you think of him?" David Marr said, 'Well, he was a bit weird, but he was definitely, smart." So I thought, OK, so we'll invite him. That guy was Terry Sejnowski, of course... the book was one of the first books to come out about neural networks for a long time. It was the beginning of the end of the drought... both Dave Rumelhart and Terry said that from their point of view, just getting all these people interested and in the same room was a real legitimizing breakthrough."
"In late 1985, I actually had a deal with Dave Rumelhart that I would write a short paper about backpropagation, which was his idea, and he would write a short paper about autoencoders, which was my idea. It was always better to have someone who didn't come up with the idea write the paper because he could say more dearly what was important. So I wrote the short paper about backpropagation, which was the Nature paper that came out in 1986, but Dave still hasn't written the short paper about autoencoders. I'm still waiting."
"He examines how propaganda operates subtly, how it undermines democracyâparticularly the ideals of democratic deliberation and equalityâand how it has damaged democracies of the past. ⌠Stanley provides a historically grounded introduction to democratic political theory as a window into the misuse of democratic vocabulary for propaganda's selfish purposes. He lays out historical examples, such as the restructuring of the US public school system at the turn of the twentieth century, to explore how the language of democracy is ⌠used to mask an undemocratic reality."
"My father's vision of civic compassion was premised on rejecting the language of unity, as too "contaminated by propagandistic usage." If anything, his solution was the oppositeâto engage respectfully, to imaginatively stand in the places of others, to inhabit worlds that initially seem strange and even threatening, to acknowledge one's inability to be as wise, as generous, or as open as pluralistic democracy requires. To resist the slide into cruelty is perhaps the most important educational goal of a people."
"Democracy is an ideal. It is an ideal in which every citizen has political equality rooted in the recognition of all peopleâs full humanity. And realizing the ideal of political equality is impossible without an understanding of who has been denied it and why."
"When fascists adopt classical education, ⌠they rely on the flattened, instrumentalized version of it, which does nothing to challenge the practice of viewing people solely in terms of their productive capacityâin the case of women, the capacity to produce children, and in the case of men, the capacity for labor. The Nazis, for example, who praised the virtues of an education in Western civilization, were also responsible for the mass extermination of disabled people. According to the distinguished historian of Nazism Nikolaus Wachsmann, the main criterion in selecting which disabled people should be gassed was "the patients' ability to work: anyone regarded as unproductive would be killed.""
"According to the civilization savagism paradigm, the "uncivilized" are not fully human, and may reasonably be reduced to their capacity for labor. Because of this, they are worthy only of what we might call industrial education, a dehumanizing form of education focused entirely on technical training, which ascribes no value at all to knowledge."
"All education presupposes values, even substantive moral and political ones. The idea that it should not presuppose perspectives, even value-laden ones, involves a false conception of objectivity, and a tendentious and in fact ultimately incoherent distinction between facts and values. All inquiry must make presuppositions, and these presuppositions form an intertangled web of fact and value. The demand for neutral inquiry is philosophically incoherent. No wonder that such demands invariably, and hypocritically, mask political agendas."