April 10, 2026
"Defaults are not neutral. They often reflect the coded gaze—the preferences of those who have the power to choose what subjects to focus on"
"How will people build professional callouses if the early work that may be viewed as mundane essentials are taken over by AI systems?"
"If you have a face, you have a place in the conversation about AI."
"AI will not solve poverty, because the conditions that lead to societies that pursue profit over people are not technical. AI will not solve discrimination, because the cultural patterns that say one group of people is better than another because of their gender, their skin color, the way they speak, their height, or their wealth are not technical. AI will not solve climate change, because the political and economic choices that exploit the earth’s resources are not technical matters. As tempting as it may be, we cannot use AI to sidestep the hard work of organizing society so that where you are born, the resources of your community, and the labels placed upon you are not the primary determinants of your destiny. We cannot use AI to sidestep conversations about patriarchy, white supremacy, ableism, or who holds power and who doesn’t."
"I'm humbled and honoured to be named one of Canada’s Top 100 Most Powerful Women by the Women's Executive Network @WXN"
"I'm a winner in the Science and Tech category, recognizing my contributions and commitment to tech advancement as well as diversity and inclusion in #STEM."
"Maybe already the next generation [of tools] that is coming in 2024 could be very dangerous. Governments need to start preparing for this."
"I’ve always been inspired and motivated by the idea. It wasn’t called AGI back then, but you know, like, having a neural network do everything. I didn’t always believe that they could. But it was the mountain to climb."
"The thing you really want is for the human teachers that teach the AI to collaborate with an AI. You might want to think of it as being in a world where the human teachers do 1% of the work and the AI does 99% of the work. You don't want it to be 100% AI. But you do want it to be a human-machine collaboration, which teaches the next machine."
"In a nutshell, I had the realization that if you train, a large neural network on a large and a deep neural network on a big enough dataset that specifies some complicated task that people do, such as vision, then you will succeed necessarily. And the logic for it was irreducible; we know that the human brain can solve these tasks and can solve them quickly. And the human brain is just a neural network with slow neurons. So, then we just need to take a smaller but related neural network and train it on the data. And the best neural network inside the computer will be related to the neural network that we have in our brains that performs this task."
"I lead a very simple life. I go to work; then I go home. I don’t do much else. There are a lot of social activities one could engage in, lots of events one could go to. Which I don’t."
"It may be that today's large neural networks are slightly conscious."
"... the safeguards he wants to design: a machine that looks upon people the way parents look on their children. “In my opinion, this is the gold standard,” he says. “It is a generally true statement that people really care about children.” (Does he have children? “No, but I want to,” he says.)"
"There was a period of time when we were starting OpenAI when I wasn’t exactly sure how the progress would continue. But I had one very explicit belief, which is: one doesn’t bet against deep learning. Somehow, every time you run into an obstacle, within six months or a year researchers find a way around it."
"If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy. But I can have 10,000 neural networks, each having their own experiences, and any of them can share what they learn instantly. That’s a huge difference. It’s as if there were 10,000 of us, and as soon as one person learns something, all of us know it. It’s a completely different form of intelligence. A new and better form of intelligence."
"In late 1985, I actually had a deal with Dave Rumelhart that I would write a short paper about backpropagation, which was his idea, and he would write a short paper about autoencoders, which was my idea. It was always better to have someone who didn't come up with the idea write the paper because he could say more dearly what was important. So I wrote the short paper about backpropagation, which was the Nature paper that came out in 1986, but Dave still hasn't written the short paper about autoencoders. I'm still waiting."
"... as soon as I got backpropagation working, I realized--because of what we'd been doing with Boltzmann machines--that you could use autoencoders to do unsupervised learning. You just get the output layer to reproduce the input layer, and then you don't need a separate teaching signal. Then the hidden units are representing some code for the input."
"I'm much more interested in how the brain does it. I'm only interested in applications just to prove that this is interesting stuff to keep the funding flowing. To do an application really well, you have to put your whole heart into it; you need to spend a year immersing yourself in what the application' s all about. I guess I've never really been prepared to do that."
"The reason hidden units in neural nets are called hidden units is that Peter Brown told me about hidden Markov models. I decided "hidden" was a good name for those extra units, so that's where the name "hidden" comes from."
"I first of all explained to him why it wouldn't work, based on an argument in Rosenblatt's book, which showed that essentially it was an algorithm that couldn't break symmetry... The next argument I gave him was that it would get stuck in local minima... We programmed a backpropagation net, and we tried to get this fast relearning. It didn't give fast relearning, so I made one of these crazy inferences that people make--which was, that backpropagation is not very interesting... [One year of trying and failing to scale up Boltzmann machines later] "Well, maybe, why don't I just program up that old idea of Rumelhart's, and see how well that works on some of the problems we've been trying?"... We had all the arguments: It's assuming that neurons can send real numbers to each other; of course they can only send bits to each other ; you have to have stochastic binary neurons; these real-valued neurons are totally unrealistic. It's ridiculous." So they just refused to work on it, not even to write a program, so I had to do it myself."
"Then we got very excited because now there was this very simple local-learning rule. On paper it looked just great. I mean, you could take this great big network, and you could train up all the weights to do just the right thing, just with a simple local learning rule. It felt like we'd solved the problem . That must be how the brain works. I guess if it hadn't been for computer simulations, I'd still believe that, but the problem was the noise. It was just a very very slow learning rule. It got swamped by the noise because in the learning rule you take the difference between two noisy variables--two sampled correlations, both of which have sampling noise. The noise in the difference is terrible. I still think that's the nicest piece of theory I'll ever do. It worked out like a question in an exam where you put it all together and a beautiful answer pops out."
"We got quite a few applications, and one of these applications I couldn't decide if the guy was a total flake or not... He wrote a spiel about the machine code of the brain and how it was stochastic, and so the brain had this stochastic machine code. It looked like rubbish to me, but the guy obviously had some decent publications and was in a serious place, so I didn't know what to make of him... David Marr said, "Oh yes, I've met him." I said, "So what did you think of him?" David Marr said, 'Well, he was a bit weird, but he was definitely, smart." So I thought, OK, so we'll invite him. That guy was Terry Sejnowski, of course... the book was one of the first books to come out about neural networks for a long time. It was the beginning of the end of the drought... both Dave Rumelhart and Terry said that from their point of view, just getting all these people interested and in the same room was a real legitimizing breakthrough."
"I got Christianity at school and Stalinism at home. I think that was a very good preparation for being a scientist because I got used to the idea that at least half the people are completely wrong."
"It seems very likely to a large number of people that we will get massive unemployment caused by Ai."
"I think political systems will use it to terrorize people."
"The term knowledge raises philosophical eyebrows (strictly speaking, it should be called belief)."
"No one, to my knowledge, has suggested that the image must accelerate and decelerate or that the relation among torque, angular momentum, and angular velocity has a,1 analogue in the mental rotation case. Of course it may tum our that it takes subjects longer to rotate an object that they imagine to be heavier, thus increasing the predictive value of the metaphor. But in that case it seems clearer that, even if it was predictive, the metaphor could nor be explanatory (surely, no one believes that some images are heavier than others and the heavier ones accelerate more slowly)."
"What people report is not properties of their image but of the objects they are imagining. Such properties as color, shape, size and so on are clearly properties of the objects that are being imagined. This distinction is crucial. The seemingly innocent scope slip that takes image of object X with property Pro mean (image of object X) with property P instead of the correct image of (object X with property P) is probably the most ubiquitous and damaging conclusion in the whole imagery literature."
"When taken as a way of modeling cognitive architecture, really does represent an approach that is quite different from that of the classical cognitive science that it seeks to replace. Classical models of the mind were derived from the structure of Turing and Von Neumann machines. They are not, of course, committed to the details of these machines as exemplified in Turing's original formulation or in typical commercial computers—only to the basic idea that the kind of computing that is relevant to understanding cognition involves operations on symbols.. In contrast, connectionists propose to design systems that can exhibit intelligent behavior without storing, retrieving, or otherwise operating on structured symbolic expressions. The style of processing carried out in such models is thus strikingly unlike what goes on when conventional machines are computing some function."
"[A computer program for Task A qua an explanatory model and how a human cognizer actually carries out Task A are equivalent in the strong sense when it can be shown that]... the model and the organism are carrying out the same process."
"[If] we equip the programmed computer with transducers so it can interact freely with a natural environment and a linguistic one, as well as the power to make inferences, it is far from obvious what if any latitude the theorist (who knows how the transducers operate and therefore what they respond to) would still have in assigning a coherent interpretation to the functional states so as to capture psychologically relevant regularities in behavior. If the answer is that the theorist is left with no latitude beyond the usual inductive indeterminism all theories have in the face of finite data, it would be perverse to deny that these states had the semantic content assigned to them by the theory."
"Rather than a series of levels, we have a distinguished level, , the level at which interpretation of the symbols is in the intentional, or cognitive, domain or in the domain of the objects of thought."
"Pylyshyn complains that Kosslyn's model is "more like an encyclopedia than a theory" [Pylyshyn, 1984, p. 254]. Because it is essentially ad hoc, the fact that it "predicts" the empirical evidence is hardly surprising."
"Without a specification of a creature's goals, the very idea of intelligence is meaningless. A toadstool could be given a genius award for accomplishing with pinpoint precision and unerring reliability, the feat of sitting exactly where it is sitting. Nothing would prevent us from agreeing with the cognitive scientist Zenon Pylyshyn that rocks are smarter than cats because rocks have the sense to go away when you kick them."
"During the late 1970s and early 1980s there was vigorous debate about the nature of visual mental imagery. One position (championed primarily by Pylyshyn 1973, 1981) held that representations that underlie the experience of mental imagery are the same type as those used in language; the other position (which my colleagues and I supported, e.g., Kosslyn, 1980, 1994) held that these representations serve to depict, not describe, objects. The debate evolved over time... but always centred on the nature of the internal representations that underlie the experience of visualisation."
"The expression 'cognitive penetrability' is borrowed from a cognitive scientist, Zenon Pylyshyn. He distinguishes between our cognitively penetrable mental functions on the one hand and our functional architecture on the other."
"Some skeptics, such as the cognitive scientist Zenon Pylyshyn, argue that images are “epiphenomenal.”"
"Cognitive scientist Zenon Pylyshyn objected that Shepard's mentally rotated images and Kosslyn's mentally compared images had to be constructed from imageless propositions in the central nervous system—propositions containing all of the information necessary to identify the correct answer without constructing any imagery."
"As information systems play a more active role in the management and operations of an enterprise, the demands on these systems have also increased. Departing from their traditional role as simple repositories of data, information systems must now provide more sophisticated support to manual and automated decision making; they must not only answer queries with what is explicitly represented in their Enterprise Model, but must be able to answer queries with what is implied by the model. The goal of the TOVE Enterprise Modelling project is to create the next generation Enterprise Model, a Common Sense Enterprise Model. By common sense we mean that an Enterprise Model has the ability to deduce answers to queries that require relatively shallow knowledge of the domain."
"Various perspectives exist in an enterprise, such as efficiency, quality, and cost. Any system for enterprise engineering must be capable of representing and managing these different perspectives in a well-defined way."
"There is a paradigm shift towards a distributed and integrated enterprise. Currently, computer systems that support enterprise functions were created independently. This hampers Therefore, there is a need for a computer based data model which provides a shared and well defined terminology of an enterprise, and has the capability to deductively answer common sense questions. This paper discusses how TOVE tackles these needs by defining a framework for modeling generic level representations such as activities, time, and resources. Since there has never been a well-defined set of criteria to evaluate such models, this paper also introduces a set of evaluation criteria which may be used to evaluate modelling efforts."
"We consider an organization to be a set of constraints on the activities performed by agents. This view follows that of Weber, who views the process of bureaucratization as a shift from management based on self-interest and personalities to one based on rules and procedures. Mintzberg [1983] provides an early (and informal) analysis of organization structure distinguishing among five basic parts of an organization and five distinct organization configurations that are encountered in practice. This “ontology” includes several mechanisms that together achieve coordination, like goals, work processes, authority, positions and communication. The various parts of an organization are distinguished by the specific roles they play in achieving coordination with the above means. The “” (Winograd 1987) on cooperative work in organizations provides an ontology that emphasizes the social activity by which “agents” generate the space of cooperative actions in which they work, rather than the mental state of individuals. The basic idea is that social activity is carried out by language and communication."
"In , we want to define the actions performed within an enterprise, and define constraints for plans and schedules which are constructed to satisfy the goals of the enterprise. This leads to the following set of informal competency questions:"
"Overemphasis of efficiency leads to an unfortunate circularity in design: for reasons of efficiency early programming languages reflected the characteristics of the early computers, and each generation of computers reflects the needs of the programming languages of the preceding generation."
"The practice of first developing a clear and precise definition of a process without regard for efficiency, and then using it as a guide and a test in exploring equivalent processes possessing other characteristics, such as greater efficiency, is very common in mathematics. It is a very fruitful practice which should not be blighted by premature emphasis on efficiency in computer execution."
"It is important to distinguish the difficulty of describing and learning a piece of notation from the difficulty of mastering its implications. [...] Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for exploration."
"The properties of executability and universality associated with programming languages can be combined, in a single language, with the well-known properties of mathematical notation which make it such an effective tool of thought."
"The utility of a language as a tool of thought increases with the range of topics it can treat, but decreases with the amount of vocabulary and the complexity of grammatical rules which the user must keep in mind. Economy of notation is therefore important."
"If it is to be effective as a tool of thought, a notation must allow convenient expression not only of notions arising directly from a problem, but also of those arising in subsequent analysis, generalization, and specialization."
"Most programming languages are decidedly inferior to mathematical notation and are little used as tools of thought in ways that would be considered significant by, say, an applied mathematician."