First Quote Added
April 10, 2026
Latest Quote Added
"My central thesis about the world is there are things that centralize power and they're bad, and there are things that decentralize power and they're good. Everything I can do to help decentralize power I'd like to do."
"The fundamental limitation of cloud [computing] is who owns the off-switch."
"Just dumping the code on GitHub is not open source. Open source is a culture. Open source means that your issues are not all one year old stale issues. Open source means developing in public."
"When someone makes [a large language model] that is capable of citing its sources, it will kill Google. [...] Some startup is going to figure it out. I think, if you ask me, [...] I think by the end of the decade Google won't be the number one web page anymore."
"Sam Altman won't tell you that GPT-4 has 220 billion parameters and is a 16-weight mixture model with 8 sets of weights."
"Half of these AI alignment problems are just human alignment problems. And that's what's also so scary about the language that they use. It's not the machines you want to align; it's me."
"[About AI:] Oh, no! We can lose control? Yes! Thank God! I hope they lose control. I want them to lose control more than anything else. [...] Centralized and held control is tyranny. I don't like anarchy either, but I will always take anarchy over tyranny. Anarchy has a chance."
"Intelligence is so dangerous; be it human intelligence or machine intelligence. Intelligence is dangerous."
"[About AI:] You could make an argument that nobody should have these things, and I would defend that argument, [...] and I would respect someone philosophically with that position. Just like I would respect someone philosophically with the position that nobody should have guns. But I will not respect philosophically 'Only the trusted authorities should have access to this.' Who are the trusted authorities? You know what? I'm not worried about alignment between an AI company and their machines; I'm worried about alignment between me and the AI company."
"[About AI:] I am scared of these things too. Everyone should be scared of these things. These things are scary. But now you ask about the two possible futures. One where a small 'trusted' centralized group of people has them, and the other where everyone has them. And I am much less scared of the second future than the first."
"[If he has hope for cryptocurrencies:] Sure! I have hope for the ideas. I really do. I want the US dollar to collapse. I do."
"I am so much not worried about the machine independently doing harm. That is what some of these AI safety people seem to think. They somehow seem to think that the machine independently is going to rebel against its creator. [...] This is sci-fi B-movie garbage. [...] If the thing writes viruses, it's because the human told it to write viruses."
"[About AI:] We give it to everybody. And if you do anything besides give it to everybody, trust me, the bad humans will get it. Because that's who gets the power. It's always the bad humans who get power."
"If there's two great evils in the world, it's centralization and complexity."
"I took a political approach at Comma too that I think is pretty interesting. I think Elon [Musk] takes the same political approach. You know, Google had no politics, and what ended up happening is the absolute worst kind of politics took over."
"The incentive for politicians to move up in the political structure is to add laws."
"It struck me one day how just silly atheism is. Of course we were created by God. It's the most obvious thing."
"I'm hoping that games can get out of this whole mobile gaming dopamine pump thing [...] and create worlds."
"For the longest time at Comma I asked 'Why did I start a company? Why did I do this?' But, you know, what else was I going to do?"
"Utilitarianism is an abhorrent ideology. [...] I think charity is bad. What is charity but an investment that you don't expect to have a return on? [...] Probably almost always [making the world better] involves starting a company. [...] I like the flip side of effective altruism: effective accelerationism. I think accelerationism is the only thing that's ever lifted people out of poverty. The fact that food is cheap: it's not that we're giving food away because we are kindhearted people. [...] [Universal basic income], what a scary idea. [...] Your only source of power is granted to you by the goodwill of the government. What a scary idea. I'd rather die than need UBI to survive, and I mean it. [...] You can make survival guaranteed without UBI. What you have to do is make housing and food dirt cheap."
"I'm pretty centrist politically. If there is one political position I cannot stand, it's deceleration. It's people who believe we should use less energy. Not people who believe global warming is a problem; I agree with you. Not people who believe that saving the environment is good; I agree with you. But people who think we should use less energy. That energy usage is a moral bad. No. No. You are diminishing humanity. [Instead we should ask] How do we make more of it? How do we make it clean? How do I pay 20 cents for a megawatt-hour instead of a kilowatt-hour?"