A Clarification

I made a point in this post inelegantly in a way that was easy to misunderstand, so I’d like to clarify it.

I didn’t mean that we need to tolerate brilliant homophobic jerks in the lab so that we can have scientific progress.  Although there are famous counterexamples, most of the best scientists I’ve met are unusually nice, open-minded people.  Generally I expect that labs that don’t tolerate jerks will produce more impressive results than the ones that do, and choosing not to employ jerks is a good idea—jerks usually reduce the net output of organizations.

What I meant is simply that we need, as a society, to tolerate controversial ideas.  The biggest new scientific ideas, and the most important changes to society, both start as extremely unpopular ideas.

It was literally heretical, not so long ago, to say that it was ok to be gay—the Bible has a different viewpoint.  In a society where we don’t allow challenges to the orthodoxy, gay rights would not have happened.

We need to allow free speech because sometimes society is wrong—we needed people to be able to say “gay people are ok” at a time when “gay people are evil” was the consensus opinion.

It’s probably impossible to design a simple set of rules that will always allow the right speech and not the wrong speech (although I am sure that in this particular case, it is wrong that gay people in some places still fear for their safety).

So we agree as a society that people are allowed to say controversial things, and that free speech goes both ways.  Much of the time people use that privilege to be jerks, and we can, should, and do point out why their bigotry is bad.  Sometimes they use it to say that people deserve more rights, or that the solar system works in a different way from what the church says—and sometimes we collectively listen. 

Over time, this system produces a more and more just world, which says something really good about people as a whole.

I wish we could figure out a way to just never allow hate, discrimination, and bigotry and always allow debate on controversial but important ideas.  If that were possible, I’d support it.  The distinction is usually clear, but the exceptions are sometimes critically important.  Figuring out exactly where to draw the line is really hard.

Generations before us believed a lot of things we now believe (correctly, in my opinion) to be unethical or wrong.  Future generations will think a lot of things we believe today are unethical or wrong. 

For example, today it is pretty unpopular to say “anyone who eats meat is unethical”.  But this is easily a stance I could imagine being commonplace in 50 years, because of evolving views on animal rights, impact on the planet, and availability of lab-grown replacements.  Perhaps even the arrival of AI makes us think differently about being ok eating other beings just because they’re much less smart/emotionally sophisticated than we are.

The last time I tried to discuss this with someone, he said something like: “Banning eating meat would be infringing on my rights, this is not up for discussion.” 

I expect the fact that we let people live in poverty is also something that future generations will consider an absolute moral failing.  I could go on with a long list of other ideas, and I’m sure I can’t even think of some of the most important ones.

The point I most wanted to make is that it’s dangerous to just ban discussion of topics we find offensive, like what happened yesterday.


E Pur Si Muove

Earlier this year, I noticed something in China that really surprised me.  I realized I felt more comfortable discussing controversial ideas in Beijing than in San Francisco.  I didn’t feel completely comfortable—this was China, after all—just more comfortable than at home.

That showed me just how bad things have become, and how much things have changed since I first got started here in 2005.

It seems easier to accidentally speak heresies in San Francisco every year.  Debating a controversial idea, even if you 95% agree with the consensus side, seems ill-advised.

This will be very bad for startups in the Bay Area.

Restricting speech leads to restricted ideas and therefore restricted innovation—the most successful societies have generally been the most open ones.  Usually mainstream ideas are right and heterodox ideas are wrong, but the true and unpopular ideas are what drive the world forward.  Also, smart people tend to have an allergic reaction to the restriction of ideas, and I’m now seeing many of the smartest people I know move elsewhere.

It is bad for all of us when people can’t say that the world is a sphere, that evolution is real, or that the sun is at the center of the solar system.

More recently, I’ve seen credible people working on ideas like pharmaceuticals for intelligence augmentation, genetic engineering, and radical life extension leave San Francisco because they found the reaction to their work to be so toxic.  “If people live a lot longer it will be disastrous for the environment, so people working on this must be really unethical” was a memorable quote I heard this year.

To get the really good ideas, we need to tolerate really bad and wacky ideas too.  In addition to the work Newton is best known for, he also studied alchemy (the British authorities banned work on this because they feared the devaluation of gold) and considered himself to be someone specially chosen by the Almighty for the task of decoding Biblical scripture.

You can’t tell which seemingly wacky ideas are going to turn out to be right, and nearly all ideas that turn out to be great breakthroughs start out sounding like terrible ideas.  So if you want a culture that innovates, you can’t have a culture where you allow the concept of heresy—if you allow the concept at all, it tends to spread.  When we move from strenuous debate about ideas to casting the people behind the ideas as heretics, we gradually stop debate on all controversial ideas.

This is uncomfortable, but it’s possible we have to allow people to say disparaging things about gay people if we want them to be able to say novel things about physics. [1] Of course we can and should say that ideas are mistaken, but we can’t just call the person a heretic.  We need to debate the actual idea. 

Political correctness often comes from a good place—I think we should all be willing to make accommodations to treat others well.  But too often it ends up being used as a club for something orthogonal to protecting actual victims.  The best ideas are barely possible to express at all, and if you’re constantly thinking about how everything you say might be misinterpreted, you won’t let the best ideas get past the fragment stage.

I don’t know who Satoshi is, but I’m skeptical that he, she, or they would have been able to come up with the idea for bitcoin immersed in the current culture of San Francisco—it would have seemed too crazy and too dangerous, with too many ways to go wrong.  If SpaceX started in San Francisco in 2017, I assume they would have been attacked for focusing on problems of the 1%, or for doing something the government had already decided was too hard.  I can picture Galileo looking up at the sky and whispering “E pur si muove” here today.


Followup: A Clarification



[1] I am less worried that letting some people on the internet say things like “gay people are evil” is going to convince reasonable people that such a statement is true than I fear losing the opposite—we needed people to be free to say “gay people are ok” to make the progress we’ve made, even though it was not a generally acceptable thought several decades ago.

In fact, the only ideas I’m afraid of letting people say are the ones that I think may be true and that I don’t like.  But I accept that censorship is not going to make the world be the way I wish it were.


The Merge

A popular topic in Silicon Valley is talking about what year humans and machines will merge (or, if not, what year humans will get surpassed by rapidly improving AI or a genetically enhanced species). Most guesses seem to be between 2025 and 2075.

People used to call this the singularity; now it feels uncomfortable and real enough that many seem to avoid naming it at all.

Perhaps another reason people stopped using the word “singularity” is that it implies a single moment in time, and it now looks like the merge is going to be a gradual process. And gradual processes are hard to notice.

I believe the merge has already started, and we are a few years in. Our phones control us and tell us what to do when; social media feeds determine how we feel; search engines decide what we think.

The algorithms that make all this happen are no longer understood by any one person. They optimize for what their creators tell them to optimize for, but in ways that no human could figure out — they are what today seems like sophisticated AI, and tomorrow will seem like child’s play. And they’re extremely effective — at least speaking for myself, I have a very hard time resisting what the algorithms want me to do. Until I made a real effort to combat it, I found myself getting extremely addicted to the internet. [1]

We are already in the phase of co-evolution — the AIs affect, effect, and infect us, and then we improve the AI. We build more computing power and run the AI on it, and it figures out how to build even better chips.

This probably cannot be stopped. As we have learned, scientific advancement eventually happens if the laws of physics do not prevent it.

More important than that, unless we destroy ourselves first, superhuman AI is going to happen, genetic enhancement is going to happen, and brain-machine interfaces are going to happen. It is a failure of human imagination and human arrogance to assume that we will never build things smarter than ourselves.

Our self-worth is so tied to our intelligence that we believe it must be singular and not just slightly higher than all the other animals’ on a continuum.  Perhaps the AI will feel the same way and note that the differences between us and bonobos are barely worth discussing.

The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.

Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.

It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate—the most surprising thing I’ve learned working on OpenAI is just how correlated increasing computing power and AI breakthroughs are—and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.
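The claim above—that improving hardware and a growing number of researchers compound on each other—can be made concrete with a toy comparison.  A minimal sketch (the growth rates and units here are illustrative assumptions, not figures from the post) of why double exponential growth “gets away from you fast”:

```python
# Illustrative sketch: plain exponential growth vs. double exponential
# growth. Units and bases are arbitrary assumptions for illustration.

def exponential(t, base=2):
    """Plain exponential: the quantity doubles every period."""
    return base ** t

def double_exponential(t, base=2):
    """Double exponential: the exponent itself grows exponentially,
    as when two compounding trends multiply together."""
    return base ** (base ** t)

# After just 5 periods the two regimes are wildly different:
for t in range(6):
    print(t, exponential(t), double_exponential(t))
```

By period 5, the plain exponential has reached 32 while the double exponential has reached 2^32 (over four billion)—which is the sense in which such curves stay deceptively flat and then suddenly dominate.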

It would be good for the entire world to start taking this a lot more seriously now. Worldwide coordination doesn’t happen quickly, and we need it for this.



[1] I believe attention hacking is going to be the sugar epidemic of this generation. I can feel the changes in my own life — I can still wistfully remember when I had an attention span. My friends’ young children don’t even know that’s something they should miss. I am angry and unhappy more often, but I channel it into productive change less often, instead chasing the dual dopamine hits of likes and outrage.

(Cross-posted from https://medium.com/wordsthatmatter/merge-now-430c6d89d1fe to here for consistency; thanks to Medium for inviting me to write this!)


American Equity

I’d like feedback on the following idea.

I think that every adult US citizen should get an annual share of the US GDP.

I believe that owning something like a share in America would align all of us in making the country as successful as possible—the better the country does, the better everyone does—and give more people a fair shot at achieving the life they want.  And we all work together to create the system that generates so much prosperity.

I believe that a new social contract like what I’m suggesting here—where we agree to a floor and no ceiling—would lead to a huge increase in US prosperity and keep us in the global lead.  Countries that concentrate wealth in a small number of families do worse over the long term—if we don’t take a radical step toward a fair, inclusive system, we will not be the leading country in the world for much longer.  This would harm all Americans more than most realize.

There are historical examples of countries giving out land to citizens (such as the Homestead Acts in the US) as a way to distribute the resources people needed to succeed.  Today, the fundamental input to wealth generation isn’t farmland, but money and ideas—you really do need money to make money.

American Equity would also cushion the transition from the jobs of today to the jobs of tomorrow.  Automation holds the promise of creating more abundance than we ever dreamed possible, but it’s going to significantly change how we think about work.  If everyone benefits more directly from economic growth, then it will be easier to move faster toward this better world.

The default case for automation is to concentrate wealth (and therefore power) in a tiny number of hands.  America has repeatedly found ways to challenge this sort of concentration, and we need to do so again.

The joint-stock company was one of the most important inventions in human history.  It allowed us to align a lot of people in pursuit of a common goal and accomplish things no individual could.  Obviously, the US is not a company, but I think a similar model can work for the US as well as it does for companies.

A proposal like this obviously requires a lot of new funding [1] to do at large scale, but I think we could start very small—a few hundred dollars per citizen per year—and ramp it up to a long-term target of 10-20% of GDP per year when the GDP per capita doubles.
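To give a rough sense of the scale involved, here is a back-of-the-envelope sketch.  The inputs are my own assumptions, not figures from the proposal: a US GDP of roughly $19.5 trillion and roughly 250 million adult citizens (approximately right for 2017).

```python
# Back-of-the-envelope arithmetic for the "share of GDP" idea.
# Both constants below are assumed round numbers, not official data.

US_GDP = 19.5e12          # assumed: ~$19.5 trillion per year
ADULT_CITIZENS = 250e6    # assumed: ~250 million adults

def annual_dividend(gdp_share):
    """Per-adult payout if gdp_share of GDP is distributed equally."""
    return US_GDP * gdp_share / ADULT_CITIZENS

print(annual_dividend(0.005))  # starting small: ~0.5% of GDP
print(annual_dividend(0.10))   # low end of the long-term target
print(annual_dividend(0.20))   # high end of the long-term target
```

Under these assumptions, 0.5% of GDP works out to about $390 per adult per year—"a few hundred dollars"—while the 10–20% target range would be roughly $7,800 to $15,600 per adult per year.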

I have no delusions about the challenges of such a program.  There would be difficult consequences for things like immigration policy that will need a lot of discussion.  We’d also need to figure out rules about transferability and borrowing against this equity.  And we’d need to set it up in a way that does not exacerbate short-term thinking or favor unsustainable growth.

However, as the economy grows, we could imagine a world in which every American would have their basic needs guaranteed.  Absolute poverty would be eliminated, and we would no longer motivate people through the fear of not being able to eat.  In addition to being the obviously right thing to do, eliminating poverty will increase productivity.

American Equity would create a society that I believe would work much better than what we have today.  It would free Americans to work on what they really care about, improve social cohesion, and incentivize everyone to think about ways to grow the whole pie.



[1] It’s time to update our tax system for the way wealth works in the modern world—for example, taxing capital and labor at the same rates.  And we should consider eventually replacing some of our current aid programs, which distort incentives and are needlessly complicated and inefficient, with something like this.

Of course this won’t solve all our problems—we still need serious reform in areas such as housing, education, and healthcare.  Without policies that address the cost of living crisis, any sort of redistribution will be far less effective than it otherwise could be.



The United Slate

I would like to find and support a slate of candidates for the 2018 California elections, and also to find someone to run a ballot initiative focused on affordable housing in the state.  A team of aligned people has a chance to make a real change.

I believe in creating prosperity through technology, economic fairness, and maintaining personal liberty.

We are in the middle of a massive technological shift—the automation revolution will be as big as the agricultural revolution or the industrial revolution.  We need to figure out a new social contract, and to ensure that everyone benefits from the coming changes.

Today, we have massive wealth inequality, little economic growth, a system that works for people born lucky, and a cost of living that is spiraling out of control.  What we've been trying for the past few decades hasn't been working—I think it's time to consider some new ideas.

More information about the principles and policies I believe in is at the link below.

http://unitedslate.samaltman.com


Join the YC Software Team

If you want to get funded by YC as a founder in the future, but you don't have a startup that's ready for that yet, joining the YC software team is a great hack to get there.

The YC software team is a small group of hackers in SF that write the software that makes all the parts of YC work.

As a member of the software team, you'll get full access to the YC program, just like founders do.  You'll learn the ins and outs of how YC works, and you'll get to follow and learn from hundreds of companies.  You'll meet the best people in the startup world and get exposed to the best startup ideas.

Software is how we can scale YC, and the limits of that are probably further out than most people think.

You can apply here: http://bit.ly/1Od0T2l.