America Needs More Techno-Optimism

In this fireside chat from the American Dynamism Summit, a16z Cofounder and General Partner Marc Andreessen sits down with economist, podcaster, and polymath Tyler Cowen to discuss the state of innovation in America, from recent AI advances to growing support for nuclear power. They explain why the future many people claim to want — a better economy, better quality of life, and a safer world — is only possible if America leads.

Here is a transcript of their conversation:

Tyler Cowen: Now, Marc. If your entrance music were to be Beethoven, which symphony and why?

Marc Andreessen: For those of you who do care and know who Tyler Cowen is, this is how his podcasts start. This is how he intimidates his guests into submission. First of all, I just want to thank everybody for being here with us today. We’re really grateful that you were all able to spend time with us, and hopefully it’s been useful. Second, I’m gonna get new business cards printed up that say, “You either know who I am, or you don’t care.”

Tyler: Or both.

Marc: Or both. I mean, how can you possibly…let’s see, I mean, I guess we have to rule out Beethoven’s 9th Symphony because that’s the official music of the European Union, is that right?

Tyler: That’s correct.

Marc: That’s the official anthem of the European Union.

Tyler: That’s right.

Marc: Which is just such a terrible mean thing for them to do to such a great piece of music like that. We should lodge a formal diplomatic protest. I guess, probably Beethoven’s 5th in retaliation.

Tyler: I would peg you as the Fifth.

Marc: Yes.

Tyler: Now, how will AI make our world different five years from now? What’s the most surprising way in which it will be different?

Marc: Yeah, so there’s a great kind of breakdown on adoption of new technology that the science fiction author, Douglas Adams, wrote about years ago. He says any new technology is received differently by three different groups of people. If you’re below the age of 15, it’s just the way things have always been. If you’re between the ages of 15 and 35, it’s really cool and you might be able to get a job doing it. If you’re above the age of 35, it’s unholy and against the order of society and will destroy everything. AI, I think, so far is living up to that framework.

What I would like to tell you is AI is gonna, you know, be completely transformative for education. I believe that it will. Having said that, I did recently roll out ChatGPT to my eight-year-old. And, you know, I was, like, very, very proud of myself because I was like, “Wow, this is just gonna be such a great educational resource for him.” And I felt like, you know, Prometheus bringing fire down from the mountain to my child. And I installed it on his laptop and said, you know, “Son, you know, this is the thing that you can talk to any time, and it will answer any question you have.” And he said, “Yeah.” I said, “No, this is, like, a big deal that answers questions.” He’s like, “Well, what else would you use a computer for?” And I was like, “Oh, God, I’m getting old.”

So, I actually think there’s a pretty good prospect that, like, kids are just gonna, like, pick this up and run with it. I actually think that’s already happening, right? ChatGPT is fully out, you know, and Bard and Bing and all these other things. And so, I think, you know, kids are gonna grow up with basically…you could use various terms: assistant, friend, coach, mentor, tutor. But kids are gonna grow up in sort of this amazing kind of back-and-forth relationship with AI. And any time a kid is interested in something, if there’s not a teacher who can help with it or they don’t have a friend who’s interested in the same thing, they’ll be able to explore all kinds of ideas. And so I think it will be great for that.

You know, I think it’s, obviously, gonna be totally transformative in fields like warfare, and you already see that. You know, the concern, quite honestly… I actually wrote an essay a while ago on sort of why AI won’t destroy all the jobs, and the short version of it is: because it’s illegal to do that, because so many jobs in the modern economy require licensing and are regulated. And so I think the concern would be that there’s just so much, sort of, glue in the system now that prevents change, and it’ll be very easy to sort of not have AI healthcare or AI education or whatever, because, literally, some combination of, like, doctor licensing, teacher unions, and so forth will basically outlaw it. And so I think that’s the risk.

Tyler: If we think of AI and its impact in sociological terms, large language models, who will gain in status and who will decline in status, and how should this affect how we think about policy?

Marc: Yeah, so first of all, it’s important to qualify exactly what’s going on with large language models, which is super interesting. This thing has happened that you read about a lot in the press, which is: there was this general idea that there would be something called AI at some point, and then large language models appeared, and everybody said, “Aha, that’s AI, just like we thought it would be.” And then everybody sort of extrapolates out. And that’s true to a certain extent.

The success of large language models was very unexpected in the field. And actually, the origin story of even ChatGPT is… this is not what OpenAI actually started out to do. They started out to do something different. And then there was actually one guy, his name is, I think, Alec Radford, who literally was off in the corner at OpenAI working on this in 2018, 2019. And then it basically became this revolution, building on work that had been done at Google.

So it was this very surprising thing. And then it’s important to qualify how it works, because it’s not just some sort of robot brain. What it is, is you basically feed, ideally, all known human-generated information into a machine, and then you let it build a giant matrix of numbers that basically correlates everything. In a nutshell, that’s what these things are. And then, when you ask it a question, or if you ask it to make a drawing or something, it basically traverses that. It essentially does a search across all of these words and sentences and diagrams and books and photos and everything that human beings have created, and it tries to find the optimal path through that. And that’s how it generates the answer that it gives you.

And so philosophically, it’s kind of this really profound thing, I think, which is basically staring… It’s like you as an individual using this machine to kind of stare at the entirety of the creation of all human knowledge and then sort of have it played back at you. And so it sort of harnesses the creativity of, you know, thousands of years of human authors and artists and then sort of, derives, you know, new kinds of answers or new kinds of images or whatever. But fundamentally, you’re sort of in interaction with our civilization in a very profound way.

In terms of who gains and who loses status, there’s actually a very interesting thing happening in the research right now. There’s a very interesting research question about the impact on job skills, for example, for people who work with words or work with images and are starting to use these technologies in the workforce. And the question is, who benefits more? Is it the high-skilled worker (think lawyer, doctor, accountant, graphic designer), who uses these tools to take an additional quantum leap in skill? That would be a theory of sort of separation.

But the other scenario is that the sort of average or even low-skilled worker gets upgraded. And, of course, just by the nature of the economy, there are more people in the middle. And there’s been a series of research studies coming back showing that the uplift to the average worker is actually more significant than the uplift to the high-skilled worker. And so what actually seems to be happening right now is a compression, by kind of lifting people up. And so I wouldn’t… You know, social questions are often framed as a zero-sum game of who gains and who loses. But there may be something here where just a lot of people get better at what they do.

Tyler: Why is open-source AI in particular important for national security?

Marc: Yeah. So, for a whole bunch of reasons. So, one is, it is really hard to do security without open source. There are actually two schools of thought on kind of information security, computer security broadly, that have played out over the last 50 years. There was one school of security that says you wanna basically hide the source code, and this seems intuitive, because presumably you wanna hide the source code precisely so that, you know, bad guys can’t find the flaws in it, right? And presumably that would be the safe way to do things.

And then over the course of the last 30 or 40 years, basically, what’s evolved is the realization, you know, in the field, and I think very broadly, that actually that’s a mistake. In the software field we call that “security through obscurity,” right? It’s sort of, we hide the code, people can’t exploit it. The problem with it, of course, is, okay, but that means the flaws are still in there, right? And so if anybody actually gets to the code, they just basically have a complete index of all the problems. And there’s a whole bunch of ways for people to get to code. They hack in and…

You know, it’s actually very easy to steal software code from a company. You hire the janitorial staff to stick a USB stick into a machine at 3 in the morning. So, like, you know, software companies are, like, very easily penetrated. And so it turned out security through obscurity was a very bad way to do it. The much more secure way to do it is actually open source. Basically, put the code in public and then basically, build the code in such a way that when it runs, it doesn’t matter whether somebody has access to the code, it’s still fully secure. And then you just have a lot more eyes on the code to discover the problems. And so in general, open source has turned out to be much more secure. And so I would start there. If we want secure systems, I think this is what we have to do.

Tyler: What’s the biggest adjustment problem governments will face as AI progresses? For instance, if drug discovery goes up by 3x, all of a sudden, the FDA is overloaded. If regulatory comments are open, AI can write great regulatory comments. What does government have to do to get by in this new world?

Marc: Yeah. So, by the way, hopefully at least the first of those two scenarios happens. Maybe also the second. For anything like this, there should be a corresponding phenomenon happening on the other side, right? And so the government, correspondingly, should be using AI to evaluate new drugs, right? A company shows up with a new drug design, and there should be AI assist at the FDA to help them evaluate it. A regulatory agency that has public comments should have AI assist to be able to process all that information, aggregate it, and then reply back to everybody.

And this is kind of true of basically every possible… This is a very interesting thing about AI: for basically every possible threat you can think of AI posing, there is a corresponding defense that has to get built. I’ll pick another one: cybersecurity. People are quite legitimately concerned, I think, that AI is gonna make it easier to actually create and launch cybersecurity attacks. But correspondingly, there should be better defenses. There should be AI-based cybersecurity defenses.

By the way, we see the exact same thing with drones. You know, weaponized AI, autonomous drones are clearly a threat, as we see in the world today. And so we need AI defenses against drones. The cynical view would be that this is just a classic arms race, you know, attack-defense, attack-defense. And kind of, does the world get any better if there’s just more threats and more defenses? I think the positive way of looking at it is we probably need these defenses anyway, right? So even if we didn’t have AI drug discovery, I think we should be using AI to evaluate drugs. Even if we didn’t have AI drones, we should still have defense against standard missiles and against enemy aircraft. Even if we didn’t have AI-driven cyber-attacks, we should have AI-driven cyber defenses. And so I think this is an opportunity for the defenders to not only keep up, but also build better systems for the present-day threat landscape.

Tyler: The Biden AI directive, what’s the best thing about it? What’s the worst thing about it?

Marc: Yeah. So the best thing about it is it didn’t overtly attempt to kill AI. So that was good. You never know with these things how much teeth they’re gonna try to put into them. And then, of course, there’s always the question of whether it stands up in court. But there were things being discussed in the process that were much worse, and I think much more hostile to the technology, than what ended up being in it. So I think that’s good news. I think it was quite benign in terms of its flat-out directives, which is good.

You know, the issue with it, and people have different opinions here, is that it kind of greenlit essentially 15 different regulatory agencies to basically put AI under their purview in sort of undefined ways. And so we will now have, I think, a relatively protracted process of many regulators from many agencies, without explicit authority in the domain, basically inserting themselves into the space. And then presumably, at some point, there will be a determination of who has purview over what. But it seems like we’re in for a period of quite a bit of confusion as a result.

Tyler: So how much more green energy do we need to, in essence, fuel all of this AI, and where will it come from? What do you see the prospects being for the next 20 years?

Marc: Yeah. So there’s good news with AI, and also, by the way, with crypto, because there’s always a lot of controversy around crypto and Web3 and blockchain around energy use. The good news from an energy standpoint is that these systems lend themselves to centralization in data centers, right? And so if we need a million, going to 10 million, going to 100 million, going to a billion AI chips, they could be distributed out all over the place, but they can also be highly centralized. And because you can highly centralize them, you can think not just in terms of building a server; you can think about building a data center that’s an integrated thing from the chip all the way to the building, or to the complex of buildings.

And then the way those modern data centers are built by the leading-edge companies now is they’re built on day one with an integrated strategy for energy and for cooling. And so basically any form of energy that you can produce in a very efficient way, in a very clean way, or any new energy technology: AI is a use case for developing and deploying that kind of power. And so, just building on what we’ve seen from Internet data centers, that could be geothermal, that could be hydroelectric, that could be nuclear fission, that could be nuclear fusion, solar, wind, big battery packs, and so forth. And so the aspirational hope would be that this is sort of another catalyst for a more advanced rollout of energy. And even if there’s a net energy increase, the motivation to get to higher levels of efficiency will be net good in helping us get to a better energy footprint.

Tyler: And which of those energy sources, in your view, is most underrated?

Marc: Oh, I mean, nuclear fission, for sure, is the most underrated today. And so, you know, if I could wave a magic wand, we ought to be doing what Richard Nixon proposed in 1971, right? We ought to build what he called Project Independence, which was to build 1,000 new nuclear power plants in the U.S., cut the entire U.S. grid over to nuclear electricity, go to all-electric cars, and do everything else. Richard Nixon’s other great corresponding creation, the Nuclear Regulatory Commission, of course, guarantees that won’t happen. The plan is exactly on track. But we could. And so we could do it either with existing nuclear fission technology, or with the significant number of new nuclear fission startups, as well as fusion startups, now working on new designs. And so this would certainly be a great use case for that.

Tyler: So, if the nations that will do well in the future are strong in AI and strong in energy, thinking about this in terms of geopolitics, which countries rise in importance for better or worse?

Marc: Yeah. So, okay, a couple of different things. I would add a couple more things to that, which is: which countries are in the best position to invent these new technologies? And then there’s a somewhat separate question of who’s in the best position to deploy, because it doesn’t help you that much to invent it if you can’t deploy it. And so I would put that in there. But, I mean, look, I would give the U.S. very, very high marks on the invention side. You know, I think we’re the best. I think we have the best R&D innovation capability in the world in most fields. Not all, but most. And I think that’s certainly true of AI, and I think that’s at least potentially true in energy. I don’t know whether it actually is, but it could be.

And so, you know, we should be able to forge ahead on that. You know, China is clearly the other country with critical mass in all of this. And you could quibble about the level of invention versus sort of fast following, and talk about kind of IP acquisition, things like that. But, nevertheless, whatever your view is, they’re moving very quickly and aggressively, and they have critical mass, a big internal domestic market, a huge number of researchers, and a lot of state support. So I think, by and large, for sure in AI, and then probably also in energy, we’re looking at primarily a bipolar world for quite a while, and then spheres of influence kind of going out from there.

You know, I would say Europe is sort of a dark horse in a sort of a strange way, in that the EU seems absolutely determined to ban everything, to sort of put a blanket ban on capitalism and within that, ban AI and ban energy. But on the other hand, you know, we have this incredible AI company called Mistral, in France, which is probably the leading open-source AI company right now and one of the best AI companies in the world. And the French government has actually really been stepping up to help, you know, the ecosystem in Europe. And so I would actually like to see sort of a tripolar world. I’d like to see the EU kind of fully punch in, but I’m not sure how realistic that is.

Tyler: So let’s say you’re in charge of speeding up deployment in the United States, what is it you do? State level, local level, feds? What should we all be doing?

Marc: Of AI, specifically?

Tyler: Everything, because it’s all increasingly interrelated, right?

Marc: It is.

Tyler: AI, energy, biomedicine, everything.

Marc: Yes. Yes. Well, and AI takes you straight to chips, which takes you straight to the CHIPS Act, which…

Tyler: Exactly.

Marc: …has not yet resulted in the creation of any chip plants, although it might someday. So yeah, I mean, look, the most basic observation is maybe the most banal, which is, you know, stagnation is a choice; decline is a choice. As Tyler’s written at great length, the U.S. economy downshifted its rate of technological change basically starting in the 1960s. Technological change, as measured by productivity growth in the economy, was much faster in the 50 years before that than in the most recent 50 years. And, you know, you can have a big argument as to exactly what caused that, but a lot of it is just the imposition of, like, blankets and blankets and blankets of regulation and restrictions and controls and processes and procedures and all the rest of it.

So, yeah. Then you could start by saying, step one is: do no harm. And so this is our approach on AI regulation, which is: don’t regulate the technology. Don’t regulate AI as a technology any more than you regulate microchips or software or anything like operating systems or databases. Instead, regulate the use cases. And the use cases are generally regulated anyway. It’s no more legal to field a new AI-designed drug without FDA approval than a conventionally designed drug. And so apply the existing regulations, as opposed to hamstringing the technology.

So that’s one. And then, you know, energy. Again, energy is just pure choice. Like, we could be building 1,000 nuclear plants tomorrow. My favorite idea there, which always gets me in trouble, and so I can’t resist: the Democratic administration should give Koch Industries the contract to build 1,000 nuclear reactors, right? Everybody gets revenge on everybody else. The Democrats get Charles Koch to fix climate change, and then Charles gets all the money for the contracts. And so everybody ends up happy.

Nobody has bid on that idea yet when I’ve pitched it. But maybe I’m not talking to the right people. So, look, we could be doing that. We’ll see if we choose to. Look, the chip plant thing is gonna be fascinating to watch. We passed the CHIPS Act, and in theory, the funding is available. And, you know, the American chip companies are generally pretty aggressive, and I think they’re trying pretty hard to build new capacity in the U.S.

But there was this actually very outstanding article in the “New York Times” some months back by Ezra Klein, where he goes through and says, “Okay, even supposing the money is available to build chip plants, is it actually possible to build chip plants in the U.S.?” And he talks about all of the different regulatory and legal requirements and obligations that get layered on top, and he was sort of speculating as to whether any of these plants will actually get built. And so again, I think we have here just a fundamental choice as a society, which is: do we wanna build new things?

I can’t tell you how exciting it’s been, at least on the West Coast, for Las Vegas to get the Sphere, because it’s now impossible to visit Las Vegas without… Like, everybody’s always complaining: the Egyptians built the pyramids, so where are our pyramids? And it’s, “Ah, we have a Sphere.” And so just flying into Vegas gets your juices flowing, gets you all fired up, because this thing is, like, amazing. And by the way, I’m just talking about the view from the outside. I understand the thing on the inside is also amazing. And so we clearly can do that, at least in Vegas. Where Ben lives now, in London, I think they just gave up on building the Sphere. So that’s the other side of it. And so we do have to decide whether we want these things to happen. You know, it’s a little bit dispiriting to see the liquefied natural gas decision that just came down.

Tyler: But are the roots of this stasis quite general and quite cultural? Because parents coddle their children much more, there are higher rates of mental illness amongst the young, and young people, it seems, have less sex. Along a lot of cultural variables, such as the percentage of old music people listen to compared to new music, there seems to be a more general stagnation. So how would you pinpoint our loss of self-confidence or dynamism? Where’s that coming from?

Marc: Yeah. Well, so first of all, to be clear, we’re very much in favor of young people not dating, because that’s very distracting from their work at our startups. So that works out fine. Fortunately, in our industry, we have long experience with not having dating lives when we’re young. So that works out well. So, it’s not all bad. It is really interesting, though. And look, Silicon Valley has, like, all kinds of problems, and we’re kind of a case study for a lot of this. I mean, look, it’s not like you can build anything in Silicon Valley, right? I mean, our politicians absolutely hate us, and they don’t let us do anything if they can avoid it.

So, you know, we have our issues. The view from the valley is… yeah, a lot of kids are being brought up and trained to basically adopt a sort of fundamentally pessimistic, how to put it, stagnation-oriented, inert posture, to have very low expectations. You know, a lot of what passes for education now is kind of teaching people how to complain, which they’re very good at. The complaining has reached operatic levels lately. And so there is a lot of that. Having said that, look, I’m actually really optimistic, and in particular, I’m quite optimistic about the new generation coming up. I think that’s Gen Z, and then I think it’s Gen Alpha, and then it’s whatever my eight-year-old is.

We’re seeing more and more kids coming up who have been exposed to a full load of basically, you know, cultural programming, education programming, that says you should be depressed about everything, you should be upset about everything, you should have low ambitions, you shouldn’t try to do these things. And they’re coming out with a very radical, hard shove in the other direction. They’re coming up with just tremendous energy and tremendous enthusiasm to actually do things. Which is very natural, right? Because kids rebel.

And so if the system is teaching stagnation, then at least some kids will come up the other way and decide they really wanna do things in the world. And so I think entrepreneurs in their 20s now are a lot better than certainly my generation; they’re frankly more aggressive than the generation that preceded them, and they’re more ambitious. And so, you know, we’re dealing with a minority, not a majority. But I think there’s quite a bit there. Like, every hour I get to spend with 20-year-olds is actually very encouraging.

Tyler: One emotional sense I get from your walk-on music, Beethoven’s 5th Symphony, is just that the stakes are remarkably high. Now, if we’re looking for indicators to keep track of whether, in essence, things are going your way (greater dynamism, freedom to build, willingness to build, American dynamism), what should we track? What should we look at? How do we know if things are going well?

Marc: Yeah. So look, you know, I do not come here, and do not come to the world, with, like, comprehensive answers. The overall answer is that productivity growth in the economy is a great starting point. Economic growth is a great starting point. You know, most of our economy is dominated by incumbent institutions that have no intention, I don’t think, of changing or evolving unless they’re forced to. Certainly, most of the business world now is one form of oligopoly or another that has various markets locked up.

And so I don’t think there’s some magic bullet to hugely accelerate things. Having said that, I think attacking from the edges is the thing that can be done, which is basically what we do, what Silicon Valley does. And then, you know, when you attack from the edges the way that our entrepreneurs do… Look, a lot of the time they don’t succeed. It’s a high-risk occupation with a lot of risk of failure. But when they succeed, they can succeed spectacularly well. I mean, we have companies in the American economy that were venture-backed in the 1970s, and actually even some that were venture-backed in the 1990s and 2000s, that are now bigger than most national economies, right? And so… was it Apple? I think Apple’s market cap is bigger than the entire market cap of the German stock market.

Tyler: I think that’s right.

Marc: Just one company. And, you know, Apple was a venture-backed startup, two kids in a garage in 1976, not that long ago. It’s bigger than basically the entire German industrial public market. And so, you know, attack from the edges. Sometimes you get really, really big results; sometimes you just prod the system. Sometimes you just spark people into reacting, and that pushes everything forward. And then the other question always is: what are the tools, from our standpoint, that startups have in order to try to really change things? And there’s a bunch of such tools, but there are always two that really dominate. One is just: what’s the magnitude of the technological change in the air that can be harnessed?

And so we’re always looking for kind of the next super cycle, the next breakthrough technology, in which you can imagine 1,000 companies doing many different things, all kind of punching into incumbent markets. And AI certainly seems like one of those. And then the other is just the sheer animalistic ambition, energy, animal spirits of the entrepreneurs and of the teams that get built. And like I said, I think the best of the startups today are more aggressive, more ambitious, more capable. The people are better, and they execute better than any I’ve ever seen. So, I think that’s also quite positive.

Tyler: Who’s a social thinker who helps you make sense of these trends?

Marc: Yeah, my favorite is James Burnham…

Tyler: And why Burnham?

Marc: Why Burnham? Yeah. So Burnham is a famous… well, Burnham is not famous, but he should be. Burnham is a fascinating story. He was a thinker in the 20th century who talked a lot about these issues. He started out life, as a lot of people did in the 1920s and ’30s, as a dedicated Trotskyite, a full-on communist. But he was a very special, very brilliant guy. And he was such a dedicated communist that he was close personal friends with Leon Trotsky, which is how you know you’ve really made it when you’re a communist.

And he would have these, like, huge arguments with Trotsky, which is not the safest thing in the world to do. But apparently, he got away with it. And he was a very enthusiastic communist revolutionary through the ’30s. And then in the ’40s, being a very smart guy, he started to figure out that was a bad path. And he went through this process of rethinking everything. And by the 1950s, he was so far to the right that he was actually a co-founder of “National Review” with William F. Buckley, who always said that Burnham was the kind of intellectual leading light at the magazine.

And so, you know, he’s got works that he wrote that will accommodate the full spectrum of politics. But in his middle period, in the 1940s, he was trying to figure out where things were going. And there were enormous questions in the 1940s, because it was viewed as basically a three-way war for the future between communism on the far left, fascism on the far right, and liberal democracy kind of floating around there somewhere.

His best-known book is called “The Managerial Revolution,” which talks a lot about the issues we’ve been discussing. And it was written in 1941. And it’s fascinating for many reasons, part of which is that he was still mad about communism, so a lot of the book is him debunking communism. But also, they didn’t know who was gonna win World War II. And so it talks about this battle of ideologies as if it were still an open topic, which is super interesting. But he did this very kind of Marxian analysis of capitalism, and he made the observation that I see play out every day, which is that there are fundamentally two types of capitalism.

There’s the original model of capitalism, which he calls bourgeois capitalism; think of Henry Ford as the archetype. A capitalist starts a company and runs the company: name on the door, ownership of the company, control of the company, sort of dictator of the company, a complete alignment of the company with an individual.

And then he talks about this other form of capitalism emerging at that time, called managerial capitalism. And with managerial capitalism, think about today’s modern public companies, right? Think about Walmart or any public company where, in theory, there are shareholders. But really what there are are millions and millions of incredibly dispersed shareholders. Everybody in this room owns, you know, three shares of Walmart stock in a mutual fund somewhere. You don’t wake up in the morning wondering what’s happening to Walmart. It doesn’t even occur to you to think of yourself as an owner. And so what you get instead is this managerial class, actually both investors like fund managers and also the executives and CEOs, who actually run these companies. And they have control, but without ultimate responsibility, right? Without ultimate ownership.

And the interesting thing he said about that is he said, “Look, managerialism is basically… it’s not that it’s good or bad. It’s sort of necessary because, you know, companies and institutions and governments and all the rest of it get to the point where they’re just too big and too complicated for one person to run everything.” And so you’re gonna have the emergence of this managerial class that’s gonna run things.

But there’s a flip side of it, which is the people who are qualified to be managers of large organizations are not themselves the kind of people who become bourgeois capitalists. They’re the other kind of person. And so they’re often good at running things, but they generally don’t do new things. They generally don’t seek to disrupt, or seek to create, or seek to invent.

And so, one way of thinking about what’s happened in our system is that capitalism used to be bourgeois capitalism, and it got replaced by managerial capitalism without actually changing the name. And by the way, it may be necessary that that happens, because the systems are too complicated, and it won’t necessarily lead to stagnation. But then what you need is basically the resumption of bourgeois capitalism to come back in and, at the very least, poke and prod everybody into action. And that, aspirationally, is what we do and what our startups do.

Tyler: Marc Andreessen, thank you very much.

Marc: Good. Great. Thank you, everybody.

Andreessen Horowitz is a private American venture capital firm, founded in 2009 by Marc Andreessen and Ben Horowitz. The company is headquartered in Menlo Park, California. As of April 2023, Andreessen Horowitz ranks first on the list of venture capital firms by AUM.
