How do you stand up an effective national AI project? Is the world prepared for the Reformation-level societal change AI could bring?
Matt Clifford, who according to Politico is Britain’s most powerful tech adviser, joins ChinaTalk to discuss! He served as Prime Minister Sunak’s sherpa for the UK AI Summit, chairs ARIA (the UK’s answer to DARPA), and co-founded Entrepreneur First, a startup incubator with a strong presence across Europe and Southeast Asia.
We get into:
Tech Diplomacy & the UK AI Safety Summit: How countries are waking up to the watershed moment at the advent of powerful new AI, and the surprising commonalities in China’s perspectives on AI safety.
Organizational Design at ARIA: What are the challenges of creating a world-class science project in government? How can you attract the best people and create the right organizational culture for success?
Open Source AI and the Global AI Race: How should we evaluate the approaches to AI across different countries and private actors? What’s the verdict on open source models?
Preparing for Monumental Changes: Why history cautions against expecting business as usual, and how fiction can open our minds to the possibilities.
Bletchley Park: The Calm Before the Storm
Jordan Schneider: Let's start with the Bletchley Park AI Summit. Why did it matter?
Matt Clifford: It's November 2023, and we are standing on the brink of very powerful new models that come out in 2024.
The AI summit was the calm before the storm that allowed countries, companies, and civil society to come together and say what we wanted to do collectively about the fact that powerful AI was coming.
Bletchley, United Kingdom. Day one of the UK AI Summit at Bletchley Park. Image Source: https://www.flickr.com/photos/ukgov/53301165387/
Jordan Schneider: What surprised you about your interactions with policymakers in the UK and in governments around the world? What was their process of realizing that AI was a phenomenon that they needed to care about?
Matt Clifford: Most importantly, I was surprised how much the UK was an outlier in starting to build state capacity in the AI space. I suspect this is going to be one of the big themes of 2024 and 2025. Events like this AI summit but also the general AI environment are making a lot of countries realize that this is something they need to get ahead of.
There are some obvious exceptions. In both China and the US, there's already been a lot of thinking about AI.
Relatively few countries, however, have considered how AI will change their country, economy, and society. That was a big surprise. Maybe it shouldn't have been, but it was.
Jordan Schneider: I think that's right. The UK having DeepMind probably matters here, at least because it gives politicians and policymakers some skin in the game.
Matt Clifford: Yes, that’s right. Having DeepMind has been very significant. Not least because it's given politicians and policymakers access to people like Demis Hassabis and the Google DeepMind team. That makes a big difference. Although academia has become less relevant in AI than in previous scientific breakthroughs, having world-class universities with leading people helps a lot as well.
A lot of what happened in 2023 was the result of Prime Minister Sunak’s belief that there was a limited window of time to act and that the UK was small and nimble enough to act in a way that other plausible AI powers could not.
China & AI Diplomacy
Jordan Schneider: How did you guys think about approaching China? There was a lot of discussion about whether the PRC government and Chinese firms should be involved in these types of international discussions.
Matt Clifford: The UK government's position early on was that it wasn't credible to have an international conversation about AI governance and not include China, or at least not invite China.
There were all sorts of reasons why they might not have wanted to come, but we made that decision early on and senior UK politicians talked about that in public. The discussions you're alluding to were not universally popular within the UK or even within the Prime Minister's party.
I think it was the right decision. It wouldn't have been credible to have that conversation without involving China. Along with some of the other summit teams, I ended up spending time in Beijing and having those conversations. I would say it was much more pushing on an open door than I think any of us probably expected.
It's no secret that tensions between China and the UK, and the West more broadly, especially on issues of technology, have been quite heightened. Yet it very much felt from the beginning that this was a topic where people wanted to have the conversation, both at the company level and the government level.
Jordan Schneider: What was most surprising to you in your interactions during the build-up to the summit, as well as over the course of the week?
Matt Clifford: When we were in China, we tried to reflect a range of voices in the invite list, within the obvious limitations. This included government, but also companies and academics.
I was struck by how the taxonomy of risks people wanted to talk about was extremely similar to the taxonomy of risks that you would see in an online forum post or an EA (Effective Altruism) forum post.
I don't know enough about the history of that discourse to know how much of that is causal. It's interesting that when we went to the Beijing Academy of AI and got their presentation on how they think about AI risk and safety governance, they were talking about autonomous replication and adaptation.
They were talking about CBRN and all the same sort of terms. It strikes me that it tracks a lot with our dialogue on AI safety, both formal and informal. It was surprising that we were starting with a very similar framework for talking about these issues.
Jordan Schneider: That is not surprising to me. It's very much downstream. I don't know about the rest of the world, but presumably, if you care about AI safety, you've been in dialogue, or at least in tension, with that kind of ethos. Particularly in the West, if you wanted to do this professionally, get paid, and have it be a part of your life, you had to engage with these issues.
Matt Clifford: I totally agree with your point on the West. It’s the dog that didn't bark. What we haven't had is a splintered discourse. There are other topics where people don’t feed off each other as much in their discussions.
Jordan Schneider: Let me give you a critique there. I think the Bletchley Park (UK AI Summit) instance felt like an easy case, no offense. We said, look, we're all going to get here, we're going to say this is an issue and it's important. It's not that hard to have people look at new technology and say, this is a big deal and it's important, but I am excited and very nervous for your successors when they're trying to go beyond acknowledging the problem.
They’re saying, okay, here are some limits we're going to put on things. Here's how we're going to try to change the currents of this technological trajectory.
Matt Clifford: I think that's fair. Bletchley was trying to do two distinct things. There was a layer that was trying to be truly global. We said from day one, including publicly, that we expected that layer to be thin.
There was then an intra-West layer that I think is quite thick and getting thicker. We can talk about that if it's interesting. Maybe at risk of stating the obvious, but one of the reasons it was worth doing the thin layer was to create dialogue. Looking back, that moment in time was important because it was particularly conducive to having real dialogue between China and the US, or the US and the UK.
It was a moment, and potentially quite a narrow moment, where China (at the risk of speaking monolithically) felt very behind. You can imagine a world where that wasn't the case, where even the very thin stuff that you can achieve with the Bletchley Declaration wouldn't be possible, because they would think: why would we engage in this at all? We’re ahead.
This was a window of time where it felt like the export controls were working, and people were wondering whether China could find a way forward.
Even now, I would say, Huawei chips look like they're catching up. They're not near the H100s, but they are less behind than people expected. In the last couple of months, there's been such a resurgence of confidence in the Chinese AI ecosystem that if Bletchley had been held in May, I don't know whether we would've gotten even that. Maybe that's too pessimistic.
I quite like the idea that we captured something in writing from that window of time, which hopefully creates a bridge into periods where technology, technological progress, and the balance of power are more uncertain than they were then.
02/11/2023. Bletchley, United Kingdom. The Secretary of State for Science, Innovation and Technology of the United Kingdom, Michelle Donelan meets with China's vice minister of Science and Technology, Wu Zhaohui for a bilateral meeting at the AI Safety Summit held at Bletchley Park. Image Source: https://www.flickr.com/photos/ukgov/53305398988/in/photolist-2pdq6Zf-2pdcHNM-2pdcHP8-2pd4ghe
Jordan Schneider: We'll see. I don't know. If it was only that window, then what's the legacy?
Matt Clifford: What I'm saying is, if that window passes without anything concrete, then it's just a window in time.
If, however, you anchor a set of norms, or an opening for discourse, in language that could be agreed in that period of time, I think that is important. It's small, but it's important. It also enabled and amplified a lot of track-two dialogue.
It elevated what was already happening, and I expect that to continue; we'll see. It's encouraging to me that China was very keen for its academics to attend, and for some of its companies to attend. Again, that wasn't taken for granted, but I think it did create new relationships that would otherwise not have happened.
Open Source AI Safety
Jordan Schneider: Speaking of open source, we've got Llama 3 on the horizon, and Mark Zuckerberg gave this Verge interview saying, “I want to make it open source for everyone.” Maybe, probably.
Do you see frontier models being open source as a stable equilibrium that governments around the world are going to be cool with?
Matt Clifford: It's interesting. The debate on this has changed so much even in the last six months. I think it really comes down to how you model threats. This is a part of the safety discourse that needs strengthening. So much of the time, open source advocates and closed source (or anti-open-source) advocates talk past each other because they don't share up front what their threat models are.
Open source advocates often point out, for example, that Linux is more secure because there are many more eyeballs on the code. That makes sense if the threat model is bugs and vulnerabilities in the software itself. If the threat model is capabilities being used by bad actors, then obviously that's very different.
I understand all the counterarguments to that too. It's funny; I suspect that right now the most compelling anti-open-source argument in the US is actually the one we just discussed: if it weren't for powerful open source models, China would be a lot further behind.
I think that sort of hawkish, security-oriented discourse is still the most powerful one in Washington. To be a little cynical about it, I would say that open is a great strategy when you're behind.
Jordan Schneider: Sure.
Matt Clifford: Llama 2 would be completely unremarkable if it wasn't open. It's possible that by the time Llama 3 is out, it would also be completely unremarkable if it wasn't open. Will the fact that Meta has had to tell this story publicly for a long time harden an ideological view within Meta that this is a great thing which will override any other considerations? Possibly, but I would think that the history of Mark Zuckerberg demonstrates he is an extremely pragmatic guy rather than an ideological one.
Jordan Schneider: Yeah. He goes from asking Xi Jinping to name his kid to testifying in Congress saying don't anti-monopoly me, because we're a bulwark against them.
Matt Clifford: Yeah, and if I were a big open source ideologue, I would be pleased to have Meta's support, right? You would rationally be a little bit skeptical that's a long term, true believer commitment.
Evaluating Global AI Development
Jordan Schneider: Sure. What do you think about these developments? We have the US with NAIRR, we have the UK playing around with some national cloud, and we have Dubai and Saudi Arabia, instead of buying Ronaldos, they’re buying A100s. Is this something that is relevant? What are the worlds in which this is a complete waste of money, for countries to be investing in their own national cloud architectures?
Matt Clifford: Yeah, I don't think so. If you look at the UK context, why does it make sense for there to be any sort of notion of sovereign computing at all in the UK? Clearly, UK companies can rightly expect to be able to access inference in the cloud from allied countries for the foreseeable future.
It'd be weird if that stopped, and if it did stop, the UK probably has bigger problems, so I don't think that's a particularly compelling argument. Given that it looks like access to computing will be a bottleneck for AI progress for a while, and that the world is somewhat bottlenecked around TSMC capacity, I think it is smart to have some national allocation in place.
For example, resources that you can allocate to strategic priorities separately from whatever happens to the price of inference on the open market. I don't think that needs to be very large. Thinking about startup and academic needs at a relatively small scale makes sense.
What the UK's trying to do is clearly at a completely different scale from what the UAE is trying to do. How does it make sense for the UAE? It might be significant there if, as a lot of people are wondering, there will be another credible AGI-focused company. Or is that played out now?
It’s possible — Mistral is a hugely impressive achievement. But is it going to compete, even with the funding it’s got, with a Google or a Microsoft? While the founders would say that it doesn't need to and that that’s not the goal, the answer is that anyone wanting to do that today needs a kernel of extraordinary talent, someone that has actually trained one of these models before, and access to a truly enormous balance sheet willing to go the distance.
What the UAE, and potentially a couple of other players, has done at that scale is build the optionality of being one of those balance sheets.
That could turn out to be of big importance. I'm not sure that it is today. I'm not sure that Falcon or any of its connected projects are credibly in that category. But as we keep adding zeros to the minimum viable training run to be in that game, being one of the small number of balance sheets in the world that can plausibly sponsor a genuine AGI project could be of real importance. Particularly if you believe that your other geopolitical resource of choice for the last however many decades is going to be less important over some period.
From Entrepreneur to Civil Servant
Jordan Schneider: Sure. You've been running boot camps for a long time. I'm curious, what kind of boot camp do you think you would want to have designed for yourself before your first stint in government?
Matt Clifford: Oh, wow. In a way, the summit was about the length of a boot camp.
Jordan Schneider: Yeah, right?
Matt Clifford: I only worked on it for ten weeks and then had a handover for two weeks afterward.
In terms of the things that I didn't know how to do and had to learn quickly, I think a lot of it is just about different modes of influence.
One of the reasons I'm excited to be back at EF and doing what I'm doing is that it makes you appreciate the huge value of equity as an alignment mechanism for people acting together in a complex environment. I'm certainly not suggesting that we should introduce equity in government, but it's interesting that when you don't have that mechanism, the modes of influence become very different.
Maybe a crash course in thinking about incentives and decision-making, both within the UK system and particularly outside the UK system, would have been very helpful. I'd like to think I brought to the table a reasonable amount of knowledge of the AI landscape, the ecosystem, and the players, but of course, when we're dealing both with countries and companies, that's actually a relatively narrow slice of the discourse they're having with the UK.
Without getting into too much detail, even if you just think about the relationship between the UK and big tech: the summit was happening in parallel with a big piece of legislation, the Online Safety Bill, which various people in big tech were not thrilled about.
I came into it with this relatively narrow view of what the UK wants in AI. What can we trade here? Of course, in reality, this has to be contextualized within the full bandwidth of that relationship. The same goes for countries. That would have been helpful too.
Jordan Schneider: You had one line in a podcast from three years ago. It was a throwaway one. It's “I'm excited to raise people's ambitions. I don't have to be a civil servant. I can go be an entrepreneur.”
Reflecting on that — maybe another version of the boot camp, less for you but trying to incentivize and select for the type of people who you think could make a positive impact in civil service. What would the screening or benefits be?
Matt Clifford: I don't remember that podcast, but I suspect I was partly talking about the extraordinary difference in how ambition manifests in the UK, the US, and Singapore.
I still believe that for most ambitious people, starting a company is by far the most flexible and scalable way to exercise ambition. What is interesting about governments is that there are a small number of things that they can do that are really hard for any other organization to do.
The kind of ambition that it requires to dedicate your life to that is probably quite different in that not only does that equity not exist, but it should not exist. You are trying to figure out how to do this in a less self-interested way.
My view is that AI is going to be an area where state capacity will be very important. A big question for governments will be — even if all you want to do in your framework is to regulate it, you still need more state capacity than most countries have today. It's going to be a very live question, given how few people have a real depth of expertise in the space. How are you going to attract those people? What are you going to do?
I've been struck by the success that the UK has had in attracting talent to the AI Safety Institute. It feels like this technology is in a moment of relative malleability in terms of how it plays out, at least in the West. I think the mission is extremely clear at the Safety Institute. A little bit like building a company, the early talent density has been really high.
As a result, it's easier to attract new people. In terms of whether they are the same people that you'd want to build companies, would you screen differently? There's a certain type of entrepreneur that succeeds because they have an extraordinary tolerance for pain.
Maybe that's all entrepreneurs, but there's a certain type of business that is high friction. I'm thinking of anything to do with long sales cycles or just grinding through a very regulated space. I think the government is a lot like that. You do need on average a higher tolerance for pain than you do in parts of the private sector.
Again, where incentives can be aligned financially, we at EF do select for that. We select people who seem so determined and so ambitious that they're going to be willing to grind through some pretty unglamorous stuff to get to the outcome.
Thinking about how I would select ambitious people who want to work in government, I would probably over-index on that.
Jordan Schneider: I'm probably over-indexed on that myself. Grit is going to be an important part of it.
But that only works every once in a while, right? It's not always people like you, who could be okay not getting a salary for three months. Where you get back to entrepreneur versus public servant is that you're there because of some idealism, some motivation to make the world a better place or to see some change in the world that you feel strongly about, and you have the grit to see it through.
But entrepreneurs are also okay with pivoting. Especially when you're talking about something like this, I don't see the AI safety person pivoting to transit policy or to saving the NIH or what have you, because they're there for this thing, not just broadly excited to build a business. Maybe some people are just excited to play in a bureaucracy.
Matt Clifford: I think that's right, but there is a way of slightly adjusting the language so it makes sense, which points to something we care a lot about: agency. You want people who believe that they and their actions can change the world. You want people who not only believe that but have a track record of doing it, whether on a big or small scale.
One of the reasons that people have been attracted to the Safety Institute or ARIA for that matter is just that. It's very clear how they create a canvas for agency, and it's very clear how you can translate your individual actions into something that has an enormous impact or potentially enormous impact in a way that rhymes with a lot of the way that entrepreneurs think.
A lot of the reason people are attracted to entrepreneurship as a vehicle for their agency is that it feels like you're starting with a relatively low-friction sandpit (mixing metaphors there), a low-friction environment in which to do that.
A lot of people join EF because they don't want to have a boss who tells them who they have to impress to be able to get the tools to do what they want. I think both in ARIA and the AI Safety Institute, there's a sandpit being created where a lot of the normal frictions that people associate with government and bureaucracy that stand in the way of having an impact have been removed.
Either because of the narrowness of the focus or because of the removal of some of the traditional public sector barriers. Now, I do agree that if you're trying to reform the pension system, it's less obvious how you might remove those frictions.
I'm not sure if you can do it consistently across the board, but I do think that if I were forced to take some element of the public sector that was not R&D funding or AI and ask how we can try to replicate some of the learnings from those two initiatives, it would be about how you can convince people that the mapping between their agency and impact will be pretty direct.
Jordan Schneider: I think that's the key. You've gotten this very charmed little exposure to the public sector because you got the new organizations, you got the top cover, and you got funding —
Matt Clifford: I’m playing on easy mode!
Jordan Schneider: Yeah. What breaks my heart is — presumably, this is in the Ministry of Defense context as well — that in the DoD at least, you have these hotshot tech people who decide to go work for the Department of Defense. Eighteen months later they write a 2,500-word LinkedIn post saying, I had all these awesome things, I saw all these problems, and I wasn't able to do anything because the system ate me up and spit me out because the organization itself was not interested in change.
Matt Clifford: I agree with that, but I know people in the UK who are much more talented and capable than I am, who've had experiences more like the one you just described, which were profoundly disillusioning. I totally buy it. I think there is a risk of a kind of learned helplessness about this, and therefore unless you start with a new organization, it's impossible.
Some of these things are just lessons in leadership. Often, as you said, it's about air cover: is there actual political willingness to get something done? And about clarity: what is it that needs to be done? One of the things that I took from the experience of helping to set up the Institute and then doing the summit was that government is very powerful, and politicians are very powerful.
There are counter critiques of this. If you read Dominic Cummings, a lot of his argument is that politicians have very little power, and they can't get things done. I'm sure that's true a lot of the time, but when you have clarity from the top, urgency from the top, and real air cover, things can and do get done.
Again, I haven't read all of these two-and-a-half-thousand-word LinkedIn posts, but when I've talked to people who have tried to do very hard things, nearly everything that goes wrong is downstream of either lack of clarity, lack of vision, or lack of leadership. Enough of the time, it's worth continuing to play the game.
Building ARIA
Jordan Schneider: Let's talk about ARIA for a second. You were investing in deep tech before it was cool. What did you see that was broken or off in the ecosystem? How are you excited about what ARIA is doing and how it's tried to organize itself to address some of those issues?
Image Source: https://www.aria.org.uk/
Matt Clifford: Yeah, the big lesson I have taken from my career at EF is that sometimes, if you want to increase the supply of something that seems limited, you have to innovate on the funding mechanism. That is the rhyme, if you like, between EF and ARIA. It's how I got excited about ARIA. I think that is at the core of the EF thesis and why, after 12 years, I'm still super excited to come to work every day.
I think you can increase the supply of great companies. I think you can increase the supply of great entrepreneurs. Over the last five years, I became very interested partly because exactly as you said — a lot of what we find at EF is deep tech.
How would you increase the supply of great science? It was no coincidence that this was around the time the progress studies discourse was getting very popular: the idea that science was slowing down, that maybe the good ideas had gotten harder to find.
For me it was preaching to the choir: if you wanted to increase the supply of great science, maybe you needed to do institutional innovation in how you fund it. That's what I'd been doing for a decade.
As for what I think the problems are?
I'm always very hesitant to say that the system is broken, and that ARIA fixes it because ARIA's about 1 percent of UK public sector R&D spending. There are loads of great outcomes from the 99 percent and there are loads of things in the 99 percent that ARIA would have no competitive advantage in doing.
I see it more as a Michael Nielsen idea that I'm stealing. We've just explored very little of the space of how to fund science. It's less that we need to move wholesale from broken point A to miraculous fixed point B. It's more that we should be trying more stuff.
What certainly seems true is that the amount of resources needed to generate a given breakthrough seems to be going up and up. What does ARIA do? Lots of things. Again, I'm not saying this is necessarily how Ilan or ARIA itself would describe it, but I see the core unlock of ARIA as primarily about people.
I'm very conscious that if all you've got is a hammer, everything looks like a nail. If you've spent your entire career thinking about how you get the most talented and ambitious people to pursue a particular path, then everything looks like it's that sort of problem.
Painting with a very broad brush from my time around academia, I feel I've seen this a bit. If you think about the incentive mechanisms that exist today, a lot of them push you towards incremental work. To progress in your career, you need publications, and you need those publications to be cited.
Long periods of time without positive results are not celebrated. Celebrated is the wrong word; it's literally just a mechanical incentive: they're not incentivized, even when they're in pursuit of something that everyone agrees would be extraordinary. One of the ways that ARIA works, at a very simple level, is that it changes at least one corner of the R&D ecosystem.
It creates an alternative set of incentives. We can try stuff, even if it is very unlikely it will result in a publication or a citation. It just unlocks a set of activities that would otherwise just be undersupplied by “the market.”
ARIA as a Learning Organization
Jordan Schneider: Sure. Let's go down a level. How did you guys think about the organizational design to make that a reality?
Matt Clifford: I'll step back — you said in your intro that it's like the UK's take on DARPA. That's a tweet-length description and that's probably the best one. The most compressed one.
There is certainly a lot of inspiration that we take from the DARPA program manager model: the idea that you want to empower visionary scientists to shape science, and that you want as few veto points as possible so that you're not dampening variance through your selection mechanisms.
These are all present in the DARPA PM model, and we've stolen them with pride, as the saying goes, for ARIA. One of the things that ARIA doesn't have that DARPA does is the “D.” That was a conscious decision the team made, and I think the right one, not least because we just don't have the scale in the UK to get that pull-through.
What it means in practice is that as we select the right people to be program directors and help them shape the right programs, we also need to think about the eventual scaling path for those programs. It's not going to be pure pull.
I think the single hardest thing probably in some ways for ARIA, though, relative to DARPA, is just that it's not a well-trodden path in the UK. Traditionally, this is not what people do. There is no role model around going from being a super ambitious world-class scientist to being a super ambitious world-class founder of science.
A big thing we've done in thinking about organizational design is to ask: how do we make ARIA a learning organization? How do we make it a place where people learn how to do this role together? It's new. One of the things I'm most excited about now, with the first cohort of program directors in place, is that whenever I'm in the ARIA office, I feel that they're learning from each other. They're learning how to do this new thing that no one's done before.
Before ARIA existed when I was writing about what I hoped it might be, I talked a lot about this idea of needing to take meta risk. Where you were not just taking risks at the project level, but you were taking risk in the kind of organization that you built. Putting that learning at the heart of it. Ilan and the team have done a phenomenal job of that.
Jordan Schneider: How so?
Matt Clifford: I think they've really said: we're going to try things that no one in the UK has tried before. For example, there are things that are public now, which you might have seen, that people are interested in.
As well as having the major programs, we've got this idea of seed opportunities, which is that in any funding, organization, you have this explore/exploit trade-off. In general, most R&D funding is more comfortable with exploit — double down on something that works.
Rightly so. You're going to want to spend most of your dollars that way, but, for example, we're launching this idea of seed opportunities, which is where we say, hey, we have this main thrust of the program, but there are all these other things that we are interested in as we, start to frame this program.
We don't think they're necessarily core, but if you've got something in this space, we'd like to fund it anyway at the seed level and see what happens. Now, it might turn out nothing comes of that. It would be easy to point to things like that if they don't work and ask, why did you spend the money this way?
In general, governments are not comfortable with that kind of thing, or haven't been historically. But I think it's the right kind of risk for ARIA to take: we're going to try a lot of stuff whose success we will only be able to see at the portfolio level.
Navigating Bureaucracy as a Founder
Jordan Schneider: Portfolio, for sure. Matt, I read your book, and my biggest takeaway was the realization that I am a founder.
Matt Clifford: Alright, that's what it’s there for!
Jordan Schneider: Yeah, and I was like, oh wait, I'm doing this. I'm running a company now. As ChinaTalk has turned from a hobby into my job, the question of what I should be spending my time on has really opened up.
I want to play a clip from Marques Brownlee, one of my favorite tech influencers, who talked about the sort of stages that you go through. I think he was speaking specifically as a creator, but it applies also to founders of normal companies, and to politicians as well.
Being a creator on YouTube — if that's what we want to talk about, is just like being an octopus. Meaning if you get creative, start making videos, and you have all this fun with the platform — you are doing several different full-time jobs all at once.
You are a full-time writer. You are also a full-time cinematographer. You're behind the camera. You're a lot of times in front of the camera. You're a lot of times a full-time editor. You're also managing the inbox. You're also doing the invoicing and working with brands. There are taxes, financial accounting, and also just the PR relationships, management, and all that content strategy.
That's a bunch of different hats. That's like an octopus with eight arms doing eight different things all at once. My only advice is when getting help, you want to find things that you specifically want to cut off, but you can't cut off every arm. This is another weird part of the analogy, but fun fact, a lot of octopi — octopuses? Octopi? Anyway, they have three hearts.
Weird, I know, but they can't cut those out. They have some core functions that always stay with them. If you are a creator in some way, there is a part of the game — there is something that you fell in love with at the beginning. It's really worth figuring out what that is and just keeping that.
When you see your founders over time — you have this concept of edge, right? It's like, alright, I'm the computational biology guy. I should probably do a computational biology thing, not a food startup.
But pretty soon, if you're running something and it's growing well, you're not spending your time on computational biology. I'm curious, what advice do you have? What have you seen that has worked or doesn't in managing that tension?
Matt Clifford: Yeah, this is at the heart of a lot of your previous questions about ambition, as well as the clip.
One of the things he talks about is whether something is scalable or not. Is there some inherent scalability, or lack of scalability, in creative work?
I think that as founders go through the journey, they come relatively quickly, usually within the first couple of years in my experience, to either build a real taste for, or a real distaste for, alternative sources of leverage in what they do.
One of the reactions I had watching that clip was that it's absolutely a reaction that I see from founders: hey, I got into this to do amazing computational biology, and now I'm managing computational biologists. What happened? But I also see people go in exactly the opposite direction, which is: wow, I now get to hire computational biologists who are a lot better than me. I get to be the architect of what they do and get leverage through that. I would really hesitate to say one is right and one is wrong.
I think they're just different paths. It's very common. One of the tensions in the book and in the EF model that we talk about a lot internally is that on the one hand, nearly every great firm has co-founders. On the other hand, over the very long run, usually one person becomes the dominant executive.
Not always, but usually. What's interesting to me is to look at the journeys of the people that co-found, but don’t stay. I think it'd be easy to think of that as either a form of failure or a form of falling off the wagon, but they're just more—
Jordan Schneider: Hardcore about not running a business.
Matt Clifford: Right. It's the Steve Wozniak, it's the Joe Gebbia. What they're doing is finding a different scaling mechanism.
One of the big mistakes in tech discourse is to too quickly see "bureaucracy," in inverted commas, as the enemy of speed. Actually, done right, bureaucracy, meaning managerial hierarchies with process, is one of the best scaling mechanisms for impact. Let's be honest: today, Mark Zuckerberg gets a lot of leverage through bureaucracy, as well as through tech.
Now, what's really interesting is that most of us do have to make a decision at some point in a founding journey about what sources of scale and what sources of leverage we are attracted to. I guess where I disagree with the clip is the idea that the only authentic way to do this is to stick with whatever your three hearts are. Right?
Managing Attention
Jordan Schneider: It's interesting, because he's specifically talking about it in the creating content context, right?
This is where I see the tension for me. On the one hand, I would love there to be ten people who can spend their whole professional lives thinking about China and AI from a Western comparative context. That doesn't exist now, and maybe I am the person, because I have a platform or whatever, to bring in the funding to do that.
On the other hand, that means I probably don't have 15 hours to read about medieval history to ask you weird questions for the next 20 minutes. I love doing that. My horror is to turn into Lex Fridman and show up having smoked some weed in the morning and ask you about aliens. I could see it going both ways. I don't really know. Give me some coaching here, Matt. How do I answer these questions?
Matt Clifford: The question is a tension, but it's also not a tension that has to be fully resolved. Nearly all the great CEO practitioners I know deliberately reserve time for craft, because very often that craft is, to use a Peter Thiel-ism, the secret in the business. That's the bit that gives it the right to exist. I think you can also go through phases. Frankly, I'm going through one now. I spent 12 years as EF CEO, spending less and less of my time with founders, even though I didn't start EF because I wanted to run an international, highly regulated money-management business. I never stopped spending time with founders, but it became a relatively small part of my working life.
I'm now going through a phase where I'm able to wrap that up. Alice is taking over the team and taking on the role. But what I would say to you is that I think there can be a sort of trade-off between craft and vision. One of my favorite books, which has nothing to do with tech, is After Virtue by Alasdair MacIntyre. He talks about the difference between seeking goods of excellence and goods of effectiveness. And by the way, he's very clear: he thinks you should seek goods of excellence.
To me, it's not as clear-cut. He was an academic, probably for a reason, but his point is that there is a lot of virtue in being excellent at something without scale. Getting to spend as much of your time as possible doing work that brings you joy through the internal content of the work is a real virtue.
He's a proponent of the Good life, capital G, as a good way to be a human. He's quite skeptical about goods of effectiveness, which are money, power, and all these things.
I would say that nearly all civilizational progress requires people to seek both. At the risk of getting too philosophical, a vision of a good life probably involves phases of both.
I don't know. I guess the thing I would say — I'm a big fan of ChinaTalk, a long time listener, first time caller. I would encourage you to not tie your identity too much to either one of those modes, but to pursue the one that feels most important at the right time.
Entrepreneurship & Parenthood
Jordan Schneider: Speaking about important at the right time, how does parenthood change entrepreneurs?
Matt Clifford: Oh, I have two kids, we're recording this on my eldest’s sixth birthday.
What it certainly does is make you much less complacent about time. One of the amazing things about startups and one of the things that I've most loved about building Entrepreneur First is that it's a greedy objective.
It can take anything you throw at it. There's so much that you could do that you very rarely feel like you're hitting diminishing returns in terms of your time. There's just so much more you could do than you have time for. Personally, as an ambitious person with a lot of anxiety about whether I'm maximizing the amount of impact I could have, having a greedy objective — making this company really valuable — is great.
But the problem is you then have kids. Turns out, they're the same. Personally, even though I've found parenthood hard at times (I think people sometimes exaggerate, particularly early on, the intrinsic rewards of parenthood), it's certainly a greedy objective. I have never felt that I was capping out on the amount of time I could spend with my kids and its value to them.
At least for me, it was the first time in my life I had two of what I call greedy objectives, each of which could take everything. It forces you to be a lot more intentional about what you're trying to do. It marked a turning point in being much more specific, to go back to the last segment of the conversation, about what the scale mode was for EF.
Felix, my eldest son, was born literally as we started to internationalize and think about our partnership model and what the scaling path would be. It certainly encouraged me to be much more intentional: if I was not going to be able to follow the greedy objective of company building, how does the company stay big? For me, it created a bit of an identity crisis.
I have always worked crazy hard on EF, and marriage — I hope my wife doesn't mind me saying this — didn't change that for me.
It was quite easy to feel like I had an EF identity and this quite broad non-EF identity that was all the things I cared about and liked doing outside of building this company, which is also really important to me. I found the compression of that part of my identity on becoming a father really hard.
Suddenly, once I was a founder and a father, it didn't feel like there was much left. I don't mean in terms of time use, just how I conceived of how I could spend my time. I found that really hard. In many ways, a lot of the things I've done since came out of that feeling and my reactions to it.
I had to stop writing it when I took on the ARIA role, but for four years I wrote this newsletter, Thoughts in Between, which was a lot about AI, China, and public-sector R&D funding, a lot of the things I ended up doing.
I did that very consciously because I wanted to create this little bubble that could be me, that wasn't me as founder or me as father. I found that a real source of sanity, and importantly, it probably made me a better founder and father. Anyway, one of the things that most annoys me, because we focus largely on early-career people, is when people say: haven't you seen the data that the average age of a unicorn founder is 42? Yes, I have seen that data, but I also trained as a statistician.
I learned something called conditional probability. As a 24-year-old, the bar to start a company is pretty low. You don't need that much conviction in what you're going to do. You can afford to do the EF-style exploratory period. I'm not yet 42, but I'm getting horribly close to it.
When I think about it now, if I were in a paid corporate job, the bar to me deciding to become a founder would just be so enormously high. One thing that parenthood does is that it accentuates the idea that opportunity cost is a real thing, and it makes founders much more intentional.
The tradeoffs feel very real, the cost of failure feels much more real. I see this a lot with founders that get to a certain scale and realize that they're not going to be huge. The cost of not being around starts to be a thing.
But I think it means that when someone starts a company and they have kids, the level of focus I see shifts: the willingness to say no, which is a really valuable skill, becomes a lot higher.
Jordan Schneider: Yeah, that’s interesting. There's no time for cosplaying anymore.
Matt Clifford: Absolutely.
Jordan Schneider: Speaking of cosplaying — immersive murder mystery games. Maybe just reflect on that. You had this cute story about how you use ChatGPT to do branching fiction with your three and six year olds.
What's the investment thesis around parenting and AI? Why aren't there a thousand startups doing that? How do you see it? How is parenting going to change in five or ten years, thanks to all this stuff?
Matt Clifford: I spend a lot of time with my kids using AI. It's really fun as a starting point. But I also think none of us can really predict what the labor market is going to look like for someone who's six today or three today.
It feels like in almost any positive scenario, the willingness to see these tools as tools over which you have agency, the way you can make decisions, and the way you can make it do things are part of any positive future.
I think there is an investment thesis here. I think the big lesson from Minecraft is that you can create really immersive experiences by giving kids enormous amounts of control over a sandbox world, and I think AI massively expands what's possible there.
My 3-year-old is obsessed with pirates at the moment, and the number of pirate maps we've created on ChatGPT, the number of pirate stories, images of pirates, digging up pink jewels, his favorite car: it's amazing. What I hope is that it makes him think this is something he has some agency over. For me, the murder mystery thing is a little bit different. I've always been fascinated by simulation, by the idea that if you can simulate something, then you really understand it. It's really underexplored outside some of the hard sciences.
I think simulation in history is really something. The first thing I studied was history, and one of my favorite writers is Ada Palmer, the historian and science fiction author. She teaches this class, I think at Chicago, where she basically makes her students simulate a papal election from the 16th century to see whether the outcomes are the same and what you can learn from that.
I remember reading about that and thinking, that's an amazing idea. So I started writing these — for fun to do with my friends and family — murder mysteries, which were effectively like social simulations. I don't know, I'm a real sucker for that idea everywhere it appears.
History’s Lessons for Radical Change
Jordan Schneider: So why is it the 14th, 15th, or 16th century? Why are these time periods interesting in Europe?
Matt Clifford: I think because there are so many things that come together at that point. The reason I've always found it — particularly the Reformation — just so exciting is that today, particularly in the West, we've gotten so complacent about what scale of historical change can happen, particularly people of our generation who don't even really remember the Berlin Wall or anything like that.
Sure, we've had a global financial crisis and some surprising elections in 2016, but really, we expect steady, slow progress, or worse, stagnation. What the Reformation teaches us is that sometimes your entire world will fall apart in ways that were completely unpredictable, and for reasons that map completely onto potential realities today: the rise of new technologies and the rise and spread of new ideas.
Martin Luther posting his 95 theses in 1517, marking the beginning of the Reformation. Artist, Ferdinand Pauwels, 1872. Image Source: https://commons.wikimedia.org/wiki/File:Luther95theses.jpg
I just think that we are totally complacent in general. I'm not a doomer on AI in the way that some people are, but if you assign no probability to truly monumental change, at least at the scale of the Reformation, I don’t know where you get that level of confidence.
Jordan Schneider: You had Trump. You had COVID. You had the Ukraine war.
I think those, Trump really first, started to open my horizons about how different the next 50 years could be from the previous 75. It really seems like that hasn't resonated with more people, and I wonder why, for most people who grew up in Western developed countries, these events haven't done more to shake their sense of normality.
Matt Clifford: Yeah. One thing that really brought it home to me was how, even among educated elites, people don't like thinking in terms of multiple possible outcomes. Take the whole thing with the 2016 election and Nate Silver's predictions on it.
He got so much grief for having said there was a 30% to 33% chance of Trump winning. It made me realize that for most people, if something is more than 50%, it means it's going to happen.
Although I don't find P(doom) discourse very helpful, I have always liked the idea that you should be thinking in terms of scenarios, some of which have meaningful enough probability that you should be building somewhat for a world in which they take place.
That, to me, is a very natural way of thinking. But I've come to realize, particularly over the last seven years, that it's a very unnatural way for most people to think.
Jordan Schneider: One of the reasons I was not all that interested in British history was that it just seemed passé, like it's been done a million times.
I've spent this UK trip, two weeks, reading Simon Schama's history of Britain. One of the nice things about reading the history of something that's been picked over a lot is that the better books are better: you can optimize for what you're interested in. I'm a sucker for style, and in a field this well covered, I will be able to find someone who can write really stylishly. That's just not something you'd necessarily find when I only have three books about Ming China in English.
Why is British history interesting?
Matt Clifford: One of the strange things about reading British history is that Britain does seem to be pretty special.
Jordan Schneider: Yeah.
Matt Clifford: Maybe less so today, and I think patriotism is pretty unfashionable — but I'm pretty patriotic.
I do like the history of Britain's role in the world, particularly in the industrial revolution, which is very clearly, to me, the best thing that happened in the history of our species. There's something very special about the Britain that made it happen here.
I don't think I deserve any credit for that nor does anyone alive today deserve any credit for that. But it's part of why I wanted to build EF. It's part of why I was willing to do the ARIA thing and some of the AI stuff. I am actually very optimistic about the role that Britain could play in the world over the next 50 years.
I do think that, although there are lots of things that need fixing here, the culture of great institutions and of science and technology remains really strong. To channel my inner Tyler Cowen: I think Britain is extremely underrated today.
Jordan Schneider: Tell me about The Name of the Rose.
Matt Clifford: For me, The Name of the Rose is probably the greatest novel, which is a very big claim. Why is The Name of the Rose so special?
Basically, because it asks us to take seriously the idea that ideas are so powerful, so exciting, and so dangerous that people will kill for them. Then it places that in a context where we see, with the benefit of hindsight, that ideas really did matter.
The conceit of The Name of the Rose is that there's a monastery that's going to host a great debate between the Pope and his ideological enemies. That gives you the framing that ideas matter. But then there's a micro-drama that plays out around a book in the library that might be so dangerous that people will kill to prevent anyone from reading it.
I think it's written in a way that you get absorbed in the Agatha Christie-style murder mystery of it. You're drawn, without giving anything away, into a millennia-old puzzle about what the most exciting forbidden knowledge might really look like.
Jordan Schneider: It's on the list. The reason I haven't read it, and why I haven't read a lot of fiction, is that there are always more facts I want to learn and inject into my brain. So make the case, Matt.
Matt Clifford: Yeah. I think the enduring case for the book as a format is stronger for fiction than for nonfiction. Books have shrunk as a proportion of my reading, and the proportion of my book reading that is fiction has increased. That's partly because AI, which has clearly become the major nonfiction interest of my career, is almost impossible to follow through books.
There are only a tiny handful of good AI books because the field changes so fast, whereas fiction has been changed relatively little by technology, so the book format remains great for it. But I also think the case for reading the classics, for reading fiction that has stood the test of time, is that it's an extraordinary opportunity to see the greatest minds play out on a completely blank canvas.
I love facts too, and I read a lot of nonfiction in papers and stuff. But in a way, it's like the absence of constraint. What does Shakespeare want to tell us about interiority? What does Hemingway want to tell us about despair? There's a blank canvas there that makes it boundaryless.
Jordan Schneider: Matt Clifford, thanks so much for being a part of ChinaTalk.
Matt Clifford: Thanks so much for having me.
A wonderful explanation of the challenges of implementing change in large bureaucracies: "I think nearly everything that goes wrong is downstream of either lack of clarity, lack of vision, or lack of leadership enough of the time. Enough of it's worth continuing to play the game."