Cerebral Valley Double Feature: Stability AI CEO Emad Mostaque & General Catalyst's Deep Nishar
Hey everybody, it's Eric Newcomer. You're listening to Newcomer. Just got back a couple
days ago from the Cerebral Valley AI Summit that I co-hosted with my friends at Volley.
In Hayes Valley, the heart of Cerebral Valley, we're bringing you a two-parter podcast episode
from the conference. First up is my interview with Stability AI CEO Emad Mostaque.
I think the audience was definitely on the edge of their seats for this one. Emad is
a very charismatic speaker. He floated an IPO, which got some news coverage. I can't say
I totally understand everything about Stability. He wouldn't reveal all their partnerships.
I probed on Apple and Amazon. I tried to figure out how much they control Stable Diffusion,
the open source project that really fired the starting gun here. And Emad signed
the letter that said we should pause AI development. So really an interesting time to have a conversation
with him. I think you'll really enjoy it. And then in the second half of this episode,
we've got an interview with General Catalyst's Deep Nishar. He was a top investor at SoftBank,
an executive at LinkedIn and Google, and now he's running General Catalyst's growth fund.
I thought we had a great conversation about how large language models, generative AI,
and other developments in artificial intelligence are impacting health investing. I asked him
about what it's like to invest many billions of dollars, especially the $100 billion Softbank
Vision Fund. It was a great conversation, so give it a listen.
Thanks to Samsung Next for being the presenting sponsor of the Cerebral Valley AI Summit.
Samsung Next invests in the boldest and most ambitious founders. Tell us about your company.
We'd love to meet. Reach out at SamsungNext.com.
Very excited to be able to talk to Emad Mostaque from Stability AI. Welcome to the stage. Grab a mic.
Turn it on. This is what you've all been waiting for. All right. Well, it's already come up today.
The letter is on everybody's mind. Should we pause AI research for six months? You signed onto it,
which, like, I literally emailed you to make sure it was true because I found it so hard to believe.
Why did you sign onto the letter? I saw that Bill Gates had signed up.
You might as well. No, look, I mean, I think the letter kind of says basically what OpenAI
said in their AGI thing, where they said if the time comes, you should slow down. You should
invite regulation and look at runs above a certain size. Like, it kind of had that plus a few crazy
things. So I was like, I'll leave the crazy things to the side and agree with the spirit of the
letter because we have no idea what's happening with these models that are coming. It's really
going to ramp in six to nine months as the TPU v5s, the H100s, and other compute come online. I know of a 30,000 H100
cluster coming online. That's like eight GPT-3s a day. And we have no idea how these large models
work, what actually happens with them, or how to align them because it's a bit crazy.
I was talking to someone from Midjourney yesterday who was sort of saying, like,
we don't really understand how our models went from like six fingers to five. Or like, is there
that level of like, I have no idea why this has improved that much? Or, I mean, you know, why
don't you understand your models? No, we do have an understanding of models at a certain size,
but basically you see emergent properties as you scale. And again, as you hyper scale to the
models that the letter specifically targets, so it's GPT-4 and above. Of course, nobody knows what
GPT-4 level is. But let's just say it's big, right? Then you get these emergent properties, and,
you know, danger comes in a variety of forms. Like, my view on AGI is that we are really boring,
and the AGI just won't bother with us, so it'll be like in Her, where it's like, bye, see you later, right?
That's a nice view of it. That's my view, but I could be
wrong. And you know, again, if you look at the OpenAI kind of AGI manifesto, it says
this could be an existential threat, and we're going to realize it anyway. Which is bollocks, of course
you're not. If you believed that, then you wouldn't be doing it, you know? Like, it could have the effect of
overturning our democracy, taking away our freedom. That actually sounds anti-democratic,
anti-American. It's one of those weird things whereby we never believed that it could happen,
but now we're seeing the technology go around the world. And so I think that it's reasonable now,
because, to be honest, nobody's going to train a model of this size in the next six months anyway,
because you need the new compute to come in. So put this into the public sphere and say,
let's have an open discussion about transparency, governance, and sovereignty.
Because there is zero-
You want OpenAI to stop development, or just not release anything?
No, they should release, they should do it, but they should become transparent and properly
governed. Because what's the governance of OpenAI? Nobody knows. What's the transparency?
Completely opaque.
They stopped publishing, basically, with GPT-4.
Well, I think that's one thing. And again, they say that's competitive. I don't mind that.
So, like, for Midjourney, I gave David a grant to kick it off and said,
just do what you want. It's a complete grant, I don't want equity. I think you're doing amazing stuff,
and it's amazing. I have no problem with private models.
What I have a problem with is that there is a technology that they themselves say could wipe us
out and could overturn society. These are big things. And that's not me saying it, it's them
saying it. And there is zero transparency, accountability, or governance there.
So I think that there should be, and they themselves say they welcome that. Let's get that ASAP.
Let's get that conversation going in public, because, does anyone here want to build something
that could kill us all? Hands up. It feels like there's a comfort level where it's like,
oh, if it's only a 10% chance it kills us all, and a 90% chance I'm a world-historical figure,
people are comfortable with that. Well, look, Sam did the interview on ABC,
where he got asked the question, if there's a 5% chance that this could kill us all,
would you push a button to stop it? And it was like, no, I would slow it down. That's bollocks, right?
I mean, you stop it. But the thing is, nobody knows, because this is an unknown unknown, right?
You must be hard at work, or I hope, I mean, for the sake of open source,
working on extracting all the wisdom out of GPT-4 and turning that into something open source,
right? Or how do you...? Yeah, transfer learning. That's how it works.
Where are you on that? Or like, where is the open source world sort of more broadly?
So we have our place in the open source world, you know, and we'll be building
benchmark models of every modality. So we have language models, chat models, other things coming
out, and you can do transfer learning. I think the question is, actually, if you step back,
what happened is a lot of these labs were like, AGI, and it was this unobtainable thing,
and they just stacked more layers. So basically, what we have now with our models is that they are really
talented grads that occasionally go off their meds. And we fed them with crap, so they're obese,
as well, you know? So what we've got to do is feed them better stuff, free range organic,
you know? What is that? What is that? Well, it's better datasets. So we go from
trillion-token to 25-trillion-token datasets. What is the appropriate number of tokens for a
language model? What should it know? What should it not know? Because right now, what we do is
RLHF, you have this whole Shoggoth thing. I like to think of it as the disheveled grad that's worked too
hard, and then you teach it to be a human again, right? Well, that's that sort of picture of the
monster, and then on the outside, it's sort of the human reinforced smiley face. Because again,
we feed it junk, we feed it the whole internet. I'm like, let's just put some standards around
what data we should feed it, so you have outer bounds on the kinds of things it samples. Let's have some
level of transparency over the large models that could be agentic or could be used for bad purposes
in a very effective way. And like my focus is on small adaptable models that are hyper optimized
for the edge, given the nature of my company. Are you building any proprietary models? I mean,
some people think, oh, Stability AI is working as, like, a consultancy. Or how do you think about
what is the product that you'll sell? It's very simple. Would any of you here,
send all your company's internal materials and knowledge to GPT-4? No. What you'll need
to have is free range organic models inside your VPC in your cloud, and I'm going to create the
benchmark model for every modality, country, and sector. And that can be completely open.
Remember this whole thing about interpretable AI, auditable AI? They need to be open. It doesn't
mean that you can't use commercially licensed datasets. You can do that for the sectoral variants.
And so basically, Anthropic, OpenAI, everyone like that, is public data to private models. I'm
public models to private data. And so I play on the other side of the firewall. The reason I'm
saying this is not because I want to build a GPT-4-level model. I don't think that's necessary,
because I think multiple models working together are as efficient. I signed the letter because I think
they themselves say it's a threat. We all know that it's a threat, but we're not having a proper
public discussion and demanding the key things. Like, build private models, make them amazing. I use
GPT-4 every day. It's the best therapist, right? How so? Have you tried talking to
it? I played a role-playing game with it. The problem is it then forgot everything. I find the
memory limits really limiting. So it's a really talented grad, occasionally off its meds, with
a bad memory. We'll fix those things, right? We'll balance it out. Well, embedding stores are great.
I actually uploaded all my emails to the OpenAI embeddings API. Again, fantastic product.
Some of my emails may be answered by AI.
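A minimal sketch of the embedding-store workflow described here, for readers who want the mechanics: embed each email once, then answer a question by retrieving the most similar ones and pasting them into the model's context. It assumes the OpenAI Python library of this era (the text-embedding-ada-002 model) plus numpy; the email snippets are hypothetical stand-ins, not anything from the conversation.

import openai  # assumes OPENAI_API_KEY is set in the environment
import numpy as np

def embed(texts):
    # One call can embed a whole batch of strings.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

# Hypothetical email snippets standing in for a real inbox.
emails = [
    "Can you speak at our AI summit next month?",
    "Invoice for the March sponsorship is attached.",
    "The podcast draft is ready for review.",
]
email_vecs = embed(emails)

def top_match(query, k=1):
    # Rank stored emails by cosine similarity to the query embedding.
    q = embed([query])[0]
    sims = email_vecs @ q / (np.linalg.norm(email_vecs, axis=1) * np.linalg.norm(q))
    return [emails[i] for i in np.argsort(sims)[::-1][:k]]

print(top_match("who asked me to give a talk?"))

The retrieved emails then go into the prompt, which is how an embedding store works around the memory limits mentioned above.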
Do you think plugins are the death knell for open source? If everything flows through OpenAI, are you
worried about plugins? You can't send your private, regulated, government, or other data to black
boxes. It just does not work. So I think in the future,
there will be amazing proprietary models and there will be amazing open models. People will have a
hybrid AI experience. The AI on your phone that knows everything about you, you're not going to
send that to Microsoft, or to Google if Bard ever gets good, or anything like that. You will have your
own model, right? People are starting to talk more about local computing or even on your phone
running some of these systems. In particular, are you in any discussions with
Apple? That seems like a company where people don't know what's happening.
I mean, look, they're very privacy focused. There was a press release of an iOS update
on December 1st putting Stable Diffusion support on the Neural Engine. Wasn't that interesting?
Right. Yeah, everyone is reading between the lines, so you're saying do read between the lines.
I can't say anything. Look, basically, my co-founder Joe and I set up Imagine Worldwide,
our charitable arm. We've been deploying tablets to refugee camps around the world with
adaptive learning. And in 76 months, we teach literacy and numeracy in one hour a day to kids
in refugee camps, right? Using basic adaptive learning. So we got the remit to educate all the
kids in Malawi, right? Building special bonds with the World Bank and others. I'm going to build
the Young Lady's Illustrated Primer for 100 million kids in Africa in the next few years.
We have custom silicon coming to take these language models to the edge. So it's going to
teach the kids and learn from the kids. So this is my objective function. 10% of the equity of
Stability is set aside for that. So this is actually one of the things we haven't talked about publicly
yet; it's the first time we're saying it properly. That is the objective function. So I'm going to be the
default open AI on every single chip out there, for private data, for personal data. And that's what
I built Stability to be, because then it allows us to continue to be open. An open AI, to be realistic.
Sorry, can you tease that out? What's the mission? Say it again?
Well, my mission is to bring, so our mission is to build a foundation to activate humanity's
potential. My personal mission is to educate every child in the world and give them their own
tutor and an AI to help them. I want to call it One AI Per Child, but everyone's like, no, that's
a terrible idea. So I have to come up with a different name. Well, you're raising money from
investors. You know, there are reports that you're out there trying to raise another round,
Lightspeed and Coatue are already investors. Like, what's the monetization model? Or, you know,
like people understand your association with stable diffusion, but I feel like they're still
trying to understand sort of the business model. We build models of every single type, right? So
the simplest business model is modeling agency, as I like to put it. Where we are working with a
few partners on really customized models and variants of this. But the big model is to be the
default base model provider for every sector, nationality and others. And there are probably
40 odd announcements in the next two months around that. So you go to a company and you say, okay,
we have this model we can bring to you, we'll help you implement it, and it'll be your own data that
you're training on, not sent off to Microsoft? It's not us, it's a network of about 24 partners that will be
announced. So we provide the base models, we simplify it for them, because these models are like game
engines. Since we launched Stable Diffusion, we've only seen five companies in the world actually
train their own from scratch, that we know of. Because why would you, if you have a pre-trained
model? And do you remember Breath of the Wild on the Wii U? Optimizing these neural net architectures
is incredibly valuable. And being the entity that creates default models that everyone can take and
extend, in open and, not proprietary but commercially licensed, versions of those for
healthcare, education, others, is a very great place to be. And that's what we're focused on.
Because then we never have to compromise by going proprietary. I mean, again, I used to be a hedge
fund manager. Like, I built this with a business-model-first, cooperative-first, partnership-first
approach. So, yeah. Yeah, well, given you brought up the hedge fund piece of it, I'm interested
in, yeah, you've talked about it some, so I don't want to talk about it for too long. But
what inspired you to make the leap to AI? I mean, were you playing in crypto at all for a
period of time before AI? So my college roommate was Ben Delo, who set up BitMEX and was Britain's
youngest billionaire. I got into crypto in 2011. The rumor is that you had acquired a lot of
GPUs for crypto and you're repurposing them. That's false. That's false. I got into AI when my
son was diagnosed with autism and everyone told me there was no treatment. So I built an AI team
to do literature review and drug repurposing, focusing on neurotransmitters, to ameliorate his
symptoms and allow him to do ABA therapy. So I did that and he went to mainstream school.
That's amazing. Yeah. So that was when I got into it and I saw the power of it and I told
everyone, look at this and they're like, who the fuck are you? You're a hedge fund manager.
Right. So then we did Imagine Worldwide, and then I led a United Nations AI
initiative.
Stability AI and what it contributed to Stable Diffusion. Can you talk about that? What's
the size of the engineering team, the funding? Yeah, just, like, Stable Diffusion kicked so much
of this off. I mean, I said in my newsletter the other day that you in some ways fired the starting
gun for OpenAI to start opening up a lot of their offerings. Do you see it that way?
Yeah, potentially. I think this was happening anyway, because the technology got good enough to
generate revenue around it. So my approach was basically to go and fund everything and give grants to
everything, because it was so amazingly able to bring- From money you made as a hedge fund manager?
Yeah, pretty much. I built a cluster of 4,000 A100s without any funding. It's good to
have some flexibility, working with the partners at Amazon and others as well. Amazon in particular
were amazing for that. I think the thing is we were kind of collaborative, and this is what
we continue to do. We just funded RWKV with millions of A100 hours. It's an RNN-based
transformer with 14 billion parameters that works. RNNs work. We do stuff off-piste like that.
Sorry, but just on the Stable Diffusion origins. So you funded the training?
We were funding the entire sector, all the notebooks, everything like that. And then
Robin Rombach at the University of Heidelberg, one of the authors of latent diffusion,
which was the precursor, contacted me and said, can we help? I was like, yeah.
How did you know them? Well, just because we knew the entire sector. And I was like,
latent diffusion is a really promising thing. Let's scale it aggressively and let's put
engineering towards it as well. So basically, Stable Diffusion was a collaboration between
CompVis, where he was a PhD student, and Stability; Robin leads our team at Stability, along
with all the other authors of latent diffusion, apart from Patrick Esser, who is at Runway ML.
So it was a three-way collaboration. And so Stable Diffusion 1 was released like that.
And then we were like, this is a bit too good, because even though we had the
safety filter, it could do not-safe-for-work and all sorts of other things. And so I was like,
I don't feel comfortable with Stability putting that stuff out, because, you know,
we work with Thorn and a range of others, and we offer opt-out and other things people
don't. So Stable Diffusion 2.0 onwards is basically all Stability and kind of the team in there.
You've taken, well, there was a little scuffle where, you know, Runway updated the
Stable Diffusion model. You said they weren't supposed to, and then said it was okay. What
happened there? Yeah, pretty much. So CompVis released 1.4, because it was the PhD student's,
and Patrick, who's also at Runway, was there. And then basically, we were very arm's length
in our collaboration agreements. So, like, with LAION, we gave them compute and they built the
dataset, and it was amazing work on OpenCLIP and other things, because I don't want to get in the
way of people. But we had an agreement whereby any of the contributors could release it. We thought
that meant unanimously; it didn't. Right, you wanted to create, like, an organization that runs it all
together. Yeah. They were sort of like, okay, we're going to release it. Thank you for spending money
on compute. Well, we did the kind of big Exploratorium event that we did. And then we had
an agreement that inpainting would be released. I was in a meeting with Jensen at Nvidia and it got
released, and everyone was like, oh my God, what happened? So then we said, that can't be
it, because there was an agreement: take it down. But they thought that was from lawyers, when our
lawyers never contacted them. I said, let's turn that around. It's fine. You have the authority to
release it. It's kind of your decision if you want to release it. They were like, I don't think it's
the right thing, I don't think we had an agreement. And then we moved on from there, because there
was disagreement. And I said, okay, we'll take Stable Diffusion 2.0 onwards and we'll run with that.
And then Runway have run with Gen-2 and things like that and built amazing things, and they've gone
proprietary. I mean, are you fine with it? Do you feel like you fired a starting gun that you regret in
any way? I mean, if you're signing this letter, you're worried there's a technology that's out of
control, and you were helping to get the ball rolling. I don't know. Do you have any
apprehension about that? No, it was coming anyway. And again, I saw the quality of GPT-3 and other
things and this was coming because what happened is that you've moved from research to revenue.
And this is fantastic. I don't think there'll be any programmers left in 10 years. Why would you
need a programming language when you can just describe what you want? And look at all the disruption
this is causing. This would have accelerated anyway. And did I release a trillion-parameter
model, like some people think? I released a model that fits on an iPhone. And that was our objective
function when we were building it. It's just the fact that it was open and it worked and
it has made the world more creative. So I think the regulation of bigger models was coming anyway
because of H100s and TPU v5s and this scale-up, because scaling was not linear. You had this massive
drop-off. Then we had kind of pipeline parallelism and a whole bunch of other things that allowed
us to get more linear. And now with H100s, we can basically scale pretty much linearly.
Yeah. Stability, right now, today. Like, employee headcount, revenue, anything you can give us just
to, like, understand it. People? We're 170 people, and we've got another
27 starting soon. People are joining us from all sorts of
places. We're going very hard, very aggressively. Like I said, there'll be dozens of announcements
in the next month about various things. I wish I could give you more detail than that.
I mean, you've mentioned Amazon at various points. What's the status of your
relationship with them? Well, they're super close. Again, we're partners for everyone,
because we make life easier by creating standardized Stable series models, right? And so we're very
appreciative of Amazon. They kind of built our first cluster, helped us accelerate, and we look
forward to deepening that relationship. We touched on this early in the conversation, but you know,
GPT-4 is just looming so large. Like what's your view of what it's capable of, how close other
people are to it? Like what's your assessment of it? I think that it's an incredible feat of
engineering. OpenAI have some of the best engineers in the world and they've done
amazing work there. I think that they were just focused, and they delivered something that
has real value. I mean, again, I think all of us probably use Copilot for development now where
we can. Like I said, therapist, all these other things. In terms of
how far away other people are, they have the head start because ultimately they worked with
Microsoft, and Microsoft had Singularity to create this hyper-optimized cluster that could scale
almost EC2-style with all that kind of support. You compare that maybe to the OPT
logbook from Meta, where they were like, on the 22nd of December, the cluster failed
again. That was very painful reading, right? Because these things are a bit more like
art than science. So I think the other teams are just trying to get their act together, but Google
will come through, in that they will have that level of model. They're held back a bit by
institutional inefficiency. NVIDIA is the one that people don't realize is going
all-in on this: their own multi-modal models, they're launching their cloud, they're launching
foundation models as a service. And they have amazing engineers, and they've been keeping
schtum, effectively. But you can see it's coming. Is there anyone you think is sort of asleep
at the wheel? Somebody just shouted Amazon. I couldn't possibly comment. No, look, I think
it's difficult, because no one's ever seen anything as fast as this before.
Because, like I said about crypto before, all I've ever done is lose money in crypto.
It was like, not your keys... I lost my keys. You know, I should come up with a good
quote for open source: not your weights, not your brain. There we go. You guys can take that.
And the dot-AI bubble is the other one coming up, as hundreds of billions come into here.
So again, journalists.
Are we in a bubble? If you had to declare it right now? Of course not. This is bigger
than 5G or self-driving cars; there was like a hundred billion, a trillion dollars of investment
there. Six, seven billion dollars has gone into this sector, and it's far more impactful than those.
But I was making this argument on stage earlier that, you know, the big companies basically have
an incentive to overpay because they want companies to turn around and spend money
on their services. Do you buy that? I think, at the moment, a lot of the stuff is just
POCs and hasn't really been integrated. But then, you've seen Microsoft integrate like nothing else.
But even then, look, Bing is not top of the app store. Fundamentally, because it's still a bit crap.
Right. You know, like when founders come to me, I say, build good products that solve problems.
Most of the stuff is still surface level. They're not thinking about the data journey,
the user journey, the user experience, you know, models with embedding stores and all these other
things. And so I think we have to wait for the next wave, but things are useful right now.
And I think as they're useful right now, a thousand companies in the next year will spend 10 million
dollars each, which is a 10 billion dollar market, just trying to figure out what on Earth is going
to happen. You're talking about having companies focus. And I get some of your message that for
Stability, it's okay, you know, make it your own. We'll sort of help you make your own. But like,
are there going to be specific use cases to reach out to customers? Or are you going to narrow in
some way what your specialty is? No, our specialty is to build the benchmark models for every
modality, sector, nation. Isn't that huge? Like, that's huge. It is. But someone's going to do it.
And again, with our partner network, it's a really useful, valuable thing. Because you don't want to
have a million Unreal engines, right? Or consoles. I think you want a level of
standardization, but you also want to be able to build your own if you want. So like Dream Studio,
for example, we put a huge amount of work into the enterprise offering. We're just about to open
source the latest version and just say, let the community build it, right? Because we can't
do vertical integration. You can't do everything. And this is the thing. Everyone wants
to do a little bit of everything. Find your niche, find your customer, find your distribution.
But with the pace this goes, I think partnering is generally the best way. Because you can't build
a sales team for your demand. Like, my inbox is hideous. That's why I needed the embeddings API
and to create my own chatbot, right? So you're leaning on partners to outsource sales altogether?
Yes. Interesting. Because I'm helping them. Are you going to get acquired? No. Okay. I'm going to IPO.
You're going to go public? I'm going to IPO. I'm going to build one of the biggest,
best companies in the world. But is that years? You're saying in years, or a year?
Faster. Faster? What? The market needs an investment that covers every modality, every sector.
Next year? I can't say more. Within the next three years?
Look again, this is the biggest thing of all time and we're just at the start.
Yeah. So I was a public markets investor that knows all the public market investors, and the
market needs to have something like that. But it's hard. You have to build a great company first.
You can't just IPO with, you know, crap. You have to get amazing revenue,
amazing partners, distribution, an edge. So we've been executing. We're 17 months old.
Right. It's insane. I think we've been executing pretty darn well. But at the same time, we haven't
done everything perfectly. Like, hyperscaling is hard. So we'll continue to scale. We'll continue
to be in our specific area, which is building benchmark models and supporting the open source
and academic communities. Right. And hopefully everyone will be able to build on top of that and
you'll be able to build cool, awesome stuff. I'm going to open it up after this question. So think
about your questions. I mean, AGI. That's sort of looming in the background of the letter we
opened the conversation with. Do you have a view on how close we are to AGI and how
scared you are or aren't of it? I mean, like I said, we'll never be less than 12 months away,
according to everyone. You know, like, there's nothing that can change that, right? I have no
idea. I think AGI, people have different definitions of it. Like I said, my base thing is that we're
just really boring and AGI won't really care about us. It'll just bugger off, like Scarlett
Johansson in Her, right? We're boring. Then what are you scared of? You're not scared of AGI.
You're scared of job replacement? Job replacement. I'm scared of, you know,
use of this technology in a properly aggressive way. And I'm not scared of AGI. Like, I could be wrong.
So I'm not the type of person who's like, I'm 100% right about everything. There is a potential
for existential threat. That's not my base case, but it's a probability. And much of our world
is ergodic. Like, our science is ergodic: a thousand tosses of one coin are treated the same
as a thousand coins tossed at once, because our systems can only absorb so much data.
The Gutenberg press allowed us to scale by formalizing information, but lost a lot of the
complexity. So our system standardizes us, right? This AI allows us to have infinite context and
other things. And that's incredibly powerful, but at the same time, there is an existential
threat, and that could take humanity to zero. There's an existential threat where our freedom
could be completely taken away by entities that could control our minds, effectively.
But nation states using non-AGI to manipulate people?
No, but there's a whole scale here. There's a whole scale of threats that we have to come
together as scientists to realize could be happening, some faster than others. If they don't happen,
fantastic. But if there's ever a time to discuss it, it's now. If you'd tried to have this discussion
six months or a year ago, before ChatGPT, it would not have been appropriate. Now
people can see it. They see the fear. Okay, questions. Anybody? Be bold.
Boom. Daniel, can somebody give him a mic? Thanks. This is awesome, incredibly inspiring hearing
you talk, especially this fast, live. I've got a question about something you said earlier,
which is your ambition is to create public models that people can then use with private data.
But that skips over the fact that you trained the public model on private data.
And I'm interested in your thoughts on the Getty lawsuit, particularly that part of the complaint
with the two soccer photos, which I thought was really damning. And as you think about going
forward as a potentially public company, how worried are you about the plaintiffs' bar in the United
States and weaknesses that you and companies like yours have around intellectual property rights?
I mean, man, you know, I can't talk about lawsuits publicly, right? I can't. But look,
I think I can say some things, though, regards to that. I think that people scrape off a whole
bunch of things. And there's legislation that varies around various countries. There's an
input argument and output argument. And we have Mark Lemley from Stanford leading our defense.
He wrote the book on fair use. So, you know, we're reasonably confident about that.
We are the only company to offer opt-out. And it's not a legal or moral or ethical obligation.
I think it's the reasonable and right thing to do.
So...
People can come in and say, pull this out of your training data?
Yes. So we'll be making an announcement about the next version of stable diffusion,
the number of images that have been opted out. It's huge. And I think it's the right thing to do.
And I think people should do that, because governments have come to me and said,
should we allow you to crawl everything for anything? And I'm like, no.
We should have standards for what not to crawl, standards for opt-out, and other things,
because it's the right thing to do.
All right. One more question.
So first, amazing talk. Thank you. Thank you for being here.
You're still not very clear with us on how you're going to make money.
If the model is out there...
I sort of echo that.
It's open and it's free. I can grab it and run with it. I don't need to pay you anything.
I mean, right, it's not like we have any successful open source companies at all.
You know, you see, people pay for surety. They pay for standards.
And even if it's open, you can have variants of it that are auditable,
interpretable, with commercially licensed data. That, again, is fully there.
And this is what you need for education, health care,
regulated sectors, industries, governments, and more.
So, you know, just having a standalone model isn't good enough.
There's more to it than that. But again, I'm a hedge fund manager,
so I don't want to give away my arbitrage opportunities.
You're going to see my business model properly in a year or two when I give away all my secrets.
That's a good question.
The last question is, like, what advice would you give to the audience here?
You know, I mean, this is sort of a set of people who believe in the story that AI is disrupting
everything. How should people position themselves for this? Nobody knows which businesses have a
moat, right? I think there's a lot of concern. Should I be going into a business doing foundational
research? Should I be focusing on some product use case? Like, who's safe from disruption?
Or what businesses would you tell people to go to build right now?
Well, you should go into regulated businesses that will have super normal margins, right?
Because you have pricing power. The question is, do you have pricing power?
Because with a 32,000-token context length in GPT-4, you can put in a 20,000-word instruction manual.
They say AI is really good at reading instruction manuals.
And people are really bad at reading instruction manuals.
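Rough arithmetic behind that claim, as a sketch: a common rule of thumb (an assumption here, not a figure from the conversation) is about 1.3 tokens per English word, so a 20,000-word manual comes to roughly 26,000 tokens, comfortably inside a 32,000-token window.

# Back-of-envelope check that a 20,000-word manual fits in a 32k context.
words = 20_000
tokens_per_word = 1.3              # rule-of-thumb estimate for English text
estimated_tokens = words * tokens_per_word
print(estimated_tokens)            # 26000.0
print(estimated_tokens <= 32_000)  # True: it fits, with room to spare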
So you're saying, be a lawyer, be a doctor, be something where you're protected by the government?
Oh, no, no. No, not those either.
Well, you can run a law firm or a medical practice or something like that, but entering those professions, definitely not.
Again, it's businesses with pricing power versus businesses without it. But this
transforms information structure and flow, which is incredibly powerful.
And yeah, I think that for those launching companies in this space,
you partner with people that can give you scale because you won't be able to build scale in time.
I think that's a very important thing. And everyone is looking for an answer
because their boards and CEOs are bugging them. So offer a good answer and make them feel
comfortable and you'll make loads and loads of money.
Great. Well, thank you so much for coming up. We really appreciate it.
Thanks a lot. All right, that was our conversation with Stability AI CEO Emad Mostaque.
Now we move to my conversation still at the Cerebral Valley AI Summit with General Catalyst
investor Deep Nishar. I think it was a great conversation about how artificial intelligence
is impacting health. So give it a listen. Very excited to talk to Deep Nishar. He's,
you know, leading growth at General Catalyst; I guess you have access to the $4.6 billion fund,
not quite the, you know, $100 billion that maybe you had access to during
your time at SoftBank. Obviously also an experienced executive, at LinkedIn at one time.
Yeah, so very excited to talk about sort of the market, where I'm sure we'll touch on the letter
and your points of view. I just wanted to start off like, I mean, in particular,
you spend some time in healthcare, which is I think something we haven't talked about as much
here in terms of AI. I'm curious like what you think people should be tracking in terms of like
generative AI hitting the healthcare space and what you're looking at and what you're excited
about there. It's a great way to start because we haven't talked about it. You know, for the
people in the room who are not kids, you may remember that there used to be something called
genetic algorithms in AI long before neural nets and DNNs and GNNs became popular. So
there is something to be learned even in the way AI is progressing that came from what we know
about genomics. There are probably three places where AI can be very helpful and we're already
seeing a bunch of companies; you know, we've funded a few of them. The first is, protein is like the
lifeblood of any cell; it's the workhorse of what happens. We know very little about how proteins
work, and of course with AlphaFold, everyone's now, you know, thinking about proteins.
I know I'm always asking... you know, I was doing Folding@home back when I was in high school on
my laptop, you know, on the distributed network. It feels like we've been working on protein folding
for ages. Yeah, for a really long time. But the thing is, proteins are really, really tiny and
they're moving all the time. So the way you try to figure out the structure of a protein typically
is, you take cells, you freeze them (like, you do cryo), then you apply x-ray diffraction
technologies, you look at shadows, and then you try to figure out what that protein structure is.
Pretty arcane and archaic. Two Nobel Prizes have been given for those particular techniques.
Now with AlphaFold, you can actually figure out how proteins are folding in a dynamic way.
And the proteins are the locks, right? And the drug molecules that you create are the keys.
So the first thing you can do, in terms of biotech and AI/ML, is actually
predict how the proteins are going to fold. So once you understand the target (you know,
someone here can tell you all about it later), you can actually figure out the right keys,
the molecules that can go and bind to it and fight against it. So that's one thing. The second
thing you can do is you can actually figure out the chemistry that's going to be the best in terms
of hitting those targets. A lot of times when we model drugs, there are side effects. And, you know,
that's why you like read all the things, you go to the pharmacy and they tell you like, you can
get nausea, you can get, you know, diarrhea, etc. The reason that happens is that you're not hitting
the target in the appropriate way, because the chemical structure is a little bit off.
You can do that better. And the third, the most important one is when you know both the biology
of the target and the chemistry, you still have to find the best possible thing that will hit it.
And that's high-throughput screening. And now with AI/ML, you can actually go through
hundreds of thousands of drug candidates and cull them down to the most promising
ones that could hit that target fast. And we saw an example of that. One of my portfolio companies,
Vir Bio. Which is spelled V-I-R. Sorry, the Indian V and W sounds.
Just making sure we can Google it afterwards. I always want to know.
They came up with an antibody for COVID that worked all the way up to the Delta variant,
and it was like 85% effective. So it was more effective than Paxlovid is today.
And the way they came up with that antibody was running very high-throughput AI/ML screens
against antibodies that were available for the original SARS virus. And that worked on COVID.
I didn't ask you this in advance. I hope it's not too personal. But like,
are there ways where your personal health care is better because of some of these companies? Or
are you able like being at the cutting edge of looking at these companies, able to get like
some advanced personal health care? Or how much is it like available yet? Or is it only if you have
like some edge case like disease where it's really useful at the moment?
A lot of this, I mean, thankfully, I've personally never had to use any of these techniques.
But I've had friends... a very recent case: a friend's mom was in India with an infection
that they couldn't figure out, and she was in very bad shape. There's a company called
Karius here that takes a blood sample and screens it against 2,000 pathogens. And within
14 hours can tell you what the infection might be, which is very different than what happens
right now. So what happens right now, if you have an infection, your doctor looks at you and they're
like, okay, chances are these four or five things, they'll take a blood sample, they'll send you
to the lab, and they'll check for those four or five markers. If one of them comes out, great,
they know what to give you. If it doesn't come out, you're sort of out of luck, because then they
have to send it back and start again. And if you have something that's a little bit rare or different,
you could be in dire straits, especially if you are, like, immunocompromised, an older person,
a younger person, or have some other immunological diseases. That can be a problem. So we were able
to help her and get that blood sample from India. That's amazing. In terms of, like, large language
models, I feel like I saw a viral one on Twitter where somebody's dog was sick, they'd
been to the vet, they put it into ChatGPT, and ChatGPT sort of gave them the most common
thing. Then they went to another vet and said, is it this most common thing? And it was; the
previous vet had missed it. I mean, it feels like with animals there's some comfort with that.
Are you seeing large language models applied to human medicine yet? How optimistic are you
that they'll be helpful? So you know, the funny thing, Eric, so I had the opportunity to start
Google's mobile efforts like in 2004, 2005. So our team built like the first Google search on
mobile devices. And I'll tell you a true story: one of the first emails we got back saying
thank you was the following. A father had a newborn. The newborn was born with, you know,
some issues. The doctor said, we have to do surgery right away. The parent was like, that seems like
a very drastic surgery on a newborn; we should check it out. They did a Google search, found
some condition, and asked the doctor, could it be this? So, Dr. Google. This was 2005.
Okay. So as long as information is available, there are enough techniques to go figure it out.
You don't need large language models for that. The place where large language models and all these
search techniques help us is that medicine is a very empirical science, right? Like doctors, they go
through like five years of cramming shit in their head and then they have to recall it.
And they have to recall it in the context of what they know. So, you know, I come back with a fever.
Let's say I've just been to Africa or India or some places where malaria is very common. If I
don't tell my doctor that I was just in a place where tropical diseases are common, they're not
going to look for malaria in Palo Alto because they're like, there are no malaria mosquitoes here.
So like they wouldn't even check for that because their empiric evidence doesn't tell them look for
that. I need to know enough to like tell them that I was just...
It's like you're saying there's a shortage of time with doctors, whereas you have unlimited time with
the LLM. It's like, okay, you could enter everything. Yeah. The LLM could flag, like, here are the key
things that you might not have communicated to your doctor. Exactly. It's a great decision support
tool for them. And by the way, that's a much better co-pilot, I think, than, like, you know, a coding
co-pilot. Like, let engineers do their jobs, but doctors can save our lives.
So, to give a quick answer: you're more excited about the medical co-pilot. Yeah, that's
exciting. I mean, there's sort of a subtext with the letter here, right? Like one of the strong
arguments against this letter that we've been talking about all day, you know, calling for a
halt in research, is that like, what could be lost, right? I mean, there could be, you know, in six
months some great medical technique. I'm curious what your view on the letter is, especially as
someone thinking about sort of the medical work that's happening. Yeah, I mean, I know one argument
that was made here earlier today, which was like, oh, it's a huge cost to humanity if we stop
everything for six months. Bullshit, right? I mean, humanity's gone on for hundreds of thousands of
years. We'll go on for a lot longer. We'll die by suicide, not by homicide because some LLM showed
up one day. Having said that, I think that- That was a strong argument for stopping: whatever we'd
do now, we can do six months down the road. You're saying we can stop. You know, progress is
never stopped. And I don't think... like, the stop part is a red herring. We should all agree and
accept that. People are saying it because what we have built to date, right, is the first time
in history that a massive technological shift is both opaque and stochastic. We don't know how it
was built or what it's doing, and it's not deterministic. So, like, if you go to ChatGPT and type the
exact same prompt five times in a row, you do not get the same exact response.
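A toy sketch of the stochastic part, for the curious: language models pick each next token by sampling from a probability distribution over the vocabulary, so the same prompt can yield different outputs whenever the sampling temperature is above zero. The tokens and scores below are made up for illustration, not taken from any real model.

import numpy as np

rng = np.random.default_rng()
tokens = ["cat", "dog", "car", "cloud"]
logits = np.array([2.0, 1.5, 0.3, -1.0])  # hypothetical model scores for one prompt

def sample_next(temperature=0.8):
    if temperature == 0:
        return tokens[int(np.argmax(logits))]  # greedy decoding: deterministic
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                       # softmax over temperature-scaled scores
    return rng.choice(tokens, p=probs)

print([sample_next() for _ in range(5)])   # five runs, often different tokens
print([sample_next(0) for _ in range(5)])  # temperature zero: identical every time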
I just want to underline: we're at, like, the most insider AI conference in the world,
and a recurring take is, we don't actually know why it produces what it produces. That's truly
amazing- That's a feature of how neural networks work, right? Because they train
themselves, and they are assigning the weights by themselves. And no one is out there
looking at the weights and saying, okay, you know, how did these things work out? Like,
these are giant matrices within matrices within matrices. Whoever came up with The Matrix was
smart, right, man or woman, whoever made it. The point is, that's what this letter is pointing us to.
So go back to biotech, how we started the conversation. There was a foundational
conference at Asilomar, which is not too far away from here, where the key researchers of the day
got together. And they said, look, we know where this technology can go. Eventually,
like, we can change the way, you know, fetuses are designed. Exactly. And they sat down and they
came up with a set of rules. And they said, look, there are some things that we just shouldn't touch,
even if they are available to us, not in the name of science, not in the name of research,
because they are fundamentally bad for humanity. All right, I was literally arguing
with someone about this last night. Yeah. So the argument is basically, we didn't do human
cloning, so then we could stop talking about this. And yet somebody did go and do human
cloning, right? Elsewhere. Right. Yeah. Well, and that country, I mean, now has, like, the
second largest population in the world as of March 1st, after India. But I just don't
think there's the same, there's not the same strong moral intuitions with like, AI. I mean,
people are worried, but, like, I feel like with human cloning, you poll a random person
and they're more worried about it. Like, is there enough? That's in our face, right? That's in our
face. Like, we can see it, and we don't want it. Like, you know, where does it stop? You know,
you can say, look, if a baby is going to be born with a loss of hearing, and the parents
want them to be able to hear, you'd say yes. But then there's a whole set of people who are like,
why do you assume that's perfection, etc. In our case, like, in the case of AI, it's like,
oh, it's software, you know, what happens if it generates like some video here or there,
and you know, and then the next thing, you know, like, people are creating like really bad stuff
on it, and there is no way to stop it. Is there any... I mean, as a practical matter, is there
anything practical you'd say we can do? A first step could be, if you take a page out
of the biotech Asilomar conference: you have (and I'll miss a few people)
a Jeff Dean, a Yann LeCun, an Andrej Karpathy, you know, a Greg Brockman,
not to collude, but to get together and talk about what's happening. I think the reason they talked
about AI labs in that letter, most likely, is because that's where the bulk of the research is
happening, right? It's not like, there are hundreds of thousands of places where interesting
applications are being built, but the core fundamental research. Like, get a Percy Liang, get a
Chris Ré, get folks together, at least just talk about it. And the one thing I learned
being very early at Google, LinkedIn, and now at GC, that we place a lot of importance on, is
intentionality in what you build. Just letting things go their own way, and then, when there are bad
consequences, saying, ah, it was out of my hands, I gave the tools to the world and the
world decided to do a bad thing... no, things were in your hands when you could have done something
about it. You build it. I mean, it sort of came up in the Stability conversation: you
fired the starting gun in a certain way. Google, I mean, you worked there, you know Google well.
Putting on our more, like, strategic hats, where are they in this? I mean, there's a lot of people
making fun of them relative to Microsoft. Like, what's your view on how Google is positioned here?
Well, if you just look at the data between Google and OpenAI, 50% of all the important algorithms
and models that have come out, in the last five years have come out from those two places,
with Google contributing more than OpenAI. And, obviously, the transformer paper itself
came out of Google. And BERT and, you know, a lot of other things along with it, before and after.
So you're bullish long-term on Google's position here? I still hold the stock I got in 2005. That's
savvy. In terms of, you know, you're someone who's had access to, you know, billions, money that's
unimaginable to me to deploy. I mean, can you invest too much in a good thing? Like, I mean,
SoftBank was very bullish on AI, especially with Vision Funds 1 and 2. That doesn't necessarily
make for great returns. And, you know, you can just sort of pour in more money than technologists
can absorb. Having sort of experienced Vision Fund 1, like, what lessons do you learn from
that pace of capital deployment in the face of promising technology? So, you know, I learned a
lesson like 20 plus years ago when I started a company in enterprise infrastructure software.
And that was all about like managing APIs before APIs even existed. So that was the right idea,
but probably 10 years too soon. I think SoftBank had a similar idea, which is AI is going to change
the world. And we started investing in AI companies in 2017. So the previous generation
to LLMs. And unfortunately, the pace of investing was such that before you could hit this generation
of AI companies, the fund sort of depleted itself. So the same lesson again, which is like, you can
have a great idea, but you also have to have it at the right time. And that was one of the challenges.
Now, having said that, some of those companies are now actually in the new world doing really well.
Like SambaNova Systems, you know, which has built very specialized AI chips, with
both a hardware and software stack, and can run, you know, all of these models very,
very efficiently; Insitro, that's in the drug discovery space; Vianai. You know,
you know, one of the things that no one has talked about here is like we are all building
companies in the valley to sell to other companies in the valley. There's a whole world outside of
the 94022 zip code, where people have a ton of money and they don't know how to deploy any of
this stuff and they are willing to give you a lot of money to buy your software. Right.
How do you look at growth rounds, competing against a Google, Microsoft, Amazon, where, you know,
the money is going to flow back to them? How do you value, how do you compete in those deals?
Has that pushed you to hold back at all, the idea that you're competing against those companies?
Or do you play a different game? The investment business is not a wait-and-watch business, because, you know,
that's like timing the market and I honestly don't know anyone who times the market consistently
well all the time. Our job is to make the best possible investment decisions given all the
information you have at that point in time and that might mean like you wait six months until
a company matures a bit more, or you go early, or you go in at the right price, etc. So value and
valuations have divergence and convergence over periods of time, and in the public markets,
I think value and valuations have converged quite a bit. In the private markets,
they are a little bit divergent right now. Right. Especially in this space, right? I mean,
this is sort of the challenge. I feel bad for, like, software companies that are on the borderline
of AI, because they get valued too much as software. You know,
their multiples are destroyed. Is it just the promise of AI, or, like, do you think AI valuations
come down to earth this year, or do you think they're just going to keep going up?
I think we should realize one thing which is like, you know, AI by itself doesn't do anything, right?
AI is the fuel that powers an engine that actually, you know, goes in a car that takes you from
place A to place B. So the AI that we are talking about has to go in some software that's solving
a real problem that is a high enough value that people are willing to give you money for that.
So that's how we should be thinking about these companies like no, it's not technology for technology
sake. That's interesting. But unless you have the iPhone and the App Store alongside it, like by
itself, iOS doesn't do much for you. If somebody has a burning question, I'll throw it to you.
Otherwise, I'm going to keep going with the last one. Oh, sure. Hey there. What do you think about
the role of AI in like personalized health and behavior change, like society wide?
I think AI can be a good aid there just like any other software. But the fact of the matter is,
like when you're talking really about personalized health care, most of us know that if we eat right
and exercise, unless we have some genetic issues, we should be reasonably healthy. Yet we don't do
it, because it's a lot easier to, you know, eat that really nice chocolate croissant outside as
opposed to going for the granola or, you know, whatever. So AI can just keep nudging us.
Eventually we have to take the action. This is why, taking a slight tangent,
the GLP-1 drugs, the Ozempics of the world, are all the craze right now, even for non-diabetics,
although they are meant for type two diabetes, because it's an easy way to lose weight, right?
Why go to the gym for an hour a day and like eat salads when you can eat pancakes all day and
just take a pill at the end of the day?
Yes. So that's less about AI, honestly, and that's about monitoring your biomarkers.
And that is actually very key. So, you know, if people really want to understand what's going on
with their body, say for sugar, having a CGM is great directionally because for some people,
like eating a mango may really spike your sugar; for other people, it may be watermelon,
and you can eat one or the other. But we don't know that;
generically, the doctor will just tell you, don't eat any sweets. Well, that's kind of hard, right?
It's like the cardiologist who says, when you're 40: don't eat sweets, you know,
don't have a spouse, don't have fun, don't drink. And then, is life even worth living?
All right, well, thanks so much. Fun to talk about the actual health implications of these models,
not just, sort of, the chat. Super fun. All right. Great.
That was our episode. Thanks for listening to my conversations with
Emad Mostaque of Stability AI and Deep Nishar of General Catalyst. I'm Eric Newcomer. This has
been Newcomer. I wanted to shout out our conference presenting sponsor, Samsung Next, plus
Oracle and NVIDIA for sponsoring, and of course Volley, the AI voice games company,
who hosted the event, my good friends who helped me put on the conference. Follow along on our
YouTube channel; it's at NewcomerPod. Also, you can read and subscribe to the
newsletter at newcomer.co. Thanks to Young Chomsky for the music. Thanks to Tommy Herron for the
audio editing. Thanks to my chief of staff, Riley Kinsella, for being all hands on deck. We'll
be posting an extra episode this Thursday from the conference. Look forward to it. Thanks a bunch.
Goodbye.