Hey everybody, it's Eric Newcomer.
Welcome to Newcomer.
I have a great episode.
I spoke with venture capitalist Jon Turow, a partner at Madrona and a former product manager
in AI and machine learning at Amazon.
He's someone who really helps me.
He helped me prepare for the Cerebral Valley AI Summit.
And so now happy to let you hear directly from him, someone who can really translate
the inner workings of cloud and Amazon Web Services and machine learning to the masses.
So great to get to talk to Jon and talk about Amazon Bedrock, their new partnerships
with Anthropic and Stability AI, the developments in open source, some of the complications
of chaining models.
And then we get into BabyAGI towards the end of the episode.
I think it was a great one.
So give it a listen.
Before we get to the episode, a word from our sponsor: Newcomer is brought to you by Vanta.
To close and grow major customers, you have to earn trust.
But demonstrating your security and compliance can be time-consuming, tedious, and expensive.
Until you use Vanta.
Vanta automates up to 90% of the work for the most sought-after security and privacy standards.
Save time and money on compliance with Vanta's enterprise-ready trust management platform.
For a limited time, Newcomer listeners get $1,000 off Vanta.
Go to vanta.com/newcomer to get started.
Now our episode.
Welcome to the podcast.
I feel like we talked a lot in the lead up to the conference.
I've been downloading my listeners on the AI world, but I feel like everything moves so fast.
Since we started talking about what we'd talk about this week, it's like, oh my God, the
world has already changed since Cerebral Valley.
Anyway, welcome.
Hey Eric, great to be here with you.
Before we get into like our slate, there's a lot to talk about because, you know, Amazon
is making big moves.
I think, you know, I've talked a lot about open source.
There's a lot more there.
Competition in foundation models, and maybe we'll end a little bit with BabyAGI.
But before we get into any of that, you just talk about, you know, you were a founder and
then at Amazon, can you talk about working on product at Amazon a little bit and getting
into VC?
A little sort of quick background on yourself for people.
Yeah, sure.
So I'm a partner at Madrona, an early-stage firm.
I invest in AI companies and data companies who build products that customers love.
And before this, you know, I've been a VC for about 14 months, a very short period of
time.
I spent almost 10 years at AWS, where I was a product leader in AIML, edge compute, IoT,
and data.
And that was really a very special time and place and learning experience for me, because
at Amazon, the role of product is wholly: who is our customer, and what problem does she
want to solve, and therefore what should we build?
And how do we build a really big business on top of that?
I was fortunate to work with some really talented teams and some really special customers on
exotic new technologies.
What was machine learning like at the time?
In Amazon Web Services, like what?
Because we're about to really dive into that in a second.
But like what historically has been the capability of AWS when it comes to AI and machine learning
tools?
So, you know, you want to back up even a little bit further: Amazon has been an early
pioneer in machine learning from almost the beginning, in terms of suggesting products
you should buy, and logistics optimizations, and all kinds of other things.
Machine learning for the purpose of Amazon consuming it to serve its own customers better.
And so there's a really long and rich history of innovation there.
In AWS, there's what's been considered a layer cake of AIML offerings, where at the lowest
level, there's really broad support for compute in GPUs and really fast network to do training
and inference, but on third party silicon from NVIDIA and Intel and also first party silicon
from Amazon.
With popular frameworks like PyTorch and MXNet and TensorFlow.
The next layer is sort of a flagship platform called SageMaker, which is really useful.
That's what everybody's striking deals with.
All the AI companies are trying to strike partnerships with SageMaker right now, right?
SageMaker is a platform for training and inference and pipelines and data for companies who want
to train and execute their own machine learning models.
And it's been explosively successful.
Customers in almost every industry you can imagine use SageMaker to train and execute
their models.
And that's sort of been the cutting edge of AWS up until just recently.
It's very, very advanced.
There's a third layer, too, of proprietary AI services, where AWS uses the underlying technologies
to offer higher-level functionality to its customers, for computer vision and document
AI and speech-to-text and text-to-speech and call center analytics, and the list goes on.
And so depending on how customers want to consume these technologies, they have been able
to choose where in the layer cake they want to start.
Great.
And I promise the listener, I find the cloud computing services extremely hard to
follow if you're not buying them.
But I brought you on because I feel like you're the best I've found at explaining them to
people, and now that you're in the venture world, you know sort of the Amazon world, but now
you have to talk to everybody else all the time.
Madrona, you guys invest in Runway, which is obviously super hot right now.
And then in the data space, Motherduck.
So I start paying attention to VCs when it's like, oh, those are two buzzy companies at
the moment, definitely in very different spaces.
All right.
So that's some background on you and sort of what Amazon has been working on.
But like the news of this week is Amazon Bedrock.
Basically, you know, Microsoft obviously had this huge partnership with OpenAI.
The $10 billion sort of investment has given Microsoft's cloud offering, Azure, this huge head start.
And this week Amazon responds.
Can you sort of explain the response and then we'll sort of assess whether we think
it's competitive?
Yeah, sure.
So Amazon announced a slew of generative AI announcements on Thursday, April 13th,
the centerpiece of which was called Amazon Bedrock.
And Amazon Bedrock is a service that allows you to access large language models via API
as an AWS customer, inside your own cloud environment, inside your own security envelope.
And in classic Amazon fashion, there is a wide selection of which models you can choose.
You can choose a first-party model from Amazon, a family of models which are called Titan.
And there is also a selection of really powerful third-party models, from AI21, from Stability
AI.
And perhaps most surprising of all, from Anthropic, which is kind of aligned
with Google.
The large language model, or now people are saying FM, foundation model, that I think most
people today think is the closest competitor to OpenAI.
Do you agree with that?
I mean, so that in particular was a notable one, that partnership, right?
That's right.
I think people are excited about Anthropic, certainly because the amount of capital
and compute and talent resources that they have to bring to bear certainly indicates a
very high level of ambition.
To compete, TechCrunch reported, they want like $5 billion.
And then, you know, they've raised like $300 million or something from Google, it seems.
So they have capital, they have access to compute, and they have talent, which allowed
them to be ambitious about the frontier of capabilities in the way that open AI is ambitious
about that.
And so, yeah, I mean, a company can announce a partnership and get a lot of sexy names.
Like, what's your sense of how serious Bedrock is, and what this means for Microsoft, too?
So I think the key here is that senior leaders of enterprises are deciding how to structure
their cloud partnerships.
Am I going to be all in with one cloud provider?
Am I going to split my workloads across clouds?
Which pieces of which workloads will go to which clouds?
And that's one reason why AI and AI capabilities are so important because they're going to
be critical for the next generation of almost any kind of application in almost any kind
of vertical.
And here, AWS is signaling that you're going to be able to get the most advanced models
and a wide selection of them right here on AWS.
That's the show of strength.
Because the key is like, and you were sort of explaining this to me before, so I'm cribbing
from you.
But you're keeping it in sort of the security of Amazon Web Services.
So you know, it's like safe and you're not like exiting that service or am I butchering
that?
So there's a concept of what I call the security envelope, the virtual private cloud.
For enterprise customers who believe this is important, and an increasing number of them
do, the data of that enterprise should not traverse the open internet.
It should stay within a security boundary defined for that customer, inside the cloud
that they're using for a given workload.
And for Amazon, it's sort of like you buy AWS stuff from us.
You don't need to go out and venture out when you have these AI problems and buy something
from someone else.
You just stay in our world and integrate with people like Anthropic or Stability AI.
And then you stay Amazon's customer.
I mean, that's sort of the logic.
That's right.
And so the customers can, they can use their AWS environment.
They can use their AWS virtual private cloud, the bundling and the pricing stuff is not yet
clear, but there may be some benefits there as well.
Microsoft has GPT-4, you know what I mean?
And I think the sense is still that it's pretty far ahead of everybody else.
Like Amazon can tack on as many partnerships as it wants.
If it doesn't have GPT-4, is it just sort of uncompetitive?
So the developer excitement about GPT-4, it's hard to overstate.
That's a very powerful piece of technology.
And so I think initially, at the moment, the fact that Bedrock lacks GPT-4, or a real answer
to GPT-4, is a drawback.
The fact that they have Anthropic among others, including these other sophisticated players,
Stability AI, AI21, and Amazon itself, these are kind of multiple shots on goal.
And so I think that it's a very important thing to do with the AI.
And I think that's a very important thing to do with the AI.
And I think that's a very important thing to do with the AI.
And I think that's a very important thing to do with the AI.
And I think that's a very important thing to do with the AI.
And I think that's a very important thing to do with the AI.
And I think that's a very important thing to do with the AI.
It's the biggest cloud, but that's going to be a question for AWS and OpenAI to work out.
So there's sort of the strategic question for cloud providers of how much AI stuff I've
integrated into my offering.
There's this sort of separate and related competitive situation, which is there's a ton
of demand for GPUs, particularly NVIDIA's A100s.
They're going to be H100s soon, right?
The data center chips.
And so, you know, they're selling them, and NVIDIA is selling them, Oracle's selling them.
Like, do you have much of a read on where everyone's positioned in terms of just making
a lot of money off this GPU arms race?
Well, NVIDIA is certainly positioned very well.
And people are like, why don't they make more?
You know, they seem like very consumer oriented, not enough of the enterprise chips.
What's astonishing is that we still seem to be supply constrained on silicon.
And you even hear about companies as they scale, trying to do clever things to secure
access to GPUs.
It is a major competitive advantage.
Clever things?
That seems euphemistic.
It is euphemistic, in terms of striking deals, either compute for equity, or strategic
alignments, or what have you.
But it is a primary CEO- and board-level agenda item for a large and growing AI company to make
sure, I have enough, pick your metaphor, real estate to keep building on, fuel for my engine.
So, I mean, yeah, these big cloud companies have an opportunity to get equity in a bunch
of cool startups in exchange.
Somebody referred to it as like left pocket, right pocket, or even more uncharitably, I've
heard it referred to as like revenue round tripping.
I mean, there is sort of a dynamic where, you know, like with the Google investment in
Anthropic, money is flowing out, but money is going to flow right back to me.
I mean, that's the same with Microsoft and OpenAI.
It makes sense for everybody.
I don't know, do you have any observations on the dynamic? It also just makes it harder for
other people to get GPU access.
You know, The Information reported recently that even the mighty Amazon, Microsoft and
Google are all supply constrained on GPUs.
Astonishing to imagine that.
Astonishing.
And having to make choices about starving one hand to fill the other hand.
A startup founder was sort of explaining to me that in some ways the GPUs don't run quite
like the compute cloud.
Like it's not quite as liquid.
It's like harder to give some to one company and some to the other.
Whereas companies using the GPUs need to get a big block of them, which makes it harder
to spread it around, right?
Do you get a sense of like if they're just sort of companies are like hoarding access because
when they need a lot of it, they need that capacity?
I shouldn't comment about the state of what's called serverless GPU.
Okay.
Except to say that's sort of the Holy Grail.
Everybody would like to have serverless GPU and it's hard.
But it does mean that sophisticated companies are hoarding access to GPUs and placing advance
orders, and hoarding those orders, because it's so clear that we're just not going to
have enough for the foreseeable future.
We've talked about sort of Amazon with bedrock, the partnerships, sort of this GPU race, the
new Nvidia chips coming on that everybody's excited about.
I feel like, you know, you wrote this great piece, I don't know, two weeks ago now,
on open source, and already it's like, oh, there's a bunch of new stuff that's happened.
And just to summarize for the audience.
Like I think a big theme at the conference in some ways was that there are these sort
of foundation model companies that are closed and trying to compete.
But then there's a whole sort of almost alliance of open source companies, whether it was
Hugging Face or, like, Replit or Stability AI, or, you know, Runway even has sort of some
open source elements, competing.
So there's an interesting sort of tension between the closed and the broader open source.
What's your view of the strength of that open source alliance in AI right now?
Yeah, so one way to think about it is that there's kind of a proprietary world and an
open source world.
And the proprietary world includes models that are fully owned and managed.
Open AI is a great example of that.
In the open source world, which is sort of organized around Hugging Face, open source
models are shared and remixable.
And you can either download them and run them off the shelf or you can retrain them or modify
them.
And some of the popular ones include BLOOM or OPT, and there have been more recent ones
from Facebook, called LLaMA, which spawned a lot of offshoots. L-L-a-M-A, it's an LLM joke.
And there is an enormous pace of innovation and enormous amount of energy in this space.
As an example, the CEO of Hugging Face organized a meetup in San Francisco some two weeks ago,
and in a bit less than a week, there were 5,000 RSVPs, some 4,700 people.
Insane.
They held it at the Exploratorium, and the excitement around it is because of control and flexibility.
Companies that may not have as much galactic technical capability as somebody like open
AI can take one of these models off the shelf and start from that place and apply their
own capabilities in their own data.
Here's an example.
Bloomberg used a model off the shelf called BLOOM, which, no relation,
even though they sound similar. They took the BLOOM model and they modified it and they
retrained it with their own data.
And of course, Bloomberg is a data company.
They have access to very special data and created something called Bloomberg GPT that
is proprietary and they published an academic paper alongside it and wrote in the paper
that this would not have been possible had they needed to start from scratch writing the
model architecture before they trained it.
Right.
Like, okay, sure, whatever they're going to use is slightly less sophisticated than GPT-4.
But first of all, their data is the big competitive advantage anyway, and they don't want to give
it up to GPT-4.
So given the data is driving it, maybe they don't, you know, they can use something slightly
less sophisticated and on open source.
And then there's, I guess, a further argument that using the open source thing, they're
able to do other things besides just training their data that might make it more useful
to them.
That's right.
You know, Runway is another example of this. Runway creates AI-enabled video editing,
film editing.
They were used to edit the film Everything Everywhere All at Once, which won Best Picture
at the Oscars, a bunch of other exciting things like this.
And in order to create that functionality, there was no off the shelf proprietary model
available when they started building Runway.
So instead, they started contributing to open source and working with the open source community
to actually create altogether new capabilities that pushed the technical envelope.
I used to work at Bloomberg, you know, and now I'm running sort of a very small financial
news upstart.
And it feels like I should be able to disrupt them with new technology, but it's amazing,
given sort of the data advantages, that they have this huge data set.
In some ways, it's possible that AI really just helps, you know, the huge incumbents
because they've learned everything, you know, Bloomberg about the financial world.
Do you think like AI will end up being a tool helping the incumbents stay in place?
Right?
I mean, I think Sam Lessin made an argument that, you know, in the mobile revolution,
people thought it'd be great for startups, but like, you know, Facebook and Google figured
it out.
And so the argument, you know, that the incumbents will figure out AI and in some ways benefit
more from it than startups.
Do you have a point of view on that?
I would say that there are four kinds of strengths that you can have in AI, some of which
lend themselves to larger companies, typically, and some to startups.
So the four sources of strength that I can see are access to customers or empathy with
customers rather, access to talent, access to data, and access to compute.
Access to data and compute are brute-force kinds of assets that somebody like Bloomberg
might have more access to than an upstart.
But customer empathy and access to special talent, the industry learns again and again,
are where startups are really well positioned.
And that's how Davids beat Goliaths.
Yeah, you've got to figure out what your customers want.
The big company grows out of touch and sort of thinks it can hand down to customers what
they should want, and it becomes a less desirable place to work, probably, and a
huge bureaucracy, all that stuff.
So the advantages of startups are understanding their customers better than anybody else and
attracting the most ambitious, impatient, optimistic, slightly crazy talent.
And small teams can change the world like that.
So we're talking about the Bloomberg example.
We came onto this because the thesis is tons of activity and open source.
And this was an example of them applying open source LLMs to build their proprietary financial,
I don't know what we call it, model, I guess.
What else is happening in open source right now that's worth flagging?
So there's a model that's been released by Databricks called Dolly, which is a cloning
joke, Dolly the sheep.
And now comes Dolly 2.0, which, unlike the first one, is licensed for commercial use.
You can just take it and start building.
The creation of Dolly and Dolly 2.0 is almost funny in the sense that Databricks used off
the shelf tools from Databricks.
They used a relatively small amount of data.
They used a two year old open source architecture and 30 to 60 minutes of training time, which
is a small amount of compute.
And they were able to demonstrate, I haven't seen the quantification, but what Databricks
describes as remarkable performance relative to ChatGPT.
Databricks itself is like an open source company that has proven that it can build sort of
profitable things around it.
I mean, it's in the data lake business, right?
I mean, my sense with Databricks is that it's like a use case where businesses don't necessarily
want to have to do the thing that Databricks provides.
And so like it's safer to build this around this open source technology.
But I mean, do you think this Dolly effort is just brand building for them?
Or like, do you think it's something they're going to be able to monetize around?
And I guess the bigger question is independent of Databricks.
Are you optimistic about companies that sort of facilitate open source projects monetizing
around those projects?
So Databricks is brand building, and they're also making a point that, notwithstanding all
the fear that may be there and the intimidation of these sophisticated technologies, here is
a cookbook, here is a recipe to create your own fine-tuned ChatGPT-style model that understands
your own data.
Think about a company that needs to enable its customer service agents with data from
its own private knowledge graph.
And Databricks is trying to make a point that you need not be as sophisticated as open AI
to actually start to do this for yourself.
With that Databricks hopes that customers will say, okay, well, I'm going to use Databricks
tooling to do that.
Why is it Dolly?
Like, is it because they're cloning?
They're cloning ChatGPT.
Yeah, they're cloning ChatGPT.
Well, there is a DALL·E image model.
I don't think that's a reference to that.
I imagine the reference is to cloning ChatGPT.
It's a joke.
It is amazing overall.
Like, OpenAI wants to protect, you know, it's not publishing as much.
There is more of an effort to be proprietary.
But I don't know.
In the air, like, so many OpenAI people leave, you know. I interviewed David Luan at Adept,
and I saw that their head of product just went to Spark to invest.
Like, it feels like there's so much leakage of knowledge out into the rest of the startup
world.
Like, unless OpenAI just started suing people all over the place, does it feel like they're
really trying to hold on to their secrets, or are they letting them sort of filter out?
I think the core way that knowledge gets distributed in this part of the world is through papers
and code.
Yeah.
And OpenAI publishes less of both than their peers at Google and DeepMind and Amazon and
what have you.
Yeah.
The Economist had a really nice chart about this that showed the pace of academic publications
in AI from a few top labs at a few top universities.
You don't think whispers are enough?
It's not enough for somebody to be able to, like, go get coffee and be like, oh, we're
probably doing this?
It being in a paper just makes a big difference.
There is an open source model called Whisper from OpenAI, which is not what you're talking
about.
The human leakage of information.
Yes.
Knowledge wants to be free, but it's not the same thing.
Yeah.
Interesting.
And as another indication of the pace here, Eric, Clem Delangue, the Hugging Face CEO, tweeted
just today that since the launch of ChatGPT, there have been more than 200,000 open source
models posted to Hugging Face.
I mean, it's insane what people can build.
Like, I mean, I was reading about someone who recreated their, like, friend group chat using,
you know, stuff that was just available, just cobbling it together, and put
out a whole recipe for that.
It still feels like no one has recreated GPT-4, right?
In open source.
I mean, that's sort of the big thing that's still looming over all of this.
Like, how close will open source come to replicating GPT-4?
I don't hear a lot of people claiming that open source is going to get to the frontier
of capability anytime soon.
Yeah.
And depending on who you ask, the frontier of open source is anywhere from six to 36
months behind the frontier for proprietary.
Yeah.
But you also, I think it was Benn Stancil who wrote about a labor market for AI models recently,
where not every task needs to be performed by the same model.
Right.
We see it in a medical setting or what have you.
So you're going to start to see ensembles where these things can be mixed and matched.
And you can optimize for benefits of latency or cost or fine tuning versus off the shelf
or what have you.
And the other point is that as the pace of the open source models increases, they will
become useful for more and more of the ensemble.
Right.
What's the state of, like, chaining things together?
I guess that is a key piece of all of this.
Like, I mean, LangChain being sort of this zeitgeisty company around that. I don't totally
understand all the chaining.
What's it mean to piece all these models together, and what's sort of the state of the art
at the moment there?
There's two kinds of concepts here.
One is called chaining, where I can actually ask a sequence of prompts of a model.
And the model will remember as context the questions and answers that came before it.
So think about asking in chat GPT, who were the last five presidents of the United States.
And then in the next question asking, what were their ages when they left office?
Right.
You don't need to repeat the question that you just asked.
It's going to know that you referred to the presidents of the United States.
Because there's memory built into that particular system.
But you're saying that's effectively a type of chain.
That is a chain.
And you can actually chain together prompts and build up context for more sophisticated types of
inference.
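That chaining pattern can be sketched in a few lines of Python. The `fake_llm` function below is a hypothetical stand-in for a real model API, not any particular service; the point is only that every new prompt is sent along with the accumulated history, which is how the model can answer "what were their ages" without being re-told who "they" are.

```python
def fake_llm(messages):
    # Hypothetical stand-in for a real model call: a real LLM would read the
    # whole message list and answer the latest prompt in context.
    return f"(answer using {len(messages)} messages of context)"

class ChainedChat:
    def __init__(self):
        self.history = []  # the chain: every prompt and answer so far

    def ask(self, prompt):
        self.history.append({"role": "user", "content": prompt})
        answer = fake_llm(self.history)  # the full history rides along
        self.history.append({"role": "assistant", "content": answer})
        return answer

chat = ChainedChat()
chat.ask("Who were the last five presidents of the United States?")
# The follow-up never repeats the subject; the history supplies it.
print(chat.ask("What were their ages when they left office?"))
```

Real chat APIs work the same way: the "memory" is just the caller resending the conversation so far on every request.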
Yeah.
There is also a concept that's old and well understood in machine learning called an ensemble,
where instead of using one single model for everything, we can actually use a combination
of models for slightly different parts of our application.
And you can make either really broad-brush decisions that certain kinds of questions
always go to model A and others always go to model B.
Or you can make more fine-grained decisions that parts of each operation may go to model
A versus B versus C.
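A rough sketch of that broad-brush routing, with hypothetical `cheap_model` and `expensive_model` stand-ins and a toy difficulty heuristic (none of this is a real API):

```python
def cheap_model(prompt):
    # Hypothetical small, fast, inexpensive model.
    return f"cheap-model answer to: {prompt}"

def expensive_model(prompt):
    # Hypothetical large, slower, costlier model.
    return f"expensive-model answer to: {prompt}"

def route(prompt):
    # Toy heuristic: treat long or explicitly multi-step prompts as "hard".
    is_hard = len(prompt.split()) > 20 or "step by step" in prompt.lower()
    return (expensive_model if is_hard else cheap_model)(prompt)

print(route("What is 2 + 2?"))
print(route("Walk me through, step by step, how to plan a product launch."))
```

In practice the routing decision might itself be made by a small classifier, but the shape of the ensemble is the same.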
Right.
And so one reason that's useful is, like, oh, this seems like an easy task, we'll send it
to our cheap model.
And this seems like a hard task, we'll send it to our expensive model.
You hear that talked about a lot.
But then there's also that people are trying to create sort of a thinking process for these
machines, even though, like, LLMs fundamentally are just generating, like, they're filling in
the blank, and sort of, let's fill in the blank.
But at the same time, there's an effort, by sort of piecing them together, to create more
of, like, a thinking pattern that we're structuring.
Can you explain that a little bit?
Yeah.
These large models are trained on a huge corpus of facts.
But they're actually best thought of as reasoning machines, not fact machines.
And for a question like the presidents of the United States, they're going to be able to
get it right, typically.
But for more fine-grained facts that really matter as the crux of a decision, what
you actually want to do is feed those facts into the model as part of the chain.
This is the data stack.
You can use any part of a database technical stack to retrieve that information.
So let's take an example.
Let's say that Eric Newcomer wants to take a trip from New York to LA.
Now the LLM could reason that the steps to do that are find out what are his preferences
and then find out what flights match those preferences and then suggest a bunch of options
that match the preferences in that order.
But at each step, it's going to now need to retrieve information and know, this one
database says Eric likes to fly on Delta.
And here's his Delta number.
And then in the next link in the chain, say, I'm going to call out to some API and retrieve
the set of Delta flights on a certain date that Eric has specified.
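That trip-planning chain can be sketched like this. The preference store, the flight lookup, and the `llm` function are all hypothetical stand-ins, not real services; the point is that each link retrieves facts and feeds them into the next prompt.

```python
# Hypothetical data sources standing in for a preferences database and a
# flight-search API.
PREFERENCES = {"eric": {"airline": "Delta", "loyalty_number": "DL12345"}}
FLIGHTS = {("JFK", "LAX", "Delta"): ["DL 470 dep 08:00", "DL 1210 dep 17:30"]}

def llm(prompt):
    # Stand-in for the model that drafts the final suggestion.
    return f"Here are options for you: {prompt}"

def plan_trip(user, origin, dest):
    prefs = PREFERENCES[user]                            # link 1: retrieve preferences
    flights = FLIGHTS[(origin, dest, prefs["airline"])]  # link 2: call the flight "API"
    # link 3: feed the retrieved facts into the model as context
    return llm(f"{user} prefers {prefs['airline']} ({prefs['loyalty_number']}); "
               f"flights: {flights}")

print(plan_trip("eric", "JFK", "LAX"))
```

The model reasons about the steps; the surrounding program does the retrieval at each one.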
This is a fun way we're building out the AI Expedia or whatever in some ways here.
If I'm a company that's building the AI Expedia, I'm just thinking out here.
I haven't really thought this through before.
On the one hand, I have plenty of time to think about it.
So it makes sense for me to build out each step because I'm like, oh, I know it's going
to search for your preferences, search for the flight, blah, blah, blah, blah.
But alternatively, I guess part of the appeal of the large language models is they think
about things differently than us, in their surprising answers, and maybe they decide, at
some point, not to trust our preferences because they might be overrated.
I guess how do you sort between where humans should impose the structure while turning
over some of the questions to the LLM versus just the black box approach?
I think the co-pilot metaphor that Microsoft uses ends up being the answer to that.
Depending on the sensitivity of the operation, before I spend X thousand dollars on a plane
ticket, I'd probably like to see the answer before I book it, or before I make some large
hedge fund financial transaction with this kind of technology.
I might like to actually see what it wants to do and what are the steps that it went through
to get there.
Yeah, you don't want to just say, okay, go ahead.
The ultimate let-it-run sort of application is this, what is it, BabyAGI you were telling
me about. Like, I've seen some of it on Twitter, but this is sort of the fun cutting
edge of the moment.
What is BabyAGI?
BabyAGI is the creation of somebody named Yohei Nakajima, who is a VC investor who's curious.
If you look at him on Twitter, or you look at some of the speaking he's done, he heard all
these references to GPT-4 being everybody's ideal co-founder, and he said, what if I could
cut out the human entirely and make it the founder?
Right.
So he created a program that uses GPT to define its own tasks and then order the tasks and
then execute the tasks while reordering them.
And you could set the first instruction to say, BabyAGI, create a company, and BabyAGI
will decide that the first step is to create a business plan.
In order to do that, it needs to look up what are the key elements of a business plan and
it will create this stacked order of tasks and then start executing them one by one and
eventually even spit out a Google Doc with a business plan for the company that it wants
to create, and then it can move on to registering a domain name and registering a company with
Stripe Atlas, and then it hits the limits of what it can actually do. Like, you can't interact
with Stripe Atlas right now, right?
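The loop just described, define tasks, order them, execute them while reordering, can be sketched roughly like this. Here `propose_tasks` and `execute` are hypothetical stand-ins for the GPT calls an agent like BabyAGI actually makes.

```python
from collections import deque

def propose_tasks(objective):
    # Stand-in: a real agent would ask the LLM to break the objective into tasks.
    return ["create a business plan", "register a domain name", "incorporate the company"]

def execute(task):
    # Stand-in: a real agent would have the LLM do the task (or call an API),
    # and might spawn follow-up tasks from the result.
    print(f"executing: {task}")
    return ["research competitors"] if task == "create a business plan" else []

def run_agent(objective, max_steps=10):
    queue = deque(propose_tasks(objective))
    completed = []
    while queue and len(completed) < max_steps:
        task = queue.popleft()
        queue.extend(execute(task))  # new tasks join the stack as the agent goes
        completed.append(task)
    return completed

print(run_agent("create a company"))
```

The `max_steps` cap matters: without it, a self-prompting loop like this can run, and bill, indefinitely.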
Well, LLMs can actually talk to APIs.
This is basically like plugins.
Yeah, ChatGPT plugins are an example of that.
They're a bunch of examples, but just in the same way that applications using LLMs can
retrieve data, they can actually call out to other APIs to take actions as well.
And this gets back to your question about under what circumstances should we allow them
to do that.
This is where it really gets terrifying, because this is where it's like, okay, yeah, I mean,
an AI, if it's just prompting itself over and over again, can convince itself it
should do something nefarious to build a business. Or there are lots of profitable illegal
businesses, you know, I'm sure, Silk Road 2.0 or whatever. But yeah, I guess it's felt to me
like its connection with the outside world has been the barrier.
Like, can AI really, like, can it build a business yet? Or, like, could it accept a payment yet?
Like, doing that on its own feels far off.
I would say that I don't hear too many people worried about that part yet.
Right.
I would turn it upside down and say that agents and retrieval and access to APIs have created
a really powerful new way for founders to differentiate their apps.
Because for founders who either may choose to use some open source model or a proprietary
model, until now that deep level of science was one of the few ways that they could actually
build something defensible that other people could not easily copy.
We've seen any number of examples of that.
But now when these apps that are running on top of LLMs, FMs, whatever you want to call
them, now the founders can actually bring in data that is special and can let their
apps talk to external APIs that are special.
And that lets the apps riding on top become much more powerful.
You use the term agent, which I'm hearing more and more. Is that like an AI persona? How do we think about agents, or how do you define it?
It's being used in a few different ways, but BabyAGI is one example of an agent concept, which is a little persona that is taking an autonomous chain of actions that may be the only chain running or may be one of multiple chains.
If I can extend our Expedia example from before Eric, I could say I might have one chain that's
working on flights and another one that's working on hotels, one agent that's working
at flights and another agent that's working at hotels.
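The flights-and-hotels idea can be sketched as two independent agent chains running side by side; the agent names, task lists, and "execution" below are placeholders for real LLM and API calls:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(name: str, tasks: list[str]) -> list[str]:
    # A real agent would loop: ask the model for the next action,
    # call an external API, and feed the result back in.
    return [f"{name}: {t} -> done" for t in tasks]

# Two agents, each with its own autonomous chain of tasks.
with ThreadPoolExecutor() as pool:
    flights = pool.submit(run_agent, "flights", ["search SFO-SEA", "pick cheapest"])
    hotels = pool.submit(run_agent, "hotels", ["search Seattle", "filter by rating"])

itinerary = flights.result() + hotels.result()
print(itinerary)
```

Each chain is self-contained, so a coordinator only has to merge results at the end rather than interleave every step.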
It's all amazing to me.
The BabyAGI example in particular, I mean, if you keep it running, first of all, that's going to cost a ton of money, right?
I mean, that's a real constraint at the moment. Building a business takes 30 years, and if BabyAGI takes that long, it'll run up a big bill.
It feels in some ways like humans are cobbling together a brain in public, right? It's like, okay, there's the model, but then we're going to chain it to this thing. At some point these systems start to feel like a human brain, where a key piece is that there are different ways of thinking interconnected with each other in sort of inscrutable ways. It's just fascinating to me that we're cobbling together machine intelligence in ways we don't totally understand.
Yeah.
But you know, if I pop back up, all of this is in service of building applications that are really useful. I heard somebody give an example yesterday that the ones who invented refrigeration made some amount of money, but not as much as Coca-Cola.
Yeah.
And this happens throughout history. Like clockwork, within 10 months of the launch of the iOS App Store, we saw Uber, which could not have existed without the App Store but was not a very sophisticated app.
And so our industry is trying to do two things at once.
Number one, we're trying to create a stable enough platform.
And number two, we're trying to imagine what is the Uber or the Coca-Cola, right? That will sit on top of that platform.
And there is great work to be done in both of those areas.
I feel like the Coca-Cola and refrigeration one's interesting, because in some ways the refrigerator is the platform and Coca-Cola is the app. And I feel like the old-school venture capitalist thinking was you want to be the platform. Like, if you can be Apple with the App Store or, you know, Facebook with everybody integrating it, that's better than being this great app. Do you disagree with that? Or do you disagree specifically in AI?
I come from a company in my last job that built a very successful platform.
You can certainly do that.
Exactly.
Exactly. The Coca-Cola of both sides. I mean, both pieces.
You can certainly do it.
And look, Coca-Cola, you know, has a trucking platform as well.
So there are platform businesses to be built.
I just think that there are, by definition, an infinity of applications to be built on top of this new technology.
And so it is by definition bigger, even though the platform is really exciting and there's
really important work to do.
If you go back to the bottlenecks that I mentioned before, there is a bottleneck for
talent and data and silicon and platforms allow us to channel silicon and talent at least
so that teams that may have only modest levels of technical skill can still bring their own
visions to life.
Hmm.
That's really important, because if you believe, as I do, that the best ideas are going to be randomly distributed, you want whoever has the idea for Coca-Cola or Uber or whatever to be able to use the tools and to be able to access them so that they can bring that vision to life.
Right.
That's the mission our industry is on right now.
It feels like the amount of progress that's happening in terms of the actual models is far outpacing the normal business world's ability to integrate them into their businesses. I mean, would you agree with that? Circling back to the opener of this conversation, are there a lot of customers for Bedrock, or is it that big enterprises just like to know it's there? How quickly do you think a Coca-Cola will be able to figure out ways to use these foundation models in their business?
I could be wrong, and now that you mention it, I didn't plan this, but I think Coca-Cola actually mentioned that they're a partner of OpenAI or something like that.
Yeah.
I mean, it's one a lot of people bring up.
Yeah.
But to meaningfully implement it?
Yeah.
It is high on the agenda of everybody. What I'll say is sweeping enough to be disprovable the next time we do this, but it feels like virtually every kind of application in virtually every industry will be either reimagined or rebuilt with this kind of technology. It's that impactful. It is categorically different from other things that came before it.
In the Coca-Cola example, I mean, which food brand is going to be the first to be like, this is the model, you know, the model said the Coke recipe should be like this? And it's like New Coke is just machine Coke or whatever, you know, the marketing value of that. I don't know. It'll be interesting to see if there's a backlash. That's the sort of idea.
But yeah, I mean, I forget if it was TikTok or Twitter, but I've seen people.
Yeah, I think it was on TikTok. Someone had fed their recipe into ChatGPT and said, okay, what would level this up? And it was like, oh, you should add jalapeños for a little spice and something for texture, and then they, you know, went and created some really pretentious, you know, bite. And they liked it.
I don't know.
Yeah, it'll be fascinating.
Almost anything I've tried using ChatGPT for: you know, personal expense management and project management and debate. You can sort of have an argument with it and sharpen your debating skills, as well as defining task lists.
And so, I think the other key thing to think about here, Eric, is that in previous generations of technology, we treated computer science as an engineering task, something to be calculated and optimized.
And in AI, we treat the models as almost biological organisms that need to be treated
with hypotheses and experiments just to understand what's possible because we don't know what
the frontier is going to be.
I think I said this on stage at Cerebral Valley, but Midjourney, like, you know, there was a sense that they went from six-fingered hands to five-fingered hands and they didn't quite understand why, you know, and that's what gets everyone leery. I guess, again, about a GPT-5 or whatever, it's, we don't totally understand the models. And if we just add more power, new stuff could emerge, and we have no sense of why that's the case.
I mean, are you, do you worry about that?
I think we need to be really responsible in how we observe and manage and respond to
these things.
It seems like the only way to do that is by going through as opposed to somehow pulling
back.
It's going to be built.
So let's figure it out.
We should lean in hard and put our best minds and our best thinking on confronting these
questions so that we can answer them.
Cool.
Great.
Well, thank you so much for coming on.
I really do need to figure out how to apply it to Newcomer. It's hard to access news, like, to apply it to my own newsletter. I haven't totally figured it out. I did the interview prep with it, but I need to get more creative in how I can apply it to my Substack.
The very best part is it's so complicated that it can only be learned in public together.
Right.
Great.
Cool.
Thanks so much for coming on.
I really appreciate it.
Thanks, Eric.
Appreciate it.
That was my episode with Jon Turow at Madrona.
I'm Eric Newcomer.
Thanks so much to Tommy Herron, our editor; Riley Kinsell, my chief of staff; and Young Chomsky, who provides the music.
Thanks to Vanta for sponsoring.
Check us out on our YouTube channel at NewcomerPod on YouTube.
Newcomer.co is a Substack.
You can follow me on Twitter at EricNewcomer, and like, comment, subscribe, and review on Apple Podcasts if you can.
We'd really appreciate it.
Thanks for listening.
See you next week.
Goodbye.