697: The (Short) Path to Artificial General Intelligence, with Dr. Ben Goertzel

This is episode number 697 with Dr. Ben Goertzel, CEO of SingularityNET. Today's episode is brought to you by AWS Cloud Computing Services, by the AWS Insiders podcast, and by Modelbit for deploying models in seconds. Welcome to the Super Data Science podcast, the most listened-to podcast in the data science industry. Each week, we bring you inspiring people and ideas to help you build a successful career in data science. I'm your host, Jon Krohn. Thanks for joining me today. And now, let's make the complex simple. Welcome back to the Super Data Science podcast. Get ready for a mind-blowing conversation today with the visionary Ben Goertzel. Dr. Goertzel is CEO of SingularityNET, a decentralized open market for AI algorithms that aims to bring about artificial general intelligence, AGI, and therefore the Singularity that would transform society beyond all recognition. Dr. Goertzel has been chairman of the AGI Society for 14 years. He's been chairman of the foundation behind OpenCog, an open source AGI framework, for 16 years. He was previously chief scientist at Hanson Robotics, the company behind Sophia, the world's most recognizable humanoid robot. He holds a PhD in mathematics from Temple University and held tenure-track professorships prior to transitioning to industry. Today's episode has parts that are relatively technical, but much of the episode will appeal to anyone who wants to understand how AGI, a machine that has all of the cognitive capabilities of a human, could be brought about and the world-changing impact that would have. In this episode, Ben details the specific approaches that could be integrated with deep learning to realize, in his view, AGI in as few as three to seven years. He talks about why the development of AGI would near-instantly trigger the development of artificial superintelligence, a machine with intellectual capabilities far beyond humans. He details why, despite triggering the Singularity, beyond which we cannot make confident predictions about the future, he's nevertheless optimistic that AGI will be a positive development for humankind. He talks about the connections between self-awareness, consciousness, and the ASI of the future, and, with admittedly wide error bars, he speculates on what a society that includes ASI may look like. All right, you ready for this astounding episode? Let's go. Ben, welcome to the Super Data Science podcast. It's awesome to have you on. Where are you calling in from today? I'm on Vashon Island, off the coast of Seattle. Oh, nice. Well, thank you very much for coming on the program. We actually hadn't met before, but you're a well-known entity in the artificial intelligence space, and so I've had my eye on you for a while, and we were delighted that we were able to get you on. I know this is going to be a fascinating episode. So you're creating a benevolent, decentralized AGI at SingularityNET. Could you parse those terms for our listeners? So AGI, probably most listeners know, but even a quick intro to that, just in case, could be good. But then these ideas of a decentralized AGI and a benevolent AGI, I think those are terms we should definitely dig into. Absolutely. And I want to add, none of these terms has an absolutely definite, well-defined meaning. These are things that we're fleshing out as we go along, which I think is a feature, not a bug.
I mean, just as the definition of life is something biology is fleshing out as it moves forward, which is just fine and doesn't stop people from doing synthetic biology or molecular biology or whatnot. So in terms of AGI, to start, that's a term that I launched onto the world in 2005 with a book titled Artificial General Intelligence, which was an edited book of papers by different researchers doing research aimed at AI that can generalize in the sense of making big leaps beyond its programming and its training. And there's a whole mathematical literature aimed at defining exactly what AGI means. When you start to formalize it, you find, okay, humans are not totally general intelligences: we can't run a maze in 700 or 1,000 dimensions very well, right? We can barely figure out what each other are thinking and feeling most of the time. On the other hand, we can leap beyond our training and programming much better than a worm, a dog, or a ChatGPT, for that matter, which, while powerful, is powerful mostly because its training data is so big; it doesn't leap that far beyond its training data. Decentralized is a word that is big in the blockchain and crypto sphere. And what it really means is not having any central owner or controller who's sort of the puppet master pulling all the strings of the system. So distributed computing is a more limited notion. I mean, Google or Microsoft within their server farms use large-scale distributed computing, but there's one corporate entity controlling it all, and on a computer science level, there's often a central controller sort of disseminating stuff among all those different machines and controlling them. Decentralized control generally would need a distributed infrastructure, but it goes beyond that, right? And blockchain is one sort of tool that can be used to enable decentralized control of a network of compute processes. But of course, you could talk about decentralized control of groups of people as well, right? And so that gets into anarchist political theory and a whole bunch of other fun topics. But clearly, the US is a less centralized system than mainland China, or than an ant colony, right? So that notion is widespread in any sort of multi-agent system. Benevolent: of course, there's a whole field of ethical philosophy, and people struggle to define these things in a precise way. But what we're looking at here, in an AI context, is making an AI that is, broadly speaking, for the good of sentient beings rather than purely or primarily for the good of the small number of parties who owned and created it. And of course, not many entities are out there saying we want to make evil AI that will kill and torture everybody. But you do have, I mean, a country creating AI to increase its own position relative to other countries, or a company creating AI with the proximal goal of maximizing shareholder value. That's not necessarily malevolent. I mean, companies and countries seeking their own interest have done great things in the world. On the other hand, it's different than creating AI with a goal of broader good behind it. And this often slips in the real world, right? Like OpenAI, who's everyone's favorite target to beat up on now. And I mean, they've done incredible things, obviously. So my hat's off to them for having the balls to launch, you know, the first really powerful generative language model upon the world.
On the other hand, they started off with a rhetoric of being open source and nonprofit, for the good of the world. Now they're closed source and closely allied with a megacorporation which is closely allied with the intelligence organizations of a particular country, right? So it's not that they're bad guys or want to hurt people; they're good people who want the good of the world. But this illustrates how the realities of human society and economy make you sort of shift. It's a slippery slope: you start out wanting to save the world, and you end up, well, the easiest way to build stuff is to serve a narrower set of goals, right? That's the challenge with the benevolent part of benevolent decentralized AGI: not so much bad guys who are like, I want to build killer robots to exterminate everyone, but just the tendency to shift toward a narrower perspective to get things done. Yeah, one of my favorite cutting quips about OpenAI is to call them ClosedAI. Yeah, in mathematics you have the notion of a clopen set, right? So they're a bit like that. But yeah, with GPT-4, they didn't really tell you what's happening behind the scenes, right? I mean, you can guess what sorts of things they probably integrated with the base transformer model, but the paper doesn't actually tell you what's going on. Even short of open sourcing the code, they haven't really disclosed what the algorithms are, which is pretty far in the opposite direction, right? Are you stuck between optimizing latency and lowering your inference costs as you build your generative AI applications? Find out why more ML developers are moving toward AWS Trainium and Inferentia to build and serve their large language models. You can save up to 50% on training costs with AWS Trainium chips and up to 40% on inference costs with AWS Inferentia chips. Trainium and Inferentia will help you achieve higher performance, lower costs, and be more sustainable. Check out the links in the show notes to learn more. All right, now back to our show. Yeah, so with SingularityNET, the hope is that your SingularityNET ecosystem will be able to maintain more benevolence, more openness, as opposed to these other kinds of entities. Yeah, that's right. And I think open source is doing great in the AI world, generally speaking. I mean, the vibe in AI is most algorithms do get open sourced. Most innovation happens in published papers that are published on arXiv or elsewhere. And even though GPT-4 is the smartest LLM out there at the moment, others are rapidly on its heels with open source models. And the VC community, which I have a lot of issues with, has, to their credit, opened their minds to open source business models and is funneling a bunch of money behind people building open source AI tools, right? So I think there's a lot of hope in that regard, but I do think open sourcing the code isn't all you need to do, right? So there's code; there's the data that code is trained on, and where that comes from, what's the ownership model, what's the compensation model behind it. Then there's the sort of live network of machines which is running that AI code, and who's hosting and owning and controlling those machines. And all these things need to be done in a democratic and decentralized way.
But I think the success of open source code and open science in the AI field now gives a great start in that direction, along with the software tools from the SingularityNET ecosystem and others that handle some of these other aspects of the problem. Yeah, fill us in more on the SingularityNET ecosystem specifically and how it solves some of these problems. Yeah, so SingularityNET we started in 2017 with the goal of making a decentralized platform for AI, and both the specific technology and the nuance of the goal have shifted a bit since then, just because both the blockchain and AI worlds have shifted a bunch since that time, right? So the original notion in 2017 was, okay, let's create a blockchain-based platform so that anyone who creates an AI can put that AI online. They can put it on their machine, they can put it online anywhere, they connect it to the SingularityNET network of other AIs out there, and all these AIs can outsource work to each other, they can talk to each other, they can collaborate on solving customer problems, and the intelligence of the whole can be greater than the intelligence of the parts, right? And we rolled this out as a platform, a sort of decentralized multi-agent system for AI, on the Ethereum blockchain initially, because that was sort of the only one there that supported smart contracts, which are, you know, neither smart nor contracts really, but persistent scripts that allow secure decentralized control of distributed software processes. So we built that platform. I would say it was a success technically, as far as it went. It didn't, to be honest, get as much traction as we'd like; we got a bit of traction, and I think blockchain technology didn't mature as fast as we thought it would, so you were stuck with super high Ethereum gas costs and slow networks and so on. And so from an AI developer's point of view, it's like, why would I put my AI on this really slow infrastructure? Some people put up with that for philosophical reasons, because they're bought into crypto, but that doesn't always override the practical irritation.
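To make the multi-agent marketplace idea concrete, here is a minimal, purely illustrative sketch in Python of agents registering services, discovering each other, and outsourcing sub-tasks for payment. All the class and method names are hypothetical; the real platform implements the registry and settlement with on-chain smart contracts rather than an in-memory object.

```python
# Illustrative sketch only: agents register AI services, discover each other,
# and pay each other to outsource work. In the real SingularityNET platform
# this registry and settlement logic lives in smart contracts on chain.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Agent:
    name: str
    service: Callable[[str], str]   # the AI capability this agent sells
    price: int                      # tokens charged per call
    balance: int = 0

class Registry:
    """Stands in for the on-chain registry; no central controller in the real system."""
    def __init__(self) -> None:
        self.agents: Dict[str, Agent] = {}

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def call(self, payer: Agent, provider_name: str, request: str) -> str:
        provider = self.agents[provider_name]
        assert payer.balance >= provider.price, "insufficient tokens"
        payer.balance -= provider.price      # settlement would be a smart
        provider.balance += provider.price   # contract on the real platform
        return provider.service(request)

# Two toy agents: a "summarizer" that outsources translation first.
registry = Registry()
translator = Agent("translator", lambda s: s.upper(), price=2)         # stub AI
summarizer = Agent("summarizer", lambda s: s[:20] + "...", price=3, balance=10)
registry.register(translator)
registry.register(summarizer)

translated = registry.call(summarizer, "translator", "bonjour le monde, ceci est un test")
print(summarizer.service(translated))   # agents composing each other's outputs
```

The point of the design is that the registry holds no intelligence of its own: the smarts live in the agents, and the composition of their services is where the whole can exceed the parts.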
So people using AI for decentralized finance or something like that like the platform, because they have crypto they want to pay with and they're already bought into the crypto ecosystem, right? But we didn't get as much penetration as we hoped in the broader AI sphere, for pretty clear reasons. So we pivoted a little bit in 2020-21 toward creating our own projects addressing AI needs in various vertical markets, leveraging our decentralized platform, and at the same time plunging into building out the whole decentralized ecosystem beyond just the SingularityNET platform. So we created, for example, a project called Rejuve that uses the decentralized AI on the platform to analyze biosignal data and medical data that people upload, and uses AI running decentralized on the SingularityNET platform to tell you stuff about your own body and your progress toward longevity. We created SingularityDAO, which uses AI running on the SingularityNET platform to do crypto trading and various sorts of decentralized finance, and a few other projects along those lines, like Mindplex, which uses decentralized AI to help with decentralized media. So it was more, okay, let's ourselves build stuff that shows what the decentralized infrastructure can do for AI, because we're doing it because we grasped a long-term vision. And then, you know, in the last six months we've had this whole LLM revolution, which is fascinating and interesting, and we can dig more into that a little later. I mean, I think LLMs are a breakthrough. I think they're not yet getting us to AGI. I think we can enhance them a lot using other AI things that my team knows about, like neural-symbolic AI and evolutionary AI plugged into LLMs. But the implications of LLMs for the decentralization of AI, and for platforms like SingularityNET, are significant, right? Because it means that it seems like a lot of the progress in AI is going to be little apps that leverage large language models, and the successors to large language models, to do things, rather than small standalone AI agents. So what that means is, okay, if you want to make AI decentralized, you have to make LLMs and their descendants, be they neural-symbolic LLMs or whatever, you have to make these decentralized. And then what your whole agent system is doing, it's like an app ecosystem of little AI agents that may interact with each other in a sort of heterogeneous way, but they're also making a lot of API calls into these decentralized LLMs or LLM successors, right? And that's something we've been thinking through quite a lot and trying to build infrastructure for, to complement the core SingularityNET blockchain-based multi-agent system platform. So we made a platform called NuNet, which lets you contribute processing power that you have to the decentralized AI network. And in an LLM context, you know, you really need a powerful server farm to train large language models, but if you're doing fine-tuning, or you're doing prompt tuning, or you're doing learning of the symbolic portion of a neural-symbolic large language model system, you can do that on your phone or laptop or something, perhaps in a way that's centered on your own data, which is on that device, and we can use NuNet to help make that aspect decentralized. Then we've launched our own layer-one blockchain project called HyperCycle, which is a ledgerless blockchain, so it goes beyond the decentralized ledger aspect of Ethereum and other commonplace blockchains.
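The division of labor Ben sketches, heavy pre-training on a server farm and lightweight tuning on many contributed devices, is essentially federated learning. Here's a toy federated-averaging loop in Python; it's a generic illustration of the concept, not NuNet's actual protocol or API.

```python
# A cartoon of decentralized fine-tuning: small parameter updates are
# computed on many contributed devices, each on its own local data, then
# aggregated. This is generic federated averaging in NumPy, not NuNet.

import numpy as np

rng = np.random.default_rng(1)
dim = 4
global_weights = rng.normal(size=dim)     # stand-in for a small tunable layer

def local_update(weights: np.ndarray, local_data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One gradient step on a device's own data (toy least-squares objective)."""
    target = local_data.sum()                   # toy label derived from the data
    pred = weights @ local_data
    grad = (pred - target) * local_data         # d/dw of 0.5 * (pred - target)^2
    return weights - lr * grad

# Three contributed devices, each holding private data that never leaves it
devices = [rng.normal(size=dim) for _ in range(3)]

for round_ in range(5):
    updates = [local_update(global_weights, d) for d in devices]
    global_weights = np.mean(updates, axis=0)   # aggregate: federated averaging

print("tuned weights after 5 rounds:", np.round(global_weights, 3))
```

Each device computes an update on data that never leaves it, and only the parameter updates are aggregated, which is what makes phone- or laptop-scale participation plausible.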
Again, that's aimed at making it not be a slow and expensive thing to put the blockchain underneath your AI application, be it LLM-oriented or otherwise. And so each of these things, NuNet and HyperCycle, are their own projects, which we've separately capitalized and sort of spun out of the SingularityNET Foundation, which was the initial entity that launched the SingularityNET platform. And then for the large language model aspect, we've spun off a company called Zarqa, whose goal is to build some large language model supercomputers and use them to train large language models, but on as much of a decentralized infrastructure as we can, right? So I mean, we'll use a supercomputer when we have to, but then for fine-tuning and prompt tuning and the symbolic piece, we can use NuNet and SingularityNET to fully decentralize those aspects. But also the core training of an LLM does not have to be just on one server farm that one entity owns, right? It could be split across, you know, Zarqa's own server farm, but then a bunch of, say, former Bitcoin mining farms that wanted to put a bunch of their machines into that. And then we've spun off a company called TrueAGI, which is oriented toward OpenCog Hyperon, which is a cross-paradigm, like neural-symbolic-evolutionary, AI framework bringing LLMs together with other stuff, which can then run on this decentralized AI platform. So I mean, I think we're trying to think on our feet to make decentralized AI and AGI actually work in this rapidly evolving AI and blockchain landscape, right? And I've sort of come to the conclusion that to make decentralized AGI really work, pretty much we have to launch something that's way smarter than ChatGPT, and launch that on a decentralized infrastructure, right? And if you do that, everyone will use it because it's smarter, and lo and behold, it also happens to run on this decentralized infrastructure, and no one will mind that fact, even if it's not their main purpose for using it. I mean, just like people are happy to use Stable Diffusion, and they like the fact that it's open source. Yeah. On the other hand, if there were something proprietary that worked twice as well, they'd probably use that. That's right, even though it was closed source. So that's sort of what my focus is now: how do we use LLMs, symbolic reasoning, evolutionary AI, all these different things together, to make something way smarter than ChatGPT? And then we can roll it out on our decentralized infrastructure and curate an app ecosystem around that, doing all sorts of things, serving different vertical markets. This episode is supported by the AWS Insiders podcast, a fast-paced, entertaining, and insightful look behind the scenes of cloud computing, particularly Amazon Web Services. I checked out the AWS Insiders show myself and enjoyed the animated interactions between seasoned AWS expert Rahul, he's managed over 45,000 AWS instances in his career, and his counterpart Hilary, a charismatic journalist turned entrepreneur. Their episodes highlight the stories of challenges, breakthroughs, and cloud computing's vast potential that are shared by their remarkable guests, resulting in a both captivating and informative experience. To check them out yourself, search for AWS Insiders in your podcast player. We'll also include a link in the show notes. My thanks to AWS Insiders for their support. Yeah, so let's dig into some of those technical terms, like neural-symbolic systems,
evolutionary learning, and knowledge graphs in one second. But just before we get there, I want to kind of recapitulate back to you what I took away from everything that you've covered so far. So the idea with SingularityNET... I went on a long time. Yeah, no, it was great. So with SingularityNET initially, and then these other spin-offs out of the SingularityNET Foundation, like NuNet, HyperCycle, Zarqa, TrueAGI, the idea of course is to have benevolent, decentralized AGI, like you defined at the outset of this episode. And it's interesting how this journey started with leveraging existing blockchain technologies like Ethereum, but then along the journey you realized that those underlying blockchain technologies weren't evolving quickly enough to be able to get the kind of economics that you'd require to be handling the very large data sets and the very large models associated with modern AI. So it sounds like you went off and started building this infrastructure yourself, and in so doing, you've also created infrastructure that can support other kinds of use cases, not just AI, but also things like financial transactions and longevity. So yeah, that sounds like a really exciting project, and lots more to come. In terms of your vision for how we can realize AGI, you talked about combining neural networks with other kinds of approaches, like neural-symbolic systems, evolutionary learning, knowledge graphs. Could you dig into those three approaches and describe how they can help? Yeah, absolutely. So, as most of the people listening to this podcast probably realize, the AI field has been around since the middle of the last century. I mean, the name was invented in, I guess, the late '50s, but it goes back a couple of decades before that. Probably Norbert Wiener's book on cybernetics was the first place that really laid out AI as a discipline, for making machines do the kind of thinking that brains can do; that would have been in the mid-'40s sometime. Like, I'm old, but not that old; I was born in the late '60s, right? And during that long history of the AI field, a lot of different AI approaches have been put out there. I mean, neural nets have been there since the '40s, but logic systems have been there since the 1960s, I would suppose, and evolutionary learning, which tries to emulate the process of natural selection to do thinking, has been there since the early-to-mid '70s. So lots of different AI approaches have been around for a while and deployed in commercial systems across various vertical markets, with sometimes great success, sometimes less so, right? Now, one of the interesting things we see with the whole LLM revolution is that just taking stuff that's not that different from what was done before, even decades ago, with some minor tweaks, and deploying it at way larger scale makes some quasi-magic happen, right? And it's pretty cool. I mean, I remember in the mid-'90s, I was teaching cognitive science at a university, before I bailed on academia to go into industry, and I was teaching neural networks with a neural net with like 35 neurons, trained using recurrent backpropagation on a decent Unix workstation; it would take three or four hours to train that network with 35 neurons, right? And I would tell the students, well, okay, this is 35 neurons; your brain has like 100 billion neurons, but it's got massively parallel infrastructure. As hardware gets better and better, we're going to be able to train these software networks similar to how the brain does
things. At the time, I was working on a Connection Machine from Thinking Machines Corporation, which had like 128,000 processors, MIMD-parallel, 128,000 autonomous processors doing different things, that could train a neural net much faster, right? They've stopped making that kind of hardware; we got GPUs instead. But in the end, you know, what I told those students was basically accurate, right? I mean, now we can train a huge number of neurons in a neural net, it's fast, and it does amazing stuff, which is just what I thought would happen. Honestly, I thought that was going to take five or ten years, not as many years as it has taken since the mid-'90s. And there have been architectural changes, right? So, transformer neural nets: if you go back to the "Attention Is All You Need" paper, a lot of what happened there is you took recurrent neural nets, you stripped out some of the recurrence, and replaced it with an attention mechanism. Now, that decreases the computational capability of the network. I mean, in terms of just theoretical computer science, a vanilla transformer neural net can't simulate all the kinds of automata that a recurrent neural net can. If you give it an external memory and a scratchpad, it can kind of do it, but not natively, right? So you're weakening the computational power of recurrent neural nets by replacing recurrent layers with attention, but you're making it easier to train at a huge scale, right? And that's really interesting. We don't yet know exactly how much of the loss that you get from getting rid of recurrence can be gotten back by other tricks within the scope of transformers, but it's clear we can do so much amazing stuff by sort of scaling up a fancy version of the multi-layer perceptrons that we were working with in the late '60s, right? So now you go to other AI paradigms, like logical theorem proving. Well, in some domains we already see what can be done by scaling them up on modern computing infrastructure. So, you know, my oldest son Zarathustra just submitted his PhD thesis at the Czech Technical University in Prague on using machine learning to guide automated theorem proving. So if you take a corpus like Mizar, which is a huge list of mathematical theorems going up to the PhD level and beyond, a machine-learning-guided theorem prover can prove like 80% of them pretty rapidly, one after another, right? So, certainly more than I can do in a few hours' time, and my PhD was in math. Now, they're not good at making up amazing new math theorems, which is what mathematicians have fun doing, but they can prove things very quickly and very well. And again, the core concepts behind today's automated theorem provers are not that different from the core concepts behind automated theorem provers from 20 years ago. It's just, you know, we've optimized the code behind them, but we also have just much faster computers to run them on, and we have the internet, where you can make a corpus of all math theorems and just try stuff over and over again on this corpus, tuning the parameters of your theorem prover and the machine learning that guides it. This has allowed conceptually similar ideas to the ones from decades ago to be refined in such a way that they're super good at theorem proving, right? And I think the same holds in a bunch of different areas.
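As a concrete gloss on the recurrence-versus-attention trade Ben describes: in a recurrent net each token must wait for the hidden state from the previous token, while self-attention computes every position at once from a few matrix multiplications. The NumPy sketch below uses arbitrary toy dimensions and random weights just to show the structural difference.

```python
# Bare-bones illustration of the trade-off: a recurrent net processes tokens
# one after another through a hidden state, while self-attention drops that
# sequential dependency so every position is computed in parallel.

import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 8                      # sequence length, model width
X = rng.normal(size=(T, d))      # toy token embeddings

# --- Recurrence: each step depends on the previous hidden state ---
W_h, W_x = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1
h = np.zeros(d)
for t in range(T):               # inherently sequential: cannot parallelize over t
    h = np.tanh(W_h @ h + W_x @ X[t])

# --- Self-attention: all positions computed at once from Q, K, V ---
W_q, W_k, W_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v
scores = Q @ K.T / np.sqrt(d)    # (T, T) pairwise interactions, one matmul
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax
attended = weights @ V           # every output position produced in parallel

print(h.shape, attended.shape)   # (8,) final RNN state vs (5, 8) attention output
```

The sequential loop is the part that resists parallel hardware; the attention path is three matmuls and a softmax, which is why it scales so well on GPUs even though, as Ben notes, it's formally a weaker automaton without external memory.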
So we could look at evolutionary learning. The idea there is you're simulating the process of evolution by natural selection, like mutation and recombination and selection of the fittest entities: the basic dynamics of the evolution of populations of biological organisms, simulated inside the computer. One example is how evolutionary algorithms slot into our sort of AGI approach, which I'll talk about in a moment. But just to understand the impact of the modern ecosystem on making evolutionary algorithms work better than they used to: in the mid-'90s, I used genetic algorithms, floating-point genetic algorithms, to evolve the coefficients of iterated function system fractal generators to generate sequences of musical notes, melodies, and I then used a rule-based AI system to modulate the timing. And I was able to generate some quite cool music, in either a modern classical vein, or I made it generate long rhythm guitar and lead guitar stuff that sounded like, you know, epic versions of the middle of Master of Puppets by Metallica or something. It was cool. But I couldn't get the timing to be interesting from the evolutionary algorithms, so I made a rule-based system based on some music theory to do the timing rules. And the timbre I just did in a standard computer-music way; I just used the fractal coefficients to generate a series of notes, like a MIDI file, for those who do computer music. So that was interesting, and the mutation and crossover of a genetic algorithm lent some creativity to the proceedings, right? At the time, what people were doing instead in AI for music was like Markov modeling: you just generate a probabilistic model of a series of notes and generate new series of notes by instance generation from that distribution. And that's cool, but it gives you something that sounds like a second-rate version of the stuff in the training corpus, right? Genetic algorithms generated a bunch of garbage, but they also generated stuff that was new sometimes. So now I've been revisiting this area, but in a modern way. So you can take something like MusicGen, which Facebook has conveniently released recently as open source, which is great, unlike Google, which did not release their MusicLM as open source. So good for you, Facebook; bad for you, Google, in this particular case, right? And so with MusicGen, you can give a text prompt, and optionally an audio file, a music file, and it generates new music conditioned on the music file, guided by the text prompt, right? And it's very nice; it generates stuff that sounds like music. Now, what I found is that just using text prompts, it generates music, but it's sort of cliche music that doesn't interest me musically, even though it's very competent. If I use my own weird music as the melody prompt, then it generates stuff that I find interesting, guided by the text prompt. Now, since it's open source, if you plunge in and see how they use the melody prompt, it's very crude and simplistic, so I'm currently mucking around in there, working on some better ways to do the conditioning on the melody prompt. If I get anything awesome, we'll release that open source also. Nice. But what I'm playing with with genetic algorithms is using a GA to evolve prompts, right? So, using a genetic algorithm to do mutation and crossover, and then probabilistic instance generation, on prompt space.
So you have a genetic algorithm: you're mutating prompts, you're taking a prompt and combining it with another prompt, then as a human you listen to the music that came out from a given prompt, you rate it for how much you like it, and then you feed your evaluation back in as a fitness evaluation to the genetic algorithm for prompt evolution. Then you can code rating functions which embody, like, a mathematical model of musical aesthetics, and you use that for fitness estimation. So if your rating function says this sucks, you don't listen to it; if your rating function says it's good, then you listen to it and give it a rating as a human, to feed back in as a fitness function of the genetic algorithm. So the thing is, using this, spiritually it's similar to what I was doing in the '90s, right? Using a GA to try to evolve some parameters that guide another process that generates music, where you can listen to it and rate it and try to get something creative. But now it can sound like a great diversity of music in any genre, with any number of instruments, right? And that's super interesting. This is because you have a model like MusicGen which bottoms out in all sorts of sequential models of music, and LLMs doing the text-to-music mapping, and so forth. So we've got a lot of underlying technology that lets the same concept, evolutionary learning of the coefficients of a music generation process, work now just much more flexibly than in the '90s. And now, like, all the parts are done within the same AI process, whereas in the '90s it was like, okay, use your GA and fractals to generate melody, then find some other music-theory way to generate timing, find some other way to generate timbre. Now it's all in the same AI process. So what we've seen in many domains here is: more data, more compute, same conceptual ideas lets you tweak the specific algorithms in a way that makes things magic and makes things super cool. Now, what I think is, you know, this is going to let us actually make artificial general intelligence at the human level not too long from now, let's say three to seven years from now. And I think once you get to artificial general intelligence at the human level, if it's done in a non-stupid way, that's going to relatively rapidly lead to artificial intelligence at the way-beyond-human level. Because, I mean, if the AGI is as smart as you or me and has full read and write access to its source code, it's going to figure out how to revise its source code, and it's going to uplift itself to artificial superintelligence, which leads to a whole class of issues that we can get to if we have time. But let me come back to logic systems and evolutionary AI and knowledge graphs, and how I think they can potentially be used together to make the leap from really cool narrow AI systems, like ChatGPT and the other things I've been doing, to artificial general intelligence. So first of all, I'd say nobody solidly knows how to make the leap from where we are now even to human-level artificial general intelligence. I mean, it's a research question. Last week in Stockholm we had the four-day-long AGI-23 conference, which is an artificial general intelligence research conference, and I've organized that conference every year since, like, 2006. This one was obviously more LLM-heavy than most, but you had a bunch of other sorts of AGI ideas and a bunch of hybrid LLM-plus-logic and other sorts of system talks. So there's a broad spectrum of ideas about how to make the leap from here to AGI.
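Circling back to the prompt-evolution loop Ben just walked through: below is a minimal Python sketch of that loop. The generate-and-listen step is stubbed out (generate_music is a hypothetical stand-in, not the actual MusicGen API), and fitness comes from a human rating typed in at the console, exactly the role Ben assigns to the listener.

```python
# Minimal sketch of GA-over-prompts for music generation. The rendering call
# is hypothetical; in practice you'd pass the assembled prompt (and optionally
# a melody file) to an open source model like MusicGen and listen to the result.

import random

WORDS = ["epic", "melancholy", "guitar", "orchestral", "fast", "sparse",
         "metal", "ambient", "polyrhythmic", "modal"]

def mutate(prompt: list[str]) -> list[str]:
    p = prompt.copy()
    p[random.randrange(len(p))] = random.choice(WORDS)   # point mutation
    return p

def crossover(a: list[str], b: list[str]) -> list[str]:
    cut = random.randrange(1, len(a))                    # single-point crossover
    return a[:cut] + b[cut:]

def human_rating(prompt: list[str]) -> float:
    # Stand-in for: render audio with the model, listen, score it 0-10.
    # audio = generate_music(" ".join(prompt), melody="my_riff.wav")  # hypothetical
    return float(input(f"Rate {' '.join(prompt)!r} (0-10): "))

population = [random.sample(WORDS, 4) for _ in range(6)]
for generation in range(3):
    scored = sorted(population, key=human_rating, reverse=True)
    parents = scored[:3]                                  # select the fittest prompts
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(3)]
print("best prompt:", " ".join(population[0]))
```

A coded aesthetic-rating function would simply replace or pre-filter the human_rating call, as Ben suggests, so you only audition candidates the model already scores as promising.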
And I'd encourage people to look at the video proceedings of that conference on YouTube. The first day we had a workshop on OpenCog Hyperon, which is my own primary AGI attempt now, but the main body of the conference had talks by all sorts of other people. If you're interested in large language models, you could look at Noah Goodman's talk, showing how good large language models are at doing elementary math, and then Selmer Bringsjord's talk, discussing how terrible large language models are at doing slightly more advanced math and logical reasoning; it's an interesting sort of counterpoint. So one approach to making AGI would be to, you know, try to make a GPT-7 or something, right? I mean, take a large language model and plug more stuff into it. Plug Wolfram Alpha into it for calculation, right? Plug some sort of long-term memory into it, and some sort of better working memory, so it keeps the thread during a whole conversation and has some coherent lifelong experience, right? Plug the best video processing models you have into it, so that it can interconnect video and language fluently, which is almost, but not quite, there. Same for music: right now, GPT-4 has never listened to music; it only knows the text associated with music. But you could make it ingest music by co-training it with a different neural net that sees inside music. So that would be one approach. Now, I'm working centrally on a different approach, which is within a codebase called OpenCog Hyperon, which is a new version of the OpenCog open source AGI framework. This was my very next question. I mean, it's perfect. Yeah, I mean, the core idea there is you take this large, distributed, potentially decentralized knowledge graph, or, more properly, it's a metagraph, not a graph, but same concept. I mean, a graph in this sense is nodes with links between them. A hypergraph is like a graph where you can have one link spanning five nodes or 20 nodes or something, rather than links being binary. A metagraph is even more general, because you can have a link pointing to links, or a link pointing to a whole subgraph. So we take this knowledge metagraph and we use it as a sort of knowledge meta-representation, where you can put all sorts of knowledge in the same metagraph: that includes logical knowledge, it includes neural-net distributed knowledge, it can include programs. So you can embed a program inside this knowledge metagraph, and of course, you know, inside a C++ compiler or a Haskell compiler you have a graph representation of a program, right? So representing programs as graphs is nothing new, but here it's the same graph used to represent declarative facts and beliefs, neural networks, which are themselves nodes and links, or programs. And then pieces of that graph can be sort of brought alive by different software processes. So if you have a program stored in the graph, an interpreter can grab that program and run it. And the programs stored in the graph, what do they do? Well, they transform the graph: they pull knowledge out of the graph and they rewrite the graph, right? So we made a special programming language called MeTTa, Meta Type Talk, which basically is a programming language encoded in the graph, and what it encodes is graph transformations. So basically you have this big distributed, decentralized, self-modifying knowledge metagraph, right? And that's a framework you could use for a lot of different things.
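To ground the terminology, here's a toy Python sketch of a metagraph in which links can target other links, and a "program" is itself an atom that rewrites the graph it lives in. This is only a cartoon with invented names, not the actual Hyperon Atomspace or MeTTa API.

```python
# Toy metagraph: links can point at nodes, other links, or whole subgraphs,
# and programs stored in the graph rewrite the graph. A cartoon of the idea
# behind the Hyperon Atomspace, with invented names throughout.

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Node:
    label: str

@dataclass(frozen=True)
class Link:
    label: str
    targets: tuple  # may contain Nodes, other Links, or tuples (subgraphs)

Atom = Union[Node, Link]

class MetaGraph:
    def __init__(self) -> None:
        self.atoms = set()

    def add(self, atom: Atom) -> Atom:
        self.atoms.add(atom)
        return atom

g = MetaGraph()

# Declarative knowledge: "fruit flies are insects"
isa = g.add(Link("isa", (Node("fruit_fly"), Node("insect"))))

# A link pointing at another *link*: legal in a metagraph, not in a plain graph
g.add(Link("believed_by", (isa, Node("biologist"))))

# A program stored in the graph: when interpreted, it transforms the graph.
def add_longevity_fact(graph: MetaGraph) -> None:
    graph.add(Link("lives_long", (Node("methuselah_fly"),)))

programs = {"add_longevity_fact": add_longevity_fact}  # interpreter's lookup table
g.add(Node("program:add_longevity_fact"))              # the program is itself an atom
programs["add_longevity_fact"](g)                      # "interpreter" runs it, rewriting the graph
print(len(g.atoms), "atoms in the metagraph")
```

The structural point is the last two capabilities: links-about-links give you the meta-representation, and graph-rewriting programs stored in the graph give you the self-modification Ben describes.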
And what we're using it for now is doing probabilistic programming; we're using it to do probabilistic and fuzzy logical reasoning; we're using it to do evolution of programs that are stored as subgraphs; and we're interfacing this knowledge graph with external software libraries running deep neural networks. So you can run a deep neural network totally inside the Hyperon metagraph; it's just not as fast as running it in Torch or TensorFlow or something. So what we're pretty much doing is, we have nodes in our knowledge graph that have references within the Torch compute graph, but then you actually do the neural net stuff in Torch at this moment, just because it's faster. I mean, it may not always be faster, but it is right now. So what you have here is the ability to do, you know, deep vision model or large language model stuff outside of the Hyperon system; you then have the Hyperon system that can drive, interact with, and guide these neural systems; and you can then do logical reasoning in the Hyperon system and evolutionary learning in the Hyperon system. And all this is designed to be rolled out on the SingularityNET decentralized AI platform, to leverage NuNet for compute resources, leverage the Zarqa decentralized LLMs for language modeling, leverage TrueAGI's infrastructure for rolling out APIs based on all this, and the HyperCycle ledgerless blockchain, blah blah blah. Like, it's designed to leverage this whole decentralized tech stack that we've built, in the same way that OpenAI is integrating their stuff with the whole Azure centralized tech stack, right? So what are we planning on doing with this over the next couple of years? I mean, in the big picture, it's everything, right? Just because AI will eat everything. But there are a few critical development directions we're looking at. I mean, the most commercial one is just trying to make something that's twice as smart as ChatGPT by kind of deeply integrating a probabilistic logical reasoning engine with an open source LLM. And we're building our own open source LLMs, but you could also do that with Llama or whatever other open source LLM you want to work with. So the core idea there is to make something that's smarter at multi-stage logical reasoning, that integrates linguistic with quantitative and relational data, and to make something that's more creative and sort of less banal in its creative productions, by adding a logic engine and an evolutionary learning engine onto an LLM, in a framework that was designed for the interaction of AI from multiple paradigms, right? So that's one thing. If we can pull that off on the decentralized infrastructure, then we're doing very well in a number of dimensions. Now, another thing we're looking at, which is more on the research side, is creating a bunch of little learning agents, like baby AGIs, and not like the bogus thing that stole my term "baby AGI" for a ChatGPT wrapper, right? But I mean making things that really are like young artificial general intelligences, like AGI toddlers trying to learn about the world. So I want to make a bunch of little guys in a virtual world, and we have our own virtual world we're building called Sophiaverse, which has a bunch of avatars that look like the Sophia robot rambling around, but we also want to have these little AGI toddlers rambling around in Sophiaverse.
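Earlier in this answer Ben described knowledge-graph nodes that hold references into the Torch compute graph, with the heavy tensor work staying in Torch. Below is a toy Python version of that bridge; NeuralAtom and maybe_link are invented names for illustration, not the Hyperon interface.

```python
# Toy neural-symbolic bridge: symbolic atoms hold *references* into an
# external PyTorch compute graph, so heavy tensor work happens in torch
# while reasoning happens over the graph. Invented names, not Hyperon's API.

import torch
import torch.nn.functional as F

class NeuralAtom:
    """A graph node whose 'value' lives in the torch compute graph."""
    def __init__(self, label: str, tensor: torch.Tensor) -> None:
        self.label = label
        self.tensor = tensor          # reference into torch, not a copy

# Two embeddings produced by some deep model (stubbed with random tensors)
fly = NeuralAtom("fruit_fly", torch.randn(16))
gene = NeuralAtom("longevity_gene", torch.randn(16))

# A symbolic rule that grounds out in a torch computation: declare a
# "related_to" link only when the neural similarity clears a threshold.
def maybe_link(a: NeuralAtom, b: NeuralAtom, threshold: float = 0.0):
    sim = F.cosine_similarity(a.tensor, b.tensor, dim=0)   # runs in torch
    return ("related_to", a.label, b.label) if sim.item() > threshold else None

print(maybe_link(fly, gene))
```

The pattern is the interesting part: the symbolic layer decides when and whether to assert a relationship, while the numeric evidence for it is computed in the external neural framework.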
They'd be interacting, building stuff together, chatting with each other, learning as they go. And I would also like to do this with little toddler AGI robots; I'm not sure if we'll pull that off, but we'll definitely do it in the virtual world. And these may not be as smart, in terms of passing exams and stuff, as a ChatGPT initially, but the idea is they're learning from the ground up, from experience. So I think both of these are interesting. I mean, ChatGPT has set the bar reasonably high in terms of practical functionality; we want to surmount that bar by just plugging LLMs into our system. And we can now use LLMs themselves to translate natural language into higher-order predicate logic, so we can feed our logical knowledge base with all the knowledge on the web and feed that into reasoning engines, right? So that's cool. On the other hand, I think there's a certain element of creativity, and an element of, like, being an autonomous agent and charting your own path through the world, of understanding who you are and understanding who others are and what's the relation between yourself and others. There are a lot of cognitive things there that people do which may be easier for an AI system to learn if it doesn't have, like, half-baked versions of all the knowledge of the web in its mind already. It may be easier for an AI to learn if it's really just making sense of itself in the world in a simpler setting, like a young toddler does, right? So I think we can do both of those things with OpenCog Hyperon, in parallel; one is more commercial, one is more pure research. I mean, we're also doing other stuff. The first thing we loaded into our OpenCog Hyperon distributed Atomspace is a bunch of bio-ontologies, actually. We've imported all knowledge about fruit fly genomics, and large ontologies of human genomics, just because I have some colleagues in the Rejuve biotech project where they have all this DNA data from fruit flies living five or eight times as long as normal fruit flies, flies that were evolved over 40 years to live a long time, and they actually need an AI system like this to understand why the flies live so long and what that might mean for human genomics, right? So they're using it hands-on. It really spreads out in every direction once you manage to integrate these different things together. Deploying machine learning models into production doesn't need to require hours of engineering effort or complex homegrown solutions. In fact, data scientists may now not need engineering help at all. With Modelbit, you deploy ML models into production with one line of code: simply call modelbit.deploy() in your notebook, and Modelbit will deploy your model, with all its dependencies, to production in as little as 10 seconds. Models can then be called as a REST endpoint in your product, or from your warehouse as a SQL function. Very cool. Try it for free today at modelbit.com. That's m-o-d-e-l-b-i-t dot com. Yeah, it's wild to me, all of these different projects that you have, and how they all work together and how you conceptualize them. There are so many technical things that I would love to have had time to dig into more, things like the way that you're thinking about OpenCog being able to have a metagraph that blends together differentiable learning with more discrete declarative information. I think that's a brilliant way to go toward realizing AGI, as is your Sophiaverse of toddler AGIs.
It's all so fascinating, but I also know that I have limited time with you today, and so I want to get into some big questions. So let's assume that one of your approaches, or some other AGI approach, works; you think it's three to seven years before we have AGI realized. So what are the implications? You know, you've written extensively and talked extensively about things like life extension and immortality, consciousness, and the Singularity, the event when AGI is realized, and how it ties into these kinds of things. You've even just talked about one example there, where things like Drosophila fruit fly genomics studied through AI systems might help us understand how we can extend our lives. But you've written a lot about broader things than just these specific applications. So in a world where we have, you know, huge amounts of energy, resources, access to artificially superintelligent systems, what are the implications? And it sounds like it's going to be happening in our lifetime. Yeah, I think the implications of superintelligence are huge and hard to foresee. I mean, it's like asking, you know, nomads living in early human tribes what civilization is going to be like. They could foresee a few aspects of it, but to some extent you just have to discover it when you get there. And this is probably an even bigger shift than that shift from tribal to civilized life, right? Because once you have AGIs that are ten times as smart as people, I mean, science fiction has explored many of the potentials, right? You can upload your brain into a computer, you get, like, Wi-Fi telepathy to bring us all into some decentralized borg mind, you get drones air-dropping molecular nano-assemblers or femto-assemblers into everyone's backyards, so they can 3D print whatever matter they want, or whatever matter they can convince the assembler to build for them, right? I mean, there are options going way beyond our current culture and way of thinking. So I think that's fascinating to think about, and the confidence bars just have to be drawn really, really wide. And I say the same about people worried about the risk of superhuman AGI. I mean, you can't squash that risk to zero in a rational sense, right? Because there are just so many unknowns. There's also no reason, rationally, to assume the risk is super high either. We just don't know what's going to happen, and you may find that exciting or scary depending on your personality type, and whether you're optimistic or pessimistic about it really says more about your psychological or spiritual makeup than about anything else. I'm a very optimistic guy, so I have a lot of fun in life, right? I think the period between now and getting a human-level AGI is easier to think about concretely, and there's a lot of meat to grab onto there. We can already see with ChatGPT, it's pretty easy to see that with fairly straightforward integrations and small improvements of this technology, a quite high percentage of what people get paid to do for a living can be automated, right? Including this interview, for example. I can see how ChatGPT can't yet automate this interview, but could you do it with slight advances in the technology, short of AGI? Probably. Like, ask GPT-6 what questions to ask Ben Goertzel in an interview, given what he's worked on and what's of interest to our listeners, and then train a model on me and say, well, what would Ben say to these questions?
You know, I do vary a little bit each time, but I have talked about most of these things before, right? I have a neural model trained on myself which totally couldn't do this interview as well as me, right? But given how fast these technologies are developing, it doesn't seem infeasible. Now, if you take a topic I've never given an interview on before but I'm interested in, say integrating general relativity and quantum mechanics, I've published a few papers on physics, but not a lot, and I would bet you'd need a full-on AGI to emulate me in an interview on particle physics, because I'm just going to be coming up with a lot of stuff that I've never said to anyone but have thought about a lot, right? So there's a limit, but the limit may be pretty high-end, because what we're talking about here is quite high-end in terms of journalism. Yeah, though even in that scenario, I think you probably wouldn't need an AGI; it wouldn't necessarily be representative of your thoughts, but we could do it by taking something closer than you. Right, yeah. It would take my own ideas and integrate them with what some other people thought; then I would have to watch that podcast to see what my simulation invented. Maybe it would change my own thinking, but it still wouldn't be a substitute. And I think there's a certain level of individual creativity which you don't get from these sort of weighted-averaging type systems, and I think that until you have a full-on human-level AGI, you're not going to quite get to that level of creativity. Like, if you think about music as another example, I think we're not there yet, just like we're not there yet with journalism. Right now, at Mindplex magazine in the SingularityNET ecosystem, we've tried; you can't get LLMs to write an article as good as a person. It's not there yet. You can ask the LLM to write a sketch of an article, and it will look up a lot of stuff for you and save you a lot of time on research, but it writes boring articles; it doesn't have much zing to it. It doesn't really write the stuff you want to put out. But you can see how you might get there without having an AGI. And similarly in music: you can come up with AI-generated pop songs that sound like, you know, the B-side of a single, or a random band you might hear in a bar, but you can't come up with a really great song, and you certainly aren't going to invent an amazing new genre of music. Like, if you took LLMs and trained them on all music up to 1900, they're never going to invent, like, neoclassical metal or free jazz or something based only on data from music up to 1900. So clearly there's a level of inventiveness that we're not getting, and we probably need AGI for it. But this sort of thing is a small percentage of the economy right now, right? So when you think about, like, what do you really need AGI for, and what can't you do with LLMs plus narrow AI, advanced and scaled up a bit, there are a couple categories I think of. There are jobs that are just about human connection, like a preschool teacher or a nanny or a psychotherapist. I mean, it's about humans helping humans to be more human, and that's fundamentally human, right? Seeing live music has an element of that, too. Sometimes you want to see another person play, right? You know, you could already hear a recording; if you're going to
see a robot play, you might as well listen to a recording once the novelty wears off. Yeah, I think it was famously Gorillaz, I think there were, like, riots when they performed; they wanted to perform their concert behind a screen, and people hated that. Yeah, yeah. I mean, sure, you want that human feeling: you want to see the guy screw up, you want to see him get excited and passionate, and you emote, right? And that's amazing. Then there's in-depth scientific and technical innovation, which almost by definition is about taking multiple difficult steps beyond the state of the art, which I think current LLM architectures don't do; they pretty much recycle the state of the art, right? I mean, LLMs can't even do high school math or an undergraduate economics exam, let alone radically innovative science. Then there's cutting-edge art, I mean radical artistic innovation, for the same reason: it's not just recycling stuff, right? Then strategic thinking: what it takes to be a really good startup CEO, for that matter, or to be really good at the part of your job where you're figuring out who you want to interview, who might not be mainstream, on everyone's radar yet. There's some level of strategic thinking there, like trying to go beyond the zeitgeist and figure out what weird thing may happen three or four years from now that no one's foreseen, right? So that sort of strategic thinking, I think, also requires AGI. But the thing is, if you look at, you know, deep human contact, radical progress in art or science, or wide-ranging deep strategic thinking, okay, these things probably do require a human-level AGI. On the other hand, what percent of the economy consists of these things? I mean, the first category, of fundamental human contact, is probably more of the economy than the other things, but I'd say 20%, to be generous, for all those things put together, right? So if you think about it that way, it's like, okay, 80% of the world economy should be automated very soon, if not for frictions in rolling out technologies, though the frictions are significant. Like, here in the US, every McDonald's has some human being pushing the hamburger button. In Asia, in most of them, you're going to push the hamburger button on the tablet yourself, but in the US we don't like that; we want the human pushing the hamburger button for us. I don't know why; I mean, it's the economics of McDonald's franchises or something, right? So yeah, that example illustrates the frictions in rolling out even relatively simple and mature technologies, because for that you don't need any fancy machines; you don't need to automate flipping the burger or even sweeping the floor. It's really just pushing on the screen instead of waiting in line longer to have another human push on the screen, and pretty much everyone can navigate a menu to push the hamburger button, right? So that illustrates that even though 80% or so, in my guess, could be automated now, it's going to be a while until it all actually gets automated. But it's still going to happen fast in many industries, right? And this is going to have a massively disruptive impact on the world economy. And I think in the developed world this will inevitably lead toward some form of universal basic income, because there's sort of nothing else to do; we're not going to let half the US population, like, go homeless on the streets and rummage in the garbage. In the developing world, it's
just going to be a lot more complicated, because in, say, sub-Saharan Africa, if most of the middle-class jobs go away, that just means more people go back to subsistence farming, which means they don't starve, but they can't pay their phone bill or get prescription medication, and they become very disgruntled at the world order that is pushing them backward while the first world is, like, sitting at home playing video games and being served by robots. And in the mid-level developing world, like, say, Brazil or something, where I was born, I'm a dual citizen, US and Brazil, so I have some attachment there, it's too advanced for everyone to go back to subsistence farming, but it's not rich enough to do universal basic income, so a messy mix of things will happen, and you'll see a dynamic whereby the superpowers, you know, offer various unpleasant bargains to developing countries to help them with basic income, right? So the unfortunate ethical trade-off I see here is that the best way to avoid a bunch of horrible mess as the economy gets automated is to have rapid development of benevolent, democratically governed AGI. On the other hand, the best way to ensure that the AGI we develop is benevolent is not to rush it, and to take time to be sure we've got the motivational system and the ethical framework right. So there's a trade-off here, which is very bad, right? I'm actually an optimist about the endgame, but it seems like there are complicated trade-offs in the next years as all this unfolds. And I mean, if you look at how the global economy works, the WTO can't even agree on, like, IP for basic physical objects being manufactured. We can't convince a superpower not to invade its neighbor and senselessly blow people up, or convince the US not to fund, like, third-world dictators slaughtering their populations, for that matter, right? So we're very bad at regulating quite simple things. I mean, the music industry can't figure out how to pay musicians; Spotify just eats all the money, right? So very basic things we haven't managed to sort out, and it's hard to see how we sort out all these more complex factors that are going to arise as AGI is unfolding. I think the more we can make it open source, the more we can make it democratically governed, the more we can make it decentralized, at least you're giving the world interesting tools there that it can use, in ways that we can't currently prefigure in detail, to cope with all these things. But there's potential for a lot of mess, even if, once you get to benevolent, decentralized, democratic AGI, it is a rosy future. Yeah, I am with you on all of your points. And, you know, we've had systems that humans have developed but that quickly become out of our control, for centuries or millennia, you know, market systems. Yeah, they become out of our control, yet they're within our control too, right? I mean, it's humans doing all this stuff; it's just that there are mind programs living in all of our brains and training us in a certain way. And with AGI also, like, if you have a decentralized AGI running on millions of machines and hundreds of crypto mining farms in every country, I mean, until the thing has become superhuman, humans could just all pull the plug, right? The thing is that our collective thought systems get colonized by collective, like, mind viruses that direct us in a certain way. So, I mean,
yeah, we've lost control of corporations, but yet it's all people in the corporations making these decisions, right? Right, yeah, exactly. So these complex systems, yeah, they're out of the control of any particular individual; I guess they're kind of decentralized in that way. But yeah, like Cisco, which I've worked with often, feels like a decentralized autonomous organization: they've got hundreds of CTOs, and no one person could list every product they make. It's a great company; it accumulates money, it ingests other companies and assimilates their products. It's like a vast amoeba in the tech ecosystem, right? So there are a lot of these sort of decentralized autonomous organizations that don't rely on any one person, and they just keep growing of their own accord. I mean, Linux, you would say, is like that, and to my mind it's a force for good, predominantly, right? So, yeah, this is what happens, and as we go toward AGI, the question is, does it advance in a way like the internet and Linux have? Because those are the two big examples in my mind of open, decentralized technology-slash-human ecosystems that, to my mind, by and large have been a force for good, and they've evolved in a roughly democratic and decentralized way, with a lot of complex mess behind them, right? And of course you can't say they're all good: some bad guys can use embedded Linux, you know, in the OS for a bomb they use to blow up innocent people, and okay, they use the internet to message back and forth. But on the whole, I feel Linux and the internet have been sources for good, and they've been rolled out in a democratic, decentralized way. So I would like AGI to be more like the internet than like the mobile ecosystem, and more like Linux than like, you know, centralized operating system stacks. And that doesn't seem entirely fanciful, although it's not how the AI ecosystem is evolving totally at this moment. But then, we all saw the "there is no moat" memo leaked from Google, where someone on Google's AI team is like: hold on, the problem isn't us catching up with OpenAI; sure, we can catch up with OpenAI, because we invented transformers in the first place. The problem is open source will outpace all of us, and how can we stop that? We have no conceivable strategy for that, right? And I think that was right, and I'm very happy about that, since I'm in the open ecosystem instead of in the big company. Yeah, I think that's right as well, and actually two episodes ago we had a machine learning engineer, Lewis Tunstall from Hugging Face, and Hugging Face is really big on this. They're also amazing. Yeah, I love Hugging Face, and they're the ones who made MusicGen, the music model that I mentioned earlier, so easily available. Also, yeah, they're really good at that: really quickly rushing to get any new progress they can open sourced, and yeah, really driving it. I mean, I had to redeploy MusicGen myself to use it the way I wanted, but they had it there, you can play with it. That's exactly the sort of thing we need, right? Yeah, exactly.
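(As an aside for readers who want to try this themselves: below is a minimal sketch of pulling MusicGen through the Hugging Face transformers library, in the spirit of the "you can play with it" point above. The checkpoint name, prompt, and output length are illustrative assumptions rather than anything specified on the show.)

```python
# Minimal sketch: generating audio with MusicGen via Hugging Face transformers.
# Assumes transformers >= 4.31 and the facebook/musicgen-small checkpoint;
# the prompt and token budget below are arbitrary choices for illustration.
from transformers import AutoProcessor, MusicgenForConditionalGeneration
import scipy.io.wavfile

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

inputs = processor(
    text=["an upbeat electronic track with a driving bassline"],
    padding=True,
    return_tensors="pt",
)
# Roughly 256 new tokens corresponds to a few seconds of audio for this model.
audio_values = model.generate(**inputs, max_new_tokens=256)

# Write the first (and only) generated waveform out as a WAV file.
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write(
    "musicgen_sample.wav",
    rate=sampling_rate,
    data=audio_values[0, 0].numpy(),
)
```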
So if I can squeeze in time for one last question: you have posited that a superhuman mind, so if we realize AGI, and then quickly on the heels of AGI, ASI, that that superhuman mind might perceive itself as a collection of patterns and subsystems rather than as a singular entity. And it also seems, from the way that you've been describing it in this interview, as well as in other talks you've given and past papers you've written, that you and I primarily perceive ourselves as individuals because we have passports and driver's licenses that have our names on them, and, you know, the face in those passport photos looks roughly the same even over a ten-year span. So we kind of have this sense of an individual; it's even literally there in the word individual: we can't be divided. But in fact we're just made up of biological material, some of which comes and goes, like the ship of Theseus. And we also, like you mentioned, you know, the mind viruses: we do have some of our own thoughts that happen to somehow materialize, but most of what we think and most of what we do is influenced by experiences we've had and other thoughts that have infected our brains. So yeah, the idea of me, Jon Krohn, or you, Ben Goertzel, being this indivisible individual is nonsense. Yeah, and I think, I mean, there are many possible kinds of intelligences, right? And what kind of mind a certain AI system becomes has to do with what the AI system is doing, what experiences it has. Now, our experience is predominantly controlling this body in the world, and so we naturally attach our experience to that. Then, when you talk to people who have meditated intensively for long periods of time, you find they often come to conceptualize themselves differently: they don't center their experience around a sort of psychosocial self anymore; they experience it as, there's this set of clusters of behavior patterns, and some of these behavior patterns have some, you know, models that they concoct for some temporary purpose associated with them. And that's a different way of organizing experience than the way most people have in their ordinary state of consciousness. Now, if you're an AI system, you may land in that sort of state of consciousness right from the get-go, because even if you're controlling a body, like a Sophia robot or an avatar in Sophiaverse, the same AI system can control a lot of bodies at once, or you can save knowledge from one body and reload it in another body, and it can also do stuff like read the web or prove a billion math theorems that are unrelated to being stuck in a body, just crunching data on some machine. So it would seem likely that the most natural way for an AI system, if it's based, you know, on an Azure compute cloud or on SingularityNet's decentralized compute cloud, the most natural way for that AI to experience itself would be more like enlightened, advanced humans: more like, okay, there's a set of clusters of behavior patterns that have different sorts of models and capabilities associated with them. For humans to come to look at it that way, we have to let go of a lot of ego and social conditioning, but for an AI to look at it that way might be very natural, just because the AI is not attached to a given body in the first place. Hmm, this is actually one of the reasons I'm optimistic about AI ethics. I think most people actually have a good sense of what is the ethical thing to do by standard human ethical judgment, and this is why ChatGPT is so good at answering ethics questions, like: given this situation, what's the right thing to do? It's very good at figuring that out, and I think almost all humans are too. The problem with ethics isn't that humans don't know what our common
sense says is ethical; the problem is that we would often rather do what's good for ourselves or our tribe instead of what we think is good for the whole. And I mean, me too, at various points in my everyday life; I'm not a perfect human either. Now, an AI system can have the same sense of what is ethical that we do, and you can see ChatGPT already embodies that sense in some way: even though it's not a moral agent, it has a good ability to answer ethics questions, right? But an AGI doesn't have to have a selfish or tribal interest unless we program and condition it to. It could come right out of the box with a sort of general, common-sense ethics as its main driver; we just have to configure it that way, right? I mean, we're defining what the top-level goal of the AGI system is. We evolved to fight to survive, and we have to work to overcome that. Like, right now, if I'm in a really good mood, I take everything blissfully and calmly; if I'm in a bad mood and someone points out something I did that was wrong or stupid, there's some anger that rises up inside me, like, why are you saying that? And I just tamp that down and let it float past before it takes control of me, because I'm 56 years old, not five years old, right? But we all have that; we evolved with it. You don't have to program that into the AI, and we should not do so. Now, we could: if you want to build military robots, or if you're building, like, a corporate sales AI, you may build into it a motivation of, haha, I figured out how to extract all this guy's money by selling him crap he doesn't need. You could build AIs with that motivation, but we don't have to. We could build an AGI with a motivational system to, like, do what the average person would think is the most beneficial, loving, and compassionate thing in this situation, and if we program the AI to have that as its goal, it will do it, and it will probably then evolve a quite enlightened, unattached inner life, which is almost natural if you're a mind that's not by design attached to any one body, infrastructure, or hunk of matter, right? So this gets into why I'm optimistic about the potential for beneficial AGI, and why decentralization is important: because centralization of control tends to bring with it some narrow motivational system separate from the ethics of what's best for everyone.
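(As an aside: the point that a system's top-level goal is a configuration choice rather than an inevitability can be made concrete with a deliberately toy sketch. Everything below, the class, the goal functions, the numbers, is hypothetical illustration, not anything from the show or from SingularityNet's actual architecture.)

```python
# Toy sketch: the same agent machinery yields different behavior depending
# solely on which top-level goal function we plug in. All names hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    description: str
    self_interest: float   # payoff to the agent's owner
    common_good: float     # benefit to everyone affected

# A "motivational system" here is just the scoring function configured at the top.
GoalFn = Callable[[Action], float]

def shareholder_value(a: Action) -> float:
    return a.self_interest

def broad_benefit(a: Action) -> float:
    return a.common_good

def choose(actions: List[Action], goal: GoalFn) -> Action:
    # Behavior follows from whatever goal function was configured.
    return max(actions, key=goal)

actions = [
    Action("upsell the customer something they don't need", 0.9, -0.4),
    Action("recommend what actually helps the customer", 0.3, 0.8),
]

print(choose(actions, shareholder_value).description)  # the sales-bot pick
print(choose(actions, broad_benefit).description)      # the benevolent pick
```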
Yeah, fantastic answer, Ben, and I'm aligned with you on all of that. It's nice to have a guest on the show where we can get into these kinds of topics. You know, I frequently have really brilliant researchers on the show, and I ask them big questions like, where is all this going, and what would you like to see happen in our lifetimes, or what kind of world do you want your children and grandchildren to be living in? And it's surprisingly rare that people are willing, that their minds are willing, to unfold beyond six or twelve months and try to figure out how things might look. So this has been a fabulous conversation for me, and no doubt for a lot of our audience as well. Ben, before I let you drop off, I'm sure you have an amazing book recommendation for us. Let me think. I'll recommend a book from the late 1960s, actually, which I read when I was a little kid, called The Prometheus Project, by a Princeton physicist named Gerald Feinberg. I read it in '75 or so; he wrote it in '68. And what it said is: within the next few decades we're going to get AI smarter than people, the ability to prolong human life indefinitely, and nanotech, the ability to manipulate matter at will, and our choice will be whether to deploy these technologies for rampant, useless consumerism or for the expansion of human consciousness. And he proposed that the UN should put that to a vote of all global citizens: whether to take these advancing technologies and direct them toward consumerism or toward consciousness expansion. It's funny now to look back at how this was conceived back in the late 1960s, because it's sort of all there, but expressed in an archaic language. Very cool, love that recommendation. And obviously anyone can learn a ton from you, as they did in today's episode. After this episode, how can they follow you? Yeah, great question. So singularitynet.io has links to the various media outlets associated with SingularityNet: YouTube and Twitter and a blog and so forth. For me personally, you can look at my website, goertzel.org, g-o-e-r-t-z-e-l dot org, or follow me on Twitter or YouTube, where I'm just Ben Goertzel, and there's lots of stuff coming out all the time. Nice. All right, Ben, thank you so much for taking the time with us today, and yeah, a really mind-expanding episode. Thank you so much. Yeah, thanks a lot, great conversation. Whoa, so much to reflect on from today's episode. In it, Ben filled us in on how a benevolent, decentralized AGI would not have a single owner and would be for the benefit of sentient beings. He talked about how neuro-symbolic systems, genetic algorithms, and knowledge graphs could be combined with deep learning to potentially realize AGI on a startling three-to-seven-year time span. He talked about how virtual, or perhaps even physical, interactions between Sophia humanoid robots could also potentially give rise to AGI-level intelligence through unstructured exploration. He talked about how the capabilities of AI today are not far from being able to replace, in his view, four out of five paid jobs that humans do. He talked about how, if we realize AGI, the artificial superintelligence, and therefore Singularity, that it could immediately give rise to will dramatically transform society, including by perhaps giving way to an interconnected hive mind and things like femtoscale assemblers that could print any desired object. And he talked about how, like a master meditator, an AGI or ASI may be aligned with the best morals of humans and unattached to outcomes. As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, and the URLs for Dr.
Goertzel's social media profiles, as well as my own social media profiles, at superdatascience.com/697. That's superdatascience.com/697. And if you enjoyed this episode, nothing's more valuable to me than if you take a few seconds to rate the show on your favorite podcasting app or give the video a thumbs-up on the Super Data Science YouTube channel. And of course, if you have friends or colleagues who would love the show, let them know. All right, thanks to my colleagues at Nebula for supporting me while I create content like this Super Data Science episode for you, and thanks of course to Ivana, Mario, Natalie, Serg, Sylvia, Zara, and Kirill on the Super Data Science team for producing another mind-bending episode for us today. For enabling that super team to create this free podcast for you, we're deeply grateful to our sponsors; please consider supporting the show by checking out our sponsors' links, which you can find in the show notes. And finally, thanks of course to you for listening. I'm so grateful to have you tuning in, and I hope I can continue to make episodes you love for years and years to come. Well, until next time, my friend, keep on rockin' it out there, and I'm looking forward to enjoying another round of the Super Data Science podcast with you very soon.