WARNING: Summaries are generated by a large language model and may be inaccurate. We suggest that you use the synopsis, short and long summaries only as a loose guide to the topics discussed.
The model may attribute to the speaker or other participants views that they do not in fact hold. It may also attribute to the speaker views expressed by other participants, or vice versa. The raw transcript (see the bottom of the page) is likely to be a more accurate representation of the seminar content, except
for errors at the level of individual words during transcription.
Artificial General Intelligence (AGI) may be achievable in the near future, with the potential to transform the world as much as the invention of agriculture. Deep learning has been relatively easy so far, suggesting AGI is possible. Transformer models such as GPT, GPT-3 and ChatGPT have tapped into a general phenomenon, obeying scaling laws which predict they will improve with scale. For AI systems to reach their full potential, they need representations low-entropy enough to propagate chains of reasoning in an unbounded fashion. Join the debate and explore the possibilities at metauni!
Artificial General Intelligence (AGI) is a revolutionary concept, and the technology could potentially be built within 10 years. While there is debate as to whether AGI is possible at all, the speaker believes it is a real possibility and assigns a probability of at least 50% to it arriving within that time frame. If achieved, AGI could be mass-produced and have an impact comparable to the invention of agriculture, transforming the world. To discuss this further, seminars are hosted at metauni.
Deep learning has been relatively easy so far, suggesting that AGI could be possible in the near future, though it may yet require more complex ideas or technologies; producing semiconductors, by contrast, is an example of a genuinely difficult problem. Evolution shows that simple algorithms executed at scale can produce systems more complex than the algorithms themselves, so a simple algorithm executed at scale may be able to produce AGI. Scaling laws predict that Transformer models such as GPT, GPT-3, and ChatGPT will improve with scale, and there is no general reason gradient descent cannot in principle produce an AGI, though no single algorithm is guaranteed to do so.
Transformers are powerful learning machines which obey universal scaling laws. They are capable of performing tasks which require reasoning and can hold conversations about code, in some cases arguably understanding it better, or at least faster, than humans. Surprisingly, scaling Transformer models has revealed unexpected power law behaviour and emergent capabilities that the models are not explicitly trained for. While it is unclear whether this will lead to artificial general intelligence, the speaker argues this empirical discovery is as important as the splitting of the atom.
Advanced intelligence may be easier to achieve than previously thought: Transformer models appear to have tapped into a general, universal phenomenon. Deep learning has largely solved the problem of representing the world in machines in a way that facilitates reasoning. However, today's AI systems lack the resolution to take enough serial steps without error to solve their problems, much as human deduction was depth-limited before the invention of writing. To reach their full potential, AI systems need to fill this gap with representations low-entropy enough to propagate reasoning in an unbounded fashion.
AGI stands for Artificial General Intelligence, a hypothetical technology that could learn to solve any problem a human can. It should not be confused with deep learning, the technology used to build today's AI, which does not have the same capabilities. The speaker frames the question of short AGI timelines in economic and technical terms (whether a machine can perform any cognitive task a human can) rather than in terms of sentience or emotion. Seminars are hosted at metauni to discuss this further.
AGI (Artificial General Intelligence) is a technology that could revolutionise the world, and many people believe it could be achieved within 10 years. Whether AGI is possible is still up for debate: some believe there is something special about human intelligence that makes it impossible to replicate artificially. However, the speaker believes that AGI is a real possibility and assigns a probability of at least 50% to it being achieved within 10 years. If AGI is achieved, it could be mass-produced, having an impact comparable to the invention of agriculture.
AGI is possible and may be easier to achieve than previously thought. Deep learning has been relatively easy so far, suggesting that AGI could be achieved in the near future, though it is possible that AGI is still far off and requires more complex ideas or technologies. Producing semiconductors, for example, is a genuinely difficult problem; the claim that AGI is "easy" means it is dramatically easier than that, given that such hardware already exists. To judge whether AGI is easy, it helps to ask why deep learning has been so successful so far.
Evolution works by simple algorithms executed at scale, producing complex systems far more complicated than the algorithms themselves. People often think that AGI must be designed by something more complex than itself, but this need not be the case: it may be possible for simple algorithms at scale to create AGI, although it is not known how long this would take. That complex systems can be created by simpler ones is one of the most profound facts we know.
Scaling laws predict that Transformer models such as GPT, GPT-3, and ChatGPT will improve predictably with scale, meaning more training time (compute), more parameters, and larger datasets. Improvement here means lower test loss: roughly, the cross-entropy of the model's predicted distribution for the next token against the actual next token. This predictability has led to investments of millions, tens of millions, and now hundreds of millions of dollars in these models, as people expect to see the improvements. In principle there is no general reason gradient descent cannot produce an Artificial General Intelligence (AGI), though no single algorithm is guaranteed to do so.
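As a rough illustration of what such a scaling law looks like, here is a minimal sketch. The power-law functional form follows published language-model scaling-law work; the specific constants below are illustrative assumptions roughly in line with published fits, not measurements of any model discussed in the talk.

```python
# Minimal sketch of power-law scaling laws for test loss (cross-entropy).
# The constants are illustrative assumptions, not fitted values for any
# particular model discussed in the seminar.

def loss_from_params(n_params, n_c=8.8e13, alpha_n=0.076):
    """Predicted test loss as a function of parameter count: (N_c / N)^alpha_N."""
    return (n_c / n_params) ** alpha_n

def loss_from_data(n_tokens, d_c=5.4e13, alpha_d=0.095):
    """Predicted test loss as a function of training tokens: (D_c / D)^alpha_D."""
    return (d_c / n_tokens) ** alpha_d

if __name__ == "__main__":
    for n in (1e8, 1e9, 1e10, 1e11, 1e12):
        print(f"{n:.0e} params -> predicted loss {loss_from_params(n):.3f}")
```

On a log-log plot these curves are straight lines, which is what makes the improvements "predictable": extrapolating the line suggests roughly what loss a larger training run should reach, and hence whether the next round of investment is worthwhile.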
Scaling Transformer models has led to the discovery of unexpected power law behaviour and emergent capabilities. The models are not explicitly trained for these capabilities, yet they appear at various levels of scale; an early example was GPT-like models trained only on text learning to add two- and three-digit numbers. The speaker regards this empirical discovery as comparable in importance to the splitting of the atom: although it is not clear why it is happening, it appears to be a newly discovered feature of the physical universe to do with learning. Scaling can be pushed further with GPUs, but there is a limit to how far this can go, and it is unclear whether scaling alone will lead to artificial general intelligence.
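Emergent capabilities are typically detected by evaluating checkpoints at several scales on a task the model was never explicitly trained for. The sketch below shows the shape of such an evaluation for the two-digit addition example mentioned above; `model.generate` is a placeholder for whatever completion interface a checkpoint exposes, and is an assumption of the sketch rather than a real API.

```python
import random

def two_digit_addition_accuracy(model, n_trials=200, seed=0):
    """Fraction of 'a + b = ' prompts the model completes correctly.

    `model` is assumed to be any object with a generate(prompt) -> str
    method; this interface is a placeholder for the sketch.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        a, b = rng.randint(10, 99), rng.randint(10, 99)
        completion = model.generate(f"{a} + {b} = ").strip()
        # Take the first whitespace-separated token of the completion.
        answer = completion.split()[0] if completion else ""
        if answer.rstrip(".") == str(a + b):
            correct += 1
    return correct / n_trials
```

Plotting this accuracy against model size for a family of checkpoints is how "emergence" shows up: near-zero accuracy at small scales, then a sharp rise past some threshold, even though the smooth test-loss curve gives little hint of it.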
Transformers are a family of highly capable learning machines which obey universal scaling laws. These laws are similar to laws we understand mathematically, suggesting that the capabilities of these machines are not accidental. They are able to perform tasks which require reasoning and can hold conversations about code, in some cases arguably understanding it better, or at least faster, than the human who wrote it, although not perfectly. This suggests that they are capable of reasoning in some fashion.
Advanced intelligence is likely to obey universal laws, in the way electromagnetism does. Through studying Transformer models we appear to have tapped into a general phenomenon, which suggests that advanced intelligence may be easier to achieve than previously thought. As a mental model for how Transformers work: words, images, sounds, video, and actuator measurements are turned into vectors, and the model learns programs which manipulate those vectors in order to reason about the world.
Deep learning has solved, to a great extent, the problem of representing the world in machines in a way that facilitates reasoning. The Transformer is a simple algorithm which manipulates vectors to reason about the world; its attention mechanism is a simple concept for anyone who has studied linear algebra. The gap between today's systems and AGI is that deep learning is good at producing vector representations (the "intuition"), but the reasoning carried out on top of those representations is still not completely there.
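For readers who want to see why the attention mechanism is "a simple concept" for anyone who has studied linear algebra, here is a minimal NumPy sketch of scaled dot-product attention (single head, no masking or learned projections). It is the textbook form of the operation, not code taken from any model discussed above.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V.

    Q and K are (seq_len, d_k) arrays, V is (seq_len, d_v). Each output row
    is a weighted average of the rows of V, with weights determined by how
    well the corresponding query matches each key.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # (seq_len, seq_len) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V

# Example: self-attention over 4 tokens embedded as 8-dimensional vectors.
x = np.random.randn(4, 8)
out = scaled_dot_product_attention(x, x, x)
```

A full Transformer wraps this operation in learned projection matrices, multiple heads, and feed-forward layers, but the core idea really is just matrix multiplication and a softmax.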
AI systems do not need perfect representations of the things they are working with in order to be generally intelligent. However, today's systems lack the resolution to take enough serial steps without error to solve their problems. This resembles the situation before the invention of writing, when deductions could only reach a certain depth. Transformer models have a similar problem: they lack a representation low-entropy enough to be error-corrected, so that the process of deduction can be propagated in an unbounded fashion. This is the gap that needs to be filled for AI systems to reach their full potential.
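The "enough serial steps without error" point can be made quantitative with a toy calculation: if each reasoning step independently succeeds with probability p, a chain of n steps succeeds with probability p^n, so modest per-step error rates collapse quickly with depth. The numbers below are purely illustrative.

```python
# Toy model of chained reasoning: if each step succeeds independently with
# probability p, an n-step chain succeeds with probability p**n.
for p in (0.90, 0.99, 0.999):
    for n in (10, 100, 1000):
        print(f"per-step accuracy {p:.3f}, {n:4d} steps -> "
              f"chain success probability {p**n:.3f}")
```

This is why an external, low-entropy representation that supports error correction (writing, formal notation) matters: it lets the effective per-step error be driven close to zero, so chains of deduction can be extended in an unbounded fashion.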
The discovery of scaling laws, and their relation to capabilities we regard as quintessential to advanced intelligence in humans, such as reasoning, makes it increasingly plausible that Artificial General Intelligence (AGI) can be produced within the next 10 years. The speaker believes the remaining steps are not especially difficult, though current models are still missing things such as memory. AGI would have enormous economic value, so many people are investing in it and the experiment will almost certainly be run. Seminars at metauni discuss the potential disruption that AGI could bring.
you know I was reminded by Billy that I should uh welcome formerly everybody to the festival so let me take this opportunity to do that so this is the second time we've run an event like this and this is the two-year anniversary I guess of the beginning of Mata uni for which Lucas was in attendance I believe and some of you also but for many of you it is a more recent experience so welcome so there's going to be a series of talks today uh they'll all be around here or uh in nearby locations um if at any point you get lost and you can't find the talk just reset your character so you can click on the Roblox menu and go reset character and you'll just spawn into the right place um and if you have any questions get in touch with either myself or Blinky Bill type it in the chat or in the Discord and we'll we'll try and help out all right so I'm going to begin so the title of my talk today is the case for short AGI timelines you may hear chickens in the background there's nothing I can do about that apologies right so I'm going to say very briefly what I mean by this title and then get into it this isn't meant to be a super detailed answer to this question that would take more time it's meant to be a higher level guide to the way I think about this problem or this phenomena and there are dedicated seminars that talk about this kind of thing at Medi uni to which you are welcome to come I'll say more about that later all right so let me uh say what I mean by AGI first so today we have artificial intelligence things like chat GPT gpt3 AIS that can look at this board and figure out what the text on it is most of them that work well are based on something called Deep learning these are not general intelligence foreign so the precise meaning of General need not concern us exactly but uh humans are General generally intelligent and we take that to mean that they can learn to solve a wine a wide range of problems so an AGI is a hypothetical technology we don't know how to build it or at least nobody has built it yet that we're aware of so it's a hypothetical technology that can learn to solve any problem a human could solve and to perform any cognitive task a human can perform [Music] I'm framing this in kind of economic terms or kind of technical terms it isn't about sentience or whether the machine can fall in love or have a deep emotion those are somehow much more subtle questions than just can the algorithm perform any economic task in front of a computer screen that you
could perform or any human no matter how smart could perform all right now it sort of goes without saying uh that if you can produce one such machine you can probably Mass produce it although maybe that isn't so clear necessarily but if one can be produced it's likely it can be mass produced and that means that the invention of AGI will be a truly historic event uh on the level of uh on the level of the invention of agriculture for example or perhaps even more profound than that so it's really quite a Sci-Fi concept AGI if it's invented then the world changes rather rapidly and quite dramatically which is why it's so remarkable that many people serious people uh considering AGI to be a near-term possibility as in within 10 years or something on that order so when I say the case for short AGI timelines what I mean is let's examine the argument for why it's possible that such a technology might come to exist within 10 years of course nobody knows it could be much harder than any current analysis suggests it's partly just guessing right we don't really understand how we got to the point where we are so we don't have a good basis for predicting where we're going to go uh nonetheless what I want to argue for is that it's not ridiculous right and even maybe you should assign something like 50 probability to this which is what I do so I think that uh the probability of AGI being achieved within 10 years is greater than or equal to 50 percent which is crazy okay so uh here's a high level sort of uh diagram let's start with just a question of is Agi possible I mean the a is playing a role here right so general intelligence exists here we are we're talking We Believe ourselves to be generally intelligent but can AGI exist artificial general intelligence you could say no right now if you say no to that then uh basically that means you believe there's something unknown and very special about human intelligence I'm not dismissing that it could be the case I don't think there's strong evidence for that at least not that's convinced me but you could say that well maybe the human brain involves quantum mechanics in some way uh and it's just or some other phenomena that's extremely hard to find an evolution chanced on it in some extreme rare event and without that secret sauce it just isn't possible to go from our current level of AI to general intelligence that's possible um so to say no to this question you have to believe in something like that I think so I don't find that convincing I
don't see evidence for that so I'm going to say yes I think it's possible if it's possible the question becomes is it easy Now by easy I don't mean is it as easy as installing a game on Steam I mean is it on the order of things that we routinely do as a civilization right I mean setting up sanitation for a city with millions of people is not easy in the sense that it's trivial but it's a straightforward problem that we can solve by deploying pretty straightforward uh algorithms sort of executed at sufficient scale and with sufficient attention but now if to say no to this question would be to say again that there's something uh really tricky about getting to AGI now an example of something I don't think is easy is producing semiconductors so producing the semiconductors that run today's artificial intelligence is very difficult it involves many many layers of Highly Advanced chemicals materials engineering physics it's also a sort of human capital problem organizational problem that's a hard problem to say easy here I'm saying that it's dramatically easier than that to produce AGI given the existence of the semiconductors right of course to produce an artificial intelligence in today's fashion requires a lot of computer science uh hardware and many other aspects of our modern technological civilization but given that the question is is it easy or not from where we are to get to something that is a general intelligence now to say no here is to say maybe it takes another 100 years right maybe we just have to we're waiting for some super difficult idea and until we get that idea we'll just be stuck that's possible I don't think it's very unlikely right it could be the case it's a genuine possibility the question you have to ask if you're going down this path is if you say no well why has it been easy so far so by the standard I'm using the everything to date in deep learning has been easy and well here we are we have algorithms that seem to be intelligent in some fashion not quite like us but getting to hear has been easy by my standard and if you say that no what happened there I all got booted all right back we go foreign [Music] that was fun uh sorry I'm sorry say I heard up to him you're saying it's been easy so far okay I'm glad the boards seem to have saved what they were up to okay there's a few people who maybe were in orbcam and don't know what happened I think they'll just continue uh maybe somebody can walk over there and um help them out in a minute
okay I'll just keep going um okay so you could say no to the easy question but I'm going to say yes because well it seems like the simplest thing to say it's been easy so far maybe it continues to be relatively easy okay so let's say yes to that question I'm going to put more meat Behind These choices in a moment and then the question is will we do it right so maybe it's possible maybe it's even easy in the sense we know what to do and it's just a matter of doing it uh will we actually choose to do it to say no there is seems quite perverse right uh well okay there are many things that are possibly even easier that we just don't want to do because they don't seem to have any particular value but building AGI seems self-motivating to be uh uh if you can build such a system it will have immense economic value military value and so on so I think we have to answer yes to this okay and that leaves us down here that doesn't mean it's less than 10 years right I don't know actually a strong argument for why uh the difference between 10 years 5 years or 15 years we could go into more of that in other sessions perhaps but um yeah maybe I'll return to that question of the exact time frame at the end okay so I'm going to go through three or four kind of high level points that shape my thinking on on these matters in the first is a kind of fallacy I see people engaging in which is essentially the form of creationism or rather the logical fallacy behind creationism so the reason people rejected evolution is that we have a strong intuition that complex capable systems can only be produced thank you by more complex capable systems right when you see an eye you presume that there is a being which is more complex than an eye which designed it and it's a radical surprising fact that this is not the case right um it is probably one of the most profound things we know that Evolution Works namely a simple algorithm simple algorithms executed at scale can produce highly complex systems that's not to say it's easy to find such simple algorithms uh or that the scale is easy to produce but there's an existence proof right it is possible for extremely simple algorithms at scale to produce systems that are much more complicated than the algorithm itself um the intuition people often have about AGI is that well somehow to design AGI we would have to be much smarter than that system itself or something along those lines I'm not exactly sure how people think about it but um that need not be the case uh it may be
that we find a simple algorithm that we execute at scale which is capable of producing an artificially generally intelligent system that's not to say that any given algorithm we have in our hands although we find soon is guaranteed to do that but any argument that gradient descent the method by which deep learning models are trained any argument that gradient descent cannot produce in principle an AGI needs to rely on some specific property of gradient descent itself or AGI itself it to bypass this objection right in principle it doesn't it doesn't seem to be any general reason why it can't be possible the second thing I want to mention are scaling laws and this is probably maybe I'll use the next board this is the biggest update I think that one has to make uh from the last five years of progress it's a topic that's much discussed here in our seminars but I think remains under appreciated uh outside of sort of the technical literature so I'll say briefly what scaling laws are so Transformer models so that includes things like chat GPT gpt3 have predictable improvements with scale okay so Improvement means test loss so just very briefly so a model like GPT is trying to predict the next token roughly speaking the next word given a previous sequence of words say sentences the test loss is something like it's the number of words it gets wrong technically speaking it's the cross entropy of the next one hot Vector of the token with some probability distribution that's output by the model but roughly speaking how many words does it get correct it's trying to drive that to 100 percent okay uh with scale means uh training time so the amount of flops you spend model size so the number of parameters in your model and data set size so the Corpus of language for example in GPT that you train on as you improve as you increase the scale you will get improvements you will drive the loss to lower values predictably and predictably means a power law I won't go into what power laws are foreign the results of these predictable improvements with scale are kind of predictable if you can predict improvements you'll make the Investments to get the improvements and we're seeing that play out with first Millions then tens of millions and now hundreds of millions of dollars worldwide being spent on training models like this because people expect to see improvements now I should say that we don't necessarily care about the test loss right so uh this is a measure of how good the model is but it's kind of a
technical interest what's really interesting about these predictable improvements with scale is that uh these improved measurements of the performance of the system translate to emergent capabilities and it's these two things together that is the large update I won't get into what the emergent capabilities are exactly the some of the original ones that were noticed was that were that if you train a gpt-like model on text at some level of scale it will start understanding how to do addition of two-digit numbers three digit numbers even though that's not something it's explicitly trained on so those are capabilities that we kind of assign conceptual meaning to whereas the test loss this kind of sum of errors on predicting the next word and large Corpus maybe doesn't have a clear interpretation but these emergent capabilities uh May okay so these two facts one that Transformer models a wide variety of Transformer models trained on a wide variety of types of data I have these power law behaviors scaling laws and that those scaling laws mean at various levels of scale that you get emergent capabilities these two things were not expected ahead of time transform models aren't the only kind of machine learning model that have scaling laws there are things outside deep learning even that have scaling laws but we don't tend to care about them because they they lack this second point it isn't the case with anything but Transformers as far as I know that we have a comparable understanding of the emergence of capabilities at scale and these two things together are I mean this is a deeply shocking uh empirical Discovery we don't understand theoretically why this is happening but this should not be underestimated I think these two things together in my mind seem about as important as the uh the splitting of the atom or something on that scale right we have discovered a new feature of the physical Universe to do with learning okay I'm not saying here necessarily that pure scaling is going to get us to artificial general intelligence it may some people believe that I'm agnostic so scaling I really mean quite serious scaling right so it's the difference between taking half the internet and like the entire internet as your training data set right we're talking pretty significant scales in order to get the current generation of systems um so you might ask the question well what happens if we run out of gpus I mean we can't scale forever right what if we run out of gpus is the
internet big enough do we have enough data to get to AGI and this question is getting at the point that even if uh pure scaling is enough to get to an artificially generally intelligent system that doesn't mean we can actually do the scaling right maybe we have to build a Dyson Sphere and then come back to this problem in which case well maybe we can have this discussion at Medi uni day uh 10 million right it's not a present-day concern okay um well that's a that's a valid question I don't know the answer to that question um I'll say a bit more about it in a minute but I think one is missing the point a little bit if you over focus on this objection um the real thing that you need to take away from this talk uh is that what we have discovered in Transformers is a family of Highly capable learning machines obeying some universal laws scaling laws we don't understand the laws exactly but they're similar enough to laws that we understand mathematically that we have strong reason to think that it's not some accidental freak occurrence it's an instance of a general phenomena which we're seeing manifested in these particular models happens and some of these emerging capabilities look like reasoning uh emergent capabilities now it's hard to Define what is reasoning and what isn't reasoning we'll probably never settle that question we'll just have highly capable systems that we kind of just agree our reasoning without ever having precisely defined it we're almost there already but a few years ago five years ago it was possible to look at Deep learning systems and argue about whether they're reasoning or not really nobody tends to have this argument anymore there are enough data sets that we think require reasoning systems do well on those data sets and when we examine to the extent we can the internal processes of those systems it looks sort of like something we would call step-by-step processing of learned representations in a way that we can kind of agree as reasoning we'll get better at that interpretation we'll get better at making them work on reasoning tasks but at this point I can have a conversation with chat GPT about code I'm writing and it arguably understands it better than I do sometimes right or at least certainly faster not perfectly but humans don't understand things perfectly either so to say that GP chat GPT is not reasoning in some fashion when you have a long conversation with it about a piece of code seems a bit perverse to me okay so this is the shocking fact this
last sentence and it's the obeying universal laws part that to me is the strongest argument for uh short AGI timelines so let me say a little bit more about that so I want to make an analogy to uh electromagnetism so you might think that if there's a universal law around you that will make a big change to the world surely you should notice it and that it shouldn't just leap out of nowhere and surprise you electromagnetism is a clear example of why this thinking is wrong so for centuries Millennia we knew about lodestones naturally occurring magnets and we thought this was a curious phenomena but we had no idea that the underlying cause of that magnetism the electromagnetic force was actually holding everything around us together right it's as universal as anything we experienced but we just had no idea that we had so we saw some tiny hint of it all right and the lesson there is that if you notice a phenomena which appears to obey universal laws right all magnets behave more or less the same you can do some measurements and you can kind of see that there's patterns in the way they behave that it's not some freak of some like it's not that magnets from some part of the earth behave completely differently to magnets from another part of the Earth when you see a system obeying universal laws like that the correct inference is that maybe there's a lot more of that out there and then indeed it turns out with further study that there is a lot more of the electromagnetic force and it explains a lot more of the world than you would suspect just looking at lodestones on your table similarly when we discover scaling laws for Transformer models the way I think about it is that we have just sort of finally tapped into with our level of Technology a very general phenomena which we were here for not really aware of and that is something like uh advanced intelligence may be easy and Universal in the sense that sufficiently General learning Machines of a wide variety of types may just manifest scaling laws and also emergent capabilities maybe it's just easy and our intuitions about it being difficult are just wrong and based you know on on their observation that Evolution may have not produced highly advanced intelligence like us very often but maybe that's just a misleading intuition okay uh quickly on with a few other points I want to give you a mental model for what these Transformers are doing so what if we could turn words images sounds video actuator measurements from
robotic arms etc etc into vectors and make a kind of linear algebra computer could learn programs which manipulated those vectors in order to reason about the world well you don't have to imagine doing that because that's what a Transformer is it can do all of those things now it isn't some super complicated uh mathematical object Transformer the roots of the idea go back to statistical mechanics to hopfield networks it's quite similar to an icing model in some ways there were precursors the neural turing machine the differentiable computer the attention mechanism in Transformers is is a very simple idea to someone who studied linear algebra so from some point of view it's it's a simple system the Transformer it's a simple algorithm which adds scale produces rather interesting systems and the point about the vectors here is the the real challenge for artificial intelligence from its Beginnings was intuition right we kind of believe that computers can do manipulations on sort of discrete entities very quickly and serially for long periods of time without error but the thing we never knew how to do was how to get intuition into the system right how to learn some kind of representation some fuzzy representation of the world which we could then do operations over in order to uh reason and the failure of traditional artificial intelligence was that there's just seems very hard to represent the world in our machines in a way that's actually facilitates reasoning at a higher level uh you know if you want to put into your system that a glass is half empty is it half full or half empty what's the relation between those you have to encode all sorts of things about the world that just quickly becomes completely unmanageable so by hand we simply failed to produce representations of the world in our machines that were useful for artificial intelligence but deep learning has solved that problem at least to a great extent so whether it's intuition about go positions or uh the English language or images deep learning produces vectors which on top of which you can do operations uh the intuition part is the part that deep learning is good at on top of that learned representation you can then try and do something like reasoning and that's the part where Modern systems are still not completely there so the gap between at least my opinion about the gap between today's systems and AGI is the following so I feel a bit watching GPT try and reason about things I feel a bit like watching a calculus student uh
who kind of gets it right they they've watched the lectures they've read the textbook they kind of understand a bit how things fit together but you can sense in their mind that it's a little bit the resolution is a bit low and what that means is that when they try and reason about the representation they have of the subject they can do a few steps but if they try and do too many steps in a row their ideas are too fuzzy and they'll make some error or they'll just try and put two things together that don't go together and they'll just be lost and not know why and that's a very common experience and as a lecturer what you'd learn to do is recognize that lack of resolution and the part that is low resolution and then you bring someone's attention to it you up the resolution until they're able to make enough serial steps without error to solve their problem it doesn't have to be perfect right physicists can use mathematics all day long and probably make very few errors even though they don't really know the definitions that's a fact about the universe you don't have to know the precise definitions of things to reliably make use of them likewise an AI system does not have to have completely accurate representations of the things it's working with in order to be generally intelligent that's a fallacy but today's systems probably don't have quite enough resolution in the way that mathematics is perfectly clear you can have a very good idea of what's going on and make many serial steps in order to deduce some highly non-trivial fact today's systems don't really have a way of getting escape velocity on that process they'll make a certain level of deduction at a certain depth but they're not able to sort of reach an unbounded level of deduction um in the way that uh yeah Lucas was just talking about inventing writing right so before the invention of writing we had a similar problem right we could only do deductions of a certain depth because we didn't have an external representation that was low entropy enough for us to propagate that process uh sort of infinitely or at least in an unbounded fashion it feels a bit like modern Transformer models have a similar problem they don't have some representation like that that is crisp enough to be error corrected in an unbounded fashion so maybe that's the Gap but that doesn't seem to me like a I think getting intuition into the systems was the hard part and that's behind us I think the this error correction part seems like the part that
machines are good at I don't feel like this is uh a profound problem and hence uh this observation together with the um what's on the previous board here the discovery of scaling laws is the reason why I think we're on the home stretch as shocking as that okay I want to make one more point and then I'll wrap up and we can have a discussion questions [Music] and this is just about the uh the economic value now tied up in the race for AGI not everybody with money believes in short timelines but enough to uh there's not a shortage of money anymore which means that you might think it's crazy to scale the system until it costs a billion dollars to train but if somebody believes with some reasonable probability that they might get an AGI if they do it it's going to be done we're past the point where this was a question mark the systems work well enough that enough people believe enough that we're going to run this experiment uh maybe we find out that actually we're not there yet uh and maybe we need more profound ideas but people are going to try so this isn't the obstacle um maybe four years ago it wasn't clear but at this point I think it has become clear so the question is how hard are the remaining steps we're going to try many people are going to try very hard of course we don't know the answer to that question uh but to believe that there's a super hard step remaining you have to think there's something special about the um the segment in between the current level of reasoning ability that uh Transformer models have and our level of reasoning ability and I just can't see what that is so maybe you can suggest some ideas in the discussion but I don't see any hard remaining steps the only one I see is the one that I just proposed earlier we have some issues with memory and uh I mean there are there are things that are clearly missing from today's models but they don't strike me as as profound things okay so as a result of um mainly the discovery of this Universal phenomena of scaling laws and its relation to things that we think are quintessential to advanced intelligence in humans such as reasoning um oh yeah so I think uh that within 10 years or so we have a good chance of producing and AGI what that means of course is a topic we should be discussing uh very intensely in the remaining 10 years and there are seminars here at Medi uni where we do that the disruption seminar in part and also the singular learning theory seminar so you're welcome to attend those but I'll stop now and open the