WARNING: Summaries are generated by a large language model and may be inaccurate. We suggest that you use the synopsis, short and long summaries only as a loose guide to the topics discussed.
The model may attribute to the speaker or other participants views that they do not in fact hold. It may also attribute to the speaker views expressed by other participants, or vice versa. The raw transcript (see the bottom of the page) is likely to be a more accurate representation of the seminar content, except
for errors at the level of individual words during transcription.
AI is becoming increasingly powerful, with the potential to surpass human intelligence. Progress in AI is unpredictable, depending on randomness and the productivity of individuals rather than on a predictable production function. Scaling up depends on reducing costs, for example through modeling and simulation. AI could open vast new possibilities, including a paradigm shift in which systems move from narrow to general intelligence.
Intelligence is distinct from knowledge, and therefore cannot be analysed with the same tools as other technologies. AI and AGI require intelligence to put knowledge into practice, and are too exceptional to be analysed with existing tools. Technology and knowledge are closely related, as both measure the efficiency of desirable physical transformations, whereas intelligence is only implicitly embedded in quantities such as capital and costs. Disruptions can interact, leading to successive S-curves of progress over time within the same domain.
Technological progress is often said to follow overlapping S-curves that add up to continuous exponential growth, but there is little evidence to support this. In recent years, however, S-curves have been observed to run their course more rapidly, suggesting an overall acceleration of technological progress. This could be due to a compression of knowledge, allowing people to utilize existing knowledge more quickly. Intelligence may be a factor in this acceleration, though it is difficult to quantify. Intelligence can be defined as the degree to which one is able to extract information or do something with knowledge, and is related to the scaling exponents of large neural networks.
AI has the potential to surpass human intelligence, allowing access to realms of intelligibility not previously achievable. This raises the question of whether further thresholds will be encountered, and whether machines will be able to cross them in a way that humans cannot. Machines have taught humans some simple ideas, but there is a whole range of deeper, cleverer ideas that humans can't understand. Some have argued that with enough time humans could achieve the same level of understanding as machines; others doubt this, while still seeing potential for machines to teach humans something more.
Sean Carroll is cited as holding the view that nothing is fundamentally beyond human understanding; the speakers disagree, arguing for realistic humility about mathematics and accepting that some things are beyond our reach. Artificial intelligence has already surpassed humans in some areas, and may one day lead to a paradigm shift in which systems move from narrow to general intelligence, opening vast new possibilities. It is uncertain whether this will occur, but it could have dramatic effects on the future.
Humans made a fundamental leap in intelligence compared to other organisms, due to language, symbolic reasoning, abstraction and the ability to quantify. This could be extended by scaling up speed and access to memory, but progress does not follow a predictable production function. Instead, it depends on randomness and the productivity of individuals, so scaling up the number of samples taken may not always increase the rate of progress.
Scaling up human institutions to produce knowledge may not always yield progress, as progress is uncertain and unpredictable; gains from scale appear as vertical increase, or depth, rather than breadth. An analogy can be drawn between increasing a quantity such as population and raising the temperature of water: both create background conditions that make stochastic events more likely. Applied to Artificial Intelligence (AI), scaling up could lead to human-level intelligence or beyond. Population size is an important factor in the likelihood of a genius appearing, while cost is a central concept in economics and in personal life. Reducing the cost of something increases the likelihood of it happening, which intelligent systems do through modeling, simulation, and prediction.
Intelligence is a complex phenomenon that does not fit into existing disruption models. It is distinct from knowledge: agents or systems with different levels of intelligence can achieve different results with the same set of knowledge. To avoid applying the wrong tool to the situation, it is important to have discipline and humility and to recognise that intelligence may be beyond the tools currently available.
Intelligence is not the same as knowledge, and is therefore not a technology as typically defined. AI and AGI differ from other technologies in that they require intelligence to put knowledge into practice. They are too exceptional to be analysed with the same tools as other technologies, and it is important to be humble when attempting to do so. It may be possible to analyse pieces of AI and AGI, but it is important to remember that intelligence is not static.
Technology and knowledge are closely related, as they both measure the efficiency of desirable physical transformations. Intelligence does not have a direct role in this, as it is embedded in quantities such as capital and costs. Disruptions are multiple and can interact, but each domain has a fixed contribution of knowledge that leads to an S-curve of progress. Over time, this can lead to further S-curves within the same domain.
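The per-domain disruption dynamics described above are commonly modeled with a logistic adoption curve; the following is a minimal sketch using a standard textbook parameterization, not one given in the seminar:

```latex
% Logistic model of a single disruption's S-curve:
%   A(t)    - adoption (or capability) at time t
%   A_max   - the saturation level of the domain
%   k       - the growth rate set by the feedback process
%   t_0     - the midpoint of the transition
A(t) = \frac{A_{\max}}{1 + e^{-k\,(t - t_0)}}
```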
Technological progress is often said to follow a succession of overlapping S-curves, producing continuous exponential growth and acceleration. However, there is little evidence to support this claim, and history has shown long periods of quiescence. In recent years, however, S-curves have been observed to run their course in a shorter time period, and more progress is seen in multiple domains simultaneously. This could be producing an overall acceleration of technological progress.
S-curves represent progress in different domains, and have been completing more rapidly in the 21st century than in the past. This could be due to a compression of knowledge, allowing people to utilize existing knowledge more rapidly and produce progress more quickly. Electric vehicles are a counterexample: the disruption took a long time to arrive after combustion-engine vehicles disrupted horses. Intelligence may be a factor in the acceleration, but it is difficult to quantify in any meaningful way.
Intelligence is often defined in terms of capacity to achieve a goal, but this is circular. A better definition is the degree to which one is able to extract information or do something with knowledge. This is related to the scaling exponents of large neural networks, which measure the increase in ability to make predictions when given an order of magnitude increase in data. Intelligence can be thought of as a measure of this scaling exponent, and can exist even if there is no learning taking place.
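Written out, the scaling laws referred to here take a power-law form; the notation below is a reconstruction based on the formula described on the board later in the transcript:

```latex
% Neural scaling law: generalization error G falls as a power of dataset size D.
%   G(D)  - error on new data from the training distribution
%   c     - a constant
%   gamma - the scaling exponent, proposed here as a proxy for intelligence
G(D) \approx c \, D^{-\gamma}
% On a log-log plot this is a straight line with slope -gamma, so each
% order-of-magnitude increase in data buys gamma orders of magnitude in G:
\log G \approx \log c - \gamma \log D
```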
Intelligence involves more than just having access to data, time, or hardware; there must be a threshold effect to capture real-world intelligence. This is demonstrated by the fact that no amount of time will enable a chimpanzee to understand calculus. This suggests that there is a threshold of intelligence that must be crossed before value can be extracted from data or knowledge.
AI has the potential to exceed human intelligence, allowing access to realms of intelligibility not previously accessible. This raises the question of whether further thresholds will be encountered, and whether machines will be able to cross them in a way that humans cannot. Chess players discussed the impact of AlphaZero on the game, noting that it has ideas beyond the comprehension of any human. People may feel despair at the idea of AI surpassing humans, but hide this behind the promise of collaboration.
Machines have taught humans some simple ideas, but there is a whole range of deeper, cleverer ideas that humans can't understand. People have settled into accepting that machines are better than humans and that humans won't be able to learn much more from them. Some people argue that humans have already discovered the fundamental tools of thinking and that, given enough time, humans could reach any understanding a machine reaches. This view is not accepted by everyone, and there may still be potential for machines to teach humans something more.
Hyper-intelligent humans such as Sean Carroll are said to be at risk of thinking that any domain of mathematics is achievable with sufficient study; the speakers regard this as a form of hubris. Even the brightest humans will never occupy the same depth of understanding as the greatest mathematicians. Realistic humility is key: some things are beyond our reach no matter how hard we try, just as dunking a basketball is impossible for most people and not achievable with training. There are astonishing heights to be reached, but it is unknown whether the climb is continuous or more punctuated.
Artificial Intelligence has already surpassed humans in some areas and is far more advanced in terms of speed and memory. A paradigm shift may occur when systems move from narrow to general intelligence, leading to a vast new space of intelligibility and the potential for systems to enact their intelligence on the universe with effects that may seem magical or mind-boggling. It is uncertain if such a shift will occur, but it is a possibility that may lead to dramatic changes in the future.
Humans have made a fundamental leap in intelligence compared to all other organisms that preceded them, due to language, symbolic reasoning, abstraction and the ability to quantify. On a steel-manned view, this is the only leap to be made, and any further development is simply a matter of speeding up the same processes. The speakers counter that there is no principled reason to think humans are at some kind of maximal point beyond which no further leaps can be made.
Scaling up the speed and scale of intelligence may be enough to reach a point where it is vastly more effective; tools such as computers and writing have already enabled humans to achieve things that would have been impossible without them. It is less clear that multiplying the number of scientists working on a problem, or speeding up the wrong people and giving them more hours, would actually increase the rate of progress. Whether scaling up speed and access to memory is sufficient to break through a fundamental threshold remains contested.
Progress does not depend on a predictable production function, but rather on a stochastic search process. Scaling up the number of samples taken can increase the rate of progress, but not always: if the person being scaled up is not particularly productive, it may not help. An example is the scientific culture in China, where cultural forces can hamper anyone with a different way of thinking. Progress is therefore dependent on randomness and the productivity of individuals.
Scaling up human institutions to produce knowledge may not always lead to progress. While it may be possible to scale intelligence, the results are uncertain and unpredictable. Instead, progress appears as vertical increase, or depth, rather than breadth: increasing scale can lead to the emergence of new capabilities and more refined representations, such as reasoning. Scaling up therefore may not always produce the expected results.
An analogy can be drawn between raising the temperature of water and increasing a quantity such as population, wealth, or capital. This creates a background condition that increases the likelihood of stochastic events, such as evaporation or grains of rice jumping out of the pot. Applied to Artificial Intelligence (AI): if human-level intelligence can be achieved, systems could go beyond it by running more copies or running them faster. This could happen rapidly, though it is not guaranteed.
Population size is an important factor in the likelihood of a genius like John Von Neumann appearing. Cost is an important concept in economics and personal lives, which involves making trade-offs between different values or goals. Reducing the cost of something increases the likelihood of it happening. In intelligent systems, part of the goal is to reduce the cost of achieving certain effects in the world. This is done through modeling, simulation, and prediction, which drastically reduces the cost of experimenting on the physical world.
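To make the population point quantitative, here is a minimal sketch; the per-person probability used is a made-up illustrative number, not a figure from the seminar:

```python
def p_at_least_one(n: int, p: float = 1e-9) -> float:
    """Chance that at least one genius appears among n people, assuming each
    person independently has probability p of being one (p is illustrative)."""
    return 1.0 - (1.0 - p) ** n

for n in (1_000, 10_000_000, 10_000_000_000):
    print(f"population {n:>14,}: {p_at_least_one(n):.6f}")

# Expected output (deterministic):
# population          1,000: 0.000001
# population     10,000,000: 0.009950
# population 10,000,000,000: 0.999955
```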
A population produces according to a stochastic process, with a distribution. To increase the amount produced, the system can be augmented with an identical population running with different random seeds, taking the maximum of the two outputs. This shifts the distribution upwards, as long as the variance is not zero. In reality, doubling the number of scientists may instead produce more bureaucracy that suppresses individual output. On a topic this hard, the speakers suggest the only way to proceed is long, open-ended conversation until an idea emerges.
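A minimal simulation of the baseline scheme described above; the Gaussian output distribution and the sample counts are arbitrary illustrative choices:

```python
import random

random.seed(0)

def mean_best_of(k: int, trials: int = 100_000) -> float:
    """Average output when k identical populations each draw once from the same
    distribution (standard Gaussian here) and we keep the maximum draw."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, 1.0) for _ in range(k))
    return total / trials

print(mean_best_of(1))  # ~0.00: baseline, a single population's expected output
print(mean_best_of(2))  # ~0.56: the max of two independent copies shifts the mean up
```

Because the distribution has non-zero variance, the maximum of two independent draws is stochastically larger than either draw alone, which is the sense in which duplicating the population shifts the output distribution upwards.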
In order to scale productivity, it is necessary to design institutions that enable communication and collaboration between different groups. This could involve dividing scientists into groups and allowing them to share and build upon each other's work; an example is two different planets communicating about their discoveries. Open Philanthropy has a report, well known in the AI safety community, that quantifies timelines for AI progress based on compute trends and scaling laws.
I've had a couple of thoughts and one which I think may be interesting is certainly for me to have a conversation with you about so the first uh uh yeah let me let me not start at the end let me start at the beginning of this train of thought that would be better okay um the the beginning of the train of thought is that with with intelligence itself or so we have artificial intelligence and then maybe we've got some idea of you know you could have narrower forms of intelligence versus abroad or a general form uh or more General forms of intelligence and perhaps there's perhaps there's a spectrum it's hard to know how whether you know it's completely continuous or whether there's it's sort of um uh you know there's some phase shifts in there or it's hard to know because we you know we're short on we're actually not short on Theories of Intelligence but we're short on a compelling and convincing in the consensus theory of of what intelligence actually is I mean I don't want to get into sort of what is intelligence um but one thing that seems clear to me is that um uh the the this particular phenomenon does break our disruption models I think that's pretty clear and so what I don't want to do um is make the mistake of if you've got a hammer then every problem looks like a nail right you need to be you need to have better discipline and uh more humility than that right and so I don't want to you know say oh well we've got this this we've got this tool and it works in all these other situations let's force it to work in this situation I think it's I think it's it's just it's there's there's better rigor and integrity intellectually to saying this probably won't work or fit into our models it just it just it breaks it so that's that's that's one place where I I've landed is that is that this is probably a phenomenon is just beyond what uh beyond the tool that my team has got and uses and that we would be probably fooling ourselves if we try to apply it to this this particular situation because the you know fundamentally intelligence is so different than other phenomena um and and we talked briefly last time about you know intelligence is perhaps distinct uh in in some important ways from knowledge um they certainly interact but but it may be you know um I in my mind it's pretty clear that those two things are distinct because you can you can imagine that with a given set of knowledge agents uh or systems with different levels of intelligence can can um achieve more or less practical
effects out of that so if you have if you're defining technology as the practical form of knowledge um you have to have intelligence stand in relation to that in order to see anything go into practice as in an unintelligent system can't get much of practical use out of knowledge um and a more intelligent system can get more practical use more more utility or usefulness or it's you know it's in one or more ways however you want to think about it um can get more benefit out of that same amount of knowledge and so to my mind that's pretty clear it's pretty clear from that line of thinking I wouldn't necessarily call it an argument necessarily but I think that that through that reasoning alone it's pretty clear that intelligence is not identical at least in some sense to to knowledge itself and um as a result intelligence is something more than other Technologies other Technologies at least the way that my team and my our theoretical framework defines Technologies is as knowledge not anything more than that so um uh so that's why I think this Tech this AI and certainly well not maybe that certainly but AI and perhaps especially AGI and and uh especially especially super intelligence um that's why I think these probably break our model they are not Technologies in the sense that that we uh conceive of Technologies normally so there's something there's something remarkable or different or special um uh exceptional whatever about about AI and AGI that sets them apart on a fundamental level from other Technologies and so I I um I would be very hesitant to try to apply any we would be reasoning by analogy and probably making grave errors rather than you know being able to sort of formulate um and legitimately fit the phenomenon into our model uh in in a useful way okay so that's the first thing that occurred to me is just you know be humble this probably just isn't going to work um because it's a it's a it's too special or too exceptional in the case or just completely you know wrong category it's just not you know you're making category errors trying to analyze it with the tools we've got okay so that's the first thing the second thing then is that what well is there anything about this or any piece of this that we can analyze um so just uh just to interrupt Adam sorry go ahead yeah Matt asks a good question about the sort of model of disruption you have maybe assuming a static level of intelligence I'm not sure that's necessarily true is it but it's just that you don't really intelligence isn't
there you conceptualize knowledge and Technology as being the same thing and that is something like the measure of the efficiency with which you can achieve transformations of the physical world that are desirable and it's just not a intelligence just doesn't play a role there you could say it's kind of to do with the production of knowledge which is somehow in the black box from your point of view and knowledge arrives and then something happens and knowledge arrives and then something happens but the in some sense I mean yeah maybe this is the kind of in the model in the models that we have intelligence doesn't have any it doesn't it doesn't I mean you could say that we assume there's some background um level of it or something like that but it doesn't actually play a role there's no quantity in any of the models that an intelligence holds or anything like that if if anything um we're probably using other quantities that sort of are either proxies or sort of um Loosely capture some elements of intelligence and those quantities are things like capital and costs right I mean in in some sense you know large large aggregations of capital inside you know organizations like governments or corporations um and uh you know the deployment of of liquid capital in the form of um uh you know Financial assets like cash and investment and and minimizing things like costs I think intelligence some background level of intelligence is embedded in those quantities but it's yeah it's not it's not a separate and distinct quantity that we that we have in the model anywhere in our in our framework I shouldn't say model in our framework anymore if I mean as as Matt points out you know if intelligence isn't in the model then uh lots of intelligence can't break it but maybe maybe one way of thinking about it let's see if you agree Adam I'm writing something on the board so you kind of think of disruptions as well you do think about multiple disruptions happening simultaneously and interacting but fundamentally in any given domain say energy you think about there being a discovery or an increase in knowledge and then that plays out more or less as a fixed contribution right so there's that's kind of technology and then it improves in a feedback process but in some sense there's an overall kind of chunk of progress that happens and then that leads to a single S curve and then maybe a generation later say there's a there's another S curve that's to do with that same domain say energy
but if if the underlying process that's generating the knowledge is itself being disrupted then you could imagine that these these S curves kind of approach one another so there's a kind of indistinguishable merging of many disruptions and that's maybe in aggregate not something that's subject to your models because it you know if if you if you if you're undergoing still experiencing the transition from the first chunk and then the second chunk arrives you're kind of you're kind of adding together these two curves and you'll get you know something that's uh much more rapid well yeah I mean in in any given domain you know this is you have people like Kurzweil who often claim this I'm not sure I see a whole lot of real world evidence to support the claim but Kurzweil I think popularly popularized the idea that you that you have a succession of s-curves and they overlap and then this produces over time at a sort of a a lower resolution um you know view a a uh a continuous exponential growth sort of an overall acceleration and I honestly don't necess I don't believe the evidence supports that all that well in any given domain you can kind of squint at human history and broadly say that that uh you know technological progress has accelerated overall and so maybe if you add all of the all conceivable s-curves together you get some sort of aggregate that looks like continuous acceleration you know constant acceleration um of the pace of technological progress but even then you know we've had long long quiescent periods um so I'm not I'm not sure that that there's a whole lot we can generalize in terms of claims about how these these curves overlap rather but one thing that we do see I think more often or sorry so what's the right way to say this uh uh on yes okay so so on the one hand we do seem to in in uh at least in recent years we do seem to um uh we do seem to be seeing that S curves proceed through their course in a shorter time period you know where where they once might have happened over several decades now they can happen within a single decade and in a way that they they didn't in the past and so there's a there's there's there's something to be said for you know these um the the actual time some time compression happening in terms of how fast these the a given s-curve will occur and then on top of that you know we we um uh you I think we also see more progress happening in more domains simultaneously and so we have you know more overlapping potentially overlapping
S curves not just within a single domain but will cross interacting across multiple domains and um certainly that's the case sort of in the 20th century and now you know and now into the 21st century and um in a way that it wasn't for example a thousand years ago or ten thousand years ago so that I think there's good evidence strong evidence it's pretty clear evidence for the whole you know um bit about how one the next uh s-curve naturally follows on from the last and and uh they overlap and that that I think there's less evidence in any given domain for that we've ended up waiting a long time for um electric vehicles for example uh you know after combustion engine uh Vehicles disrupted horses and that there's lots of other examples like that so um the one the next S curve does not automatically overlap with and Chase the previous one that's that's that's not any kind of law you know you can rely upon but one thing that does occur to me here is that the compression of all of this that compression that might be something that that stands as a proxy for intelligence right so so you know if you're if you're but again it's hard to know I mean how do you that's pretty strongly perhaps correlated or even you know not that different to this accumulation of capital human capital and other forms of capital um uh resources basically but but if you but you could look at you could squint at it a certain way and think okay well one reason why we can why we're doing where we're racing up these S curves may be a bit faster than the past although we've had some fast ones you know in the past we've had you know we've had some we had fast Neolithic Arrowhead s-curves in the you know in the Neolithic era thousands of years ago you know where disruption only took a generation so um but anyway if you accept this the idea that we're getting faster and we're doing more at once so so both of those things maybe that is a at some level some not well it could be in an individual but certainly at some group level of analysis we are smarter we're able to take the knowledge that we have and utilize it more rapidly if nothing else and produce progress more quickly than we were in the past and that and again I I think you know you could say some things perhaps about that but I'm hesitant to to you know um uh to try to shoehorn intelligence per se into into this anywhere in any kind of quantitative sense without thinking about it quite a lot yeah I like the way you put it earlier
that I mean intelligence is there are quite a few definitions in the context of AGI and so on you would have seen Shane Legg with Marcus Hutter and of course there are many many definitions uh over the decades often it's defined in terms of kind of capacity to achieve a goal uh in a variety of environments or something like that I don't really like that definition it strikes me as circular in some essential sense um but I think I prefer actually the way you described it earlier as the kind of degree to which you're able to extract information or how did you put it I mean extract something from knowledge or do something with knowledge that's that's kind of related to the scaling exponents that we've been discussing in the context of um large neural networks in which I mean the the scaling exponent is literally the given an order of magnitude increase in data how many orders of magnitude do you increase your ability to to make predictions to generalize outside the training set that that ratio is the scaling exponent and that's sort of clearly related to the notion of intelligence you expressed and I would be inclined I suppose these days to think about intelligence as being a kind of measure of that scaling exponent um so maybe just to write up a formula on the board I'm not sure it's exactly what we want to talk about but um so the scaling exponents or the scaling laws that look like this $G(D) \approx c\,D^{-\gamma}$ where $G$ is a measure of generalization error or performance on new data from the same distribution $D$ is the size of your data set I wouldn't call that knowledge exactly right because it's it's just data so that's maybe where this analogy breaks down and the exponent $\gamma$ here is the scaling exponent so you could say uh yeah I'm not okay I'm sort of going to disagree with myself now but I'll finish making this point so you could say that a more intelligent system can more rapidly see the point of the data right you're given data the more intelligent you are the more quickly you can recognize whatever structure is in it and uh and use that to make predictions and that rapidity is measured by the slope on this log log graph but you would also say if you you know if you took a large learning machine and you'd fed it data to a static end result say GPT-3 you might say that's intelligent even though it's not learning anything so there's clearly a sense in which you can be intelligent even if you're not learning so I think I I'm now going to retreat from this connection between gamma and learning
but uh and intelligence but uh I did like the way you put it earlier yeah I know that we just want to have no more thought in here it's a fun aside um uh but just to add one more thought is that their there probably needs to be some some accommodation or some set of threshold effects or or something um to capture what what we see in real world intelligence right so so um uh I'm a little bit skeptical I'm open to persuasion but I'm a little bit skeptical that um uh you know for any given system you know uh any level of you know any level of that sort of what's the right way to say this that that um that the thing and I don't know again if this so I'm sorry let me see if I can organize my thinking a little bit um okay so we have several different uh quantities that are that are interacting specifically in the in the in with the scaling laws right that seem to be that substitutable you can sort of trade off between them you've got the size of the data set you've got this you know the the um the hardware and and you've got time right um and if and you can sort of there's some sense and maybe it's not perfect but there's some sense in which these are are are tradable that they're they're you know you could if you if you have more of one or less than the other you can still get the same result and I I think I I I'm open to persuasion that that holds completely however when we look at real world intelligence and we see this in individual humans and we see this in different species um we see that you know there are you know there are many many real world examples where no where uh let's say Okay concrete let me make the concrete instead of just keeping this abstract but like there's no amount of time that you could give a chimpanzee and the chimpanzee will be able to understand calculus or something like and pick any problem like that right you can pick anything like that there are many many examples like that and so what does that tell us about the systems it isn't just a matter of of having enough time to Crunch away on a problem or or or or or whatever there's you there's something there are there's some sort of capacity something about intelligence that you know you tip past some certain threshold and suddenly uh it becomes possible to extract value out of information data or or perhaps knowledge if it's if it's data that's that's you know um I don't know how we distinguish Knowledge from data exactly but but if you if you're standing in relation to if an agent that's intelligent is standing
in relation to information or knowledge or data or or the environment or whatever else um there's there seems to me to be some sense in which you know the agent's capacities are not continuous but you know there are there are these sort of these these thresholds you exceed them and suddenly a whole new space of possibility possible outcomes opens up um uh you know with respect to the agent in relation to the to the available data or the available knowledge or whatever and um and I think this is sort of a fundamental question now going forward is you know are our beyond-human intelligences going to continue encountering sort of these ceilings and then break through and suddenly you know whole new domains are accessible that simply weren't accessible with a with a you know a lesser amount of intelligence given all the time in in the universe or or is it from this point on all just fundamentally you know um accessible to human scale intelligence so we've got the tools we needed you really just had to crack symbolic reasoning and abstraction and you know memory or something and suddenly anything that's intelligible can be intelligible to a human level intelligence given enough time I I don't know I'm I guess I'm open to persuasion but my my intuition is that we still have thresholds still ahead of us that there are going to be things that no amount of time or education would allow a human to ever grasp that probably machines are going to be able to access in the same way that you know chimps and dogs and you know cats maybe not dolphins I don't know but other mammals that are not as intelligent as us simply cannot access massive realms of um intelligibility maybe that's already clear I posted on Discord a discussion that some chess players were having about the impact of AlphaZero on chess um I presume there's a similar conversation happening in Korean and Chinese about weiqi or go uh but they were pointing out that you can kind of glimpse the idea that AlphaZero has about certain kinds of moves but it's just beyond any human to really hold all the relevant factors in their head uh so you might have thought and many people said in this over optimistic fashion that will repeat again and again in every domain of human affairs over the next couple of decades where the first time an AI is able to do something people uh they they have a moment of private despair but they they hide that with a veneer of excitement about the possibility of collaborating with
machines and how great it will be uh that's the headline and then you know as time ticks by everybody within the domain quietly comes to realize that well there's some things we could learn from the machine you know okay we we learned move 37 was not bad in weiqi and then in chess there's some other ideas that we can also appreciate but uh that that's clearly kind of like a very superficial set of ideas that are within our grasp and then the machine is actually doing a whole bunch of other very deep clever things that no matter how hard we try we just can't quite wrap our heads around um and then people settle into a kind of as they have now in go is my impression I'm not an expert but settle into basically accepting that the machine is better than any human will ever be and will just never really get what it gets we've extracted what value we can out of the low-hanging fruit of the simple ideas it sort of shows us that we can get and maybe over time we'll get a few more but people steadily just revert back to not really expecting to learn a lot from the AI and I think that's that should be the default expectation that there will be a band of some stuff that we'll immediately see as soon as we turn the machines on that we'll get and we'll feel very excited that you know we're keeping up and still relevant uh but I think it'll be very short-lived as it has been in chess yeah I I certainly agree with that that that's that's my intuition although you know it's it's that's all it is this is an intuition um I've heard several you know very very bright people who I admire and respect a lot um Sean Carroll is an example of one person I've heard say this and I think David Deutsch is maybe another one those are two people who I mean I like to pay attention to their work and read their books and that kind of thing as an interested layman um but um both of them have have uh voiced this idea in the context of talking about artificial intelligence that um it won't be anything more than what humans are already uh uh and there's sort of a um a faith in the idea that we've we have discovered the fundamental tools of thinking and it's just a matter of applying them and if we had long enough or a long enough time uh to apply them or something like that then we would then we would achieve you know we could there's nothing fundamentally new that that something more intelligent than us uh could conceive of or or so forth and I I don't really buy that I think David Deutsch's argument is more
that that there's going to be human and machine symbiosis um Sean Carroll seemed if I remember it correctly from the conversation I was sort of surprised to hear that but one thing that struck me is that he's you know for any I think it's it's possibly a risk for hyper-intelligent humans and he's he's one of them and and and and you you guys mathematicians are often in that group and physicists and the very very brightest human beings are at a risk of thinking well um uh I can understand anything it's just a matter of applying myself and that's probably true it's not true for him I mean there are there are parts of math that uh I mean okay there I suppose I could probably tell myself that many domains of mathematics or all domains of mathematics which with sufficient study I could kind of get to some level but nobody can read like the work of von Neumann or even Kontsevich or you know or Grothendieck some of these people and and think I mean there is no way that I'll ever be able to occupy the same depth of understanding that they do any particular output I guess I can wrap my head around but that's not the same thing right it's not the same as being able to really follow what they're doing there is such a thing as as individual humans which are just really beyond even the brightest of their peers uh to imagine that this isn't the case is a form of hubris and Sean Carroll isn't that bright uh I don't I don't buy that for a second well I I tend to agree with you but I'm and I appreciate that that's you know that I think that that's a healthy perspective and that that realistic humility is is is uh is appropriate and and decent of course um but I've certainly seen it in my own uh life uh and and um uh you know in painful ways you know with with um my daughter and that kind of thing um not acutely painful but but I I think that it's it's for for for us mere mortals we are often in awe of people who are experts in other domains and and we're just satisfied we're content to say well that's something I could never do no matter how hard I tried in the same way that I'm never gonna be able to dunk a basketball I could train my entire life and I'd never get anywhere close um and uh so I I suspect that there are there are um at any rate uh uh you know astonishing heights to be reached um and whether or not it's sort of a continuous climb or whether there's it's sort of a more punctuated um ascent uh maybe maybe from this point on I don't know I mean maybe from our perspective
it doesn't matter that all that much but maybe it does I don't know and and it's this is I suppose the big question um that's relevant in the near term for us now near term being like the next 10 to 15 years because I thought it wasn't going to be for 30 to 50 is a is there some sort of key threshold effect uh moving to General to to a general uh intelligence generally intelligent systems from narrowly intelligent or I guess a better way to say it would be um narrow well maybe not narrowly super intelligent artificial systems so right now we have artificial intelligence artificial intelligence is some sort of approach what humans are able to do I guess maybe image recognition and and things like that humans are you know still pretty close to Gold Standard although they're now being exceeded in some ways and then there are ways in which you know humans are being are already just not even a rounding error in terms of how it would be anywhere close right I mean there if you roll a calculation and memory and speed and you know there are a bazillion ways in which narrow artificial intelligence are super intelligent by any conceivable you know view You couldn't possibly argue that that um you know your computer is not super intelligent with respect to humans as far as you know mathematical um uh uh calculate getting you know calculating speed goes or memory goes or any of those sorts of things and so the question in my mind um it is relevant in this in this context is does this this um is there is there a sort of a paradigm shift coming when we move when systems if and when systems and maybe we think it's going to be soon move from being narrowly to being generally intelligent is that going to carve open a you know stupendously large new um space of you know whatever possible possible intelligibility and then everything that goes along with that in other words are are these new systems going to be so much vastly smarter than us that they then have access to um uh enacting that Intelligence on the universe two effects that would be would seem just you know mind-boggling or magical or or whatever to us and and I you know I think many of us suspect yeah probably why not I mean it happened with humans with respect to other animals why wouldn't why shouldn't it happen again um but maybe No Maybe not maybe we don't know but that's I think that's that's where my mind is headed with this idea of you know these you know some sort of threshold and Paradigm Shift yeah it was
a really interesting question isn't it so obvious this well there's there's just no reason whatsoever to think the other way right uh I don't even know what you'd have to think to believe that you'd have to think that like somehow humans are universal Turing machines or something and that was the key threshold and that once you're universal in principle I mean I suppose that's probably what's behind what David Deutsch is saying although I don't know of course but you could imagine the argument that well okay any understanding a super advanced intelligence has they can write it down as an algorithm and then I can sit there and I can shuffle symbols around blindly like an idiot and believe their calculation if they give me a proof that's true but that's not relevant I mean that's not understanding or intelligence it's just you know it's just saying a pocket calculator can understand any proof that uh you know the most brilliant mathematician writes down because it can you know it can just check the calculations uh but apart I mean I don't I don't see any reason what principled reason is there to think that humans are at some kind of maximal point I don't I don't even think this is a serious argument maybe maybe an attempt to steel man the view there maybe maybe another way of looking at it is that human beings um are in some fundamental sense different than all other animals that have come before that preceded us you know all other um organisms all other organisms have intelligence of some sort one could say you know that you you could argue that even plants and microorganisms and so forth um right the way up and then then you I suppose you could also argue that human beings have made some fundamental leap uh and you could argue it's something to do with with language and symbolic reasoning and abstraction and you know um the the ability to quantify and remember all those sorts of things I'm just steel manning this I'm not I'm not saying this is my argument I'm steel manning trying to steel man it and then and then from that if you agree with that if you agree that sort of humanity has made a fundamental leap relative to everything else then you could also follow on to say that that is the only leap to be made and there are no others to ever to follow in which case from here on out it's just a matter of speed up basically um you know doing the same things but just able to do them a bit you know a bit or a lot faster but no more fundamental no additional fundamental
leaps are necessary or perhaps even possible maybe that's where where these guys are coming from I don't know I agree that that's the Crux I think it's like is there some qualitative um architecture improvement to be made or um does it just not exist but um it's kind of a moot point because I think this scaling up the the speed and the scale of the intelligence is like enough to reach a point where the system running on the same architecture but scaled up is like vastly more practically like effective in terms of its intelligence so um yeah I I I think it's really going to be really hard to resolve that um that other question but it might not matter because um speed up is like effective enough I mean Matt I agree that we already see this right I mean we already see this the the a human being unaided by thinking machines unaided by computers unaided by you know writing for example um a whole whole domains of activity of accomplishment of achievements in terms of manipulating the world transforming the world around us in various different ways those are completely out of they're completely unobtainable unless you have those tools in your hands even as a human right and we if if you didn't have if you didn't have any way to remember things or do calculations with the assistance of pencils and papers and slide rules and computers you can't you can't it's just it would just take too long even you know with a human level of intelligence to you know uh design machines that fly and you know rockets that get to the moon and so forth it would be it would be impossible um and so we already see that that uh just by adding a few tools that help us speed up or or you know um scale up our existing intelligence we we have opened domains that were that were completely inaccessible in the past so I see that sort of as a similar kind of thing it doesn't really matter for all practical purposes it's it's you know the new possibility is going to open up one way or the other right right yeah that's interesting I I think I'm not as convinced that um simply scaling up speed and like access to memory is really sufficient to break through some kind of fundamental threshold so maybe I yeah maybe I am a little more sensitive to the argument that um so put it this way uh it isn't clear at all that multiplying by 10 the number of scientists working on a problem actually increases the rate of progress or that if you speed up the wrong people and just give them 10 times more hours
per day to work on a problem that it'll make any difference whatsoever to achieving the outcome that that isn't how progress really works right if if people you know there are there are some people who are capable of having that idea uh which is comes from nowhere and doesn't really have any predictable origin that you can simply it's not it's not continuous where you just multiply some number by 10 and you increase by 10 the probability that it's going to happen or even scale it up meaningfully so where those ideas come from and what progress really hinges on is is a production function we don't understand at all and I I guess I'm sensitive to the argument that given we don't understand it simply increasing speed or I mean it uh it may not have a very obvious relationship to the rate of progress or whatever so I think I do believe that um what it'll take to get to something like super intelligence is is more than simply for example just uploading a human and making lots of copies and running them really fast uh I suppose you could make the argument that if it's possible for people to do then then sometimes that population would achieve it much more quickly they would then design then the AI that would be that thing that I'm asking for but that's kind of escaping my question my point a little bit I think yeah I think I am inclined to think that it's kind of like a stochastic search process where I'm kind of just like waiting to find progress um and there's a lot of Randomness so if you scale up the amount of samples that you're taking then you're gonna increase the probability at which or yeah you'll increase the rate at which you'll randomly find um progress so there are particular ways where it like might not help like if someone is not a particularly productive person and you just scale them up um then maybe that's not going to lead anything but if you if you sort of set it up I think on expectation you you will increase the rate of progress um maybe but I think those tail events uh so I think many other things can very easily swamp uh uh okay this is maybe we can take the example of say the the scientific culture in in the west versus China for example so the Chinese are making lots of scientific progress some of it's quite impressive obviously but there's someone who spent some time in China it's you can see the cultural forces which really uh hammer down anybody who has a different way of thinking about things and the these people are very easy to kind of be
discouraged and pushed out of a system and it's it's kind of almost by accident that it works anywhere right so the people that you really want to be giving resources to to make scientific progress are kind of exactly the people that any bureaucracy or machine you can build wants to get rid of in in some in many cases so yes like scaling up things in human affairs often has the effect of exactly doing the wrong thing because it systematizes and scales up management processes and so on in such a way that the accidents are made lower probability so I think that the relationship between scientific progress as it actually happens versus what the Scientific Management kind of people the high levels of government and institutions think uh these two things are very different and I think the former is uh yeah not so it's so sensitive to scaling in any predictable way um I could easily imagine a future in which the human population is a hundred times larger and scientific progress is zero or close to zero that seems to me I mean even with you know lots of money spent on science and lots of emphasis on science so I yeah this is okay yes a different discussion but um yeah I think so maybe if I if I understand right you're kind of saying that okay maybe it's possible to scale intelligence but it seems like for at least some of the ways that you could imagine scaling um human institutions that produce knowledge um there would be a tendency for that not to actually scale yeah that's right progress even though you scale so it's like there are also ways in which it might not work um and so I guess the question is um do we find the ways that do work so I think I'm pointing to what I see is so when I say I'm optimistic about say simply scaling up large models or something uh it's not because I I reasoned by analogy with scaling up sort of many humans together being better than an individual human I'm very much not optimistic about that kind of scaling but what we see when we scale large models is the emergence of at various sort of levels of new capabilities or more refined representations that look more like reasoning and such things so it's it's more like uh it's more like something like vertical increase rather than horizontal right it's depth rather than breadth that we seem to be discovering with increasing scale so uh if it was merely a matter of um so that's that's much more uncertain or less predictable than if it were the case that's simply sort of horizontally scaling as you might with human
organizations if that were a reliable path towards progress then I suppose we would be much more comfortable predicting that AIs if they can achieve something close to human level intelligence could easily go beyond it because you could just run it more and make more of them and put them together uh so I suppose I'm a I think that is a a bit of the argument for why once you get to human level intelligence you'll very quickly go beyond it right because you can just build many many copies um or run them faster or something um I am actually not sure I believe that I nonetheless do believe that you know that that transition might be quite rapid but I think it's not that simple yeah um this speaking of reasoning by analogy here a couple of analogies come to mind I don't know how well they hold but maybe they might you know maybe they might provoke some interesting thinking um one is um that the amassing uh amassing some quantity of um some quantity that that creates some background condition which allows you know I think Matt as you mentioned sort of increases the likelihood that otherwise stochastic events will happen at all and then will happen perhaps more frequently and the analogy that comes to mind there is something like temperature right so if you raise the temperature of uh of um you know water um then you know you you're you're uh there's I think there's some physical examples of this where if you're if you if you raise the temperature of water you you um you get more evaporation but evaporation I think is sort of fundamentally stochastic you know you you a handful of individual molecules will sort of make the leap to um uh you know out of the um one phase into the other um and I think you can you can kind of put something similar to this into effect I think there's some cooking or some preparation methods that do this where you can get you know grains of rice to jump out you know something if you're you know you raise the temperature and you boil it or under certain circumstances so anyway there's just an analogy the idea that you've got some you're gonna have some background quantity and if you if you increase it um you know then you're sort of enabling um background conditions for leaps to occur from um from smaller uh entities or or something within that um and so this is I mean population is probably the most obvious one but there are other quantities you could imagine like wealth and capital and resources
and that kind of thing um but population is the big one I mean if if you've got a given chance for a genius like John von Neumann to appear you know it's probably not going to happen if the human population is only a thousand people but much higher chances if the population's 10 billion people and if it was 100 billion people or or a trillion people then you'd have a lot more of that kind of thing so that's the first thing that maybe there's some there's some something interesting to think about there and then the second one um again a trade-off here in my mind um and uh uh anytime you have a trade-off it's it's it's worth circling back to think about what about the concept of cost and in economics you know this is sort of in in money and finance and our personal lives you know cost is sort of it's quite a nebulous idea but but I think at its most fundamental level cost is really about making trade-offs right it's about about it's about exchange it's it's about you know weighing um different things against one another um you know and and trading off opportunities uh in one domain you know or in service of one value or goal against opportunities in others and um if you can reduce the cost if you can reduce the cost of something then you can increase the likelihood that it that it might happen so for example you know in a system where scientific progress is extremely costly you could imagine that there you know the the you know the institutions the bureaucracy that you know the culture all sorts of things would be very resistant to that because you know you you if it was costly or if it was risky for example that's another form of cost but if you were to lower the cost far enough or lower the risk far enough then you can imagine that that you know progress could happen much more um much more rapidly or much more freely at any rate um if there was no cost to it or if the cost is very low and so one might imagine that in in intelligent systems at least part of what you're doing is is reducing the cost of um uh achieving certain effects in in the world right and so you know uh we do this through a variety of methods you know as human beings it's you know the intelligence that's in a human agent as compared to the chimpanzee or a dog or something like that but one way we do that is with modeling and simulation and prediction you know we're we're able to drastically reduce the cost of experimenting on the physical world around us because we just do it in our
head very quickly no risk you just imagine user use our imagination and then you know the smarter ones of us don't necessarily get killed in the process of um you know trying new things in the world around us um and so I mean that's that story though so those are two sort of ideas that just occur to me um while I was listening to you guys I don't know if any of that's helpful yeah I think this under this on a topic as hard as this I I don't know any other way to proceed than simply talking and for a long time and then hours and hours later there's an idea and then that's that's all you remember but that's uh that's the way to do things I have an interesting I have a slightly more formal take on the um on the discussion of speeding up by scaling I wonder if that's interesting it's like on one of these boards yeah so I use this one over here um I think the idea is something like um let's say you want to let's say you have a population um so I'm going to call this case one it's some kind of Baseline of scaling um let's say you have a prop uh population I'm just going to draw five dots to represent like five people but this might be five billion people I don't know um and they they produce According to some kind of stochastic process um and maybe how much they produce um follows a distribution um that looks something like this um then um if you wanted to try and increase the amount produced in reality which is going to be a sample from this distribution and you have access to an ability to scale resources then a base as a baseline what you could do is you could augment your system with another identical population doing the same things but with different random seeds so they also sample from a distribution it's the same distribution but then as an extra step you you say what was the point that was sampled here and what was the point that was sampled here and then you take the maximum of the two and that will follow a distribution that has shifted up as long as these distributions are not um the Dirac Delta so as long as I have some variance um non-zero then the maximum of the two random variables is going to be distributed according to a slightly um I slightly increased distribution so that's like a Baseline and then so one of the things Dan was concerned about is like okay well if you double the amount of scientists maybe you maybe what actually happens in reality um in human societies where this happens is that you just end up with lots more bureaucracy you end up with lots more
bureaucracy uh you end up changing the way in which people do science and it actually instead of having one popular instead of having like this kind of parallel experiment you end up with something where you have 10 dots but um they're kind of each suppressed and the overall distribution goes down um and in that case you're kind of not beating the Baseline and the question of Designing something to scale is the question of sort of can you do better than this Baseline can you find a way to integrate the two societies for example what if you imagine this was like two different planets and they were able to communicate between each other about their discoveries um is there a way you could set that up so that then the incremental progress they made on the way to the overall amount produced was kind of bootstrapped um not bootstrapped but like shared and they could build upon each other's work and that might lead to um a situation where you've scaled productivity more than the baseline or it might not because maybe they'll just find other things to do with the with the communication Channel and it won't lead to more progress I'm not sure that's like a question that you can then ask about a particular scheme that you have for trying to use the resources the extra resources does that make sense yeah yeah I think that is useful yeah so you kind of uh done I mean that's that's a form of institution design almost right I mean you could say that well why don't humans do this already with their scientists divide them into groups small enough don't let them communicate that's the kind of idea of Institutions and management and and units but in practice it's uh I suppose different planets this may be a good idea thank you yeah I think we're going to have to kind of wrap up there I'm just looking at the time um yeah I wonder if did you see those articles that I I sent you Adam that were looking at kind of quantifying um timelines for progress based on timelines for compute and and um scaling laws and so on so I wonder if over the coming weeks we could maybe shift to talking a little bit about those unless you have other ideas he doesn't like that hello Adam thank you are you Adam maybe I had to step out for a second okay that's a bit of a funny way to end the session uh but uh we'll leave Adam um I just wanted to add that the ones that you linked I think open philanthropy has a report that they use um that's well known in the AI safety community and I'm going to put the link