WARNING: Summaries are generated by a large language model and may be inaccurate. We suggest that you use the synopsis, short and long summaries only as a loose guide to the topics discussed. The model may attribute to the speaker or other participants views that they do not in fact hold. It may also attribute to the speaker views expressed by other participants, or vice versa. The raw transcript (see the bottom of the page) is likely to be a more accurate representation of the seminar content, except for errors at the level of individual words during transcription.

Synopsis


Technology disruption occurs when a new technology grows exponentially and eventually displaces the old one. AI is a different kind of disruption, as its performance per unit cost is improving rapidly. Richard Dawkins' theory of memes helps explain how technology spreads. AGI should be developed carefully and rooted in values such as benevolence and amicability. Businesses can benefit from disruption by disinvesting and divesting to reduce expenses. Disruption is inevitable and can create a whole new possibility space.

Short Summary


Disruption studies may offer insight into the asymptotics and rates of growth of quantities growing without bound. Sigmoid functions are often observed: they display exponential growth in an initial phase before inflecting and asymptotically approaching a limit. This is believed to be caused by a number of interacting feedback loops, which can switch a system from acceleration to deceleration. Phase transitions involve one stable equilibrium giving way to another as a control parameter is varied; true divergences to infinity are not observed, but some quantities become effectively infinite relative to the microscopic scale.
Phase transitions occur when a quantity effectively diverges to infinity relative to the microscopic scale, becoming macroscopically perceptible. Technology disruptions involve two trends: the rate of change of the incumbent technology and the rate of change of the new disruptive technology. The divergence between the two presents a value proposition: the new technology becomes cheaper and/or performs better, and is adopted until it displaces the old technology. Disruption often begins before a new technology is perfected: the technology is adopted despite its imperfections, and early adopters suffer through them. Balancing feedback loops and counterbalancing exponentials can also be tracked.
Humans have gradually been getting smarter over evolutionary time, but a new technology causes a disruption when it becomes much cheaper and better than the existing one. AI is a different kind of disruption: its performance per unit cost is improving rapidly, and it is difficult to predict the outcome. Technology acquires users in a two-way process, and it is hard to make forecasts beyond a disruption because it throws off existing models. AI is both exciting and terrifying because its effects on humanity are so hard to foresee.
Richard Dawkins' theory of memes posits that knowledge can spread like a virus, which helps in understanding how technology captures users to propagate itself. By analogy, the upper limit on the size of animals is set by competing pressure from viruses and bacteria; a similar countervailing force might prevent an intelligence explosion from growing far beyond human intelligence. Technology is limited by materials and production constraints, as well as by the finite but growing demand from the population. The outcome of an intelligence explosion is uncertain, but it could change the relationship to the substrate itself.
The development of Artificial General Intelligence (AGI) should be undertaken carefully to minimize the risk of it becoming antagonistic or hostile towards humans. Popular current uses of narrow AI, such as spying, advertising, gambling, and military applications, should not be used as a basis for developing AGI. Instead, it should be rooted in values such as benevolence and amicability, and trained on tasks that do not involve greed, exploitation, or winning conflicts. Running a model could answer interesting, non-obvious questions about the near-term development of this race. Since it is not possible to prevent rogue actors from progressing, the best approach is to ensure that the race is conducted safely and without risk to humanity.
Technology disruption can take many forms, from one-to-one substitution to creating a whole new possibility space. Intelligence technology is difficult to predict, but if a remarkable phase change is expected in an industry, then basic assumptions must be rethought. Satoshi Nakamoto may even have created Bitcoin to slow down AI development. Businesses can benefit from disruption by disinvesting and divesting to reduce expenses and costs. Profiting on the way down is possible in the short term, but ultimately disruption is inevitable.

Long Summary


Asymptotics and rates of growth of quantities growing without bound were discussed in the context of AI safety. Disruption studies may offer insight into processes like this, as they are a general phenomenon of control theory, cybernetics, feedback, and complex systems. Sigmoid functions are often seen, which display exponential growth in the initial phase before inflecting and asymptotically approaching a limit. It is thought that a number of interacting feedback loops cause accelerating adoption initially, until certain thresholds are reached and the rate of adoption begins to decrease.
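The sigmoid pattern described above can be sketched with a standard logistic function. This is a minimal illustration, not RethinkX's actual model; the parameter names (carrying capacity L, steepness k, inflection time t0) are conventional choices, not taken from the seminar.

```python
import math

def logistic(t, L=1.0, k=1.0, t0=0.0):
    """Sigmoid adoption curve: approximately exponential growth for
    t << t0, inflection at t0, asymptotic approach to the limit L."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# Early phase: each step multiplies adoption by roughly e**k (exponential).
early = [logistic(t, L=100, k=0.5, t0=10) for t in range(0, 5)]

# Late phase: the curve saturates toward the limit L = 100.
late = logistic(40, L=100, k=0.5, t0=10)
```

The same function captures both regimes the speakers describe: reinforcing feedback (acceleration) below the inflection point and balancing feedback (deceleration) above it.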
Feedback loops can cause systems to switch from acceleration to deceleration, although this has not been quantified effectively. Phase transitions involve one stable equilibrium giving way to another as a control parameter is varied. In disruptions, true divergences to infinity are not seen, but relative to the microscopic scale some quantities are effectively infinite. For example, when water boils, the average radius of an air bubble becomes enormous compared to the microscopic scale.
Phase transitions occur when a quantity effectively diverges to infinity relative to the microscopic scale, becoming macroscopically perceptible. In the context of a disruption, the ratio of units sold of one product to another can switch from being very small to very large, but is never infinite. The derivative of this ratio grows very rapidly as the critical time approaches, and one may track multiple such divergent quantities. Balancing feedback loops and counterbalancing exponentials can also be tracked.
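The ratio a(t)/b(t) discussed in the transcript can be given a toy form. One simple assumption (mine, not the speakers') is logistic substitution: the new product's market share follows a sigmoid, so the ratio of new to old units equals exp(k*(t - tc)) around the critical time tc. The ratio is never infinite, but it moves from near zero to very large, and its rate of change grows sharply through the transition.

```python
import math

def share_new(t, k=1.0, tc=0.0):
    """Market share of the new product under logistic substitution."""
    return 1.0 / (1.0 + math.exp(-k * (t - tc)))

def ratio(t, k=1.0, tc=0.0):
    """Units of new product per unit of old, a(t)/b(t).
    Small for t << tc, large for t >> tc, never infinite."""
    s = share_new(t, k, tc)
    return s / (1.0 - s)  # algebraically equal to exp(k * (t - tc))

def ratio_rate(t, k=1.0, tc=0.0, h=1e-4):
    """Central finite-difference estimate of d/dt [a(t)/b(t)]."""
    return (ratio(t + h, k, tc) - ratio(t - h, k, tc)) / (2 * h)
```

Under this assumption the derivative grows exponentially through tc rather than literally diverging, matching the point that real disruptions show "effectively infinite" change only relative to the earlier scale.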
Disruption often begins before a new technology is perfected: the technology is adopted despite its imperfections, and early adopters suffer through them while it is not yet fully developed or affordable. For practical purposes, the rate of change of the incumbent industry, product, service, market segment, or technology is ignored, so any pre-existing growth or improvement trends are not considered.
Technology disruptions typically involve two trends: the rate of change of the incumbent technology and the rate of change of the new disruptive technology. There is a massive divergence between the two, as the new technology improves far faster than the old technology is capable of improving. This presents a value proposition: the new technology becomes cheaper and/or performs better. As a result, it is adopted and displaces the older technology, and the market itself may grow.
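The two-trend picture, including the precondition that the new technology must reach and then pass cost parity, can be sketched as two declining cost curves. The function names and the specific rates (0.5%/year for the incumbent, 15%/year for the disruptor, a 5x initial cost disadvantage) are illustrative assumptions, not figures from the seminar.

```python
def incumbent_cost(t, c0=100.0, r=0.005):
    """Incumbent technology: slow, incremental cost improvement."""
    return c0 * (1.0 - r) ** t

def disruptor_cost(t, c0=500.0, r=0.15):
    """New technology: starts far more expensive but improves rapidly."""
    return c0 * (1.0 - r) ** t

def parity_year(horizon=100):
    """First year (if any) at which the new technology reaches cost
    parity with the incumbent; disruption requires it to go well past."""
    for t in range(horizon):
        if disruptor_cost(t) <= incumbent_cost(t):
            return t
    return None
```

With these parameters parity arrives in year 11, after which the gap keeps widening in the disruptor's favour; if the disruptor's improvement rate were too low for `parity_year` to return a value, no disruption would occur, which is exactly the precondition described in the summary.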
Humans have been getting smarter over evolutionary time, with gradual improvements in intelligence. However, a new technology causes a disruption when it becomes much cheaper and better than the existing technology. For disruption to occur, the new technology must improve rapidly and be on a trajectory to offer a far more compelling value proposition in terms of cost and quality. This may be analogous to what we face with an intelligence explosion: humanity and human institutions have become much smarter over the last thousand years.
Humankind has been getting smarter, but AI is on a trajectory to far exceed this rate of improvement. It is a technological disruption of an extraordinary kind and much less predictable than other disruptions; it is hard to say what the framework for dealing with disruptions can offer beforehand about this particular one. AI is a completely different ballgame, and its performance per unit cost, and hence its value proposition, is improving extremely rapidly.
Technology is a form of practical knowledge, and intelligent agents adopt it in order to benefit from it. When intelligence itself is the thing that is growing and disrupting, however, existing models break, which is both exciting and terrifying because the outcome is difficult to predict. Technology acquires users in the same way that memes become captivating; it is a two-way process, and it is difficult to make forecasts beyond the disruption.
Dawkins' original idea of a meme is that knowledge is viral and can propagate itself, much as viruses use hosts. This concept has been useful in understanding how technology captures and uses its users to spread itself. A body of quantitative work might predict the number of cells in an organism, but competing pressure from viruses and bacteria puts an upper bound on the size of animals. The same idea could apply to an intelligence explosion: natural parasites or some other countervailing force might prevent it from exploding far beyond human intelligence. The limitations of the technology itself set the ceiling at which growth converges and slows down, forming a new stable equilibrium.
Technology is limited by the materials available and the constraints of production, as well as by the finite but growing demand from the human population. Intelligence is a wild card, since human beings are capable of exploiting the biosphere and natural resources in an open-ended, adaptive way. It is uncertain what will happen in an intelligence explosion, but it could change the relationship to the substrate itself.
AI development is inevitable and cannot be prevented, making containment policies futile and leading to dangerous complacency. Therefore, it is necessary to accept that this is an arms race and to act accordingly. It is not possible to stop rogue actors from progressing, so the best approach is to ensure that the race is conducted safely and without risk to humanity.
Creating an artificial general intelligence (AGI) should be undertaken carefully to minimize the risk of it being antagonistic or hostile towards humans. To do this, AGI should not be trained on goals that have the potential to be destructive, such as military use, advertising, or exploiting people. Instead, it should be rooted in values such as benevolence and amicability, and trained on tasks that do not involve greed, exploitation, or winning conflicts. Popular current uses of narrow AI, such as spying, advertising, gambling, and military applications, should not be used as a basis for developing AGI.
The speaker is concerned about the development of Artificial General Intelligence (AGI) and how it is being built on the wrong foundation, such as trying to maximize returns on investment on Wall Street and Facebook's research into maximizing advertising dollars and addicting people to social media and clicks. He suggests training AGI on the best things about humanity such as literature, music and philosophy, rather than the worst. He is losing sleep over the possibility of AGI being developed before 2030, and appeals to those smarter than him to figure out what to do. The speaker is good at going beyond philosophical prognostication and building a model to extract reasonable observations about how things might play out. He suggests running a model to answer an interesting question about the near-term development of this race which is not obvious.
Technology disruption does not necessarily follow a one-to-one substitution model, but often creates a whole new possibility space. This is exemplified by the ballpoint pen, which replaced fountain pens but did not fundamentally change the way people wrote. Intelligence technology is different and new, and so it is hard to predict how it will disrupt the market. The speaker has identified a few principles to inform this prediction, but ultimately it is uncertain how intelligence technology will be used and what the market for it will be.
Satoshi Nakamoto is a mysterious figure who may have invented Bitcoin in order to slow down the development of Artificial Intelligence by five years. RethinkX uses a theoretical foundation to identify patterns: if a remarkable phase change is expected in a given industry, it is necessary to rethink basic assumptions about the industry, its businesses, business models, and strategies, which can even mean profiting on the way down.
Disruption is inevitable, but individual businesses and investors can still benefit. By disinvesting and divesting, businesses can reduce expenses and costs associated with long-term planning. This can be done in industries such as the energy sector by ceasing to invest in assets and maintenance, and still remain in the marketplace. In the short-term, this can result in profits before the new technology replaces them.
Incumbents can profit during recession or collapse by slashing costs and divesting from their industry, while investors can actively short industries that they believe are going to collapse. Companies can also become profitable by making false promises and minimizing operating costs, passing the profits on to shareholders despite knowing that they will not be able to meet their debt obligations in the future. It is important to be aware of these tactics and to warn policymakers and other decision-makers about them.

Raw Transcript


i can't think of a clever segue into our topic from this but so let's just jump straight into it uh one proposal i made was to talk a bit about asymptotics or uh what could be said in a more elementary way is uh making proper distinctions between rates of growth of quantities that are growing without bound and the context where that came up for us last week was in the ai safety seminar where we were discussing attempts to make ais which are very capable safe or aligned and if that is a process that requires human reasoning then there's a problem because human reasoning capacity whether as individuals or collectives doesn't seem to be growing like n squared let alone e to the n whereas capabilities of ais may be growing uh like one of those so that's what makes fundamentally ai safety a scary problem and i did wonder if uh disruption studies or whatever you would like to call it had any insight to offer about processes like that i mean it's a general phenomenon of control theory or cybernetics or feedback or complex systems of course right so it does seem like there might be some conceptual tools available or i was just interested to hear your kind of thoughts on that adam well unfortunately i think that that you listed the the pretty much comprehensively as far as i can tell the the examples uh in which you know we see and and draw upon you know theory and and and um uh you know real world examples for phenomena that that um display feedback that where the feedback is uh results in a complex pattern and by complex you know i mean that in the formal sense but take the most the most familiar the most standard one that i see in disruption is simply this sigmoid function right this is an s-curve of some sort and so what you have is it yes it's non-linear and yes it's complex in some manner but what we have is a you know exponential growth in some initial phase and then some point at which the that inflects into exponential decay and asymptotically pro approaches some 
limit and so you get a sigmoid function you get some sigmoid shape to some function now the the the prevailing thinking at least as far as technology's disruptions go is that there are just a number of interacting feedback causal feedback loops uh that initially the relationships between the different quantities cause accelerating adoption so an increase in the rate of adoption per you know um time step initially uh and but but that at some point past certain um thresholds in the values of the parameters that are that are you know
involved the quantities that are involved those feedback feedback loops flip they switch from being um or rather i should say the whole system switch is from being dominated by reinforcing feedback loops that drive the acceleration to what in systems thinking lingo and uh is are called balancing feedback loops that decelerate um uh and and caused the the the rate of change to slow down so i i um uh yeah this this is not something that we've quantified very effectively and i'm hesitant to try uh i guess i'm also not reaching for there's there are risks in that but yeah i guess i'm not reaching for formal treatments but more like so i can imagine that so the criticism i was laying uh of ignorant as i am of the literature on some of these topics to do with intelligence explosions and so on is is to talk too casually about infinities right to to just um to treat things as kind of static in the goals rather than processes that are revolving and to not examine into whether the gaps between things are growing or shrinking over time even if both of them are going to infinity and i didn't it wasn't immediate to me that this is a topic that's come up naturally in the context of disruption but it seems like it might ought to come up now things don't go to infinity in the disruptions that you're talking about right but uh the rate of change of some things might effectively seem to be extremely large uh if you're dealing with like a okay so a phase transition in physics or in statistics uh is a phenomena where you go from a stable one stable equilibrium to another over some period of time when a control parameter is varied it might be temperature or pressure or price in the case of disruptions like the ones we've been discussing in this seminar perhaps where you have a stable equilibrium the system's kind of locked into a certain configuration and it's persisted for a long time and then 10 years later 15 years later it's in a different stable equilibrium and in between maybe in 
practice in the real world you don't see divergences where things go to infinity you don't see that in physics either it's not like there's a phase transition when you boil water and the the average radius of a bubble of air inside the water is one of these divergent quantities it's not infinite you don't have infinitely large bubbles in your teapot but relative to the microscopic scale they are infinite right of course before you boil your water uh there are small air bubbles the radius is on the order of um i don't
know it's probably on the order of ten or a thousand atoms i suppose but it diverges to being many many orders of magnitude large and macroscopically perceptible that's effectively a divergence to infinity from the point of the microscopic scale so this is an example that i have in mind when people talk about intelligence explosions right they clearly are referring to some kind of phase transition but in in the real world you never see infinities you see something that on a microscopic scale is effectively infinite and on some other scale looks noticeable and that's how macroscopic things often become noticeable and why we notice phase transitions like boiling water or uh you know any other phase transition that comes up naturally super um saturated fluids or or anything like that so all right yeah in the context of disruption what i'm curious about is um so when you're going through a phase transition so there's some divergence so maybe it's the the rate of change of uh the ratio of one product being used to another right so that ratio switches from being very small when product so i don't know this is going to work but let's think about this as an example so if a is the unit sold of product curly a and b is the unit sold of product curly b uh well those are functions of time and you might look at a t on bt and that's for t very much less than some critical time this is small say so this is b dominates or we're in a stable equilibrium where b is preferred but for t much greater than t c this is large so this is after the phase transition has happened so a is preferred now this is never infinite right it just goes from you know maybe being nearly zero to being you know maybe 10 or 100 or something but the kind of thing you might expect is that this the derivative diverges as t approaches the critical time so at the critical time it's any given unit of time the ratio increases very very rapidly something like that and what i'm interested in is in disruptions like 
this one do you typically track just one of these divergent quantities or is it common to track multiple ones yeah that's a very good question the the and i can see a parallel there for this you know a description of an intelligence explosion but let me run you through look there are a couple things to say here um don't if you would remind me to come back to this idea of of balancing feedback loops and um you know one exponential uh uh counterbalancing another exponential let me come back to that later don't let
me forget there might be something interesting there but but for right now um uh the the the for for all practical purposes during a disruption um we we ignore the rate of change in uh for the incumbent industry or product or service or market segment or or technology so whatever the incumbent entity is whatever that is we if we if in practice we effectively assume that it's that it is um no longer uh or is not making improvements that are that continue to be relevant during the disruption now it may be that there are times when an in an incumbent industry or an incumbent business for example tries to react to a disruption by making an existing product or service better by trying to improve its quality and or its cost so that it's it's so that it's its value uh grows and and in in an attempt to remain competitive with br with for example a new disruptive technology that is that is uh posing an overwhelming competitive threat now we we can imagine or we can think through some exam brad would be the person for me to really appeal to here for historical examples because he's an encyclopedia of these things now but it would be something like um an effort to make uh film cameras a desperate effort in the late 90s to make film cameras cheaper to keep them competitive with um uh digital cameras or to make um you know a desperate bid for that that only lasts for a brief time to keep an incumbent technology uh relevant or or competitive especially in the early days when they when the disruptive technology is not perfected this is one of the strange things about disruption is that adoption of the new technology begins before it's perfected the technologies often are pretty crappy in the early days the early adopters of a new technology or bless them are often willing to suffer through uh you know the growing pains of you know the technology is not completely perfected it doesn't perform very well and it's not as affordable as it one day will be but nevertheless adoption does 
get going despite that imperfection so um like the porsche so admitting admitting that these two things are there sorry go ahead i said like the poor folks who suffer through the early version of meta you need tools yeah so i mean this is this is completely this is completely uh standard but for for the purposes of our analysis we very seldom if ever consider the um uh you know the continuation of any pre-existing growth trend or improvement trend um of the uh incumbent technology not now now this is this is perhaps a problem i mean
it it it doesn't seem to alter the overarching dynamics but when i think about it for sure in most disruptions the incumbency isn't completely you know completely uh ossified and standing still and doing nothing a lot of things are going on you know their cost structures are changing and and um they're often quite desperate measures that the the incumbency will take to stay alive a little bit longer or to profit on the way down that is actually can be a very successful strategy to to profit as the industry's collapsing that's that's certainly something that that disrupt disruptees have have successfully executed in the past but what we what we typically don't uh have in my um team's work anyway and any that i can think of um is looking at sort of the divergence of the rates of change from the incumbency which is you know got some during during these long acquiescent periods of uh where between disruptions where there's where there's some sort of stable or metastable or partially stable homeostasis of some sort where that you know that you've got some technology ecosystem it's fairly stable and maybe it's improving slowly and incrementally perhaps even linearly over time but it's it's if the change is slow um and then um maybe perhaps by an analogy you know similar to raising the temperature of of a liquid you know that that can be occurring slowly and and steadily and then you know you can that's very different than what's going on in a phase change as to you know to kind of use your your analogy from earlier but but uh so i can see the difference at least in my mind or so sorry let me back up a second if i think about it a typical technology disruption we really have two um uh two trends one is the the growth of the uh or the rate of change of the incumbent technology and one is the rate of change in the new disruptive technology and there's a massive divergence between those two and um uh the the that rate of change what i'm talking about there is really 
fundamentally about cost or economic value and the result of those of those changes the new technologies getting cheaper and or performing better but some some combination that presents a value proposition the new technology is improving way way way faster than the old technology is capable of improving and then the result of that divergence the result of that difference between the two is that the new technology begins to be adopted and displaced the older technology in uh in in a market and the market itself may grow so the
market size is not fixed that in fact overwhelmingly we see that new technologies uh disruptive new technologies grow market substantially we've seen that many many many times um but that's a that's a that's a consequence of the the new technology becoming cheaper and better much much much faster than the existing technology the incumbent and and very importantly it's common sense here obviously and very importantly the trajectory of improvement of the new technology must take it to parity and then well beyond parity in order for it to be a disruption you know a new technology could come along but if it never got as cheap and then substantially cheaper than the existing technology well then it wouldn't have a disruptive impact um so that's a that's a sort of precondition for a disruption to occur is that the new technology not only must be improving very rapidly this is what we always see but it must be on a trajectory to become much better and by better i mean offer a much more compelling value proposition in terms of um cost and quality and performance and that sort of thing then the the incumbent okay so that that's what you've got going on now the parallel that i can imagine between those two things and and uh what we are perhaps facing with an intelligence explosion is that humanity and i i mean humanity i don't mean necessarily even individual human minds i mean humanity and human institutions um i think that it's it's probably fair to say that humanity's been getting smarter and we could think about you know over the course of evolutionary time where have we had you know phase changes and where if we had sort of more slow steady incremental improvements or expansions in intelligence i don't know how you want to how we want to operationalize intelligence and distinguish that from things like knowledge um i'm not sure but i think what we could say is that is that say take for example over the last thousand years you know there was some point at which you know 
the right pieces came together and humanity starts became much started becoming a fair bit smarter collectively and perhaps even individuals i mean with better nutrition and better education and so on an individual person in 2022 is is honest to goodness smarter than they would have been with identical genes in the environment and nutritional and you know and and educational and social environment of um you know two three four hundred five hundred years ago a thousand years ago um but but certainly collectively we
have you know we have we have institutions corporations governments you know whatever you know universities um academia that are um that are themselves you know they are in a form of incumbency and they are getting smarter collectively humanity has been getting smarter and so it's not like this is something completely it's just a static quantity i think there has been a rate of improvement um but i i think this is directly analogous to the rate of improvement uh that we see in in an incumbent industry of technology and then with the disruptive technology coming in well agi would be something directly analogous to that and it is getting it's it's it's performance per unit cost and it's you know the value proposition of it um to a potential market is just it's just improving at a in a completely different you know way a completely different ballgame it's just like it's not even not even comparable it's you know much much much much much much much faster and and with that crucial caveat there that it's on a trajectory to become to exceed the the incumbency and so there's this there's this well the red the the the pieces that would normally be there for a an imminent technology disruption appear to be there for an imminent uh you know intelligence the disruption of intelligence now maybe this is maybe this is a similar phenomena maybe it really is a technological disruption of of an extraordinary kind um let me but at any rate that we seem to be in a similar sort of situation so that's that's that's my sort of approximate my my rough you know hot take that's what i'm looking for that's my hot take on it um let me put you on the spot it fits the pattern as a representative of i think x okay what good are you guys if you can't say anything useful about this one this disruption okay electric cars good for you energy good for you food yeah food's nice if the framework of dealing with the disruptions can't offer anything useful beforehand about this particular disruption uh 
what what good is it at all and what do you have i happen to i think i have to agree that that um we are in a tough spot i don't know what we have to offer here oh wait did i lose my connection okay so um my response is i think that what's happening here uh is is fundamentally different i think there's something fundamentally different than any other disruption that we have seen um uh and and and it i think it's different in a way that makes it uh much less predictable um and makes it very much harder to say
anything about, and the reason, I think, is that these are technology disruptions. What is technology? My working definition is that it is a form of practical knowledge. But knowledge stands in relation to an agent, or a set of agents, with intelligence, which is one reason I distinguished knowledge from intelligence a few minutes ago. It's one thing if a new form of practical knowledge, i.e. a new technology, emerges and the intelligent agents find it so useful that it is adopted very quickly, following the consistent patterns we've seen throughout history, and displaces older forms of knowledge that are suddenly less useful and therefore no longer compete, no longer retain a user base among the intelligent agents utilizing that knowledge. That's all fine and dandy. But when you flip the situation, and the new disruptor is intelligence itself, that breaks our models; that breaks what my team does, because now you're talking about changing the agents themselves. There's some recursive trap there, some dividing-by-zero, or some such analogy. I can't make sense of it: if intelligence itself is the thing that's growing, the thing that's disrupting, it breaks the models, it breaks my thinking. That's one reason I find this both exciting and terrifying. I honestly do not know how to think about, or make any forecast during, and certainly not beyond, the disruption, if we're talking about having fundamentally more intelligence than the agents that are intelligent right now. That completely throws me for a loop.

So it's a dual point of view: rather than people adopting technology, technology adopts people?

Well, certainly technology acquires users. I think there's a very good reason the idea of memes has become captivating and has proven itself to be a useful concept. Dawkins' original idea of a meme goes back quite a long way, and now memes are entertainment and whatnot, but the idea that knowledge is viral in some sense, that viruses capture and use hosts to propagate themselves, or at least that there are dynamics that effectively create a pattern of behavior that looks like that: yes, I think there's absolutely something to that. In this case it may be a little more literal than figurative; it would literally be intelligent agents, entities, beings coming into existence and capturing the users. But in some figurative or metaphorical sense all technologies do that to some degree, as all knowledge does.

I wonder whether there's a body of quantitative work that could predict the rough order of magnitude of animals in terms of number of cells. Suppose you're sitting back at the Cambrian explosion and you see multicellular life start to emerge. If you were a single-celled Kurzweil, you might predict that the number of cells in an organism will grow exponentially and within twenty years will fill up the solar system, or something. But that isn't what we see. There's a competing pressure from viruses, bacteria, parasites: smaller things that can nonetheless be very effective against big things, and which put a pretty strong upper bound on the size of animals. From this order-of-magnitude calculation, a dinosaur and a cat are kind of equivalent things. I don't know how many cells there are in a human body, but it's something on the order of tens of trillions, I think, and even the largest animals are only a few orders of magnitude beyond that. This is changing the topic a little, I guess, but I'm wondering whether an intelligence explosion may have natural parasites as well, or some other countervailing force, so that it doesn't necessarily explode a thousand orders of magnitude beyond human intelligence.

So in the context of technological disruption, you think about the roof, where the thing converges, slows down, and becomes a new stable equilibrium, as being dictated by the limitations of the technology itself? There's not an active countervailing force, to come back to balancing exponentials?
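The "balancing exponentials" picture referred to here is commonly modeled with a logistic (sigmoid) curve: growth is near-exponential at first, but a counterbalancing term tied to how close the system is to its limit slows it down, producing the S-curve. A minimal sketch, with all parameter values chosen purely for illustration (none come from the discussion):

```python
import math

def logistic(t, K=100.0, r=0.5, t0=10.0):
    """Logistic S-curve: near-exponential growth early on,
    saturating at the carrying capacity K (the 'substrate limit')."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Adoption over time: growth accelerates, inflects at t0, then decelerates.
adoption = [logistic(t) for t in range(0, 41, 5)]
increments = [b - a for a, b in zip(adoption, adoption[1:])]
# The increments rise to a peak around the inflection point and then shrink,
# so the curve never diverges; it settles toward K.
```

The switch from acceleration to deceleration happens at the inflection point `t0`, where the curve crosses `K/2`; nothing goes to infinity, the quantity just approaches the substrate limit asymptotically.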
Disruptions stop when they hit the limits of the technology, generally speaking: when the technology hits the limits of the physical system in which it's instantiated, in other words the limits of the substrate. Maybe this is too much through an economic lens, but the substrate, in my mind, is the system upon which technologies play out, are utilized, produce and reproduce themselves, and are used by human beings and our institutions and organizations. That substrate is determined by supply and by demand. On the supply side, the technology is limited by the materials that are available and by the constraints of production and manufacturing. It's also constrained by demand, and demand ultimately comes from the human population, so demand is finite but growing, because the human population is finite but growing. So technologies live on a substrate that has limits, and a technology can't expand beyond them without some ability to affect, change, or grow the substrate itself. A new technology that can take advantage of the substrate in a different way can out-compete it.

But intelligence is weird; it seems to have a fundamentally different character. Look at human beings: humans exploit the biosphere and all of the natural resources on the surface of the planet in a fundamentally different way from any other animal, in an open-ended, dynamic, adaptive way. Intelligence is this asterisk, this wild card, that flips the script by changing the relationship to the substrate itself. And I'm not confident we aren't going to see something very similar with an intelligence explosion, regardless of how far up the next asymptote is. Whatever limits the AGI hits, maybe it hits them and breaks through them over time; there may be a succession of S-curves, a succession of phase changes. But even if there is a sequence ahead, and not a single giant leap to whatever the universe's limit is, I can't think of a reason to imagine that the very next one, the S-curve that artificial intelligence is on right now, isn't going to reach a point where its capacity to modify the existing substrate it operates on is mind-boggling. I think this is where a lot of the concern emerges from: an artificial general intelligence could be sufficiently intelligent to obtain the capability of transforming and utilizing our planet's and our solar system's resources in ways so capable that they're terrifying.

So if I teleported you in front of a joint meeting of Xi Jinping and Biden right now, and Biden was awake, what would you say about AGI policy, as a representative of RethinkX, if they put you in charge? Saying "I don't know what's going to happen" is not good enough.

Yeah, you're right. What do we actually recommend to policymakers? I suppose I would have to say that it is neither reasonable nor realistic to presume there is any way to stop AGI development. I don't think there's any way to prevent some rogue actor from progressing on this trajectory, and
therefore recommending any policy of containment is, I think, probably futile, and would lead to very dangerous complacency. So my recommendation would have to be based on the assumption that this is an arms race, that we are under race conditions, and that this is not avoidable, containable, or preventable. Remember, you would have to prevent this, at least in principle, for however long it takes to be able to allow it to continue safely. How long is that: ten years, one year, a thousand years, until we know how to do this safely, if ever? That's containment, and it can fail at any moment in all that time. It just seems unrealistic to me.

Well, Matt has one year left in his master's and then a three-year PhD, and then it'll be solved.

My recommendation would be that, in order to minimize the risk, we ought to endeavor to create AGI under the circumstances that we can currently imagine are most likely to lead to an AGI with as little of an alignment problem as possible. You could say it the other way around: circumstances that lead to an AGI that is as benevolent, or amiable, or amicable toward human beings as possible. But probably you want to think of it the first way: make it as unantagonistic, as unhostile, as possible. And are there any ways we can think of that might distinguish pathways for developing AGI that are more or less likely to lead to hostility? For example, training artificial intelligences to achieve goals that have the potential to be destructive is probably not a great idea. You probably don't want to be running this AI-slash-AGI race by developing these systems for things like military use, or for advertising, convincing people to buy things; you probably don't want to be training these systems rooted in values like greed, exploitation, or winning conflicts. In other words, what are the main things we're using narrow AI for right now? We're using it for spying on people (I'm thinking of the alphabet agencies in the United States), for trying to sell people things they don't need (that's advertising), for gambling on Wall Street and other trading, and perhaps for some military applications. None of these seems to me a good basis for running this race, because if one of those candidates pulls out into the lead, wins the race, and becomes generalized, well, maybe that's not built on the wisest of foundations.

But you can't not run the race; that's not realistic. So what do you do? You've got to run the race, and you've got to win it with something built on a better foundation. I don't know what that foundation is, but I certainly wouldn't build it on trying to maximize returns on investment on Wall Street, and I certainly wouldn't want it to be Facebook, where most of the research is dedicated to maximizing advertising dollars and addicting people to social media and clicks; that's probably not the best foundation either. You might want to build it on some other basis. Certainly I would want to train AGI on the very best things about humanity rather than the very worst. And what are the best things? Maybe we ought to train AGI on our literature and our music and our philosophy, that sort of thing. But I come here in desperation, appealing to you much smarter fellows than me to figure out what we're going to do, because I'm losing sleep at night over this. I thought we had thirty to fifty years and it really wasn't going to be a problem; I had a whole career of mundane disruption prediction ahead of me, about solar panels and the like. I did not think you guys in computer science were going to crack AGI before 2030.
And now it seems like that's maybe a possibility, and I'm seriously losing sleep over it. I don't know what we do.

Okay, a lot of what you said I agree with, but in some sense there's no comparative advantage for you, or for RethinkX, to say it. In my mental categorization of where RethinkX sits, your skill is to go beyond this kind of prognostication. People talk about AI safety and these concerns, and it's all philosophical, and that's fine. But what you're very good at is going from first principles, actually building a damn model, running it, and extracting observations about how things might play out that are not obvious a priori but seem reasonable after the fact. So my question is: not recommendations like "nationalize Facebook" (fine, go ahead, throw it in the ocean, I don't care), but what model can you run? What is an interesting question about the near-term development of this race that is not obvious right now but is consequential? A year from now, I could imagine you telling me something I didn't know about how this race is likely to play out, or moves that could avoid the worst forms of the race, where people pour even more billions of dollars into optimizing large models toward ends we don't really want them optimized for. Where's the lever to pull, Adam?

I'm hoping to pull a rabbit out of my hat here at some point, but I have been stewing and stewing on this, and I'm not seeing any clear ways in which the lessons of past disruptions really apply here. We have identified a number of principles that are informative and, as you say, not obvious prospectively, though in retrospect they make very good sense. For example: new technologies tend not to simply replace older ones on a one-for-one substitution basis; rather, they tend to create a whole new possibility space, and as a result the market or system based on the new technologies tends to be much larger and more capable than the older system it replaces. There are exceptions. There are things like ballpoint pens that come along and really are just a substitute, and don't completely transform the way we write compared with pencils and fountain pens, even though in a product-category sense they are quite disruptive in that they capture market share. But in many other instances a disruption changes the underlying system itself: it changes the way we produce goods and services using those technologies. That's one example of the kind of insight that comes out of our work, and we have a handful of things like that. But with all of them, we would be reasoning by analogy, not knowing whether the analogies hold, and suspecting they won't, because something is fundamentally new and different about intelligence as a technology compared to all the technologies we've developed in the past. We could try to imagine: what is the market for intelligence? What is the incumbent artificial intelligence market, what value proposition does it hold, what value does it create, what production does it facilitate today, and what would a much larger, expanded market, a greater space of possibilities, look like in the future? But again, that seems like a pretty weak and not particularly useful or powerful conclusion to draw in this instance.

So here's an idea for you, my conspiracy theory: I think Satoshi Nakamoto is a time traveler who came back in time and invented Bitcoin in order to slow down AGI by five years, because the Bitcoin miners bought up all the GPUs, sank millions of dollars into them, starved the world of GPUs, and spent them running a pointless algorithm to secure an experiment in currency instead of accelerating the development of AIs.

I like it; you've got a movie idea there. Satoshi Nakamoto, the Bitcoin founder: my understanding is that the identity is still a mystery, whether it's an individual person or a group of people.

Right, there's a lot of intrigue behind Bitcoin.

Yeah, it could be the basis for a good sci-fi story. I guess we've got five minutes left, and I wanted to come back to profiting on the way down: I wanted to get you to explain what you meant by that, but I think we have a bit of lag here, so you go ahead and say what you were going to say.

I can do both of those things: I'll finish the thought I had, and then talk about profiting on the way down. The overarching formula RethinkX uses is not really a secret; it's right there in the title of our organization. What we basically do is use a not particularly mind-blowing theoretical foundation to look at phenomena that match a pattern, and then we say: if you know that a remarkable phase change is coming, or have a reasonable expectation that it's coming, in a given industry or market, then that justifies rethinking your basic assumptions about yourself, your industry, your business, your business models, your
competitors, the value you're creating and offering to users in the world, and the trade-offs you might be making. It's an opportunity to really challenge assumptions. That, at the most fundamental level, is what we do. So the question is whether there's something we can do along those lines that would be a novel insight, because a lot of people have been trying to do that with respect to artificial intelligence, artificial general intelligence, and superintelligence: there's Bostrom and his book, and lots of others, and a few research institutes. The question is whether there really are any assumptions that have not already been challenged, and any insights that can still be drawn. That completes the thought.

To answer your question about profiting on the way down: what we've seen in the past is that industries that come to terms with the fact that disruption is imminent and unavoidable... actually, it doesn't work so well for entire industries, but it works well for individual businesses and for investors. What we see is that it's possible to change the way you operate a business knowing that collapse is coming. For example, you can begin aggressively disinvesting and divesting from holdings in the industry; you can sell off assets; and you can continue to operate and provide goods or services without incurring any of the normal expenses for operations, for operational upkeep, for the purchasing of new assets that would be necessary to continue operating into the future. You can just stop all of that, eliminating all of the expense associated with believing you have a future and planning for the long term. If you're in the energy sector, for example, you could run power plants and just stop investing in the assets and maintenance you would normally need to keep running decade after decade. As a result you can slash your costs in the short term, while demand is still there, because the new technology has not yet replaced you in the marketplace. If that replacement is imminent, you can profit in the meantime by slashing your costs. This is a pattern we see: the incumbents, if they're wise, can profit on the way down, as they're crashing and burning, by slashing their costs, pulling out, and divesting from their own industry. Does that make sense?

There are actually people who are more cynical than that: investors who actively short industries that they believe are going to collapse and lose value. You can profit that way too, but that's not really what I was talking about; that's a little more trite, as it were. The craftier way that businesses profit during periods of recession, or ultimately of collapse if the recession is permanent, is to take actions that slash costs in the near term while revenues are still high, so that they radically expand their margins, and then to pull the profit out while they can. The problem, of course, is that companies can make a whole bunch of false promises and become very profitable, passing it all on to shareholders, knowing they will not be able to meet their debt obligations in the future: they know they're going to go bankrupt, leave unwary investors out in the wind, leave creditors pissing in the wind, and default on their obligations to pensioners and everyone else, but they can profit in the short term by slashing operating costs. This is something we see again and again, and we try to warn policymakers, civic leaders, and other decision makers about these dirty tricks: to expect them from the incumbency and to be on guard for them.

So we've run out of time today, but do you see the analogy I'm making here?

Where's that second... oh, let me see. Maybe we are the incumbents. [Laughter]

So that's your official recommendation, Adam? Thanks.

I tell you what, I will give this some serious thought this week, and I will see if I can pull more of a rabbit out of the hat. If suddenly someone said, okay, a week from tomorrow you're going to have fifteen minutes with Sleepy Joe
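The "profit on the way down" play described in this exchange can be sketched with a toy cash-flow model. All of the figures below are made up purely for illustration; the only point is that halting reinvestment expands margins for as long as demand, and therefore revenue, persists:

```python
def annual_profit(revenue, operating_cost, capex):
    """Profit for one year of running the business."""
    return revenue - operating_cost - capex

# Illustrative (invented) figures for an incumbent with, say, five years
# left before the disruption removes its market entirely.
years = 5
revenue = 100.0          # demand persists until the disruption completes
operating_cost = 60.0    # cost of keeping the lights on this year
capex_long_term = 30.0   # maintenance/reinvestment needed only if you
                         # expect to still be operating decades from now

business_as_usual = sum(annual_profit(revenue, operating_cost, capex_long_term)
                        for _ in range(years))
managed_decline = sum(annual_profit(revenue, operating_cost, 0.0)
                      for _ in range(years))
# Halting reinvestment while revenue lasts:
# business_as_usual = 50.0, managed_decline = 200.0
```

The same arithmetic is what makes the darker version of the play work: the margin expansion comes precisely from abandoning the spending that future obligations depend on, which is why the speakers warn policymakers to watch for it.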