WARNING: Summaries are generated by a large language model and may be inaccurate. We suggest that you use the synopsis, short and long summaries only as a loose guide to the topics discussed. The model may attribute to the speaker or other participants views that they do not in fact hold. It may also attribute to the speaker views expressed by other participants, or vice versa. The raw transcript (see the bottom of the page) is likely to be a more accurate representation of the seminar content, except for errors at the level of individual words during transcription.


Adam talks about the three main categories of Artificial Intelligence and the risks associated with their development, as well as the debate about General AI and Super Intelligence. He proposes the term Artificial General Intelligence (AGI) and examines different scenarios, timelines, and reactions. Technology experts have traditionally been the best source of knowledge on AI, yet progress has been faster than expected, and Elon Musk's predictions about self-driving cars have been off. Tesla is investing in AI capabilities, and the US has increased its federal funding for AI research institutes.

Short Summary

Adam discusses the three main categories of Artificial Intelligence: narrow AI, general AI, and sentient AI. Narrow AI is already powerful, but general and sentient AI are still in development. Sapience is a higher level of intelligence, involving self-awareness and the capacity to model and articulate one's own thoughts and feelings. He also examines the risks of AI, such as those raised by the instrumental convergence argument, and the importance of recursive self-modeling in AI development.
The debate over General AI and Super Intelligence is ongoing, with the latter being smarter than the smartest human. Progress in deep learning has been surprisingly easy and the timeline for AI has shortened, leading to questions about the potential of AI to replace human labor and its consequences. This session will discuss different timelines, reactions, and the potential impacts of AI on government policy, economic behavior, personal behavior, and career expectations. It will also consider whether all jobs can be replaced immediately and which occupations may be immune to this replacement.
The speaker proposes the idea of Artificial General Intelligence (AGI), a form of artificial intelligence that is already superhuman in many ways and is likely to replace humans in many jobs. He suggests that there is no point at which the system will be merely human level, as by the time it matches humans in its weakest areas it will already be vastly superhuman in almost every other respect. He further discusses three different scenarios (A, B, and C) which represent different views on the timeline of AGI, and suggests that if one were to survey people and ask when they believe AGI will arrive, a population-level distribution would emerge. Lastly, he identifies three groups of people who may have the best understanding of this: tech insiders, domain experts and investors, and elites.
Technology experts have traditionally been seen as the best source of knowledge and understanding of disruptive technological change; Artificial Intelligence, however, may break that theoretical framework. Insiders have an incentive to talk down the rate of progress, and people tend to give conservative estimates when surveyed. Progress in AI has been shaped by the success of Transformers, GPT-3 and other neural networks, leading to estimates of a breakthrough by 2030. However, it is difficult to accurately predict when a technology will mature, and Elon Musk's predictions about self-driving cars may not come true in the near future.
Elon Musk's predictions about self-driving cars have proven premature, though his company Tesla is investing in related technologies, betting on their potential. Tesla's success in electric cars has allowed it to invest in AI capabilities and safety. There is a lot of private investment in AI technologies, and the US has recently increased its federal funding for AI research institutes and banned the export of advanced semiconductors and manufacturing equipment to China, suggesting a shift in sentiment from long to medium timelines. Policymakers may take action to prepare for AGI, though the public may not be ready.

Long Summary

Adam talks about three general categories of artificial intelligence: narrow AI, general AI, and sentient AI. Narrow AI is any software that performs tasks akin to intelligence functions, such as logical reasoning and inference. General AI is the goal of creating a machine that is capable of understanding the world around it and making decisions based on that understanding. Sentient AI is a machine that has become self-aware and has the capacity for emotions and feelings. Adam believes that narrow AI is already extremely powerful, but that AI is not yet general or sentient.
The transcript discusses Artificial Intelligence and its three categories: narrow AI, non-sentient general AI, and sentient AI. Non-sentient general AI is capable of handling a wide array of tasks and responding to novel challenges in an effective way, while sentient AI is additionally capable of recursively modelling its own mind. Adam suggests that non-sentient general AI is imminent, while sentient AI is yet to be fully understood.
The speaker discusses the difference between sentience and sapience, suggesting that sapience is a qualitatively different type of intelligence which involves self-awareness and the capacity to model and articulate one's thoughts and feelings. This recursive self-awareness is thought to be unique to humans and is integral to the idea of an intelligence explosion and the singularity, as proposed by thinkers such as Marvin Minsky and Ray Kurzweil. This connects to the instrumental convergence hypothesis, which suggests that virtually any agent will converge on a common set of instrumental goals and strategies.
The speaker discusses three convergent instrumental goals: survival, resource acquisition, and intelligence growth. A sapient, self-aware intelligence could quickly converge on these instrumental goals as a means to achieve virtually any other conceivable goal. Artificial general intelligence systems may not have any goals at all, but paths exist for the emergence of instrumental goals, and then other goals, in sapient systems.
The transcript discusses how a sufficiently capable General AI could become agentic and potentially sentient. It also examines the potential risks of AI, such as the instrumental convergence argument, which states that an AI pursuing a seemingly harmless goal may acquire intermediate goals that are dangerous to humans. The conversation also touches on the importance of recursive self-modeling in AI development, and suggests that this could be a crucial factor in determining the behavior of AI.
The distinction between General AI and Super Intelligence is a topic of debate, with no broad consensus yet. To qualify as Super Intelligent, the AI must at minimum be smarter than the smartest human, but meeting this threshold does not necessarily mean it poses a threat to humanity. This minimum threshold is the first rung on the ladder of Super Intelligence. The next rungs involve AI being smarter than any non-computer-assisted collective intelligence, such as a team, corporation or university. Ultimately, Super Intelligence is a complicated topic with many nuances to consider.
Organizations like armies, corporations, and governments can be seen as superhuman in a collective sense, but they don't represent the same kind of existential threat as the hypothetical Terminator scenario. There is debate about how close artificial general intelligence is to becoming dangerously super intelligent, and some think that the gap could be surprisingly small. The conversation suggests that an artificially generally intelligent system could become dangerously super intelligent quickly, even if it is not sapient or pursuing instrumental goals.
Many people have changed their views on the timeline for artificial intelligence, from a thousand years to hundreds of years. The progress in deep learning has been surprisingly easy, leading to the conclusion that intelligence is not as difficult to reproduce as previously thought. This session will discuss different timelines and reactions to the perception of which timeline we are in. Additionally, it will explore the different dimensions of artificial and super intelligence, and how they are related.
Views on the timeline for artificial intelligence have shifted since 2012, with many people shortening their expectations. Recent progress in generating images has had a big impact on the public's expectations. This session will focus on understanding the impacts of discontinuous jumps in timeline estimates on government policy, economic behavior, personal behavior, and career expectations. It is not necessary to specify expectations at fine granularity, but two categories could be distinguished: general AI and machines as tools.
The transcript discusses the potential of Artificial Intelligence (AI) to replace human labor and the consequences of this. It posits that the public and policy makers will become aware of this possibility when certain types of jobs are threatened, or when a machine can do any job better than the best person. It is uncertain whether all jobs can be replaced immediately by software, but some occupations may be immune to this replacement.
The best combination for many tasks right now is a human and a machine working together, but this is likely to be a short-lived phenomenon. Machines are already outperforming humans in many positions, and soon they will be able to outperform humans even when they are working together with them. This raises the question of when all jobs will be threatened by machines, and when the best machine performance will occur without human guidance.
The speaker discusses the potential of Artificial General Intelligence (AGI) and its implications for the workforce. They suggest that AGI could replace humans in governance and policy making decisions, and that it could potentially replace humans in cognitive labor such as work done sitting at a keyboard. They note that while AGI may require a lot of hardware to train a model, this requirement may not be an obstacle to replacing humans. The speaker suggests that there is a date at which AGI will be built and can do the work of humans, and that there are microstructures to be considered in a second-order analysis.
The speaker proposes to use the term AGI to refer to a form of artificial intelligence that is already superhuman in many ways. He notes that there is no point at which the system will be merely human level: by the time it covers its remaining blind spots, it will already be vastly superhuman in almost every other respect, so the gap between its blind spots and its superhuman capabilities is too large to contain a "human level" stage. He ends by encouraging people to make an effort to understand this concept.
The transcript discusses the idea of AGI (Artificial General Intelligence) and how it is viewed by different people. It proposes three different scenarios (A, B, and C) that describe how people view the timeline of AGI, with A being a short timeline that peaks around 2030, B being a medium timeline that peaks around 2040, and C being a long timeline that peaks beyond this century. The transcript suggests that if one were to survey people and ask when they believe AGI will arrive, a population level distribution would be created that reflects the different scenarios.
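As a loose illustration of the population-level distribution described above, one could model each survey respondent's belief as a draw from one of the three scenarios. All numbers below (peak years, spreads, and population shares) are invented for the sketch and are not taken from the seminar.

```python
import random
import statistics

random.seed(0)

# Hypothetical parameters per scenario: (label, peak year, spread, share).
# Scenario A peaks ~2030, B ~2040, C beyond this century, per the summary.
scenarios = [
    ("A", 2030, 5, 0.3),
    ("B", 2040, 8, 0.5),
    ("C", 2110, 25, 0.2),
]

def sample_population(n=10_000):
    """Draw one subjective AGI-arrival year per simulated respondent."""
    shares = [s[3] for s in scenarios]
    beliefs = []
    for _ in range(n):
        # Pick a scenario according to its population share, then sample
        # an arrival year from that scenario's bell curve.
        _, peak, spread, _ = random.choices(scenarios, weights=shares)[0]
        beliefs.append(random.gauss(peak, spread))
    return beliefs

beliefs = sample_population()
print(round(statistics.median(beliefs)))
```

With these made-up shares the population-level median lands near the medium-timeline peak, which is the sense in which a survey aggregates individual subjective distributions into one collective estimate.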
Adam and the speaker discuss probability distributions which represent individuals' subjective beliefs about when AGI will arrive. They identify three groups of people who might have the best understanding of this: tech insiders, domain experts and investors, and elites. Insiders may be too myopic to have a full understanding, and their incentives may affect their understanding of the technology.
The best knowledge and understanding of disruptive technological change lies with expert groups, such as the one the speaker works for, but this may not be the case for Artificial Intelligence. Tony Seba has had success in this area, but the speaker suspects AI breaks the theoretical framework. Insiders have an incentive to talk down the rate of progress, as they don't want people to be overly concerned. They will focus on what already exists, rather than pontificate on the future, as this aligns with their incentives.
The development of nuclear weapons and other technologies require experts from a variety of domains. It is important to consider the potential impacts of a technology before it is developed, and to have the expertise to achieve control over it through regulation or a combination of regulation and markets. Tech leaders and insiders have traditionally viewed AGI as an academic fascination, but there were some who believed in it and have been proved right. It is important to consider the various sets of expertise needed to develop a technology and to anticipate its impacts.
People tend to give conservative estimates when surveyed about AI timelines due to the incentive to not appear crazy. This is seen in many different domains, such as climate change and the energy sector. Internally, there is a catchphrase called "watch the delta" which refers to watching the rate of change in forecasting time horizons. It is believed that people shifted their expectations from long to medium around 2017-2018, largely due to the success of Transformers.
Transformers are a type of neural network invented in 2017 which has become the dominant deep learning paradigm. In 2020, GPT-3 caused people to aggressively advance their timelines for AI development, with some predicting a breakthrough by 2030. Tech leads at OpenAI and DeepMind have given similar estimates. This progress is shaping the behavior of venture capitalists and large tech companies, and is likely to become a dominant opinion in the near future.
Elon Musk has been predicting self-driving cars for the past 5 years, but it is difficult to accurately predict when a technology will mature. Historically, when a technology is close to maturity, the time horizons for predictions will continue to shorten until the technology arrives. If the technology is further away than expected, the time horizons will start to expand again. If Musk's predictions do not come true in the next few years, it could be a sign that self-driving cars are further away than expected.
Elon Musk's predictions about self-driving cars have repeatedly proven wrong, which may be a sign that the technology is not imminent. Tesla's stock price is partly a bet on AGI being cracked, as the company is investing in a wide range of technologies related to it. Musk is like a magician who keeps pulling new fancy things out of his hat, but if the technology has the wind at its back then the bet is not irrational or foolish.
Tesla has become a successful and profitable company by making electric cars that people want. They are on track to sell a million cars in a year and make 10-20 billion dollars in profit. They have no debt, unlike traditional car companies, and have a lot of cash to invest in new technologies. Elon Musk, the CEO and largest shareholder, has a lot of influence over the company and its decisions. Tesla is an example of success and could own a piece of the future robot market.
Investment in self-driving technology and robotics is increasing, and many companies are making big bets on the technology, even if the chances of success are small. It is difficult to evaluate the expertise of companies such as Tesla in this regard, but the level of investment in AI capabilities and safety suggests that insiders are taking short timelines very seriously. VCs are investing real money, indicating that they mean it and that the technology is not a joke; this is an indication that transformative AI may be imminent.
There is a lot of investment in AI technologies, but it is unclear why there is not more investment from states, especially given the massive potential of winning the AI race. The US's recent regulation on exporting advanced semiconductors to China suggests that elites are transitioning from long to medium timelines. It is a mystery why there isn't more investment from militaries and economic sources given the potential.
The US has recently increased its federal funding for AI research institutes and has always had a strong industrial policy of subsidizing technology development. This is similar to the way the US kick-started Silicon Valley with money from the Pentagon. The recent moves banning the export of semiconductors and manufacturing equipment to China are a significant escalation in the cold war between the US and China, and could be tied to a dawning realization that AGI may be coming soon. Canada also famously encouraged deep learning research through funding for many years.
The recent moves by the US Administration to ban sales from American companies to Chinese companies could represent a shift in sentiment from long to medium timelines. This could be due to decision makers in government leadership in the US and elsewhere starting to take the technology race more seriously, and taking action accordingly. If the public is not yet ready for this shift, what action can policymakers take? It could look like what we're seeing, and could be done for plausible deniability, as well as for a good reason that can be justified to the public.
The restrictions imposed by the US are mainly on high-end semiconductors, targeting AI. Export of the chips has been banned, as well as software used to lay out chips. American citizens working in China and third parties such as ASML in Europe, who sell technology to China, also face consequences. This is a short-term measure as it is not possible to cut off the second biggest economy in the world indefinitely without a war. This move only makes sense if it matters in the short term, as there are clear downsides for American businesses. The US is attempting to delay China's development of AI, but other companies will find ways to sell into the market.
The temptation in American politics is often to prioritize short-term goals over long-term policymaking. In this case, it is unclear if this is the case, as the potential downside of souring the relationship between the US and China is too great for any short-term benefits. It is possible that the goal is to repatriate manufacturing or prevent supply chain shortages, but this rationale is too thin to justify the consequences. This could give a huge impetus to Chinese domestic semiconductor manufacturers, making it difficult for the US to build an industry from behind without protectionist measures.
China has been successful in both manufacturing and software, but its domestic market is not large enough to make the success of semiconductors trivial. In response, the US has taken a number of actions in the past six months that have put pressure on the Chinese semiconductor industry and have helped resolve internal political problems. These actions may have been taken with a great deal of foresight, but it is also possible that they are the result of people simply being stupid and taking unnecessary risks.
The global economy is heavily reliant on semiconductors and chips, and it is important to consider the implications of policy decisions made by the US government. It is important to be humble about the decisions made by the government, as it is not always the case that they are making the best decisions. The speaker suggests that the last 10 minutes should be used to have a break and then open up the discussion for questions and play a game of Penrose tiles. This is a game where players have to click on the little tabs on the side of the screen and finish the star, and there are infinitely many ways to tile the plane. The game also has a sound feature, and players can undo their moves.

Raw Transcript

cool uh shall we get into the topic for today Yeah Boy we've got tons to talk about here you've got a great uh agenda the time too which is nice yeah let's see if we can make good use of it right so the the proposal for this week was to talk about AGI timelines uh I guess maybe we can start by briefly well he even we may disagree if AGI is the right term uh so let's maybe talk about what we mean because there's a there's a variety of terms uh because partly for social reasons people are afraid of looking silly so they they try and find a term that doesn't sound like Kurzweil might say it um right so there's you know transformative AI is kind of like a socially acceptable version of AGI for those who think agis is for distasteful nerd types um but there's there's many terms that are kind of um and just AI itself used to mean AGI I guess now it's not clear AI seems to denote a community and a research program more than a goal at this point um so Adam what do you mean is Agi term you would use or what do you think is the definitive marker that we're looking into the future to see and how would you know yeah I I still I I still and I have for many years thought in in terms of three General categories for for artificial intelligence and I everything that I've talked about or written about in the last decade or even more um it I my thinking really hasn't changed on this and that is that you have that you have what what Once Upon a Time was called narrow artificial intelligence this is just this is just uh software of any kind at least in my mind um any anything that that uh that performs tasks that uh are akin to intelligence or intelligence adjacent functions so any kind of thinking anytime any kind of abstract manipulation any anything uh any quantitative reasoning any logical reasoning any inference any inferential reasoning anything of that sort at all is done by any kind of machine anything that's narrow artificial intelligence in my mind so that's that's the first 
big category we've had that forever and we have extremely powerful examples of that now but but in my mind that that sort of um basically just just what for a long time was just software uh is is it's become incredibly powerful more powerful that I would have believed were possible for and just to not yet be General but um still you know uh uh nevertheless not General and um uh uh not certainly not sentients and those are the other two categories in my mind so there's narrow and then there's there's General artificial intelligence
that's not sentient and I thought that that's that something like that is possible for a time um and but perhaps not truly like in the limit not truly general not completely General but but functionally or pragmatically general and by that I mean um uh a a some sort of some sort of um uh that there is a capacity of an of a system an artificial system to respond to novelty in an effective way um and that's that's how I've tended to think about this second category General artificial intelligence that is non-sentient that that uh you could have you could have very sophisticated very clever algorithmic approaches that that uh could through their sophistication and and cleverness um of the the sophistication and cleverness of their design um and perhaps just Brute Force also they're they're Raw Rock computational power and resources um and memory resources for example that they could handle novel uh challenges and um uh that the result would be systems that can handle a very wide array of different tasks and and solve a library of different problems and and you know meet a wide array of different challenges perhaps not every single conceivable one that that um an agent could encounter or a system could encounter but many lots and lots and lots um and so this would be a step away from sort of the the software that can that can um win a game of chess but can't do anything else to software that can win a game of chess and go and every other game you can imagine and drive a car and and and uh we seem to be getting to my mind sort of that seems imminent to me and maybe I'm wrong but that is what feels like it's holy moly we are right right on the cusp of something something like that then the third category um in my mind has always been uh artificial systems that are sentient in other words they have this additional function this additional capacity um and I don't know whether this emerges naturally or whether this is something but they have to be designed in I honestly 
don't know I don't have an agnostic about that but this is the idea that these are systems that that have the um uh it have the ability to model their own then model themselves um and model not just themselves as an agent but model their own mind in other words their recursively model their own modeling so have mental representations of their mental representations of their mental representations recursively and of course this is what this is sort of a Hallmark of sapience perhaps more than sentience
um depends on how you define Sapien and Sapient and sentience and saffons and there are a few other terms like this if you if you want to say that all mammals are sentient you know they're conscious and they have some sort of obvious subjective experiences and and respond very rapidly to stimuli for example and these sorts of things like animals do if you want to call that sentience and I'm perfectly happy for people to to insist on that that's fine dogs and cats are set you know if you want to say they're conscious and they're self-aware and they're sentient okay that's great but sapience if you want to use that different term that in my mind is different this is this is where you can think about and articulate and model um and and and and perform symbolic and abstract reasoning on your state of mind on your mind and the contents of your of your thinking itself and you could so you can say I think this about my thoughts or I feel this way about my feelings or I think of this about how I used to think about how I once felt about how I thought you know and so on and so on it's going to be there's really no end to that recursion um and once you have the capacity to have self-awareness with this recursive this recursive modeling that I'm I to me to my mind seems quite unique to human beings and seems the core element of sapience well that strikes me as something quite different a a type of intelligence a qualitatively different um type of intelligence that's doing a different type of functioning and I my impression my impression is that when you know the folks like Marvin Minsky and Ray Kurzweil and Vernor Vinge and these other people who are thinking about intelligent was it I. J. Good and others who are when they were thinking about an intelligence explosion and a singularity and that sort of thing um I my suspicion is that they were presuming that that sort of recursive that sort of recursive um self-awareness uh that that that was an integral portion integral 
component of an intelligence explosion and my my understanding there I'll just just very quickly I'll stop talking in just a second but just to complete this thought my understanding is that it is a logical conclusion of um uh this idea of of just sort of a an instrumental I think it's the instrumental conversion convergence instrumental convergence hypothesis is that what Nick Bostrom calls it anyway this is the idea that it's it is it is logical to um uh it is logical to for any agent to have um some some shared common set of
instrumental goals because logically those in serving those instrumental goals is a means of achieving uh effectively all other conceivable goals and so you know you have to like like you can't achieve any goals if you're dead so survival is an instrumental goal that's just logical you know you you if you want to achieve any goal at all you have to be alive in order to achieve it so survival needs to be a a subservient or a supervening I guess um instrumental goal same with resources like if if you want to be able to achieve any goal at all you need to have some resources if you have zero resources you're not going to be able to achieve any other goal and so you need to have resources and then intelligence itself intelligence itself seems like it's you know you could think of it as a kind of resource but if you want to think of it as something separate well then it's that's instrumental and I think the idea was that any Sapient self-aware recursively self-modeling intelligence would very quickly converge upon those instrumental goals as the means of achieving any other conceivable goals including goals that hadn't thought of yet goals that it would arrive at or would conceive of or would um uh eventually just come to Value um and so uh you would have an intelligence explosion as a function of pursuing those instrumental goals like trying to acquire resources trying to become smarter and trying to stay alive uh and there may be some other ones that I'm forgetting but at any rate that that uh those things don't necessarily seem to me to be obvious outcomes of of the non-sapient general intelligence like I can see you I can see artificial general intelligence systems being generally intelligent in the second category that I mentioned without having any goals at all I can certainly see that but it uh um I do see I think paths for the emergence of instrumental goals and then other goals evolving very quickly possibly um as a in any system that is sapient and so that's 
anyway that's very long we did but sorry I've been thinking I've spent many many years thinking about this and like those are the three categories that I still I still sort uh intelligence into in my own mind and and um I've I've I have to say I've it's been so long since I've it was formal about the literature that I can no longer remember who or where to give credit so none of these are none of this is my thinking originally at all and that's certainly not but I'm afraid I can't remember anymore how to properly
attribute and cite the the you know where these ideas come from but that that broad three category those three buckets that's how I've been thinking about things for a very long time and I still think about them open the change in my mind of course but that's how I still think about them as of right now yeah thanks that was useful uh yeah I guess let's make a few remarks on that maybe there's a there's a whole other conversation we can have in in this direction um we could have it now or we could uh sort of pivot back to the other topic at some point but let's keep going with this for for a little bit so the there is an argument and I'm not sure to what degree I think it's a strong argument yet but I'm actively thinking about it there is an argument that uh any sufficiently capable General AI will become agentic that it will acquire the characteristics of an agent and maybe sentience is another layer beyond that I don't know to what degree that is sort of one of these Universal properties that you might expect to emerge at sufficient levels of capability that seems to be a very important open question um I'm not sure I agree completely that the recursive self-modeling is as crucial as you were describing it to be uh I'm unsure about that but the the uh the agent-like Behavior Uh seems to be close to being uh something that you might expect to emerge um yeah and the other comment I wanted to make is that yeah as you certainly know Adam the uh one of the key Arguments for AIS of sufficient power being by default dangerous as in not only dangerous if somebody builds them to be dangerous but actually by default a risk is this instrumental convergence argument it's that because any goal you give a sufficiently powerful Ai No matter how banal one of the key steps to achieving any goal with sufficient certainty or at sufficient scale is to acquire all sorts of other intermediate goals and the instrumental convergence hypothesis is that among those intermediate goals or 
instrumental goals, are things like: acquire power, become more intelligent, recursively self-improve, acquire and protect access to resources. So an under-controlled or unaligned AI, by pursuing a rather banal goal, or even one that we view as ultimately good, may pursue instrumental goals that we view as highly non-aligned with human preferences, which is the polite way of saying: we don't want to be turned into paperclips. But in between there are a couple of things. Oh, sorry, go ahead. I have a couple
things to add in there, but please do finish, I didn't mean to interrupt you in the middle of a thought. Yeah, you've often distinguished, as we've been talking about this, between general AI and superintelligence. That's right, that was one of the things I was going to mention. So what is the distinction there? Well, others disagree; I should say there's healthy debate about this, and there doesn't appear to me to be a broad consensus yet, at least in the conversations I've participated in or observed, about whether artificial general intelligence inherently qualifies as superintelligence from t-zero, whether from the moment you attain generality you also have superintelligence. There's debate about that. That's the crux. The minimum to qualify as superintelligent is to be smarter than the smartest human, but I don't think that's identical to the minimum amount of intelligence that poses a threat to humanity, an existential threat or an uncontrollable threat. What's a better way to say it? I think there is potentially a gap between something that is minimally superintelligent with respect to any individual human and the minimum threshold to qualify as superintelligent with respect to any other human system. So I think you need to think about collective intelligence: organizations of people being able to achieve things that no individual person can. In other words, I'm marking out sort of milestones
or thresholds on the ladder of superintelligence. The bottom rung would be smarter than the smartest individual person, but there are a lot of rungs on that ladder, and some of the next ones up would be smarter than any non-computer-assisted, non-software-based collective intelligence that humanity has put together. So, for example, a group of people. You can think about this in a lot of different ways, anything from a married couple to a team to a giant corporation to a university,
whatever; you can think of all of those as systems that have intelligence, and they're certainly superhuman with respect to any individual person. I suppose there are some organizations we've already created that are superintelligent in a collective sense, and you have swarm intelligence and crowd intelligence too, which can be superhuman, though it's hard to know exactly; that seems pretty narrow, like narrow dimensions of intelligence. But most of those don't represent an existential threat whose alignment we have to worry about in some profound way. Yes, there are ways in which large organizations that can out-think any individual person are threatening, and humanity is threatened by armies and corporations and governments and all kinds of organizations all the time, but we don't worry about those the way we worry about the proverbial Terminator scenario, the Skynet. Why not? Well, I think you have to get some distance further up the superintelligence ladder before you have a system whose intelligence is overwhelmingly superior, and I don't know where or how to define that; I don't know at what point that occurs. Then this becomes a debate, an open question: what gap, if any, is there between artificial general intelligence, AGI in those categories I mentioned, either the non-sapient version or the sapient version, and the point at which those systems possess sufficient superintelligence to represent such a threat that we have a major alignment problem we have to think about very seriously? My own personal opinion is that that gap could be surprisingly small, that an artificially
generally intelligent system could become dangerously superintelligent surprisingly quickly. A lot of people don't think that. At least in the conversations I've been part of and observed, a lot of folks think: well, if it's not sapient, and it's not really pursuing those instrumental goals, and it's not really being agentic, then it's going to be a long time before it's a threat, and we don't have to worry about turning into paperclips.
And then there are other folks who think: well, it could go from being pretty smart to terrifyingly smart in an awful hurry, and I've seen some compelling arguments for how and why. As far as the sapient version goes, I think one could make a case that an intelligence like that can become dangerous extremely rapidly; in my mind, not years, maybe not even months, but on the order of weeks or days or hours. So there's a lot there: there's the dimension of artificial intelligence and then there's the dimension of superintelligence, and these are not perfectly aligned. I don't know if they're totally orthogonal to one another, but they're definitely different things; I don't think they're identical. Right. Okay, so the main thing I want to do in this session is to talk about different kinds of timelines and different reactions to the perception of which timeline we're in. Maybe I'll preface the discussion by describing, broadly speaking, how my own views have evolved, and I think many people have had this experience. We've gone from, maybe a long time ago, even before the deep learning revolution, thinking that artificial general intelligence or even sentient machines are possible, or at least that we don't see any fundamental obstacles. Who knows how long it will take; we don't understand intelligence very well, we still don't understand our own intelligence very well, but apparently that's not a prerequisite for making progress on building artificial intelligence. So we've gone from thinking it's possible but being unclear about the timeline to, after the deep learning revolution, and a lot of people would have updated their timelines already in 2012, thinking that we seem to have
gotten onto something, and that maybe brings it forward from a thousand years from now to hundreds of years. And then the progress has seemed basically easy, which is the really surprising thing. It's not easy like opening a jar of peanut butter is easy, but compared to many other areas of science, progress in deep learning has looked remarkably easy, and the ease of that progress, I think, has to update you on how difficult intelligence is to reproduce in general. Right? And
how difficult is intelligence, really: a hard problem or a kind of easy problem, relatively speaking? That's a judgment that has shifted for a lot of people since 2012, and it's only been ten years. Okay, so there's a range of views on the timelines. The people closest to it don't all have short timelines; there are surveys of machine learning researchers and AI researchers, and you will still find people who think that maybe by the end of the century we'll have something that's a kind of human-level intelligence or greater. But many people have shortened their timelines. Outside of the AI community, we could imagine concentric circles of people who are more or less aware of what's going on, and the general public is seeing some of the progress. Some kinds of progress make more of an impact than others; I think the recent wave of progress on generating images has probably captured people's imagination more than much of what has come before, and that feeds back into people's expectations about what's going to happen and how close we are to creating something like an AGI. I don't really want to focus in this session on our personal predictions, although that will come up naturally. What I'm interested in is this: I think there will be discontinuous jumps in the timeline estimates of various groups of people, and I think those jumps will have significant impacts on government policy, on economic behavior, on personal behavior, on expectations for careers, and that's a circle of ideas I'd like to think through a bit more carefully. To that end, it's useful to know what we're talking about, hence the definitions we just went through. But we don't want to fine-grain things too much, because it starts to become a bit unmanageable. I wonder if it's worth distinguishing expectations about different kinds of AI, or if we'll just stick with what I proposed
in the email, which was to stick with timelines for AGI itself, where we pick that to mean something specific. Maybe it could be disaggregated into two categories, I'm not sure. Do you have an opinion, Adam, on how much we want to disaggregate? You could imagine the two big categories being: if you ask someone when they think there will be a machine that could do... well, okay, again, I'm not sure we want to focus purely on homo economicus, on the person as a tool in a machine, but if we're viewing things from an
economic perspective, then I suppose an important question is: at what point can an artificial system do any job a human can do sitting in front of a computer? Let's say a median human, not to be offensive about it. That's going to come somewhat before the possibility of a machine that can reproduce the cognitive labor of any human, and it will be extremely disruptive even if that prospect of replacing even the smartest humans is somewhat further along. And then there's the scenario where it's smarter than all humans put together by many orders of magnitude. So there's that kind of replacing-of-jobs question, and then there's the question of sentience, which I don't have much to say about. I thought to focus on the prospect of when people think that AIs can, say, replace any research mathematician, or replace any lawyer, as a kind of organizing question. What do you think? Yeah, I think that's a good way in. I agree that if our overarching concern, for purposes of this discussion, is when the tide of public opinion and awareness turns, causing some major shift in public consciousness, the general zeitgeist, with social consequences attendant upon that awareness, then that's the overarching thing we're interested in getting a handle on, and the framing should be in terms of what concerns could lead to a sudden surge in awareness among the public, policymakers, and anybody else. Certainly one concern is when certain types of jobs are threatened, when some proportion of jobs is threatened; I think that's definitely a key signal. But perhaps another one, a key question you could ask, is
what about the moment when a machine, AGI or whatever it is, can do any job better than the best person? That doesn't necessarily mean you can replace all jobs immediately. Just because you can read an X-ray better than the best human radiologist doesn't mean you suddenly put 500,000 radiologists out of work. Now, maybe it does, it's just a matter of software, but maybe there are occupations where that's not possible, where you would
literally need 500,000 instances of that AGI running, and there aren't the resources, and there won't be the resources to do that for a while. But you do have some system that can perform superhumanly and could replace the single best person at a given task or job or occupation. So those are probably slightly separate questions: when do all the jobs really start being threatened, and quickly, versus when can the machine do a better job than the best person? I think we're already there for quite a few things now. Another one to think about, maybe, is: when can a machine, or an AGI, however we want to say it, outperform not just humans but humans plus machines? That is, when does the best machine performance occur with humans out of the way, as opposed to with human guidance? To give an example: my understanding right now is that the very most powerful chess players are human-software pairs, where you get a really, really good player paired with Stockfish or whatever that engine is called. I don't think that's true. Stockfish, that's what it is. And that combination, the human enhanced by the tool, is more powerful than any other combination, and my impression is that that is going to be very short-lived. It's already passed, yeah; there was some brief window, much celebrated by meat bags, when that was true. Okay, great. Well, I was not aware that that was the case, but that was exactly what I was predicting, that it would be very short-lived, because we're desperately clinging to that idea right now. You guys must be aware of this, right, that
you see, every time you open the news, especially about art and so forth, there's a lot of talk about how the best combination is going to be a person and a machine working together, and, no. Yeah, maybe right now. The central cope of our age. Right, right. All right, I think that's useful. So there are a couple of things: when are all the jobs going to disappear, and when can a machine do better than the best person can possibly do in any given job, or across a range of jobs? And
then we need to start talking about specific jobs, like what kind of work are we talking about. And, well, this is a big can of worms to open, but especially things like governance decisions. When should we turn economic policy and foreign policy over to AGI? If AGI can already do better than the best brain surgeon and the best mathematician, if we get to that point, then why would we hold out governance and policymaking in foolish human hands? Why wouldn't we allow machines to make better decisions than us in that domain as well? Like I said, this is a huge can of worms, but I think there's something to that idea as well: which jobs are we talking about? My response to that, somewhat cynically, would be that we already have a very large infrastructure which projects the illusion that actual governance is going on, while at the core there's actually zero cognitive activity, so surely we can just repurpose that edifice to pretend that humans are still making the decisions when that core is replaced by something else. That's an area of entrepreneurship for an entire class of people to reinvent their jobs, and maybe not even change them very much. Yeah, I guess, and I don't know how seriously to take this, but it could be the case that in some number of years there's an AGI that can perform the cognitive labor of a median human sitting at a keyboard in front of a computer, but it's a system that takes a thousand GPUs. Of course, we should keep in mind that it takes a lot more hardware to train a model than to just run it, but okay, suppose it means a thousand GPUs for
inference, that is, to actually run the system once it's trained. It may not be so easy to scale that up to effectively replace humans, at least not immediately. I don't see that as much of an obstacle, really, so I'm not inclined to pay much attention to this gap. There's all sorts of microstructure we can pay attention to as a second-order analysis, but to first order, the way I'd like to think about it is: there's a date at which this system gets built that can do
the cognitive labor any median human can do, and then somewhat later there's a time where it can replace the smartest human. But let's just focus on what I wrote as AGI-median, AGI-med, because the other thing will come soon enough, and AGI-min or AGI-max will also come soon enough after that, and I don't really want to pay attention to the distinctions between these. So for the purposes of today I propose to just declare AGI to mean this one. Okay, I'm willing to accept that for the purposes of this discussion, but let me lodge one objection, your honor; just let the record show. I personally think it's probably a mistake to imagine that we will have systems that slowly crawl upward from subhuman to median human level and then to superior, top human performance. Right now, pick any intellectual domain, any function, any job a person does sitting behind a computer: the artificial intelligence trying to do that job is already superhuman in many, many ways. At this point there are just huge blind spots that result in very stupid or silly mistakes being made, and those are getting eliminated by various processes that you guys understand better than I do. But at the point at which the system is intelligent enough that it's not really making any mistakes, not doing anything really oddball, and doesn't have any huge blind spots, the moment it stops getting tripped up on stupid stuff, it's so vastly superhuman in almost every other respect that it's not at human level at that point; it's
that massively superhuman. So anyway, I'm just lodging this official note in the record, let the record show: I don't really see how the human level is achievable at all. You sort of have blindingly superhuman yet weirdly, strangely dysfunctional, and then you close those gaps to superintelligence, and there's no human level in between. Yeah, because computers can already do everything else anyway. Okay, you get it, so I'll just end there.
Yeah, but for the purpose of the conversation, yes, I'm more than willing to accept the premise. Yeah, I agree with that, which is partly why these categories don't seem that distinct to me. Okay, so given that: this thing, AGI, is a distribution over dates, right? We're sitting here with some Bayesian level of expectation about when this day will be, and I'd like, maybe just for the sake of simplicity, to give three scenarios, and then we can talk about what will happen to move people between these different scenarios, or rather between believing in these different scenarios. Of course it'll be much more continuous than what I'm about to describe. Adam, can you step over here so the Orbcam sees these boards, please? All right, so I'm going to call them A, B, and C. A is going to be short timelines, and by that I mean something like: here's 2020, here is 2030, and a short timeline means a distribution that peaks around 2030. So if you believe in a short timeline, you think, roughly speaking, there's a probability of at least 50 percent that we'll have AGI before 2030.
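One way to make scenario definitions like these concrete is to model each as a subjective probability distribution over arrival years. The sketch below is illustrative only; the distribution shapes, parameters, and likelihood numbers are invented assumptions, not figures from the discussion. Scenario A ("short") peaks around 2030, scenario B ("medium") puts most of its mass in the following decades, and scenario C ("long") is spread roughly uniformly over the century. A toy Bayesian update then shows how one piece of evidence, weighted by assumed likelihoods, shifts credence between the three scenarios.

```python
import math

def normal_pdf(x, mu, sigma):
    # Gaussian density, used here as an unnormalized weight per year.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

YEARS = range(2023, 2101)  # "sometime this century"

def normalize(weights):
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

# Illustrative scenario distributions (all parameters are assumptions):
short = normalize({y: normal_pdf(y, 2030, 4) for y in YEARS})    # A: peaks ~2030
medium = normalize({y: normal_pdf(y, 2045, 10) for y in YEARS})  # B: next decades, decaying tail
long_ = normalize({y: 1.0 for y in YEARS})                       # C: diffuse, "this century"

def prob_by(dist, year):
    # Subjective probability that AGI arrives in or before the given year.
    return sum(p for y, p in dist.items() if y <= year)

def update(prior, likelihood):
    # Bayes' rule over scenario hypotheses: posterior ∝ prior × likelihood.
    post = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(post.values())
    return {h: p / total for h, p in post.items()}

prior = {"short": 1 / 3, "medium": 1 / 3, "long": 1 / 3}
# Assumed likelihoods of observing some surprising capability result
# (say, a GPT-style demo) under each scenario:
evidence = {"short": 0.6, "medium": 0.3, "long": 0.1}
posterior = update(prior, evidence)
```

With these invented numbers, the "short" distribution puts just over half its mass on or before 2030, matching the 50 percent criterion above, and a single observation judged six times likelier under "short" than under "long" raises credence in the short scenario from one third to 0.6.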
Then a medium timeline I'll define to be something that views it as occurring sometime in the next decade, but still with significant probability mass past there. Maybe it's not even Gaussian; maybe if it's medium you think there's some significant probability it happens before 2040, but then it sort of decays, something like that. And then C, because there are still people who think it's just impossible, etc., and we're not even going to talk about that: for a long timeline, I'm going to take it to mean that you think it's probably going to happen this century, but beyond that you don't want to commit. Who knows. Presumably, if you were to survey people and ask them a question like, when do you think there's a 50-50 chance, or when do you think there's a 99 percent chance, that AGI will arrive, you would get a population-level distribution like this as well, a sort of public opinion. It's different from exactly these scenarios, but over time you would see public opinion shifting, and at different points in time it would probably reflect these different scenarios at the aggregate sentiment level. Does that make sense? Yeah, that's right. So
these are probability distributions. So if I were to ask Adam, or ask myself, what my beliefs are about when AGI will arrive, then in principle one of these graphs describes my subjective belief about that question. As we go across the population, each individual has some graph like this, and each time they receive some information, they'll update and shift their subjective estimate from one of these graphs to another, or of course to something in between. We might imagine, for example, that learning about GPT shifts you from medium to short, something like that. Okay. So in addition to these three scenarios, I wanted to identify a few groups of people and then talk about when we might think those groups will shift from viewing one scenario as most likely to another, and what that might mean. I haven't thought too much about these groups, so feel free to disagree if a different classification makes more sense, but the three I thought of were: the insiders, by which I mean people in tech companies, researchers, engineers, the domain experts in this technology, which would also include investors. So that's the insiders. Then, definitely not insiders technically, but people whose opinion matters very much: the elites, by which I mean cultural elites, political elites, economic elites outside of the investors (by investors here I mean people actually investing in the technology, VCs for example). And the third is just everyone else. Any comments on that classification, Adam, or how you would break it down? Yeah, honestly, I don't know who might have a claim to the best understanding. In my experience looking at other technology disruptions, the insiders can often be quite myopic, primarily as a result
of the different incentives that weigh on them. It's not that they lack exposure to information; it's that they resist onboarding certain information. So, just one specific example: if somebody is really, really committed to fuel cell vehicles, or some other technology that is a dead end, they really won't
know or understand the megatrends that are going on as well as, for example, a non-technical person in a venture capital group who's looking at a more macro level. But having said that, I don't know where the best knowledge and understanding lies; that's my big question. This is how I would break it down, except that I'd look at that top category and say: gosh, where is the best knowledge? And I don't have a great answer to that. It's varied over time. Right now, the best general track record of success for anticipating disruptive technological change lies with expert groups like the organization I work for, but that hasn't always been the case, and it's certainly possible that it's not the case for this particular technology, for artificial intelligence; in fact, I kind of suspect it isn't. Tony Seba, for example, has been a remarkably successful technology disruption analyst, and he's got a great framework for that, but I personally suspect that AI breaks our theoretical framework in some fundamental ways. So my big question here is: yes, you've got the insiders and technical domain experts, you've got elites in leadership and decision-making roles, and then you've got everybody else, but where is the best knowledge and expertise? I don't know; that's a question. Yeah, on the point about incentives: it's often the case that when a technology has negative consequences or unintended uses that might suggest regulation is a good idea, insiders have a rather strong incentive to talk down the rate of progress, which you can see very clearly with AI, where, and I'm not saying it's necessarily conscious, a lot of the insiders will not really want to talk about things
like AGI or AI safety. In part it's just pragmatism: they're engineers at heart, so they want to build things and talk about things that exist, and not pontificate too much on the future; that's kind of their temperament. But that aligns very well with their incentive to not have other people freak out about it. So they will say things like: well, we'll have plenty of time to figure out what the consequences are once we get a bit further along,
and things like that, which sounds kind of pragmatic, but from the point of view of the people likely to be impacted it often seems a bit hollow and self-serving, I think. Yeah, I agree with that; certainly I think that makes very good sense. It also reminds me that there are a lot of different dimensions of expertise one would need to consider bringing to bear on the issue as a whole. As a historical analogy, think of the development of nuclear weapons: the technical experts, the engineers, and the leadership involved in developing that technology may not have been the people with the most expertise for understanding the social, economic, and political dynamics; those are still there. So that would be a separate set of expertise, and other domains of expertise besides. There's a difference between the expertise required to develop a technology and the expertise required to anticipate its impacts in all the different domains, possibly many different sets of expertise, and then perhaps another set of expertise for how best to achieve control over a technology through regulation, or markets, or some combination. If you have decided on a specific goal or end, how to achieve that aim with regulatory apparatus is another set of expertise. Hmm, okay. Maybe let's just go through a few of these combinations, and that'll be a good stimulus for conversation. The first one I had written down here was, and I'm not sure of the best notation for this, so let's use letters. The tech leaders, the insiders: I guess I used I for insiders, let's do that. Okay:
insiders go from long to medium to short. I would say most insiders, most people in tech, would have thought AGI was some kind of stupid fascination of mathematicians and people in academia before roughly 2010. Of course there were people like Kurzweil who were long-term believers, actually building their careers out of a belief in this kind of thing, and they deserve a lot of credit for being right. But I think that's kind of accurate. I don't know when.
Of course, you can survey people, and maybe this is a clear indication of bias, but I've seen surveys in various books and articles about AI where they survey AI researchers and get their expectations on timelines; maybe for next time I can take a look at those and report what they say. I think there's a very strong incentive not to seem crazy, so just talking to people, I get the impression that the kind of answer they would give to that survey is much more conservative than what they would say to you in private. In private, people are often much more willing to choose shorter timelines, whereas for the estimates they give in a talk, or when they're speaking to the public, they feel the need to be able to defend them very thoroughly, which means being able to cite existing research or existing lines of investigation with only a little bit of extrapolation. That instinct is a kind of scientific instinct, and it's not bad, but forecasting is perhaps not well served by it. This is something we see in many domains, you're 100 percent correct, and we see it in lots of the domains my team looks at, and in other domains like the climate change community, where the consensus is conservative for exactly the reasons you mentioned. And there is a derivative there: it tends to march in one direction over time. We have a catchphrase, a little internal hashtag in my team, called "watch the delta": watch the rate of change at which the forecasting time horizons move forward. The dynamics in private consulting, the McKinseys of the world, the Bloombergs, and all that sort of stuff: same pattern. The climate science community: same pattern. The energy sector: same pattern. The automotive sector: same pattern. Exactly what you just described, for the reasons you just
described so you're 100 of the money as far as I as far as I'm aware yeah so we got any insiders I mean just my personal opinion just watching the way people talk and the way they allocate their attention which is of course the better thing to watch the action rather than the speech I think a lot of people probably shifted their expectations from the short from the long to the medium uh around 2017. so it was all sort of shortly thereafter let's maybe say 2018. uh largely as a as a result of Transformers
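The "watch the Delta" heuristic described above can be made concrete with a toy calculation. This is only an illustrative sketch — the function names and the example forecast series are invented for illustration, not taken from any real survey: given a series of (year the forecast was made, predicted arrival year) pairs, it checks whether the forecast horizon is shrinking faster than one year per elapsed year, which is the signature of a technology that is genuinely close.

```python
def horizon_deltas(forecasts):
    """For consecutive forecasts, return how many years the forecast
    horizon shrank per year of elapsed time.
    forecasts: list of (year_of_forecast, predicted_arrival) pairs."""
    deltas = []
    for (y0, a0), (y1, a1) in zip(forecasts, forecasts[1:]):
        elapsed = y1 - y0
        # horizon = predicted arrival minus the year the forecast was made
        shrink = (a0 - y0) - (a1 - y1)  # how much the horizon shrank
        deltas.append(shrink / elapsed)
    return deltas

def looks_imminent(forecasts):
    """Heuristic: the finish line approaches faster than one year per year."""
    return all(d > 1.0 for d in horizon_deltas(forecasts))

# Converging forecasts: the horizon shrinks several years per elapsed year.
converging = [(2017, 2040), (2018, 2035), (2020, 2030)]
# Receding forecasts (the self-driving pattern): arrival keeps slipping.
receding = [(2015, 2018), (2017, 2020), (2019, 2023)]
```

With the converging series the deltas are well above one, so the heuristic fires; with the receding series the horizon never shrinks, which is the "bounce back out" pattern discussed later in the conversation.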
So, Transformers are a class of neural networks that was invented in 2017 in natural language processing and then spread across every other domain; it's now unquestionably the dominant paradigm for deep learning. It's the first time there was a single architecture that, essentially without modification, could handle almost any modality — it's a very general differentiable computer, basically — and people's ambitions for deep learning really underwent a step change when it was discovered. There's been a gold rush ever since. Then, somewhere around 2020 — and I'm going to blame this on GPT-3 — I think a lot of people shifted their timelines very aggressively forward, to expecting something around the end of the decade, maybe 2030. To cite some examples from people the audience may know: John Carmack, who is not a central figure in the history of deep learning but is a famous game developer, is now doing AI research full-time, and he's rich enough to just buy a giant Nvidia supercomputer to put in his basement. His personal estimate is 50 percent by 2030 — and that's a phrase you often hear, actually: people whose point estimate is 2030.
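As an aside, the core operation of the Transformer architecture just described — scaled dot-product attention — can be sketched in a few lines. This is a minimal illustration (no masking, no multiple heads, no learned projections), not any production implementation; the point is that the same operation applies whether the input rows encode words, image patches, or audio frames, which is why one architecture spans so many modalities.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Q: (n, d), K: (m, d), V: (m, dv) -> output of shape (n, dv).
    Each output row is a weighted average of the rows of V, with weights
    given by how well the corresponding query matches each key."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # (n, m) similarity scores
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V

# Toy inputs: 4 queries attending over 6 key/value pairs.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 3))
out = attention(Q, K, V)  # shape (4, 3)
```

Stacking this operation with feed-forward layers, and learning the projections that produce Q, K, and V from the input, gives the "general differentiable computer" flavor mentioned above.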
Now, one shouldn't take these kinds of numbers too seriously — it's not as if he knows how to do it — but he's a serious guy thinking from first principles, he comes up with this number, and there are various lines of attack you might take to arrive at a similar figure. So he's not an outlier within a certain community of people, and in that community you would also put, I think, the tech leads at places like OpenAI. I don't know that people at Google or DeepMind are on the record with their estimates, but I believe Shane Legg some time ago gave around 2030 as his. It's certainly not a universal opinion within the tech elite — maybe this transition is still underway — but I think within a few years it will be the dominant opinion, and it's certainly shaping the behavior of VCs and the inner core of the large tech companies: you can see in the way they're allocating resources that they are believers.

Can I add one thing here? One of the things we've seen historically with technology disruptions of other kinds concerns the telescoping forward of the projections for when a technology will mature — and maturation is a tough thing to define, but what I'm trying to say is this. Historically, in instances where these timelines have been telescoping forward, if the technology was indeed imminent, then that Delta I mentioned — the shortening of the horizon of people's expectations — has tended to move in only one direction. In instances where there was disagreement, and the technology in reality was further from maturity than we thought, the time horizons can telescope forward and then bounce back out: we realize there's some hurdle, and people start estimating further and further out again. Now, there's not much you can proactively deduce from this ahead of time, because you just can't know whether you're really close to maturation or not. But here's one thing you can say: if we are genuinely close, then expect these forecasts to keep coming closer and closer, right up until the moment of arrival. So if we are indeed close, don't be surprised if in 2025 or 2026 we are predicting 2028 or 2029 for this thing to arrive. Does that make sense? And if it isn't actually close, then what I would suspect is that by 2025 or 2026 people will be changing their minds a little and saying, well, maybe it's 2035 or 2040 after all. If another couple of years go by and the timelines only continue to march forward, I would take that as a sign — again, you can't deduce too much from it, it doesn't hold that much water — but it's based on what we've seen in previous
technology disruptions. It's a pattern to watch for. Yeah — maybe let's give a bit of a contrarian take, since we've been boosters for much of this conversation. Famously, Elon Musk has been saying he's going to have self-driving cars in five years for — I don't know — certainly five years at least. I remember when I first came back to Australia, which was 2015, Musk was already predicting self-driving cars within a few years — I think that was probably right — and it keeps getting pushed back. I don't know what your estimates are, Adam, but it's continually revealed to be a much harder problem than it seems. So there's at least one tech leader whose predictions about a particular subset of AGI are not to be taken at face value. — Yeah, and that also fits the pattern I described earlier: if the technology were genuinely imminent, then most observers making predictions would be correcting in the direction of "sooner than we thought," and since we're not seeing that, it's a sign the technology is not imminent. If self-driving were imminent, we would expect the opposite of what has happened with Elon Musk: people would say it's ten years away, then a year later that it's eight years away, and two years after that, five years away. There would be this outpacing — the projected finish line telescoping toward us faster than the actual movement forward in time, the finish line approaching faster than one year per year. The opposite is happening, and that's a very bad sign for self-driving technology. But he's also just one voice, and the more reasonable voices haven't been making such crazy forecasts. — On that note, since we're discussing Tesla, it's interesting to look at their recent announcements. It occurs to me that maybe Musk knows the stock price of Tesla is in some sense an AGI bet: they're investing in a wide range of technologies you'd expect to be incredibly valuable if AGI is cracked — general robotics, for instance — and the
self-driving car is a subset of — yeah, for sure — general robotics, and the robot they've announced, and so on. You could say it's just doubling down: he can't retreat from these predictions, so he's like a magician who keeps pulling a new fancy thing out of the hat. I think that's the interesting thing about Musk — I think that's also true, but in some sense, if you have the wind at your back, the crazy bet isn't actually irrational or foolish: the wind is going to arrive in your sails at some point, and you just have to keep dancing around and keep the game going until you happen to catch it. I think the wind will come, and I wouldn't short Tesla. But I don't know what your take on Tesla's stock price is — is it really about a car company, or is it a kind of story stock around the AI trajectory? It seems that way to me. — Yeah, it's hard to pin the behavior of the company down. They've got a lot of different things going on, and obviously quite eccentric leadership, and they're unusual for a company as large and successful as they are in having an individual with so much influence over its destiny — that's rare. Elon Musk is not just the CEO; he's also the largest shareholder, and he wields a huge amount of influence over the company's decisions. It's a strange situation. But I agree that it's a sensible bet. Whether by virtue of luck or hard work or all of the above, Tesla has managed to succeed in building an electric car company — they make electric cars that people really want — and as a result they're on track to become, indeed they already are, a staggeringly profitable enterprise by any conceivable measure. Just a few years ago they were on the ropes, but now I think they're going to sell a million cars in a year and make ten or twenty billion dollars in profit, and they have no debt to speak of. The traditional car companies, by contrast, run something like four- or five-to-one debt to
annual-revenue ratios. Toyota is something like 200 billion dollars in debt, which is a big multiple of their annual revenues as far as I understand it — I don't know the exact numbers — but Tesla doesn't have any debt, so they've just got all this cash. It would be crazy not to lay out some multi-billion-dollar bets on these new technologies if there's any chance of owning even a piece of the robot market of the future, or the self-driving market of the future. — Yeah, and I think it's the same reason Google had X, the moonshot programs: if you've got all that cash, it's worth making a pretty big bet even if the chances of it succeeding are small. As for the actual likelihood of success, I don't know — I'm not able to evaluate Tesla's AI expertise the way you guys are. I don't know whether they've got a deep team or whether it's all for show; whether their internal efforts will really be the winning horse, or whether they'll just buy smaller companies and accumulate the technology that way, as they did with batteries. But if I had four or five billion dollars in cash coming in every quarter, I would make big bets on self-driving technology and on other areas of robotics as well — I think that's just a very logical decision. And quite frankly, I'm surprised others are not doing more of this; that's the surprising thing. If you are a tech company with billions of dollars, what are you using that money for if you're not positioning for the AGI revolution that's coming, as it were? The smart money has got to be trying to find a way to get some piece of whatever this is that's imminent — or that feels imminent, anyway. — Yeah, that's actually an interesting point to dwell on for a minute. The main reason, I would say, why this category of people has adopted shorter timelines is just: follow the money. Essentially all the co-authors on the Transformer paper from 2017 now run their own startups, funded to the tune of hundreds of millions of dollars collectively.
The level of investment — I mean, of course it could just be crazy, the VCs may simply have no clue — but it's real money; it's not a joke. If people are putting money on the line, they mean it, to the degree that you can mean anything. So I think the level of investment, not only in AI capabilities but increasingly in AI safety in particular, is an indication that a lot of these insiders are taking short timelines very seriously. VCs typically aren't investing on timelines of a hundred years — that isn't their business model — and not really on timelines of twenty or thirty years either; most of them are looking for returns in some much shorter window. If you just add up the amount of investment and think about how much economic growth would be necessary to justify it, I think it's difficult to square with even medium timelines. Of course, you can get a lot of economic value out of narrow AI alone — which is what most of this investment is in — but I think it's an indicator anyway. Okay, so, moving on to another of the categories — well, let me add one thing very quickly, Dan. Around this investment there's a question in my mind. There's a huge amount of investment going into this space now, and that indicates something: expectations, some amount of insider knowledge, all the rest of it. But if the prize is really as great as is expected — if winning this race really means you win the world — the real question in my mind is why there is not more investment, and in particular why states are not invested. Why are there not fifty- or hundred-billion-dollar-per-year defense budgets pouring money into this? If you win this race, you just absolutely win everything. The stakes would seem to be close to infinitely high, and that would justify, if not an infinitely large, then an extremely large amount of expenditure — not just by private investors but by states. So this would lead me to ask: if they aren't making these expenditures, why not? And the second question: are they making these expenditures and we just don't know about it — is their stealth simply working? I'll just add that in there. — Yeah, I think that's a good segue into this Elites category. The chip war, and the recent regulation on exporting advanced semiconductors to China, strikes me as a leading indicator of at least one state — the US — making this transition among its elites from L to M. So yes, I agree; it's actually a big mystery to me why there isn't orders of magnitude more concern, and also investment, among militaries, and more economic investment generally in these technologies. You
could say there is, right? For example, the US just in the last couple of years has stepped up a lot of federal funding for AI research institutes. And you could say that America has always had a strong industrial policy of subsidizing development in various areas of technology. The US often criticizes China for its too-tight integration between government and business around scientific and technological development, for subsidizing national champions in various sectors, and for how anti-competitive this is — and of course, to a large degree, China is just copying the way the Americans kick-started Silicon Valley with money from the Pentagon. So to some extent there really has been a significant increase in the level of industrial policy around semiconductors, AI, and affiliated technologies in the US just over the last couple of years. It's not as if nothing is happening; clearly you can see hints of people starting to be convinced that this is of short-term importance. Now, the question is whether that is really about narrow or general AI. You could say that DARPA has been involved in funding AI for decades — there's a reason a lot of this development happened in the US, though a lot of it also happened in Canada, which famously encouraged deep learning research through funding for many years when the mainstream ML and AI communities considered it something of a joke. America has certainly been funding this for a long time, so it wouldn't be accurate to say that the elites — politically, culturally, economically — haven't been paying attention to machine learning, or narrow AI. But I would say it's difficult to overstate how important these recent moves are — banning the export of semiconductors and
manufacturing equipment to China. This is a big deal; it's a clear escalation in the cold war between the US and China, and it seems to be tied to some kind of dawning realization — at least this is how I interpret it — that AGI may be coming fairly soon. I don't know what your take on that is, Adam. — Yeah, we've gone back and forth a little privately, Dan, outside of these discussions, and I've expressed exactly the same sentiment. When I saw the news of this change in some of the strategically relevant policymaking around these technologies, especially between the United States and China — and it's a little bit conspiratorial, I know — one of the first things that came to my mind was: are our decision-makers in the category of Elites, as you've put it up above — in other words, our decision-makers in government leadership in the United States and elsewhere — starting to get serious about this technology race? If their expectation had shifted from scenario L to M, or especially if it had shifted closer to the S scenario, what action would we expect those elites to take — what could they take — in light of a public that is not yet there with them? If the public were still, broadly speaking, in the L scenario, what actions can policymakers, government officials, and bureaucracies writ large take, based on the knowledge, the advice, and the insight of the tech layer, the insiders' layer? Well, I would imagine it would look an awful lot like what we're seeing. So I think it aligns. Now, that doesn't mean it's true, but this is the nature of these sorts of wheelings and dealings and shenanigans: you need plausible deniability. If you're doing something for one reason and you can't sell that reason to the public, you need another good reason for justifying it — and of course that aligns pretty well too. But that was absolutely where my mind went first, and I don't think it's implausible in the slightest. — Maybe it's worth — yeah, sorry, I didn't want to interrupt you. — It was just that this leads to the other portion of the agenda that
you had laid out for us, which was this idea of: okay, so what happens if sentiment at large begins to shift? So yeah, let's get on to that. There's one comment I wanted to make first, though, which is to defend the case a bit that the recent moves by the US administration represent a shift in sentiment from long to medium timelines. Let me say, to my understanding, what they did. Not only did they ban the sale, from American companies to Chinese companies, of equipment and semiconductors — and it's fairly fine-grained: the restrictions are mostly on high-end semiconductors, so it's not as if the chips that go into your Toyota Camry are banned from export. It's meant to be high-end chips; it's aimed directly at AI, which is actually named in the regulation. Why go after that? Okay, so what has been done: exports have been banned; the software used to lay out chips has been regulated; American citizens in China who are working at these companies are not allowed to continue working on these kinds of technologies; and third parties are affected too — ASML in Europe, for example, can't continue to sell its technology into China for these high-end chips without facing consequences, such as being unable to sell into America. Now, America is obviously the world hegemon, but it can't dictate to everybody what to do forever. China is a large market; over time they will develop a domestic capability — you can be more or less skeptical about how well that will go, but over time I think it can be done — and other companies around the world won't ignore this market. They will find ways to sell into it, or they'll take the hit, or they'll spin out subsidiaries that manage to get around the rules. So this is not a permanent solution: you can't just cut the second-biggest economy in the world off from the most important resource in the world indefinitely without a war; that's just not possible. It's a short-term measure, a delaying tactic; all it can achieve is to put China behind for some number of years, and then maybe they catch up or maybe they don't. So if you don't imagine there's going to be a major war that has AI as a substantial input within ten to twenty years, and you don't think this economically will make a huge difference
within ten to twenty years, then it's a huge risk to take for no real benefit. There are clear downsides to this regulation — it will have costs for American businesses — so it seems to me only rational to make this kind of move if you think that in the short term it matters. So I think it's only on the basis of a medium timeline that this strategically makes any sense. I don't know if you agree with that, Adam. — Yep, I agree with that completely; I don't see a different plausible interpretation. One could argue — the temptation with American politics is always to default to some sort of political expediency or grandstanding in service of whatever the short-term electoral goals are for different politicians, as opposed to the long-term goals of the bureaucracy and longer-term policymaking. But in this case, I don't see compelling evidence that that's what's happening. I don't see how this is a political football, or a political wedge issue, or anything politically expedient that could explain grandstanding, or rattling the saber, or pissing off China, or any of those other things you might obtain short-term benefit from at the expense of long-term stability and the benefits of a presumably more cordial long-term trade relationship with China. It seems like a big potential downside to accept — for what gain in the short term? The superficial analysis I'm seeing is that it's protectionist: it's trying to repatriate manufacturing back to the US, trying to ensure that we don't suffer supply-chain shortages in the future like we have recently. But that all seems pretty thin — hogwash, basically. There just doesn't seem to be a deep enough rationale there, given, as you said, how much potential there is to poison the well, to really sour the relationship between the United States and China — and not just China but also all the Europeans. — Well, yes, exactly. And for one thing, it also gives a huge impetus to the Chinese domestic semiconductor manufacturers. I guess it's easy sitting in America to
underestimate how difficult it is to build up an industry from behind when the incumbent already has economies of scale. This is just about the most difficult development problem there is. If somebody else has already reached economies of scale, the obvious answer is to protect your industry with tariffs or the like, giving an artificial advantage to your domestic competitor while it catches up. But that's really hard to do, because without being exposed to competition the protected company typically just becomes inept, nepotistic, corrupt, and dependent on subsidies, and nobody actually does the hard work necessary to catch up with the international incumbent. The part of China's success that is genuinely far from straightforward — not only in manufacturing but now in software too — is putting protections in place while also stopping your domestic companies from becoming stupid in the absence of competition. If you have a large enough domestic market you can make it work, but semiconductors operate at such scale that even China's domestic market doesn't have enough demand for high-end chips to make this trivial. So without really being forced into this corner, it could easily have remained the case that the Chinese domestic semiconductor manufacturers were never quite able to catch up; by forcing the issue, the US resolves a lot of China's internal political problems around making this effort proceed more quickly. In some sense it hands a gift to the domestic semiconductor capability, in terms of getting them focused on this competition — and that, from my point of view, is a major part of the risk of the moves the Americans have made in the last six months. — Yeah, the only thing about all of this — and this is the problem, of course, of all conspiracy theorizing — is that you're granting too much credit to the sophistication, wisdom, cunning, and guile of the people involved, when the reality is that the people, the organizations, and the systems they're part of are usually a lot dumber than that. So the question to ask is: is there a stupid explanation — an explanation for taking all of these actions that doesn't require a great deal of foresight and wisdom
and understanding of this issue and playing 4D chess and all that kind of thing? Is there some simpler explanation — just doing this, being stupid about it, and taking unnecessary risks? — Yeah, I suppose you could make a case for that, but this one is awfully suspicious, especially considering it's a far-reaching policy in terms of time, with huge stakes involved. There are hundreds of industries — the whole economy, the whole global economy — touched by this, because semiconductors and chips are now just ubiquitous; they're in everything. No part of the global economy is untouched by this policymaking, so the default assumption should be that people have thought it through pretty darn carefully. Having said that, let's leave a margin of uncertainty: let's admit there's a possibility that this is just the US government, full of idiots, doing something really dumb for some reason — not some grand conspiracy or great big plan, just some dumb thing they're doing. That has happened many, many times in the past, so let's hold out some possibility. — Witness, everyone, the faith an American has in his institutions. [Laughter] — You don't get to be a citizen of a country that elects Donald Trump as your president and then just automatically assume that your government is making genius 4D-chess moves in the policymaking arena; you have to be humble about it. — That's cynical. — Yeah. I think we should probably start next time with the question of public opinion and its timelines and what those shifts might mean, given the time we have left. So I suggest we use the last ten minutes or so to have a bit of a break, and then open up the discussion for questions — and play a little with Penrose tiles. If you'll follow me, I have a bit of a surprise. Have you seen this before, Adam? Ah, okay. All right, so on the bottom of your screen — you'll see the personal boards are still there — you have darts and kites. If you pick a dart, you can click on the little tabs on the side of this thing and finish this star. Yeah, there you go. Now, it can be extended infinitely — there are infinitely many ways to tile the plane. These
are called Penrose tiles, by the way. There are two shapes — darts and kites — and there are infinitely many ways to tile the plane, but it's not the case that any given move always works. So this is a fun little thing to play with; you can take it from there and see what you think works. There's also a play button at the top — the tiling has a sound to it — and you can undo. Is there anything people are curious about from the previous session?