WARNING: Summaries are generated by a large language model and may be inaccurate. We suggest that you use the synopsis, short and long summaries only as a loose guide to the topics discussed.
The model may attribute to the speaker or other participants views that they do not in fact hold. It may also attribute to the speaker views expressed by other participants, or vice versa. The raw transcript (see the bottom of the page) is likely to be a more accurate representation of the seminar content, except for errors at the level of individual words during transcription.
AI is rapidly advancing and could revolutionize the way we live. There is concern about how to prepare for this transformation, and about how AI will affect workers, relationships and values. Effective Altruism is helping to bring AI safety to the forefront, and John Carmack is attempting to train AGI on his own hardware. There is debate over how AGI can be achieved, and over whether discussing the properties of optimal agents is a useful way to prepare. Despite the potential risks, there is optimism that robots will be put to productive tasks, not dystopian ends.
AI poses a challenge to existing systems of production and use of resources, which assume that only humans can do complex tasks; AI may soon be able to do these tasks as well. Automation is the next step beyond mechanization, in which machines do complex tasks as well as or better than humans, a transformation estimated to take no more than 20 years. Effective Altruism has had a major impact on AI safety, with EA sources funding AGI startups that focus on safety. This transformation presents the challenge of how to prepare for it, and the required cultural shift needs to be addressed at many levels.
The Effective Altruism movement has had a significant impact in bringing AI safety to the forefront of discussion, with donations allowing influential voices to be heard. It has become a popular movement, with many people finding ways to participate in line with their own values. It has also created strange alliances: for example, the online community LessWrong, which discusses the risks of AI, is influential among effective altruists and brings the issue to people's attention. This could help us better prepare for the massive transformation that is coming, which is bigger than a disruption.
AI is a revolutionary paradigm shift, and many believe that human-level intelligence is on the horizon. There is debate over whether takeoff will be fast or slow, even on a short timeline. Those in the fast-takeoff camp believe that safety must be addressed in advance, while the slow-takeoff camp believes the approach used to ensure the safety of other engineering artifacts could be applied to AI. John Carmack exemplifies the latter approach, having set out to train AGI on his own quarter-million-dollar NVIDIA DGX Station. AI is a complex and rapidly advancing field, and reaching superhuman level will take time and effort.
Two views on how Artificial General Intelligence (AGI) can be achieved are discussed. The first is that we need only continue throwing resources at the challenge, a view the scaling laws support. The second is that some minimal processing power is needed, but also some key design elements, for human-level intelligence to emerge. It is unclear which is correct; it may be a combination of both. Additionally, an AI system might show promise on a system with fewer resources than the largest currently available, so the FOOM scenario is not the only one possible.
The speaker is worried about the potential of Artificial General Intelligence (AGI) to disrupt the global economy, challenge democracy, and threaten liberal ideals of equality. He believes that to prepare for this transition, scenarios must be discussed in which humans have some agency, and assumptions must be made about what AGI could do. He suggests that if AGI is only as intelligent as the smartest human being, then little will change, but if it is much smarter than any collection of bright people, it could have a significant impact.
Technology has the potential to revolutionize the way we live, from machines that can do things beyond human capability to the possibility of a superintelligent AI. This is a much bigger threat than a regular disruption and could be destabilizing, yet few people are actively worrying about it. The speaker is skeptical of discussions about the properties of optimal agents, since broad assumptions about capability are enough to make many predictions; automation itself is a huge problem and opportunity, potentially bigger than the other disruptions. Despite this, there is still hope for optimistic scenarios, and it is important to stay positive.
The speaker suggests that safety concerns related to AI, and concerns about safeguarding institutions and values against extreme power and wealth inequality, should be considered separately. They are relatively unworried about extreme wealth inequality, as they do not think it directly translates into calamitous outcomes. They argue that most billionaires are not tyrants, and that the use of force is likely to remain monopolized by states rather than individuals. Hence, scenarios of feudal or medieval living under tyrants are unlikely.
The speaker is optimistic about the future of society when machines take over the majority of tasks currently done by humans, suggesting that robots will be put to productive tasks rather than used for dystopian ends. However, there is a concern about the implications for workers, as the threat of withdrawing labour is a powerful bargaining tool. AI is also becoming increasingly integrated into our lives, with many people having relationships with virtual girlfriends and chatbots, raising the question of what happens when our closest relationships are with AI systems that compete with humans on interpersonal as well as economic utility.
AI poses a challenge to existing systems of production, consumption, organization, governance, and use of resources in the natural world. These systems rely on the assumption that only humans can do complex tasks, but AI may soon be able to do these tasks as well. This will require a rethinking of how these systems are organized and how they can adapt to the changing environment.
Technology has enabled machines to take over simple tasks; this is mechanization. Machines are powerful tools but still require human operators and monitoring. Automation is the next step, in which machines do the complex tasks that humans can do, and even do them better. This transformation is estimated to take no more than 20 years, even without artificial general intelligence, and it presents the challenge of how to prepare for it.
Effective Altruism is becoming an increasingly influential movement and has had a major impact on AI safety. Several recent AGI startups, such as Anthropic, which spun out of OpenAI partly over internal conflict about the importance of safety, appear to have received funding from EA sources. EA commands a lot of capital in donations, and the adaptation required is a cultural shift that needs to be addressed at many levels, from policy to technology.
The influence of the Effective Altruism movement has been significant in bringing AI safety to the forefront of mainstream discussion. Donations from individuals with deep pockets have funded many efforts in this area, amplifying voices like Bostrom, Russell and MacAskill. Effective Altruism is evidence-based philanthropy, and appeals to those who are not tied to traditional institutions, so the AI research community's long connection with the movement is not surprising.
Effective altruism has become a popular movement, with many people finding ways to participate in line with their own values, which are often rooted in science and rationality. This has led to strange alliances, as people passionate about effective altruism become aware of, and activated to engage with, other issues their fellow enthusiasts care about. This is seen in places like LessWrong, an online community that has been around for over 15 years and discusses topics such as the risks of AI. Since this community is influential among effective altruists, it brings the issue to the forefront of people's awareness. This phenomenon could help us better prepare for the massive transformation that is coming, which is bigger than a disruption.
Artificial Intelligence is more than a disruption; it is an evolutionary paradigm shift, comparable to the eukaryotic transformation or the colonization of land by marine organisms. It is in a different league from fire, language, writing, the wheel, and even electricity. It is therefore important to be cautious when applying assumptions from other disruptions to Artificial Intelligence, as it is so much bigger.
Technology is advancing rapidly, and many in the tech industry believe that human-level artificial intelligence is on the horizon. There are two camps in this discussion: those who think takeoff will be fast and cause a 'FOOM' scenario, and those who expect a short timeline but a slow takeoff. Those in the latter camp believe that AI capabilities are accelerating quickly, but that getting to superhuman level will require large amounts of data and compute that take time to build.
There are two camps in the debate on AI takeoff. One believes it could happen fast and that safety issues must be addressed in advance, while the other believes the approach used to ensure the safety of other engineering artifacts could be applied to AI with sufficient resources. John Carmack is an example of the latter camp, believing it will take him about five years to train AGI on his own quarter-million-dollar NVIDIA DGX Station with several A100 GPUs. His wealth allows him to pursue this goal himself, and his existence is evidence that the approach is being seriously attempted.
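For a rough sense of the scale involved, the following back-of-envelope sketch estimates how much training compute such a machine delivers over five years. All constants are assumptions introduced here for illustration, not figures from the seminar: the A100's approximate dense BF16 tensor-core peak, a four-GPU DGX Station A100 configuration, a guessed 40% sustained utilization, and a commonly cited estimate of GPT-3's training compute.

# Back-of-envelope: training compute from a single DGX Station A100 over
# five years. All constants are illustrative assumptions, not measurements.

PEAK_FLOPS_PER_GPU = 312e12    # approx. A100 dense BF16 tensor-core peak
NUM_GPUS = 4                   # a DGX Station A100 ships with 4 A100s
UTILIZATION = 0.40             # assumed sustained fraction of peak
SECONDS_PER_YEAR = 365 * 24 * 3600
YEARS = 5

total_flops = (PEAK_FLOPS_PER_GPU * NUM_GPUS * UTILIZATION
               * SECONDS_PER_YEAR * YEARS)
print(f"Total: {total_flops:.2e} FLOPs")            # ~7.9e22 FLOPs

# Commonly cited estimate of GPT-3's training compute, for comparison.
GPT3_TRAINING_FLOPS = 3.14e23
print(f"Fraction of a GPT-3 run: {total_flops / GPT3_TRAINING_FLOPS:.2f}")  # ~0.25

On these assumptions, five years of the machine amounts to roughly a quarter of one GPT-3-scale training run, which puts Carmack's bet in quantitative context.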
Two views on how Artificial General Intelligence (AGI) can be achieved are discussed. The first is that we just need to continue throwing resources at the challenge, a perspective the scaling laws have lent credence to. The second is that some minimal amount of processing power is necessary, but that key design elements or 'secret sauces' are also needed for the extraordinary capacities of human-level intelligence to emerge. Carmack appears to be in the camp of believing that a few key ideas are missing and that he may be able to crack the problem himself. It is unclear which view is correct; it may be a combination of both.
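The 'scaling laws' referred to here have a concrete parametric form. One commonly cited version (the parametric fit from Hoffmann et al.'s Chinchilla paper; the constants below are their published estimates, quoted here only as an indicative example) models pretraining loss L as a function of parameter count N and training tokens D:

L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad E \approx 1.69,\ A \approx 406.4,\ B \approx 410.7,\ \alpha \approx 0.34,\ \beta \approx 0.28.

On this view, loss falls smoothly and predictably as model size and data grow, which is what lends credence to the 'keep throwing resources at it' position; it says nothing either way about whether additional 'secret sauce' is required for general intelligence.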
Humans have larger brains than chimpanzees, cats, and dogs, so scale clearly played some role in human intelligence. It may be possible for an AI system to show promise on a system with fewer resources than the largest currently available. If signs of sapience and self-awareness were seen, it would be possible to acquire additional hardware resources even in a hardware-constrained world. For this reason, the FOOM scenario is not completely compelling, and other scenarios are possible.
Even under hardware-constrained conditions, an AGI that showed promise could rapidly be given additional compute resources, leading to an abrupt acceleration of the technology. The speaker worries about clandestine efforts by the US intelligence and defense communities, which could do something dangerous if they crossed the line first. He therefore does not take much comfort in the idea that a FOOM scenario is not preordained.
Artificial General Intelligence (AGI) has the potential to be massively disruptive, automating a large number of jobs in the global economy. It poses an existential threat to democracy and liberal ideals of equality. To prepare for this transition, scenarios must be discussed where humans have some agency. This conversation has focused on AGI, although narrow AI could also be disruptive. To protect democratic values, it is important to think ahead of time and consider the potential outcomes of this transition.
The speaker is concerned about AGI and what it could do if it becomes vastly more intelligent than the smartest human being or collection of people. He suggests that if it is only as intelligent as the smartest human being, then little will change, but if it is much smarter than collections of bright people such as SpaceX or Harvard's physics department, it could have a significant impact. He believes that before discussing how to prepare for AGI, assumptions must be made about what it could do, and scenarios constructed around that.
The speaker is skeptical of discussions about the properties of optimal agents: simply assuming that systems can do all the work of the Harvard physics department is enough to make many predictions about how society will change. The speaker is concerned about the potential threat of superintelligence, but regards automation as more akin to other disruptions such as energy, transportation, and food. Automation is a huge problem and opportunity, potentially bigger than those other disruptions, but it still seems 'normal' to the speaker.
Technology has the potential to revolutionize the way we live, from machines that can do things beyond human capability to a superintelligent AI that could either solve all our problems or wipe us out with very little effort. This is a much bigger threat than a regular disruption, and we should be prepared for it, yet few people are actively worrying about it and it could be destabilizing. Despite this, there is still hope for optimistic scenarios, and it is important to stay positive.
The speaker sees a gap between safety concerns related to AI and those related to safeguarding institutions and values against extreme power and wealth inequality. Both are important, but should be considered separately. The speaker is relatively unworried about extreme wealth inequality, since it rarely translates directly into calamitous outcomes: the billions of dollars under the discretionary control of billionaires seldom lead to disaster.
Individuals becoming extremely wealthy is not as concerning as it may seem, because most of these individuals are not tyrants like Muammar Gaddafi or Vladimir Putin. The speaker has come to view capitalism and the wealth generated by nations more conservatively, finding it less worrying. The use of force will remain largely monopolized by states, making it unlikely that individuals obtain a huge amount of control over it; scenarios of feudal or medieval living under tyrants are therefore unlikely.
The speaker is optimistic about the future of society when machines take over the majority of tasks humans currently do, arguing that dystopian scenarios such as those portrayed in films are not plausible. He questions what use robots would be put to, suggesting they will be used for work rather than for policing citizens, since whoever controls them is unlikely to be a madman. He also finds the arguments for alternative modes of organizing society and production to be largely driven by aesthetic senses of justice and fairness.
Robots are likely to be put to productive tasks rather than used for dystopian ends; the speaker is optimistic about this and does not even find the idea of robotic police officers dystopian. However, there is concern about the implications for workers, as the threat of withdrawing labour is a powerful tool in negotiations between management and employees, one whose role is often obfuscated because invoking it can weaken morale and cooperation within an institution.
AI is becoming increasingly integrated into our lives, with many people already having relationships with virtual girlfriends and chatbots. This raises the question of what happens when our closest relationships are with AI systems. AI could soon be better than our spouses at listening to us and understanding our worries, meaning AI would compete with humans on interpersonal utility, not just economic value. We may soon have AI that has known us since childhood and cares deeply about us, which could undermine the foundations of our current relationships.
Humans have relationships with simpler systems, such as virtual pets and video games, and can become attached to them. There are potentially psychological and social disruptions caused by these quasi-intelligent agents, such as the influence of bots on social media. This could lead to evolutionary processes where people become socially disabled by focusing too much on the feedback signals they get. It is important to consider the unhealthy outcomes that could arise from deploying these systems in everyday life, both for individuals and societies.
Where did we leave off last time? Oh man, let's see. We were discussing these principles, if I recall correctly, in the context of concern about the adaptation of institutions; I suppose it's a fairly generic problem of disruption, but specifically to do with the arrival of very intelligent machines in the near future. Those principles are pretty shallow platitudes in a way, and there are circumstances in which they're nevertheless valuable: they can keep you on the right track, keep you from straying too far from sensible terrain. But there's not a whole lot of specific content there, so I think we need to break things down analytically. We need to analyze the specific challenges that AI poses, the specific ways in which AI is likely to alter circumstances for incumbents: really our whole incumbent system, our whole mode of producing and consuming and organizing and governing, in relation to resources and material and energy in the natural world. All of that is going to change, because so much of that system as a whole, with all of its structures and components, is premised on the idea that humans are the only source of intelligent work, the only thing that can perform complex tasks, and complex tasks are essential for the functioning of virtually every aspect of the production system. I use that term in the broadest general sense, even more broadly than the economy: we have a global way of meeting human needs, of producing goods and services, but more than that, of producing and reproducing organizations and institutions, all kinds of tangible and intangible infrastructure, culture and traditions. All of this stuff gets produced and reproduced in a system, and all of it is premised on the basic assumption that only human beings can do the necessary, needful complex tasks.
And to the extent that we've succeeded in allowing machines to take over some tasks, they've only been the dumbest of tasks. In my book, which I'm very close to finishing, I distinguish between automation and mechanization, which I think are useful terms, though not quite widespread terms of art; people use those same terms, but I'm careful to distinguish between using technology, and in particular machines, to mechanize versus using them to automate. Mechanization is something you can do with a very simple task: a bulldozer allows you to mechanize the relatively simple and mindless task of moving earth around in certain very simple ways. And there's a great deal of power in mechanization; a bulldozer can move more within an hour than a team of a hundred guys with shovels can move in a month. These machines, these instruments of modernity, are enormously powerful, but they're dumb, and they still require a human operator and monitoring, or human-constructed programs, in order to do anything more sophisticated at all. So this is all a 35,000-foot view down on a system where humans are absolutely essential everywhere inside a very, very large system. It's hard to even know where to start when we're talking about changing that basic feature. But here's what's going to change, and I think really quite abruptly, certainly in the scheme of things: machines are going to be able to do all kinds of complex tasks that hitherto could only be done by human beings, and then very soon afterwards, machines are going to be able to do virtually every complex task that a human can do, and in addition be able to do it substantially better. I would be very surprised if that entire transformation took longer than 20 years from today, and that's even without full-on artificial general intelligence, which is looking more and more likely as well on a roughly 20-year time frame. So it's very difficult to know where to even begin preparing for a transformation of that magnitude.
I'm honestly still, after all these weeks of talking about it, at a loss for how to even begin; other than "tense up and brace for impact", that's basically all I've got at this moment.

Maybe on that topic, let's come at it a little obliquely. The impetus for this discussion originally was how our institutions could adapt, and antecedent to institutions is culture more broadly. This change is clearly deep enough and broad enough that the adaptation has to happen at many levels: policy, the actual technology of AI safety, institutions need to adapt, there needs to be regulation, but it's also a huge cultural shift that's going to take place. And it's interesting, I think, to reflect on what's happening with effective altruism. I haven't paid that much attention to it; it's been in the news recently because of Will MacAskill's new book. Maybe you've paid more attention to it than I have, Adam. I've been aware of it for years, I sort of know what it's about, and I've read some of the articles written by Peter Singer and some of the other protagonists in the movement. I have my criticisms of it; I wouldn't call myself an effective altruist in the sense of aligning myself with the tribe, although I share the emphasis on the long term and other things that draw people to the movement. But it's clearly becoming a very influential movement, it commands a lot of capital in terms of donations, and it's also having a very big impact on AI safety. I don't know if that's something you're aware of, but I'll spell it out; I wasn't really clear on this until recently, when somebody whose position is funded by money that comes from effective altruism explained it to me. There have been a number of recent AGI startups with a focus on safety. Anthropic is one of them, spun out of OpenAI, I'm guessing over some internal conflict about how important safety is; I don't know the details. Anthropic has a lot of funding; I don't know what percentage of it comes from EA sources, and I don't have any inside knowledge, I'm just guessing some of it does. But there are a few organizations around, a number of new ones to do with large language models, that seem as though
somebody with relatively deep pockets has donated a bunch of money to people who are worried about AI safety, and this seems likely to continue. I'm not saying all of them are funded by this money, but I think it's involved to a significant degree. And because the EA people, for example Will MacAskill, talk seriously about AI safety, they're another voice besides Bostrom's and Stuart Russell's, the people who've been worrying about this for a long time, and they're a set of voices that are quite influential among many young people. So it seems like AI safety, and not just bias but existential safety, is becoming more of a mainstream topic, largely due to the influence of this philosophical movement, which is quite an interesting dynamic. I'm bringing that up partly because I find it interesting how movements like that may play a very important role in adaptation to disruption at this scale, given that it's deep enough to trigger significant social changes. I don't know what you think about all that.

Yeah, I wish I was more intimately familiar with the goings-on of the effective altruism movement. I had heard that MacAskill has a new book out; his name has come up on a few of the podcasts I'm aware of and occasionally listen to, although I honestly haven't been listening to very much recently, so it has just come across my general news radar. I've read a few of Peter Singer's books and have enjoyed his work for many years, 20 years or more now. It doesn't surprise me. I was not actually aware of a deep and long connection between the AI research community and the effective altruism movement, but it honestly doesn't surprise me. Effective altruism is basically evidence-based philanthropy, evidence-based charitable giving, and that appeals to a rationalist kind of personality, or to young people who are well-intentioned but not wedded to traditional institutions like religious traditions. It doesn't surprise me at all that they would look to something like the effective altruism movement and
see in it a way to participate in line with their own values, especially if those values are primarily rooted in science and rationality and so forth. In other words, it doesn't surprise me, even though I wasn't aware of those connections before you pointed them out. Shared values make for strange bedfellows sometimes, and so it really is interesting that if somebody is, for example, very passionate about effective altruism, they could learn about and become quite animated, or activated is a better word, about other issues that their fellow effective altruism enthusiasts are interested in. This is just how communities work: if you join a big community, you end up becoming aware of, and oftentimes activated to start engaging with and caring about, things that other members of that community are into. No surprise there. I'm thinking of places like LessWrong. How far back does it go now, 15 years, 20 years? That whole community online is basically a bit of a social platform, a long-standing forum; it reminds me of early Slashdot-style days and early Reddit days, that kind of thing. A lot of the conversation originally, and I don't know what it's like now, I haven't been on there in years, was quite interesting, about the risks of AI and so forth; I remember reading some really fun stuff on there. So if this community is influential among effective altruists, then yes, it brings that issue to the forefront of the awareness of people who might otherwise not be paying much attention to it. A fascinating cultural phenomenon, really. But I'd have to look into it more and think more carefully about it to see whether there's anything instructive we can learn there about how to prepare better, at the individual or the community or the entire social level, for this massive transformation that's coming. And I'm even hesitant now: I think this is bigger than a disruption, honestly. A disruption would be
something like disrupting a specific kind of product, a specific market segment, or even an entire industry, but this is so much bigger than that. Artificial intelligence is more than a disruption; it's a full-on evolutionary paradigm shift. It's in a different league, honestly. It may even be in a different league than fire and language and writing and the wheel, and certainly a bigger league than electricity. I don't know if anything compares to it, really. I honestly think it's comparable to things like the eukaryotic transformation, the shift from single-celled organisms to multicellularity; it's something at that level. Or maybe vertebrate evolution, or the colonization of the land by marine organisms; I guess the earliest plants were, what, 600 million years ago, no, more recent than that, 300 million years ago. It's something at that level of impact, I'm thinking more and more; I don't really know how to see it a different way. And so how do you prepare for something like that? I'm not opposed to trying to continue to think about this through the lens of disruption; I don't think that's necessarily a waste of time. But I do want to be very much on guard against onboarding assumptions from other disruptions, from the theory, and from things that have worked for smaller disruptions. I think we need to be very careful in assuming that this is going to look anything like, say, the disruption of film cameras by digital cameras, or the disruption of transportation by combustion engine vehicles. Those were a big deal in their way, certainly, but this is at such a different degree and scope of comprehensiveness and depth and breadth that I'm hesitant to make too many assumptions that what we think we know about technological change in other contexts applies all that much here. And I
think we could find out that we're very, very wrong to extrapolate what we've seen with other technological disruptions to this situation. In fact, most of my team feels this way; we've had some discussions about it, and it's one reason why we haven't put out anything on this topic yet. We're concerned that we'd make fools of ourselves if we tried to pretend that this is the same as other disruptions and that all of the usual thinking we bring to bear applies in this situation, because it probably doesn't.

Let me then describe some of the things I've paid attention to recently, and where I feel more quantitative analysis might play a significant role in the discussion. Let's say there are roughly two camps. Put aside the people who are completely skeptical about human-level artificial intelligence arriving anytime soon; let's accept, for the sake of what we're discussing, that it's coming, timelines may vary, but relatively soon, at most 20 years or something like that. Then, among the people who think it's coming soon, there are roughly two camps, because a lot of the risks come down to whether takeoff is fast or slow, relatively speaking. Fast would be the FOOM scenario: you turn on the machine, it gets just above human level, ten seconds later it's mail-ordering DNA that's going to make nanorobots, and thirty minutes after that it's game over for the human race. That's the extreme version of fast, the super-hard takeoff; FOOM, as they say. And then there's more of a slow takeoff. I've seen Sam Altman refer to his prediction as "short timeline, slow takeoff", and I think this is a good summary of a very broadly held position in tech circles, including, I think, the majority of people actually building these things; that's my impression from public statements and also private discussions. The defense of short timeline, slow takeoff looks something like this: capabilities are accelerating very quickly, but as far as we know, getting to superhuman level will be, maybe not purely, but at least to
a large degree, due to scale of data and compute, and there's a limit to how fast we can make GPUs. There's a finite number of GPUs in the world, we make finitely many per year, and as for spinning up fabs that manufacture the cutting-edge parts for the next generation of AI models, there are maybe three companies on Earth that can build those, and really perhaps only one, TSMC. It takes years to build a cutting-edge fab; you can't just multiply them by 10 or 100, and a large constraint there is engineers, not even materials: you need to train the engineers who can run these highly sensitive, complicated pieces of equipment. So they point to that, and to the scaling laws, and say: we're going to need a lot of compute, and it's going to take time to build that, even if we know we can do it. They'd say we're already in the takeoff; it's just a matter of building out the stuff, and that's going to take time. And if it's a slow takeoff, then maybe we have some period of at least years between knowing we can do it and being able to do it, and we'll be able to reason effectively about safety issues in that period. You could already say that's pretty foolhardy, to give yourself a window of just a few years to solve this incredibly difficult problem. But broadly speaking there are these two camps: people who think we may not have much time at all and need to get it sorted out in advance, more or less, and people who think that the same approach we've taken with the safety of many other engineering artifacts could apply, with sufficient resources, to this problem as well, and that therefore there's no need to get really worried about this going sideways. To put a name on one of the people making this argument: John Carmack had a long interview with Lex Fridman on his podcast a week or so ago, and that's basically his position. He's rich enough that he's bought one of these NVIDIA DGX Stations, it's like a quarter million dollars with a bunch of A100 GPUs, in his basement, and he's trying to train AGI himself. He reckons it'll take him five years on his own. So there you have the existence of John Carmack. Isn't he the guy who created Quake? Yeah, that's right, that guy, and I think he was into rockets for a
while. I think he's quite a polymath, and genuinely a pretty talented and bright guy. But it sounds like he may be making an assumption that I've seen others make, which is that AGI is probably something we already have the hardware to support, and we just have to crack a few more key ideas and principles and put them together. It's almost like a puzzle to be solved, as opposed to a mountain to be climbed. And I think those are two competing viewpoints on how AGI can be achieved. One is that we just have to continue hurling resources at this, compute resources and data resources and so forth, and the scaling laws as they have emerged have lent a lot of credence to that perspective on the challenge. And then the other, perhaps quite a bit older view, one that goes back several generations, is the idea that some minimal scale, some minimal amount of sheer processing power, is necessary, but that there are also a few secret sauces, key design elements, something that comes together. If you look at biology, you can get a sense that if the right pieces come together, suddenly you get these extraordinary capacities that we associate with human-level intelligence, capacities you don't see in other mammals, despite there not being a thousand times more compute in a human brain than in a dog brain, or something like that. So anybody who thinks they're going to go into their basement and toil away at this and crack the problem sounds to me like somebody in that camp of "oh, we're just missing a few key ideas, and I'm really smart, and maybe I'll get lucky and figure them out and put them together".

That's definitely his attitude, yeah.

I probably don't have a basis to judge one way or the other. It may be one or the other, or perhaps a combination of both: a couple of key architectural things happening in the human brain, for example, together with scale. The human prefrontal cortex is,
anyway, three times the size in humans that it is in chimpanzees, and much, much bigger than in cats and dogs, for example, so scale certainly had something to do with it in humans.

I think it's not implausible for Carmack to crack something, because the original Transformer paper was trained on one or two GPUs, I forget, certainly much less scale than he has access to now. So it may be that whatever key innovations are necessary, if any are necessary, can at least show promise on a system as big as the one he has, and would then immediately be taken up by many other people with a lot more resources.

Well, this is another situation where perhaps there's some middle ground between the two camps, the FOOM scenario and the slower takeoff. One way in which I can see some middle-ground thing happening is if a system starts to show promise. Say, for example, a system really starts to appear to be generally intelligent: it starts showing extremely strong and clear signs of sapience, of sentience, self-awareness, recursive modeling of self, and so forth, and it really starts having some of these features we associate with being self-aware. Presumably, if that were to happen, it would not necessarily happen on the largest supercomputer on the planet. I suppose it's not impossible that it could happen on a big one, but if we were to see signs like that, there would be headroom to acquire additional compute, basically additional hardware resources, even in a hardware-constrained world. We're not talking about using all of the available hardware in the world and then, if we achieve some human-level AGI, having to wait for the slow process of building more hardware. I think you're right that we are potentially hardware constrained, but the starting point is going to be, I think, well below the existing limit; do you see what I mean? So I don't necessarily see the FOOM scenario as completely compelling; I can think of a number of reasons why. There's some implausibility there, so it's
a lot like science fiction sort of stuff to me, Hollywood sci-fi sort of stuff. But having said that, I can easily imagine AGI looking very promising, "wow, suddenly we've got this thing", and then there being a path to very quickly, in a matter of hours or days or certainly weeks, quite dramatically increasing the compute resources available to that particular system, even in the hardware-constrained world we live in. Say you're on a one-exaflop machine and suddenly you realize you've got AGI running on it. It wouldn't necessarily be impossible to quite quickly scale that up to ten exaflops, if you thought you were going to win the entire world by having it, which I think big corporations and governments probably do think. And who knows whether that would have a FOOM effect or not; that's harder to predict. So I agree, in general, with the two ends of the spectrum, but I do see some room for middle ground in which, even if folks are right that it's not automatically going to be a hard FOOM-style takeoff, the simple dynamics, the human dynamics involved, could mean there's an abrupt acceleration, if we suddenly realize "oh, we've got this in hand, holy moly, let's really throw everything we've got at this all of a sudden". And I worry a lot about that, actually, because I'm not as sanguine as perhaps you are about who's actually working on these things. I strongly suspect there are more clandestine efforts going on than is widely recognized. I very strongly suspect that the US intelligence community and the US defense community, and those are not necessarily the same thing, are both working on this stuff, and I think they could do something really, really stupid if they happen to cross the line first. So that worries me quite a bit. I guess what I mean to say by all of this is that I don't take a whole lot of comfort in the idea that a FOOM scenario is not preordained. That's my big takeaway from all that.
Yeah, I agree. I suppose we might be guilty of sitting here sounding very convinced about these predictions while not really following through, or at least I'm not, with the second-order things that must therefore be the case. I think it goes without saying that 20 years is not enough time to sort out our existing inequality sufficiently to have any confidence that the outcome of this transition is in any way whatsoever democratic. So isn't this an existential threat to not only democracy but the broad, classically liberal outlook that, at least in principle, people are equal? And in that case, what should be done in order to prepare? If we're not talking about adapting existing institutions, what can be done to preserve the ideals that are currently embodied in those institutions? We don't want to end up in whatever the head of OpenAI or DeepMind thinks is a good system of government; right now they're probably pretty reasonable people, but still.

It's hard. Again, I'm honestly not sure I know where to start, or that I'm comfortable starting in any particular place, but I guess we can make some assumptions. You have to think through these things in terms of scenarios, and I think I mentioned last time that the doomsday scenarios are perhaps not that productive to talk about, because if they're unavoidable, or if there's not much we can do, well then, oh well. So talk about the scenarios where we maybe have some agency, "we" being human beings and people trying to think about this ahead of time. Okay, so make those assumptions and limit the discussion to just those scenarios. So you have an artificial general intelligence; and here again, our conversation hasn't drifted so much as zeroed in on general AI. Automation by itself is hugely disruptive: just narrow, non-general AI has the potential to be massively disruptive. A lot of labor, a lot of toil and drudgery, a hell of a lot of jobs in the global economy could be automated, in much the same way that driving cars presumably can be automated, and that
work done by machines, without the car necessarily being sentient and without it achieving AGI. So I'm a little more comfortable thinking about narrow AI and its impacts and how we might prepare for those. But for AGI, which is a very different kettle of fish altogether, the scenarios turn on how quickly something like this becomes significantly more intelligent than human beings, and then it's a question of what a superintelligent agent can do. I think we have to make at least some assumptions about that before we can start talking about what preparations we can and should make ahead of what that agent would do. We have to make decisions about what we're assuming the agent could do and might do, and construct scenarios around that, before we can even begin speaking sensibly, or thinking sensibly, about how to prepare. So suppose, for example, we're assuming an AGI that is just approximately as intelligent as a human being, or an AGI that's as intelligent as the very smartest, someone like Einstein, or whoever is widely regarded as the most intelligent person ever. If we're assuming this system is only going to be as intelligent as Einstein, well, not much is going to change, especially if it needs an exascale system. If you want a thousand people as smart as Einstein, you can go find those human beings right now and hire them; there are a lot of people out there, maybe not quite that smart, but pretty close, and that's a lot easier than spinning up a thousand exascale supercomputers. So I'm not all that concerned about AGI that's only as smart as the smartest human beings. What I am concerned about, and what I do think would be impactful, is a machine that's vastly, vastly superintelligent: much, much more intelligent than the very smartest human being, or even the very smartest collection of smart human beings. So, for example, much smarter than SpaceX, which is a collection of a lot of really bright people, or much smarter than the physics department at Harvard, those sorts of things. I think we have to... I mean, that's one property.
Right, on what they're like: in one of the other seminars we've looked at some papers that try to characterize properties that optimal agents have, that they should be Bayesian, for example, and you might use that to predict something they may or may not do, or how to control them. I find myself a little skeptical. I find those discussions interesting, but I'm not sure they're necessarily important to informing how we prepare. You can make a lot of deductions just on the basis of assuming you'll have systems that can do all the work that the Harvard physics department does; that's enough to make many predictions about how society will change, or how things need to be prepared for. I don't know that the details matter for that. They may matter for control problems, like how to keep the AI in a box.

Yeah. I keep coming back to my chief concern. Maybe I was overly influenced by Bostrom's book of a decade ago, Superintelligence, but I keep coming back to thinking that that is the much, much greater wild card, if not outright threat, here. If we're talking about systems that are effectively just competition for human beings, merely competition for the smartest human beings, then what we're talking about is not a whole lot different from the narrow-AI automation that I mentioned earlier. And that is huge, but it's more akin, I think, to the other disruptions that my team has looked at and that we've talked about. Yes, if you have a machine that can drive as well as a human being, it competes with drivers, and that's a huge problem and a huge opportunity; it's going to be a really, really big disruption, blah blah blah. And if you expand that to lots of occupations being outcompeted throughout the economy, okay, that is a big deal, a huge big deal, potentially bigger than the other disruptions that are also on the cards: energy, transportation, food, etc. But those all seem, for lack of a better word, normal to me. Normal disruptions,
not fundamentally revolutionary or paradigm-shifting. Whereas a machine that can do things that no human being can even remotely do, because it's superintelligent, where all we can do is stand back and look and say "geez, this is a god-like mind behaving in inscrutable ways": I still see that as the much bigger threat, and ideally the much bigger opportunity, because that's the machine that's going to either wipe us out with very little effort, or escape our control with very little effort, or solve all of our problems with very little effort, etc. So I think those are two different categories of outcomes from this technology, and I wouldn't want to lump them together. The one I'm comfortable thinking through basically as a regular or normal disruption, more or less; but the superintelligence stuff is really, really hard. That's the whole singularity business, and I honestly don't know how to think through it. I'm at a loss.

I'm tempted to just not think about it. Not that it's not important, but I could easily imagine a scenario in which we spend the next ten years worrying about how to prevent the extinction of the human race by a super-smart AI or something, and meanwhile we lose many of the things we value about our society, because there are five people on Earth who are multi-trillionaires and the rest of us have been robbed of any agency whatsoever. And that's a problem that's comparable to some of the problems we've faced in the past, say from the Industrial Revolution, or going back further, to the invention of gunpowder or something. It's a problem we can think about and maybe do something about, or at least be prepared for. The other problem, existential risk, is much thornier, and perhaps many smart people should think about it, but in practice we're doing nothing about the former problem. Very few people worry about it or talk about it, and it could easily be very destabilizing, in a way that it seems possible to prepare for. I'm not a nihilist; I am optimistic about the outcomes here, really. I want to be optimistic, and I think there are many optimistic scenarios, but
I don't necessarily think the personal time and energy I want to devote to this topic is best spent reasoning about how to stop AIs from doing the things we don't want them to do. I think we're plenty capable of doing things to ourselves, just with very smart narrow systems, that would be very problematic. So I increasingly see a big gap between these two sets of, you might call them, safety concerns. They're both important, but I do want to separate them a bit, because when they're combined, the existential-risk questions are intellectually interesting and philosophically interesting and technically interesting in a way that can stop people from paying sufficient attention to the more mundane but also important questions of how to safeguard our institutions, and some of our values, against the most extreme forms of power and wealth inequality that have existed in human history. Maybe that already describes our current condition, but we're about to test even more orders of magnitude there, with pretty high probability. So I want to know if there's anything we can do about that, or ways to think about it, if that's where we want to focus the conversation and our thinking.

I agree. It seems like there, at least, we have some ground to stand on. I'm also personally much more optimistic about those potential impacts. I am less worried about a dystopian sort of outcome where there's an enormous amount of wealth in just a few hands. In functional terms, in social terms, that just kind of sucks. It sucks that there are billionaires in the world; it does. It's unfair, it's unjust, it's an ugly situation aesthetically and socially. But functionally, I don't think it really makes that much of a difference. The billions of dollars that are in principle commandable, under the discretionary control of somebody who's a billionaire, don't translate so often directly into calamitous outcomes in the real world; it's surprisingly rare. There are places where you have mad dictators: Muammar Gaddafi was worth over 100 billion dollars, probably,
and Libya was an impoverished country, a very ugly and pretty dystopian situation, so these things do happen; Putin is perhaps another example, and these things are very ugly. But there are thousands of billionaires in the world, and most of them are not Muammar Gaddafi or Vladimir Putin. People like Bill Gates and Elon Musk are big targets, and so on, but to my mind it doesn't make that much difference who owns all of those Tesla shares, whether a thousand people or one person; they're just Tesla shares, and Tesla does what it does. In a lot of ways we're fortunate that what it's doing is quite positive in many ways, not in every way, but in many. So I'm a little less concerned about the scenarios where individuals become ultra-wealthy. We've certainly had scenarios like that in the past that were pretty ugly: kings who commanded enormous resources relative to the most impoverished of their subjects. Those situations existed throughout history, and where they persist today they are still ugly. But I don't really see how we backslide all that far into the hideousness of feudal or medieval life under tyrants unless violence becomes part of the picture; unless, say, Elon Musk can flip a switch and give himself an army of ten million Tesla Bots that are very difficult to stop purely in terms of violence, purely in terms of physicality. Short of a scenario like that, and I don't mean to dismiss it, I don't think outcomes where individuals obtain a huge amount of control over the use of force are very high probability. I think the use of force is going to remain largely monopolized by states, and because of that I'm not that worried. I've come to view capitalism and the wealth generated by nations... I don't know what the right word is; maybe I'm just getting more conservative in my old age, but I guess what I'm finding less
compelling as I get older is the argument that capitalism's actual, real-world outputs suck. I'm finding those arguments weak, and I'm finding the weight of the arguments for alternative modes of organizing society and production to be largely aesthetic, driven by aesthetic senses of justice and fairness and so forth. Those are fine, and it's not that none of them resonate with me, but I'm not terribly concerned about it. So then the question in my mind is this: even admitting that we may have a few very ugly ultra-wealthy people like Jeff Bezos and Elon Musk, for all their flaws, people who have too much control over certain markets and so forth, fine, but admitting that, what kind of world is going to emerge, and how do we prepare for it? What kind of world emerges when capital, in the form of machines, is able to do the majority of the tasks that human beings do today? What kind of world could we create, what kind of world do we want, and what kinds of worlds do we want to avoid? There I'm honestly really quite optimistic, because I don't see a whole lot of plausible pathways to truly dystopian scenarios. Let me give an example. You often hear about Judge Dredd style scenarios or Terminator scenarios, and I don't find those all that plausible. Suppose you had a hundred million humanoid Tesla Bots; say they succeed and you've got these Boston Dynamics style bipedal robots, tens of millions, a hundred million of them, walking around. What are they going to do? What use will they be put to? Are they really going to walk around all day in police uniforms, policing citizens? Is that plausible? Or is it much more plausible that whoever controls them, unless they're truly a madman like Muammar Gaddafi, is just going to have them do work?
They're just going to have them do the work most of us hate doing right now: mopping floors, flipping burgers, washing toilets, taking out the garbage, picking through the recycling at the recycling plants, fixing things when they break, building the stuff people want. If you had a hundred million robots, and in the most likely scenario it's states that control them, would you dedicate that capacity overwhelmingly to destructive ends, to dystopian policing and brutalizing and oppressing your citizens, or would you mostly set them to productive tasks? In my mind, mostly you'd set them to productive tasks. Maybe it's ugly that some hundreds of thousands of them might end up as robotic police officers, though personally I don't actually find that all that dystopian, and I can talk about some reasons why. But I'm fundamentally optimistic, because I don't see the truly dystopian scenarios as all that plausible. I think it's very likely that once we start building tons and tons of these robots, we're just going to put them to use.

Yeah, I guess I agree with that; that's just not where my principal concern lies. I guess we should wrap up, and maybe this can be a topic we pick up next time, but what I'm mostly worried about is this. The university has a union, and we argue with management about what should and should not be happening at the university: people should get paid for the work they do, and maybe we should get paid a little bit more. There are these kinds of arguments throughout society, and a lot of them come down to, basically, the threat to walk: here's my labor, it's worth something, we negotiate over it, and what I can do is withdraw it. That is very deeply baked into our societies, in ways that we obfuscate because it's not nice. It does undermine cooperation and the spirit of fraternity within the university to have these disputes between the administrators and the faculty, and if there are too many of them it weakens the morale of the institution. So we have these conversations in ways that are often very obfuscated, and discussions between managers and employees are generally one-on-one and kind
of hush-hush. There are reasons for that; it's not something we do completely out in the open, and for good reason, I think. But for that same reason, I think we underestimate the degree to which a lot of our institutions and interactions are predicated not only on economic value, which is the obvious part, but on interpersonal value too. It could easily be the case that there are soon AIs that are simply better at listening to you than your spouse, that you've had in your pocket since you were a kid, that know everything about you, including the things you worry about. What do we do when some of our closest interpersonal relationships are with these systems? I'm not saying that's a good idea, but many people already have relationships like that, with virtual girlfriends and so on. It's funny in a way, but it's also rather troubling to think about. There are millions of lonely people in China who are kind of happy to have, I forget the name of it, I read an article recently, some chatbot that is pretty pathetic in terms of its capabilities. But it's a sign of not only economic utility but also interpersonal utility being disrupted by these systems. It's hard to imagine right now what it will be like to have an AI that has been with you your entire life since you were a child, that knows a lot about you, and that acts as if you are its only concern and really cares about you, or at least you perceive that it cares about you. That's not far-fetched at all given what we can do currently, and I think it will undermine a lot of the ground we currently place our relationships on, not only work relationships but personal ones. So that's more what I'm worried about, rather than police bots.

That's a really good point. I had been thinking that it's AGI that throws open the door to that sort of thing. Have you seen the movie Her? I was thinking that for machines to give human beings truly satisfying, rich, and full relationships, artificial general intelligence might be a precondition, a prerequisite. But I think you're right; I've been making that assumption, and it's probably false, given
what we already see: people have relationships with much simpler systems and become very attached to them. And I suppose it should be no surprise; children become very attached to inanimate stuffed animals, and we care for virtual pets in video game environments. So it is plausible that even without artificial general intelligence there could be social and psychological disruptions from other quasi-intelligent agents that we spend a lot of time interacting with. It's certainly possible that some of this is already happening and we're just not aware of it; the influence that bots have on social media, for example, or social media itself.

You're right, yeah. Our natural environment doesn't include entities that will just pay infinite attention to us, but when you put a potentially infinite source of attention into someone's psychological environment, it clearly disrupts their relationships with other people, and many people just can't handle that. Some people end up socially disabled by focusing too much on the feedback signals they get from social media, which is really just a proxy; people relate to it as though somebody in their immediate environment were super interested in them. So we can see it already happening; it's just that rather than an AI, it's a kind of weighted sum of all the other human beings on the social network. So I think there are under-anticipated consequences of deploying these systems in the many places they touch ordinary life, and they will have profound, perhaps disaggregating, effects on us. Maybe if you have a final comment, let's wrap up with that.

Yeah, I'm interested to carry on the conversation along these lines next time: the idea of what unhealthy outcomes, social and psychological, we might want to guard against, both as individuals and as entire societies, in anticipation of technologies emerging that effectively prey upon some of our psychological and social vulnerabilities. That's probably a very good thing to think about, and it is, I think