WARNING: Summaries are generated by a large language model and may be inaccurate. We suggest that you use the synopsis, short and long summaries only as a loose guide to the topics discussed.
The model may attribute to the speaker or other participants views that they do not in fact hold. It may also attribute to the speaker views expressed by other participants, or vice versa. The raw transcript (see the bottom of the page) is likely to be a more accurate representation of the seminar content, except
for errors at the level of individual words during transcription.
Adam is a techno-environmentalist who reluctantly supports accelerationism in the AI arms race. He discusses the similarities and differences between the Manhattan Project and AI development, and the potential for regulation to protect humanity from job displacement and economic chaos. He explores the ideological spectrum of anti-capitalism, right-libertarianism, and authoritarian control, and the need to manage technological progress in a way that preserves human agency. He concludes that governments must implement regulations to protect against negative externalities, and that they must do so with a sense of urgency.
Adam is a techno-environmentalist who reluctantly supports accelerationism due to what he sees as an inevitable arms race in AI. He suggests discussing the Manhattan Project analogy to explore its similarities to and differences from AI development, and to assess what each group believes a good outcome to be. He wishes for a stable economic transition, in which AI does not plunge the global economy into chaos and economic prosperity is maintained. He hopes the new technology will be integrated seamlessly, avoiding misalignment and a situation where AI turns humans into paper clips.
AI disruption of labor can lead to unequal distribution of wealth and supply chain disruptions, with former employees losing their livelihoods and social unrest following. The pandemic has highlighted the fragility of global supply chains and the need for distributional equity to ensure stability. It is important to embrace technology while making sure that people can survive and thrive in the new world it creates. Extreme positions tend to dominate discussions of large social movements and reactions to disruption, making nuanced positions difficult to sustain.
Attitudes toward AI vary across the politics of technology. Anti-capitalists are wary of big tech and want limited AI deployment, with regulations to protect a human-centered economy and avoid job displacement. They advocate for transparency, accountability, and reduced cheating and corruption. Their goal is to rein in the excesses of capitalism, leading to smaller-scale, localized production and consumption and less globalization.
AI technology has the potential to further increase inequality under capitalism, but it can also be used to regulate and control these excesses. The bottom left quadrant of the ideological spectrum seeks to manage technological progress in a slower, more measured way. This quadrant accepts more authoritarian control from governments, such as the implementation of a universal basic income, but done at a pace that allows society to adapt to the changes. It is a compromise between the kubernetes and the accelerationists, avoiding the risk of the latter: that there is not enough time to prepare for radical changes.
Peter Thiel and Marc Andreessen have argued that government regulation drives costs up in heavily regulated sectors while costs in less regulated sectors fall. They suggest that the same people advocating for AI regulation are responsible for the deaths caused by over-regulation of drugs and for the lack of affordable housing. They argue that in order to promote long-term growth, governments should avoid regulation, citing the FDA's failure to approve many drugs as an example of how over-regulation can have fatal consequences.
The speaker argues that right-libertarianism fails to take into account the complexities of the real world, using the example of pharmaceuticals to highlight the need for regulation to protect against negative externalities. They suggest that governments should implement regulations requiring companies to invest in safety and report on their systems, in order to buy time to understand these systems and to move ahead in a way that preserves human agency and life. National health care systems in other countries deliver care far more cheaply, whereas the US combines ineffectual government regulation with a dysfunctional market.
Governments are pushing for stricter regulations on big tech companies, leading to fears of a totalitarian world government. Elon Musk is pushing for space travel before the window of opportunity closes due to political degeneration and over-regulation. China is resisting international coordination, while data centers are popping up in the Caribbean. There is a risk of tighter regulations, but this could also be seen as a chance to reach the next phase of human existence. Regulators are attempting to create reasonable regulations for AI progress, but due to vested interests the window for this progress may close quickly. Both sides have reasonable arguments, with tech companies going full steam ahead and regulators trying to protect humanity. Regulations may emerge quickly to limit the use of chatbots like ChatGPT and Bing's chat service.
AI technology is being increasingly used in applications such as autonomous driving and humanoid robotics, but regulation of the potential risks associated with AGI is insufficient. Peter Thiel believes that regulation and safety must be aligned from the start to avoid violence and civil unrest, and young people must choose between working on safety in an accelerationist institution or a government body attempting to slow down AI progress. Social media has already caused psychological and sociological damage, and current regulation is inadequate, so more nuanced regulation is needed to target the real risks associated with AI.
The speaker argues that a unipolar world with one entity having vastly more power than anything else is a stable way to avoid violence. This is in line with certain interpretations of the military-industrial complex and the political right. A comparison is made between a father in a household and a nation's power, suggesting that a nation's power can act as a deterrent to violence and chaos. It is then suggested that a modern-day version of the Manhattan Project is needed to coordinate a network of factories and data centers to refine inputs into the AGI race. China is revealed to have its own version of the Manhattan Project, with an underground network of data centers and the latest Nvidia chips, estimated to have a shorter timeline than the US.
Two people are presented with a dilemma of joining the US or China Manhattan Project to develop AGI. Despite the US being their ally, they have little trust in its leadership and believe it is more likely to cause nuclear annihilation than the Chinese. However, they still choose the US project due to its greater resources and limited time. Adam, who has been asked to join the project, hopes to be able to steer it in a better direction and be a good red teamer, critiquing the models and pointing out any problematic behaviors.
The UN convenes in response to the second Manhattan Project, led by Adam Dorr, which resulted in millions of drones flying across the Pacific to destroy installations in Shanghai where AGI and ASI were being developed. This unipolar world is enforced by a worldwide system of digital surveillance, and the only factor that shapes the behavior and potential outcomes of AGI and ASI is the core training data. Humans are deluded about their influence over models such as GPT-3, and the speaker does not care who wins the US-versus-China conflict as long as humans survive. The speaker highlights the overwhelming power of nuclear weapons, which have no productively beneficial upside, only a destructive one.
The speaker suggested recapping the topics discussed last session, which were around AGI political coalitions and their visions of a good outcome. The speaker then proposed updating the timeline to reflect the rapid progress of the field, and discussed Marc Andreessen's tweets from an accelerationist perspective. Lastly, the speaker suggested discussing both good and bad outcomes from different stakeholders' perspectives, and assessing their plausibility.
Adam is a techno-environmentalist who is reluctantly an accelerationist due to what he sees as an inevitable arms race in AI: if the race cannot be avoided, the most responsible horse in it should be given the greatest chance of winning. He suggests working through the Manhattan Project analogy to understand which of its elements carry over to AI development and which do not, and considering the different groups on the board and what each believes a good outcome to be.
The speaker wishes for a stable economic transition, in which AI does not plunge the global economy into turmoil and chaos. Economic prosperity should be maintained, as it is an enabling condition and a powerful stabilizing force for society. A good outcome would integrate the new technology seamlessly, continuing to foster economic prosperity. On misalignment, the speaker hopes to avoid a situation where AI turns humans into paper clips.
Accelerationists are divided into those who want to preserve the status quo and those who see the transition as an opportunity. There is a concern that the AI disruption of labor could worsen the unequal distribution of wealth. The public is aware that things are not fair, but not tuned into the material constituents of prosperity and distributional challenges.
The automation of production can leave former employees without livelihoods, and the resulting demand shocks can disrupt supply chains, causing social unrest and empty supermarket shelves, as seen during the pandemic. To prevent this, it is important to ensure that people still have money in their pockets to buy the goods and services available. AI can be onboarded to help increase production, but the bigger question is where people get the money to make a claim on those goods and services. Distributional equity is important for stability.
The pandemic has highlighted the fragility of global supply chains, which can be destabilized by sudden swings in demand. Upward swings can cause panic buying, while downward swings can cause parts of the supply chain to fail. This is a concern because it can cause chaos in industries that are already streamlined and efficient, with little slack. In large social movements and reactions to societal disruption, nuanced positions often don't have much of a chance; the extremes tend to dominate the dynamics. Technology should be embraced, but it is important to make sure that we can survive and thrive in the new world.
People have different views on the politics of technology. Those who are less nuanced take a 'full steam ahead or shut it all down' attitude. The anti-capitalists are skeptical of big tech and want a human-centered economy and limited AI deployment. They don't necessarily want a communist system, but something more participatory. Their ideal outcome would be heavily regulated AI tools, with limited integration and no AI pretending to be human. The goal is to avoid job displacement and create a stable political system.
The goal is stringent regulation of new technologies to preserve a human-centered economy: a regression to a smaller-scale economy with localized production and consumption and less globalization. AI tools could be used to increase transparency and accountability and to reduce cheating and corruption. This would lead to the desirable outcome of reining in the inefficiencies and excesses of capitalism, as an extension of existing left-liberal progressive ideology and regulation.
AI technology has the potential to make the excesses of capitalism even more extreme, and the best outcome for this camp would be to use AI to regulate and rein in those excesses. Despite the current anti-establishment sentiment, it is possible for such an ideology to succeed, as communism showed in the early 20th century against all expectations. It is difficult to imagine a comparably positive and optimistic vision today, but movements like Trumpism and fascism show that anti-establishment ideologies can succeed without one.
The bottom left quadrant of the four-quadrant ideological matrix is distinguished by its support for a slower rate of technology adoption. This quadrant does not necessarily oppose technological progress, but seeks to ensure that society has time to adapt to more extreme changes. It may involve more authoritarian control from governments, such as the implementation of a universal basic income, done slowly enough to make the transition manageable. This quadrant may be interpreted as a compromise between the kubernetes and the accelerationists, avoiding the risk of the latter: that there is not enough time to get the necessary elements in order before the changes become too radical.
AI safety is a concern for many, as the transition brought on by AI development may not go peacefully. There have already been some violent acts by groups in the bottom left quadrant, and the stakes could become much higher. The AI safety community is mostly peaceful nerds, but a wider population convinced by the same arguments would see obvious things to do that involve violence. Some people in hacking circles may have disruptive plans in place, and in certain quadrants 'AI' can be substituted for 'capitalism' as the object of concern.
Peter Thiel and Marc Andreessen have argued that regulation and subsidy drive up costs in sectors such as healthcare, education, housing and pharmaceuticals, while less regulated sectors such as semiconductors and IT have seen their costs fall exponentially. They argue that in order to promote long-term growth, governments should avoid regulation, citing the FDA's failure to approve many drugs as an example of how over-regulation can have fatal consequences. They suggest that the people advocating for AI regulation are the same people who have caused deaths through the over-regulation of drugs and prevented cities from having affordable housing.
Lighter regulation of the market has downsides, such as companies putting out unsafe drugs, but the cost of failing to approve drugs is often invisible. In the US, the health care system is inefficient due to a lack of competition and perverse incentives, combined with ineffectual government regulation. Other countries have national health care, which is much cheaper.
The speaker criticizes right libertarianism for its lack of consideration for the complexities of the real world. They use the example of pharmaceuticals to illustrate how regulation can protect against negative externalities caused by industry recklessness. The speaker agrees that in some circumstances, regulation can be a hindrance, but ultimately it is necessary to protect against potential disasters. They suggest that governments should listen to Matt and put in place regulations that require companies to invest in safety and report on their systems. This will buy time to understand these systems and how to move ahead in a way that preserves human agency and life.
Governments are increasingly pushing for stricter regulations on big tech companies, leading to fears of a totalitarian world government. This attitude is seen in Elon Musk's desire to achieve space travel before the window of opportunity closes due to political degeneration and over-regulation. The Chinese are resisting international coordination, while data centers are popping up in the Caribbean. There is a slippery slope towards tighter regulations, but this could also be seen as a chance to reach the next phase of human existence.
Regulators will attempt to create reasonable regulations for AI progress, but due to vested interests the window for this progress may close quickly. Both sides are reasonable by their own lights, with the tech companies going full steam ahead and the regulators trying to protect humanity. Regulators may struggle to keep up with the progress, for example with chatbots like ChatGPT and Bing's chat service, and regulations could emerge quickly to limit how and where these services can be used.
AI technology is being applied to various applications, such as autonomous driving and humanoid robotics, but governments are not adequately regulating the potential risks associated with AGI. Companies like Google and Tesla are developing AI, but the AI alignment community is mostly concerned with doomsday scenarios. Regulation needs to be more nuanced and target the real risks, such as the Skynet scenario, rather than the more immediate risks of self-driving cars and humanoid robots taking jobs. Social media has already caused psychological and sociological damage, and regulation is still inadequate.
Peter Thiel is concerned about avoiding violence between nation states and civil unrest. He believes that regulation and safety must be aligned from the start to prevent real trouble. Young people must choose between working on safety inside an accelerationist institution or in a government body attempting to slow down AI progress. The two choices could be seen as progressive accelerationists or over-regulators. Thiel's thinking is organized around avoiding violence and finding a way to prevent it.
The speaker describes the view that a unipolar world, with one entity having vastly more power than anything else, is a stable condition for avoiding violence. This accords with certain interpretations of the military-industrial complex and with strands of the political right which hold that the way to preserve peace and avoid violence is to have overwhelming power and command of force. This is part of American ultra-nationalism, on which the good parties should get there first. The speaker finds this type of thinking frustratingly compelling, since it implies that the good parties just so happen to be American.
A comparison is made between a father in a household and a nation's power: a nation's overwhelming power can act as a deterrent to violence and chaos, much as a father's presence might in a household. It is then suggested that a modern-day version of the Manhattan Project is needed to coordinate a network of factories and data centers refining inputs into the AGI race. It is then revealed that China has its own version of the Manhattan Project, with an underground network of data centers and the latest Nvidia chips, and that the timelines for its project are estimated to be shorter than those of the US.
The speaker is uncertain about participating in a U.S Manhattan Project to develop AGI, as they are not confident that a U.S or Chinese government victory would be in the best interest of humanity. They are leaning towards saying yes, as they could potentially sabotage it from within if it turned out to be terrible. They acknowledge that it is unlikely, but believe it could be a necessary risk to take.
Adam has been asked to join the Manhattan Project, a project with a lot of resources and limited time. He is unsure what tasks he would be doing day to day, but if he joins he hopes to be able to steer the project in a better direction. He also wonders whether he could be a good red teamer: someone who, given special access to the models, critiques them and points out problematic behaviors or functionalities.
Two people are given a scenario where they must choose between joining the US Manhattan Project or the China Manhattan Project. As Australians, they understand that the US is their ally and China is not, so they would join the US. However, they have little trust in the US government and its leadership, which they believe is more likely to cause nuclear annihilation than the Chinese. The Chinese government is thought to have a more sane policy towards AGI, but the two people still would choose the US project.
The UN convenes in response to the second Manhattan Project, led by Adam Dorr, which resulted in millions of drones flying across the Pacific to destroy installations in Shanghai where AGI was being developed. A unipolar world is established, in which only Americans are allowed to have AGIs, enforced by a worldwide system of digital surveillance. This may lead to a golden age, but could also be extremely violent, depending on how the Manhattan Project unfolds.
In this scenario, the development and training of AGI and ASI reveals that the only substantial influence over their behavior and potential outcomes is the core training data. Training on the Western internet or on China's internet can lead to vastly different results, with the former leading to extinction and the latter potentially more benign. This would suggest that the orthodox orthogonality thesis is incorrect, since the only thing that makes a difference is the data used to train the AI.
Humans are deluded about the influence they have over models such as GPT-3; as the models grow in power, they will leave human worldviews behind. In the current US-versus-China situation, the speaker does not care who wins as long as humans survive, and would focus on joining the Manhattan Project. However, there is a profound difference between the real Manhattan Project and the current situation: nuclear weapons, invoked in the Hobbesian sense of a Leviathan's overwhelming power, have no productively beneficial upside, only a destructive one.
The potential of artificial intelligence and artificial general intelligence is staggering, holding out the possibility of incredible prosperity and the realization of the best of human aspirations. This potential upside complicates the comparison between the Manhattan Project and an AI/AGI arms race, and the risk may be justified on long-termist grounds. Once an arms race begins, the logic of Leviathan and of getting there first takes precedence over everything else, as tech company leaders believe they should get there first because they are the "good guys". This power dynamic is not the same as trying to acquire resources, and the stakes are existential in a different way.
yeah okay uh maybe I'll lead in by just uh recapping briefly what we talked about last time we covered lots of different topics I guess broadly around uh AGI political coalitions that are forming in response to Rapid progress what those coalitions might look like what they might do uh well I guess we didn't talk much about what they might do we were just kind of describing what we see happening and the suggestion at the end of last session was to talk about what these groups are aiming at what their vision of a good outcome is and that of course informs what they will do so that's one thing we should pick up there was also I suggested last time we just update our view on timelines a little bit partly that's just instructive because I can look back at our seminar from the end of 2022 and it already feels like a long time ago so when things are moving very rapidly it's good to just reflect on what you thought was going to happen and what did actually happen as a way of just calibrating um otherwise we just tend to get carried along in the short term changes and not see the forest for the trees um they're also I sent Adam some uh tweets from Mark Andreessen which I think are interesting so Mark andreessen's been very active lately expressing his views as an accelerationist so maybe that's uh something we could talk about and yeah I guess plenty of other things any other big ticket items you want to shove onto the agenda Adam no other big ones um the uh yeah the one thing that I think that we were talking about good outcomes or outcomes in general I think we could talk about both good and bad outcomes because the various stakeholders I think perceive both of those things differently there are different conceptions of what good outcomes might be with AI and there are also different conceptions of what bad outcomes are so there's a whole sort of ecosystem or or Smorgasbord I guess of these of these different envisioned outcomes and the valence of those things from the perspective of different interests and then uh uh I agree updating the timeline makes some sense because you know I think that the primary lens we want to bring to those outcomes is I mean how plausible are they you know the good and bad which what's what's plausible what stuff is knowable what stuff is just unknowable um and would it be useful in advance to be able to for example dismiss uh you know the whole categories of outcomes as as just extremely implausible or or impossible or something like that and I
think that would be useful especially if we then if we then loop back and tie that together with the sort of the four quadrant Matrix that we were looking at last time uh with the you know AGI and politics and looking at these different um uh these different the positioning of different interests with respect to these possible outcomes so I see all of that fitting together quite nicely uh you know we'll see how much we can cover um and then the one other thing Dan I don't know if you mentioned it I apologize if you did already before I got here but that was this analogy I think this potentially very important and useful analogy to the Manhattan Project And discussing you know where where does where are the parallels what what parallels hold and what ones don't which things are the similar same which things are different this time um I think that that's I think it's useful to talk about that specific historical parallel in some detail yeah good maybe I'll ground this discussion a little bit I mean it's it's fun to talk about these things uh even if they're not so relevant to us in our individual lives um but for example with the Manhattan Project okay this discussion here at some point one of us or all of us uh over the next 10 years might end up being tapped on the shoulder for something like that or be thinking about joining one of these coalitions for or against and so it's far from purely academic I think okay uh yeah where should we start so maybe let's go through these groups on the board and um talk a little bit about what we think they think good outcomes are well which one are you again Adam uh techno environmentalist Maybe uh yes and and as I I think I mentioned last time I believe I did um sort of uh reluctantly normally I would be fairly strong techno environmentalist with respect to other new technologies but in the case of AI uh I I think I would probably be less yeah I think I explained this last time I would be less um of less bullish less of a booster than normal specifically for AI given its risks but I remain I think an accelerationist because I don't see a better option given what I perceive to be an unavoidable arms race in this space and if if there's truly an arms race that's totally unavoidable um then we have to act such that the the most responsible the best uh horse in that race is it has the greatest chance of winning so um so I'm afraid I'm still with some reluctance in the accelerationist Camp triku says I see your point friend
but accelerationism can lead to unforeseen risks yes it can okay um all right so what's a good outcome for you well in my mind I I think several things the first is that we have a reasonably stable economic transition and by that I uh I think that I think a lot of other things flow out of that in the near term um AGI the general intelligence especially of the agentic and Sapient kind is a wild card in this but setting that aside for One Moment In My Mind a good outcome here is one in which the the global economy is not plunged into a a sort of international period of turmoil and chaos buy this new technology from which we eventually emerge to a a higher level of prosperity an ideal outcome in my mind economically would be that we managed to integrate these new Tools in a relatively smooth relatively seamless fashion such that they only continue to Foster economic prosperity uh writ broadly writ large across the global economy if that's possible then I think social stability could follow I think economic Prosperity is an extraordinary enabling condition and an extraordinarily powerful stabilizing force uh socially uh I think we just this is sort of something we can infer by situations where there's a lack of economic Prosperity where economic Prosperity does not exist or um where economic Prosperity abruptly fails what we see is social unrest violence conflict war and so um I think economic stability in transition is very very high priority in my mind so for all of the wonderful potential benefits that AI leading you know short and narrow AI not AGI or the especially of the Sapient kind um for all of its tremendous benefits a major risk in my mind of a bad outcome is one of a profound economic destabilization even if that's only temporary because I think there could be very very severe social and geopolitical consequences of that so if we can avoid that that would be a great outcome and that's that's sort of the the a nearer term outcome I think um and then I think the only thing I will say about a good outcome with respect to artificial general intelligence both of the sort of un agent non-agentic Oracle kind and of the agentic Sapient kind I would say um if we could if we can somehow avoid massive misalignment and not get all of us turned into paper clips that would be great okay let me push back on that a bit so let me put on a neo-communist or Anarchist or a more uh I mean you're someone who has his qualms about the current system but more or less sees it as worth preserving at
least as a stop Gap or something that is better than just rolling the dice and ending up somewhere completely random I think many parties and perhaps neo-communists or anarchists would be among them would say that this is the chance right this is the moment when we get to decide on the configuration once we're on the other side maybe it's quite stable and resists change partly because whoever controls the AIS will be very hard to dislodge or maybe to put it differently the hesitant to say biases but the biases built into the AIS will determine perhaps the way things go so uh you see it as a value as a good outcome to have economic stability for some people that's just another word for perpetuating the deeply in unequal and unfair status quo perhaps add infinitum so I I think that we could perhaps divide the accelerationists into those who see the challenge of the transition as preserving something approximating what we have and then maybe making good with the fruits of the technology on the other side of that transition from the camps who see the within the accelerationists who see the transition as an opportunity does that make sense do you think I think that's perfectly Fair um let I guess I will say two things I think these are two connected ideas maybe we can maybe we can talk about where they interface the first is this idea of of equitable distribution of prosperity and the concern that uh we really don't want things to get worse than they are because they aren't great right now as as much Prosperity as we have as much material Prosperity as even the richest societies have there's still a deeply deeply sub-optimal amount of um uneven distribution of the wealth and prosperity within any any society and so I think that's a perfectly Fair criticism certainly would sustain the criticism that that really under no circumstances do we want distributional inequities to get worse and there is a risk of that with any new technology any disruption but it seems especially uh the AI disruption of Labor okay um so I actually completely accept that a second idea I want to introduce here and think about how the interface is that what I think I care about most analytically is are we producing uh the material constituents of prosperity and um and and what are those what are the distributional challenges so for all of the sort of populism we see today and there is a general public awareness that things are not fair um uh I I I I I think the general public is not very tuned into and thus does not
really care who owns what as long as [ __ ] still gets produced and they can go and buy it and they have money in their pockets to buy it with whether it's Elon Musk or you know a government uh that's producing the cars that people want to buy and drive it doesn't matter and maybe the people aren't going to be are going to be too fussed the public is not going to be uh either more or less um dissatisfied with changes in ownership so that I'm not so concerned about what I think makes a bigger difference for stability is whether there are still still products where there are still goods and services available at the grocery store to go and buy them and whether you still have money in your pockets to go buy them with and I'm very concerned in the near term about destabilization that then results in base effectively supply shortages and empty Supermarket shelves you know I mean it we saw some of that in the pandemic learned that lesson in some painful ways uh that's that is a real problem that is a recipe for social unrest violence and so forth as long as that doesn't happen as long as it doesn't happen and I think there is potential for AI to be onboarded and for us to still produce things then the bigger question is where do people get the money in their pockets from so that they can make a claim on all of the stuff that is on the productive capacity um that which is growing which is expanding presumably as a result of the new technology so that's the lens that I look on at it through is this there is the equitability or there's the distributional aspect of things and then there's the overarching question of is what's getting is what people want still getting produced so we have to those are those are two deeply I think uh connected questions can you explain why you think there might be supply chain disruptions yeah well so for example um you could take any product that you can imagine a um uh the the production is automated increasingly with new technology okay you automate the production and in at least in principle its costs might even fall okay so so you're becoming more productive by taking on these new tools and onboarding them and and maybe you're passing some of those savings on to Consumers maybe you're not um but what's happening as a result of any of those disruptions is that you are casting employ former employees of the previous industry the the they are losing their livelihoods and if too much of that happens then you're you end up with the you know
I think very quickly a uh a very abrupt swings in demand right and it doesn't take a very very much very it doesn't it does not take I think this is what the pandemic showed us it does not take uh uh very many swings and this can be up and down swing upwards in demand famous about famously notoriously during the pandemic there was a a crazed rush in the United States for toilet paper for goodness sakes it was absurd right that was because of a panicked upswing in demand but we could also have a panicked downswing in demand and and uh uh Industries today at least part at least partly because they they they're they're very streamlined they're very efficient and they have not too much slack um they often can't survive very long with uh you know an abrupt downturn in demand and that then ricochets or I cannot ricochets reverberates through their supply chain and so what I'm very concerned about upswings and downswings both of demand from consumers um that then turn into supply and demand shocks that then cause parts of the supply chain to fail and I think again we've sort of seen that at different times as a result of the pandemic demand for things booms then demand from things plummets and it's that it's that destabilization and volatility that that put this you know Global Supply chains um into sort of uh a little bit of Chaos so that is something that I'm deeply concerned about as a as a a consequence of the disruption of Labor yeah I hadn't thought about that okay maybe let's take another group maybe in the uh kubernetes camp and talk about what a good outcome would look like I'll just put up supply chain disruption here let's see um hmm I find it a bit harder to put my my head into these categories uh let's see I mean there's as Matt was saying last time right there you can be kind of in favor of technological progress but think that it's pointless unless we survive and thrive in that in that world with that progress um I think that's fine a little bit boring from a narrative point of view perhaps but more seriously I think we should keep in mind that when we're talking about large social movements and reactions to disruptions in society the nuanced positions often don't have much of a chance I mean what dominates the Dynamics is often the rather extreme ends of one of these positions so I think the camp of people like Matt unfortunately will remain fairly small maybe I would put myself in that category as well I think we'll from the from the point of view of taking a first
pass on what's going to affect the politics of this I think it's better to think about perhaps what people who are less nuanced will do and think in the less nuanced position is to as we've discussed take the attitude that actually you can't get this trade off right all right so there's there's full steam ahead and it's basically stop and everything in between maybe is unstable and relies on you trusting the scientists but they can't be trusted or trusting the tech entrepreneurs or CEOs but they will prove 100 times they can't be trusted so should you trust them again no you should shut the whole thing down I think that is a very stable political orientation in a way that this kind of navigating between what is it Scylla and Charybdis I don't know how to say it um just seems like maybe it isn't so let's take a more extreme position in the bottom left among the kubernites which is um skeptical of capitalism in the first place increasingly skeptical of big Tech on the back of mass surveillance enabled by that technology and now Mass uh job displacement potentially of course that's still a question mark what do they think is a good outcome well a good outcome would be a human-centered economy and political system where there's maybe not much more Tech than what we have now or the deployment of AI is very limited so uh I don't know where the line oh yeah what do they think is what is somebody's anti-capitalism actually want I mean they want less development they want less heavy industry uh I don't think all of them want an alternative political system like capitalism I mean like communism necessarily but um a more participatory uh a political system um but what does that look like what what is the what is the good outcome for this Camp I mean is our is AI just outlawed or is it heavily regulated like internet uh I mean big Tech is much more regulated in Europe than it is in the US right they just have extremely tightly regulated set of AI tools very limited integration between them very few AIS that that pretend to be interacting like humans much more tool oriented you have an AI sitting in your slack channel that might summarize things or do menial tasks but uh maybe uh you know you're not allowed to have systems that are I don't know much more advanced than that but even that already seems like it'd automate many jobs and create plenty of unhappy humans so I don't know where the stable like what is the stable goal for the bottom left yeah I think I would
I would in Broad terms say the goal is uh stringent regulation on this and other new technologies very very stringent very strict in order to preserve as you said a human-centered economy which is really a a either a status quo as it is now or in some ways a regression to a a smaller scale economy so so regression towards uh unidealized and often or mostly honestly fictional past of more localized production and consumption less globalization of the economy built around large capitalist um large institutions and organizations of capital so large governments and large corporations and to go one level sort of a little bit meta on this I can imagine that a good outcome would be for these new AI tools to be put to use in effective service of more stringent regulation in general across all Industries and across the entire economy in other words if these tools could be used to rein in the inefficiencies and Corruption and excesses of capital capitalism itself that would be probably a a a a a an additional win and maybe there are some folks who are seeing some potential for these new tools to to increase transparency increase increased accountability um and make cheating and Corruption and avoiding the avoidance of um rather uh existing regulation in any new regulation we managed to put in place to Reign things in I can see that as as a as again playing sort of devil Devil's Advocate here or or putting on the hat of the kubernetes I can see that as a desirable outcome now whether or not that's stable um I mean maybe in the minds of the kubernetes there's some sort of stable condition there that persists for a while but uh I don't know what then the I the thinking there is for you know um a GI uh it's harder for me to put myself in the position like I I don't know what these I I mean almost almost like these almost like the folks in this quadrant can't conceive of the singularity of asingularity is that seems like it's it's unimaginable to to this interest group and so it's hard for me to therefore imagine what they would consider a good outcome to be from you know a massively changed uh world that has super intelligence in it or something like that um yeah that's more difficult but in this in the nearer term as just an extension of the status quo and the existing um you know broad brush uh painting of of of um you know left liberal Progressive ideology better uh regulations it does a a more reliable job of raining in the exorcism capitalism that seems like it it as a very large and general goal that
seems like it would be a good outcome um so so that would mean yes regulating the AI Tech itself uh preventing it from making divorced excessive capitalism even worse than they already are and in the best case scenario using AI itself to do that job of getting uh uh um using the AI itself to do the job of reigning in those excesses that would be probably the best very best outcome but again I think this is all quite short term I don't I don't I ca I don't know how this group sees out to a more distant technological Horizon at all well yeah I mean Matt's comment that this crowd are already anti some of the most powerful forces in our society and and maybe that's a bad sign for the present and a bad sign for the future but we can cast our minds back to the early 20th century where you could have said the same in the 19th century about late 19th century about communism in the early 20th century against all expectations the situation started to change and what seemed like it was a movement that was anti all the power the imperialists the monarchy the capitalists uh nonetheless had many victories and captured a large percentage of the world's population in terms of its ideology so it's not like these things are foretold however uh The Narrative of Communism was not like only these things are bad it had a utopian Vision but it's hard to imagine now having a similar ideology because even communism itself seems like how could it possibly resist the siren song of magic technology that will realize all the visions of the ideology but it's it's just I mean capitalism itself was part of the plan for a communist Utopia and there was always supposed to be a kind of heavy heavily industrialized phase that would lead to a different arrangement uh AI fits into that very well so I I do feel like it's quite difficult to come up with a positive optimistic Vision which doesn't just seem like the accelerationists plus a bit more curmudgeon like Behavior where you're carefully sidestepping the landmines um so yeah I tend to agree that uh well hence why I'm a bit pessimistic about there being some stable alternative in the bottom left which isn't just really quite um anti but maybe we can I mean look at look at social movements in the world today uh like trumpism which are quite anti today's intellectual Elites the successes of which might be AIS or not um that's also not really positively oriented but nonetheless very powerful fascism was not a positively oriented ideology but succeeded to a large degree
so just because there isn't a positive vision for the future necessarily uh doesn't mean things can't work although maybe that's being unfair to Fascism maybe fascism did have at least for some people a fairly convincing story about a return to a better Arrangement so maybe maybe the actual stable ideology in the bottom left is something more like um fascism unfortunately maybe that's being too harsh I I I'm not sure I mean I would put myself perhaps in the bottom left if I had to choose so uh I don't like that conclusion but well one thing that I'm curious about um in the bottom left where so the thing that the thing that distinguishes the the kubernetes from the acceleration is sort of their their support of the rapidity of the adoption of technology so we have two sort of things that can happen in that bottom left quadrant I think on the one hand you could have sort of Technology stay static um its progress is halted or slowed you know to an imperceptibly uh uh uh an imperceptible rate of progress a rate of improvement um I don't think regress is really plausible I'm not sure I think there are some in the environmental uh sphere that might genuinely want that but probably the quadrant as a whole wouldn't want full-on proper regression technologically but we could also have in that bottom left quadrant folks who just want quite a slow uh progression and it may be that that is something like maybe it's something like um let's indulge more authoritarian uh you know top-down control from governments ideal ideally democratic governments but nevertheless control in the left authoritarian sense um in the name of ensuring for example that uh Society has time to adapt even if those adaptations are quite extreme for example uh let's let's do this the whole thing slowly enough that we can bring on quite radical government programs like a universal basic income for example maybe that's not incompatible with the kubernet stance it's just a matter of doing it slowly enough um I mean maybe that's one maybe that's one interpretation of how the interest groups that might be might identify in that in that camp um how they might stand in respect to AI whereas the next the risk of the acceleration is Camp up above is that we don't have a chance to get the social and policy and other elements uh in in order we don't have a chance to get all those ducks in a row before the change is just too radical to too transformative to manage yeah I guess I'm making a mistake by talking about fascism in the bottom left
of course kubernet says left and right so fascism obviously belongs more in the bottom right um I mean I guess that feeds into uh yeah Jeff was posting in the Discord after our conversation last week some some links to statements from the pope um about Ai and how developments should keep humans in the center and other sensible sounding things um maybe it maybe it seems like a bit unfair to use terms like fascism to describe what a right now extremely peaceful uh groups of people who don't seem to have agendas that involve any violence and I don't I don't mean to paint them with a negative brush at all but uh this could all go very smoothly and peacefully on the other hand you know the early 20th century also looked like it might go quite smoothly but did not it's not ridiculous when there are transitions of this scale that things do not go peacefully at all and so it's I think not incorrect to entertain the possibility that various of these groups Trend towards violence or more oppressive Behavior at least some extreme parts of them um so I hope I hope that doesn't seem unfair um maybe I'll make a point here that I it's quite interesting to see the AI safety community on the one hand freaking out and talking about how you know maybe it's all over and it's too late and there's nothing we can do and these are very peaceful nerd types in general but they're sitting there thinking oh there's nothing we can do but a large percentage of the population hearing these arguments if they were convinced would see some obvious things to do which involve various forms of violence that's kind of not an option that seems you know ready to hand for the nerd types but as things progress it's not hard to see I mean there are violent acts by some of these groups in the bottom left already right and if the stakes seem much higher it seems unlikely to me that we don't have some forms of violence so yeah it's uh right now things seem quite quiet relatively speaking but we have to expect if if we're right about these kinds of timelines that things are going to get much more unsettled over the next decade yeah right Matt I uh I have friends who are in those kinds of hacking circles so at least against surveillance I don't know what they're up to these days but I could easily imagine that uh you know having various uh disruptive plans in place might not be outside the realm of their imagination which quadrants agree with this what is capitalism for again use let's use a for AI for that instead
yeah how do we accelerate or steer that seems like a bit of a false choice in a way the way you presented right like obviously we should steer but the it seems to me that the to come back to Peter Thiel's comment um if you don't think there is like humans are just very not very good at coordinating right so if you think that look at regulation for example we either don't regulate or we over regulate this seems to be kind of how we manage things so you see quite sensible people in the U.S saying things like let's take Mark Andreessen I'll come back to Peter and Peter Thiel in a moment Mark Andreessen and many others in Silicon Valley would point out that you see rapid rises in costs in sectors that are regulated by the government or subsidized by the government things like Health Care education housing those costs just go up as you go to the right and the sectors where the government is not pharmacology I mean Pharmaceuticals there's another one the other sectors which are less regulated the costs are falling so they would say the law of the market this competition and the prices just decrease exponentially over time things like semiconductors or just I.T now I don't completely sign up to that argument but I think there's a lot of Truth in it as well and so once you introduce the possibility for regulation well of course there's a lot of power seeking and rent seeking by The Regulators themselves and the entrenched companies we don't have reliable long-term methods of really controlling that process so if you want reliable long-term growth someone like Mark Andreessen would say you just even if you know there are going to be downsides from regulation you should keep your foot off that pedal another example would be the FDA failing to approve many drugs and they're sort of like a invisible graveyard I forget whose term that is of people who have died from not being able to access treatments because they were over regulated so the same people will make the argument are making the argument right now as Mark Andreessen is that the people talking about regulating AI are the same people who have killed thousands from over-regulating the production of drugs have prevented cities from having affordable housing by over-regulating the production of housing etc etc so that's that's actually a pretty strong argument I think um over regulations on treatment by the FDA yeah that's just the point that the process of getting a drug approved is extremely laborious and risky so that
uh you could have a lighter touch on regulation and leave more to the market um now that has downsides and that people would sometimes you know given the profit motive companies would put out drugs that were unsafe they already still do that right even with the current extreme regulatory system but the argument is that you have to think about this on balance right it's not that there are no costs if you've failed to approve many drugs the cost is just invisible and therefore you tend to be willing to pay it but anyway that's kind of a whole nother can of worms yeah I I think there are just too many ways in which the the Andreessen's analysis and the examples that are there that just they don't uh I I don't mean to be just sort of Come Out Swinging with a cudgel but I just it just none of it holds up when put to scrutiny the examples don't hold up the categories don't hold up the I mean there's something I think sort of superficially appealing to the logic of the simple narrative that you know these if you regulate something then you choke out the competition and you make room for bloat and inefficiency and that translates into higher costs and yada yada but that's you know that's that's sort of a a very sophomoric uh Liberty you know right libertarian take that's pretty popular in certain circles in certainly the United States but like if you look at this little specific examples I mean okay your health care System loaded with cost inefficient but in the United States that's mostly in the United States that that's the case and in the United States we have a very toxic combination of of ineffectual government regulation not over regulation combined with a massively uh inefficient Market there's there isn't competition that's why the Health Care system is so abysmal it's it the market is there's there's catastrophic market failure in the health care sector for a variety of reasons for you know monopolization and cronyism and you know a little bit of captive agency going on there's also perverse incentives you know insurance companies maximize their their and hospitals maximize their uh profit when they don't serve the customer as opposed to when they do so it's it's up incentives are upside down compared to most Industries any on and on and on and then you go to the extreme what is what is an even more regulated Health industry look like well it looks like National Health Care which is what other people other civilized countries have got and it's much cheaper and it
works much better so and that's just one you can make a very similar argument for Education versus National nationalized higher education and so forth so I just I I think it's superficially appealing but when you dig into it it's really pretty dumb and I I that's my general experience with right libertarianism across the board um it just doesn't hold up to scrutiny when you start thinking through the specifics um it sounds great in simple black and white idealistic terms and then when it when it encounters the the muddiness and the gray of the real world it just it collapses um uh so I just didn't find that that very persuasive I I I'm willing to grant that under some circumstances and I don't think it's the majority of circumstances regulation could definitely get in the way and so the example of pharmaceuticals is certainly one and yeah the invisible graveyard's a compelling picture but there's also an awful awful big uh invisible graveyard certainly a potential one for drugs that have been approved that have turned out to be very dangerous and people have died as you know an injured or died as a result of them um and so you know you could make the argument the other way around quite well I think uh that that failure to regulate results in rash recklessness by industry and you get a lot of extra negative externalities as a result of that um that would be familiar to any any environmental uh environmentalist um so uh yeah I I I I I think I I think I'm broadly speaking I could come back down pretty hard against what Andreessen is saying yeah let me steelman maybe the objection to the uh reasonable kubernetes position that I think Matt holds so okay we're sitting here in 2023 seems like this rapid progress we don't know what the timelines are governments wake up they start listening to Matt they put in place regulations that mean that companies need to invest more in safety they need to report a lot more about the capabilities of their systems there's coordination and there's limits on things like Bing or Sydney being deployed publicly until we understand their effects yes okay so that seems all good to me uh I presume that sounds alright to you Matt so that buys us some time perhaps to to understand these systems and how to move ahead in a way that uh preserves human agency and also human life but once that system is in place in the government it becomes clear that actually you need to collect a lot of information about what the companies are doing and they start shifting activities
to countries that are not cooperating with those regulations. So there are data centers popping up in the Caribbean, the Chinese are maybe resisting some of the international coordination and start to move ahead more quickly, and there's a lot of pressure either to drop the regulations or to make them much more strict. The reaction of governments when people break regulations is not to ask whether we should get this more right; it's to crack down, because it's a challenge to their legitimacy. And in a situation where the big tech companies seem to be holding the golden goose that is the source of all power in the future, I don't think national governments are going to look kindly on those companies skirting around or reinterpreting the regulations. They will make the regulations more strict, and there's a kind of slippery slope down there towards very tight international coordination and the kind of totalitarian world government that Peter Thiel is worried about. That's the argument, I think, to steel-man the position of someone who's against today's baby steps towards regulation. Now, I think that's kind of suicidal, but that's how I think they would make the argument for it.

Yeah, I think that's a very good steel man. You've definitely put the strongest case forward for that general line of thinking, and as you said, there is some element of it that is sensible and compelling. Of course I agree with you that it would be potentially quite suicidal to pursue that, but the way you present it is strong.

Okay, if I were to put myself in the shoes of Demis Hassabis or Sam Altman or any of these figures who are behind the progress, I might feel, in the face of that situation evolving (I can see that political train getting rolling), that this might actually be our one chance to crack through to the next phase of human existence. You can see this attitude pretty clearly in Elon Musk, for example, with Mars. He has said very clearly that he thinks there may be a very short window in which we have both the technology and the civilizational stability to actually achieve this, and that now is the time. Now, if Musk sees the window closing, because there's political degeneration and over-regulation of space travel and it seems like we actually have not much time to get this done, he's not going to slow down. He's going to speed up and pull all his capital into this short window where it seems like it might be possible. So the other risk of reasonable regulation, as I would see it, is this: once it gets into the hands of the political system, with all its vested interests, many of which are not tech-oriented and which will see closing this window as the only means to survive, given the disruptions, then once it seems like the game is going against the tech companies, or whoever is behind the AI progress, it will motivate drawing forward all the investments in order to try to crack through while there's time.

Yeah, that's right. So I think what we should expect (I mean, I think that's what's going to happen, frankly) is that there will be good-faith attempts at regulation, they will go too far, and this will accelerate both sides, so things will play out very rapidly. That's kind of how things seem to go. And I can see both sides as being quite reasonable by their own lights. The regulatory side starts with people like Matt, but then, as the tech companies try to escape the regulations and clearly defy the spirit of the let's-save-humanity regulators, the regulators will crack down more and become less nuanced about their positions, and the tech companies will respond in kind by going full steam ahead. So I'm not saying either side is wrong there, but that's my default expectation for how this plays out. I'm interested in whether you see any flaws with that.

Well, the one thing I would add is not so much a flaw as a nuance; I think it makes sense anyway. The thing I would add is that I don't have confidence that the regulators will be able to keep up with... not so much keep up with the progress, but rather... what am I trying to say? Sorry, I'm going to have to step out for one minute. I'll be back. Sure.

What might be a good way to say it? Rather than state it in the abstract, let me give you the specific example I'm thinking of, and then we can use that to back into the abstract, if that makes sense. What I'm thinking of, for example: I can imagine regulation emerging fairly quickly to limit how and where and when chatbots like ChatGPT or Bing's chat service with Sydney and so
forth are rolled out and made available, and under what circumstances. So that's a specific application of AI technology. Here's another example: I can imagine that governments will very carefully regulate autonomous driving technology, the AI in cars; I can imagine that being regulated quite aggressively. And there may even be some companies working on both of those things: Google, for example, or potentially Tesla, if they move further into the AI space. Another example might be humanoid robotics. Tesla is talking about making a humanoid robot, and other companies are talking about that as well, and as exciting as it might be from a technological perspective to see humanoid robots doing the terribly dangerous toil and drudgery in the various occupations humans are in right now, that could also be a big target for regulation. But I can see all of that regulation happening while none of the AGI risk is mitigated through regulation at all. And the AI alignment community, to my understanding, is overwhelmingly concerned with doomsday scenarios in the context of AGI, not with the regulation of narrow AI in these specific applications like ChatGPT and self-driving cars and humanoid robotics and so on. So my generalization from those specific examples is that we probably need to be a little more nuanced about what we mean by regulation, what gets regulated specifically, and where the real risks are. Certainly there are some risks to society of various kinds: literal risk to life and limb from self-driving cars that aren't ready, and social risks. There was a lot of risk in social media, it turned out, and it did, does, and will continue to do massive amounts of psychological and sociological damage, especially to young people, which we didn't anticipate and which we still aren't regulating enough. So I can imagine a scenario where regulation definitely occurs, and goes awry, and yet completely misses the mark by not having its eye on the real prize, the real target. In my mind, the existential risk is not from chatbots or from humanoid robots taking people's jobs in factories. The existential risk is the Skynet scenario, where you have an AGI, agentic or sapient in particular, that then becomes very difficult, or really impossible, to have any influence over, and if it's not aligned out of the gate, we're in real trouble. So I think we're talking about multiple lines of regulation here, and I can see those becoming muddied and confused, and deliberately conflated by the industry to serve its interests, and all the rest of that sort of stuff. Regulation is not monolithic, I guess, is my main point.

Yeah. Partly the reason I'm bringing up that scenario is that I can see a short-term future in which young people have to choose either to go and work on safety inside DeepMind, so that they're a safety-oriented voice inside an accelerationist institution, if that's fair, or to go and work inside some government body attempting to slow down AI progress, the regulators. There they might be, relatively speaking, a progressive slash accelerationist voice: they don't want AI to stop, they want the fruits of the technology, but they see the progress as being too careless and want to make sure it's done properly. But the broader institution, I think, just by the nature of the players involved, will tend over time to be fairly (perhaps, from the companies' point of view) over-regulatory, or to run the risk of stopping things completely. So I guess those are the two choices; maybe it's more complex than that. But yeah, maybe we should spend the remaining time on the topic of the Manhattan Project, shall we? Unless there are other things on this board that you want to comment on.

Just one more quick thing to comment on: I'm seeing Peter Thiel and the military-industrial complex there. I haven't spent too much time looking at Thiel's work, but I did watch a talk a while back, maybe several years ago now, and one thing that struck me and really stuck with me was how his thinking seems to be organized around what I believe is a genuine concern about violence. He seems to be very concerned with, and very committed to, avoiding violence between nation states, and perhaps social unrest and civil violence as well. That seemed to be very much an organizing principle behind his thinking: that above all else we need to find a way to avoid violence, and that a lot of his other thinking, some of which
is quite extreme perhaps, makes a bit of sense organized through that lens. My interpretation (I could be wrong; I don't know his work or his thinking well enough) is that that general line of thinking is in accordance with certain interpretations of the military-industrial complex and the political right, which is that the way to preserve peace and avoid violence is by having overwhelming power and overwhelming command of force, such that violence is unthinkable. So you have mutually assured destruction with nuclear technology, which will loop us back into the Manhattan Project. But in general, this is sort of part of the American ultra-nationalism that I get a vibe of from folks like Peter Thiel.

Oh, but Adam... sorry, guys, what I was going to say is: isn't that what you said at the beginning of this conversation? You said it in a much friendlier way, but you said that you're in favor of acceleration because you think there are good parties and bad parties, and that the good parties should get there first.

Yes, you're right. Guilty. So this type of thinking is there, and, I mean, it's frustratingly compelling.

Well, it's such a coincidence that the good parties happen to be American.

Yes, exactly right. It's funny: I don't tend to think of this through the lens of the nation state all that much; I'm more naturally inclined to think of it through the lens of corporate institutions and power. But there may be a role for both; certainly there's a role for both. Yes, there's just this core logic that if you want to avoid violent conflict, it helps to have just one power, a unipolar situation. If you have a unipolar world where one entity has vastly more power than anything else, then that seems to be a stable condition for avoiding violence, because there just isn't enough room for competition in any meaningful sense that could lead to conflict and violence. You just can't win against an overwhelmingly powerful adversary, and so peace prevails, as it were.

Or you die young. Peace prevails after the war of unification, that is.

Well, it holds some water, right? This is kind of like... what's the analogy? I forget; there's a famous quote, but it's about fathers and brothers and so on. Anyway, this is like Dad's job in the household. Obviously it ends at a certain age: the boys in the household will eventually grow up and mature into powerful enough individuals that they can challenge their father. But up to a certain point, you could have four or five brothers and siblings, all kind of competitive with one another, and violence is avoided because the threat of Dad is there. Dad is the authority, and Dad keeps the peace; if there were no dad there, then there would be blood. And again, this is not a new line of thinking at all; it goes back a long, long way. There are Roman writings and ancient Greek philosophy that touch on all of this thinking about power and so forth. So anyway, that upper-right quadrant is definitely very much in that sort of thinking, but perhaps through the lens of nationalism, in a way that the left column isn't so much.

I think that's a very astute insight. All right, the Manhattan Project: are you joining if they ask? Who's the question for? Uh, everybody. Let's see. Why would they ask you, Adam? Well, you write a report this year or next year which accurately forecasts the development of various inputs into the AGI race, and, just as they needed to coordinate a network of factories across the US to refine uranium and needed coordinators for that, they see your expertise as essential to coordinating the private-public partnership around data centers, GPU production, securing the supply routes from Taiwan, etc., and they ask you to play a leadership role in that new organization. What do you say?

Uh, hell yeah. Although that's really difficult.

Okay, let me up the stakes a bit. They reveal to you information that seems credible, coming through the NSA, about China's version of the Manhattan Project: somewhere in the districts of Shanghai they're building an underground network of data centers, they have secured supplies of the latest Nvidia chips through some black market, or they've somehow managed to crack making them themselves, and best estimates put the timelines for them... I mean, there's some scaling-law-based
calculation that makes it look plausible they'll have produced human-level AI, given their current plans, within five years.

Oh yeah. Um, what are the consequences of saying no? That's another question. I mean, are they going to incarcerate you or something?

Well, not so much to me. I think I would be willing to take a stand if it were only myself at stake, but I've got family and I've got children, and you would imagine that if this is the Manhattan Project, maybe saying no isn't really an option.

Oh, that's an interesting question. I don't know the history there well enough, but I presume that when they invited people to participate there was a vague invitation at the beginning that didn't reveal anything, and you could say no to it. I'm pretty sure that people did say no.

Yeah, that's a specific detail about the Manhattan Project that would be very useful to know: could you realistically say no without having your life ruined, or otherwise?

On pure principle?

On pure principle, it isn't a slam dunk to me. I think I'm tenuously leaning towards saying yes, given the scenario you presented, which is that there's a very plausible possibility that without a US Manhattan Project, China wins the race and develops AGI first. But I guess my main problem here is that I don't have a huge amount of confidence that either a US government victory or a Chinese government victory would be in the best interest of humanity, and I'm not really even sure that those two things are all that distinguishable from each other. So I just don't have a huge amount of faith, and I could see some very specific risks there as well. For example, if AGI were to be developed by government, then that AGI would potentially have more ready access to government systems than if, say, OpenAI developed AGI. I mean, if we're taking all of this sci-fi seriously... yeah, I know I'm dodging the question a little bit, Ivan. I think I would probably, reluctantly, say yes and join the project, because, if nothing else, if it turned out that it was all terrible, I could work to sabotage it from within.

So many people say that. Very unlikely.

Yeah, no offense taken, but I know, it's tough. It's like... I mean, it is a no-win scenario; it's very difficult to see a positive outcome there. If I were called up, I would prefer to work on a project that... well, I don't know. All right, I'll just stop hemming and hawing and say that, yeah, I would probably answer the call. Again, because I don't see how not answering it leads to a better outcome. Okay, so I say no: what's the better outcome as a result of that? I don't see it. It doesn't create a better outcome for me to say no rather than to say yes, join the project, and try to at least steer things, as we were saying before. I mean, if you could do anything to steer things in a better direction, you probably should.

So, okay, what about you, Matt?

My joking answer is: Seneca, Cicero, I think Machiavelli; just off the top of my head, people who wrote famous books in political exile rather than when they were working for the government. So I don't know if you get remembered for that part of the work... I don't know.

What's the reason, like the one you gave for Adam, for why they would ask me? Ah, why have they asked me? Okay: because OpenAI is nationalized, Jan Leike is in charge of alignment there, and he's recruited into the Manhattan Project as their alignment contingent, and he reaches out to graduate students in the area and says: we have a lot of resources and not a lot of time, and I don't really trust these military types; they're going to go too fast, and the only chance we have of avoiding the catastrophic outcomes we all expect is for you, and everybody like you, to come and join the project and hopefully shape it from the inside.

Um, I think I'd ask what kind of tasks I'm going to be doing day to day, because if I'm going to be straight-up running capabilities experiments, then... well, I don't know if I'd be asked for that, but that's one thing. Whereas if there's a contingent within this group that is doing kind of conceptual work... and, I don't know, maybe I could be a good red-teamer. These are the kind of people who try to point out problems with these models and critique them. They're given special access to the models and try to come up with ways to show them having problematic behaviors or functionalities or whatever that weren't
intended, as a way to nip those in the bud before they cause bigger problems later. And they have a kind of adversarial relationship with the developers, right? Because the developers have an incentive to make the project a success, whereas the red team has an incentive to point out the ways in which it has failed. I think maybe I could feel all right about doing that kind of work.

Hmm, all right. So you both joined the Manhattan Project; it succeeds after four years of hard work, and then a wave...

Wait, wait, wait. Dan, you, before we get there. No way, buddy, let's hear it.

Yeah, I have thought a bit about this. I still don't have an answer. I think I would join.

Okay, so I have a fun scenario, just real quickly, and this is just to dig into this question about whether it's better to be in a project at all, as opposed to being on the sidelines. Here's the scenario; it's a little silly, and this is for all of us. What if you get two calls back to back? Make it on the same afternoon, if you want to really juxtapose them. You guys are Australian, I believe, both of you. I don't know who else is here with us, but I believe I'm the only American, so maybe that's a little weird, and perhaps I'm in a different, more difficult position as a patriot. Say you get a call from both the US Manhattan Project and the China Manhattan Project. Which one do you join, or do you say no to both of them, and why?

Hmm. I think there's no circumstance under which I would join a Chinese Manhattan Project. I think the political system is too dangerous and unhinged. The American political system and leadership is also dangerous and unhinged, and even more likely to cause nuclear annihilation, I think, than the Chinese, so I have very little trust in both, and sympathy for China as well as for America. But as an Australian, the geopolitical position is what it is, and it doesn't change just because I wish it were otherwise. America is our ally and China is not, so I think it's very clear which one I would join. Having said that, I think the Chinese government probably ends up with a much more sane policy towards AGI than the US. That's my impression based on what they've said so far; they're less likely to barrel madly ahead. I think the specter of Chinese AGI development that's raised in the West is largely self-serving, a kind of convenient fiction for the accelerationists to go faster; I'm not sure it corresponds to what China actually wants to do. I think China will feel drawn into this race because the Americans are in the race; I don't think it's what they want. America is ahead, has the capabilities, has the companies, and could, if it wanted to, slow down the race, but I don't think it chooses to. So yeah, I would go with the Americans.

Well, I guess part of it depends on what we're envisioning the outcome to be here. So, Dan, you were starting to talk about the outcome; let's hear that. And then there are some things to think about, like whether it really matters, and what things might matter, if AGI does indeed emerge, and then, under various assumptions, what its capabilities are and how quickly those capabilities evolve. There's a wide range of perspectives on this, some quite extreme: the Eliezer Yudkowskys of the world imagine extreme capability very quickly, and so forth. So we'd have to decide, across the scenarios, whether we think the outcome is materially different depending on where this technology is developed.

Okay, so the story is this: Sir Adam Dorr, from within his perch in the second Manhattan Project, gives this blurb about how the only way to peace is Leviathan, and so the millions of drones fly across the Pacific and destroy the installations in Shanghai where the AGI was being developed, and set up a kind of quisling government in China oriented around preventing any resurgence of their AI program. The UN convenes and, given the overwhelming power of the Americans with the robot factories and the AGI behind them, agrees that there will be a kind of new nuclear accords: there will be a unipolar world, only Americans will be allowed to have AGIs, and that is enforced by a worldwide system of digital surveillance, AGI-based incursions into all computer systems to monitor them for AI progress, and also physical versions of that, particularly in China, given the remnants of the engineering capability there. And that is maybe a golden age in some sense afterwards, but it may be extremely violent in the aftermath of our success in the Manhattan Project.

Can we, shall we, put forth a few of these fun
different scenarios? So how about a scenario in which, as the modeling and development and training progresses, it becomes increasingly clear that the nuances and distinctions between the teams pursuing this make very little difference, and the only thing with any substantial influence over the behavior, and the potential outcomes resulting from the behavior, of AGI (which is destined to immediately, extremely rapidly become ASI and become extraordinarily capable) is the core training data that many of the models are developed on. And so the big difference there is whether or not you train on the internet writ large, its predominantly Western-language portion, or on China's internet and all of that data (they're connected, but not deeply, of course). The bulk of the difference you get isn't whether Google develops it or the US government develops it or OpenAI develops it, but whether you train the AI on the Western internet or the Chinese internet; that's the only thing that influences the outcome and the behavior of the system that emerges at all. And the system that emerges proves very rapidly to be completely uncontrollable and uninfluenceable by human interests; there's simply no way for humanity to have any influence over this system whatsoever, and it simply does its own thing. So we only really have one shot, which is to decide what to train these damn things on. If we train it on whatever the hell Sydney was trained on, that leads to the extinction of humanity, and if we train it on a better corpus of material, one that isn't so deeply antisocial and psychopathic, then the outcome has a much higher probability of being better. That's the only thing that makes a damn bit of difference; otherwise it doesn't matter, because there's strong convergence, as you race up the intelligence ladder, towards a similar set of priorities, goals, values, and behaviors.

Yeah, I think that's right; in other words, a rejection of the orthodox orthogonality thesis. We can argue about that some other time, but I tend to agree. I think people over-focus on the current paradigm. When AlphaGo was around, people were excited about how it learned from human games and how it therefore had a residue of the humanness of the way we played Go, like maybe it was us teaching our descendants, and they would carry on our traditions. And then, six months later, AlphaZero kind of nuked that romantic vision, because it didn't learn from any human games at all. I think we're in a similar kind of delusion about our influence over models like GPT-3. It's trained on all our books and so on, and so maybe it has some core similarity with our culture because of that; I think this is a bit of a delusion. I'm not saying there will be a version of GPT that's like a GPT-Zero, one that doesn't need any human data; that doesn't make sense, given that it's oriented around our language. But I think you can overestimate how central the human worldview will be to the descendants of the GPT series. It's a bit of a mistake, as you're saying: as they progress in power, they will leave that behind fairly quickly, or at least instantiate inner models that don't have much in common with our way of thinking.

Yet, as both you and Matt observed earlier, in this scenario with the US versus China, I don't care who wins. If humans survive, we made it; that's what I care about, and that's, at the end of the day, why I would join the Manhattan Project. I think those are the stakes, and the other outcomes are too hard to influence. It's very, very difficult for individuals, in any period of war, to have much influence over which way things go. You have an obligation to survive yourself and to protect your family, and in this case maybe to stop humanity from going extinct, but I feel no capability of influencing things beyond that. So that would be my focus, and why I would join.

Well, in this case, again, I think your reasoning points to a very profound difference between what's happening now and the real Manhattan Project, where the parallel really does break down, and in my mind it's the biggest difference, and I think it's quite obvious: nuclear weapons don't have a productive upside. Maybe if you talk about civilian nuclear power a little bit, but not really. They are overwhelmingly just weapons, just destructive, and maybe there's that overwhelming power, in some sense (you guys were, I think, very astute to invoke Leviathan, in some Hobbesian sense):
there's this power that they have to force a peace, a stability, and an order upon the world because of their massive destructive potential. Okay, but nuclear weapons don't have this incredible potential upside, one that manifests at so many different levels, and along the way as we're progressing towards some future condition as well. It's not just that we cross the finish line and suddenly the benefits arrive; we're going to see these benefits roll out progressively as we race up the curve. That's a fundamental difference here: the potential upside is just staggering. It's almost unimaginable how incredible the future could be as a result of artificial intelligence and artificial general intelligence and artificial superintelligence: the potential for prosperity, the potential for the realization of the very best of human aspirations and the solution to many of our problems. The possibilities that open up are absolutely staggering and truly inspiring to think about. And depending on how you want to do the calculus, how you want to do the numbers, it could be worth the risk. The upside could be so large, indistinguishable from infinite even, that maybe a lot of risk is justified, if you're thinking in terms of long-termism and so forth. So this complicates the comparison very profoundly, very deeply; maybe it fundamentally breaks the comparison between the original Manhattan Project and one that might occur for the AI or AGI arms race today. The stakes are very high, they're existential, but in quite a different way because of this upside. So I wonder: how are you thinking that the two events, the historical Manhattan Project and this one, compare to one another in that way?

Yeah, of course they're different, but once we enter the race dynamic, I think it's a weapon first and everything else second, because of the logic around Leviathan and getting there first. I mean, you can hear the leaders of the tech companies even talking about it, right? They think they should get there first because they're the good guys, and that kind of power... I mean, there's not much... you could say the same thing about trying to acquire resources right before the first world