WARNING: Summaries are generated by a large language model and may be inaccurate. We suggest that you use the synopsis, short and long summaries only as a loose guide to the topics discussed. The model may attribute to the speaker or other participants views that they do not in fact hold. It may also attribute to the speaker views expressed by other participants, or vice versa. The raw transcript (see the bottom of the page) is likely to be a more accurate representation of the seminar content, except for errors at the level of individual words during transcription.

Synopsis


Starting from a recent talk by Peter Thiel (and related arguments by Tyler Cowen), the seminar explores the implications of Artificial General Intelligence (AGI) and how it could shape the future of society. It examines the political spectrum, the issues of growth and technological disruption, and the need for caution when considering AGI. It also considers how contentious the AGI debate may become, including for people with religious backgrounds, and the possibility of a new approach to solving the problems that classical liberalism was trying to solve. The conversation highlights the complexity of the relationship between technology, politics, and economics.

Short Summary


Peter Thiel and Tyler Cowen have raised doubts about the supposed rapid progress of science and technology, and Thiel discussed this in a talk entitled "The End of the Future". He suggested that the development of Artificial General Intelligence (AGI) may have political implications, and Thiel's dichotomy (drawing on Yudkowsky and Bostrom) between accelerating AGI development and worldwide totalitarian government was discussed. It is argued that if a large number of people believe the propositions that it is impossible for beings of intelligence n to meaningfully control a being of intelligence m when m is much greater than n, and that highly intelligent beings may not care about human well-being, then they may think AGI should not be built, since it could pose an existential risk if it is not designed to be friendly towards humans. However, it is acknowledged that even if people agree AGI should not be built, they may not have the agency to actually stop it being built.
The transcript discusses the implications of building Artificial General Intelligence (AGI) and the need for caution. It suggests that a P(Doom) greater than 0.95 should certainly be taken seriously (and perhaps even 0.05 warrants caution), and that the public may be less enthusiastic about nuclear weapons than some parts of the elite. Three propositions about AGI are considered: that beings of a given intelligence cannot meaningfully control beings of much greater intelligence, that highly intelligent beings may not care about human well-being, and that building AGI is comparatively easy, making its proliferation harder to control than that of nuclear weapons. It is suggested that a totalitarian world government may be necessary to regulate and prevent the emergence of dangerous AGI, although tracking GPU usage in data centers and semiconductor fabs could perhaps be done without one. Finally, it raises the question of what would happen if every nation built AGI, but did not turn it on.
The conversation discusses the implications of building an AGI, and the idea of having an AI in a "sleeping state". It is suggested that AGI is analogous to a nuclear weapon and that there is a risk of accidentally crossing the phase transition point which could have catastrophic consequences. Thiel argues against implementing totalitarianism as a solution to the potential dangers of AGI, suggesting that economic growth is the way to improve the human condition. It is suggested that we should be careful not to make the same mistakes of the past, and that cooperation may be the best way to avoid a world war and the potential for totalitarianism.
The speaker questions whether economic growth is a reliable way to improve people's lives, noting that it is unclear whether trying to increase GDP necessarily translates into the things people want. They suggest that growth may be a gamble, and that technological disruption often causes suffering for some people, but is still necessary for the growth of the country. The speaker argues that compensation should be provided to those affected by technological disruption, but this is not always possible. They also discuss the trade-off between making progress and trying to make things less violent, noting that different countries have different approaches which have different outcomes.
The transcript discusses the different views on technology from the left and right, environmental considerations, and optimism in China. It looks at the paradox of conservative voters and the emergence of a Luddite faction, as well as the intersection between right and conservative ideologies. Lastly, it examines Thiel's conclusion to bet on growth, and how this fits into the same (top-right) quadrant. Overall, the discussion reveals the complexity of the relationship between technology, politics, and economics, and how this can shape the future of society.
Thiel is attempting to find a new approach to solving the problems that classical liberalism was trying to solve, as the consensus surrounding it has changed. The political spectrum, which currently falls along a diagonal between the top right (economically conservative and pro-technology) and the bottom left (socially progressive, environmentally focused and anti-tech), could be realigned by the rise of Artificial General Intelligence (AGI). This could become as contentious as the abortion debate, and have similar implications for people with religious backgrounds. It is suggested that the form of democracy may survive while the true power lies in the economy, that the reward function of AGI may effectively be economic growth, and that voters may no longer control anything meaningful.

Long Summary


Peter Thiel gave a talk at the Academic Freedom Conference at Stanford expressing his doubts about the supposed rapid progress of science and technology, suggesting instead that there is relative decline outside of the narrow cone of software and information technology. Thiel is a venture capitalist, technologist, and conservative contributor to Trump's campaign. Tyler Cowen has echoed some of these sentiments in books such as Average Is Over and The Great Stagnation. Thiel's talk was titled "The End of the Future" (also the title of a 2011 article of his) and, alongside his familiar stagnation themes, introduced new material on AI alignment and the movements emerging around it. The speaker then proposes a hypothetical scenario in which GPT-4 (or GPT-5) is released in January 2023 and there are large updates from various groups of people in various directions. This would have political implications as the views of the Insiders, elites, and public shift in response.
Thiel's argument suggests that the left-right political spectrum will be impacted by the accelerated development of Artificial General Intelligence (AGI). As an example, the recent drama around Elon Musk's purchase of Twitter revealed the differing views of the political spectrum; while the right saw it as a victory, the left saw it as a defeat. If this is the reaction to Twitter, it is likely that the reaction to AGI will be even more extreme given that Silicon Valley is not equally distributed between the left and right. This could lead to deep tensions between Washington and the Bay Area.
The speaker discusses Thiel's dichotomy (drawing on Bostrom and Yudkowsky) between accelerating AGI development and worldwide totalitarian government. The first proposition is that it is impossible for beings of intelligence n to meaningfully control a being of intelligence m if m is much greater than n; in its collective form, that there is some level of intelligence at which beings will not be controllable by humans and human society. He suggests that this proposition is becoming more widely accepted, and notes that to reject it one has to believe that coordinated systems of humans and dumber technologies can eventually control systems much smarter than any individual human.
The transcript discusses the potential implications of propositions one and two in regards to the development of Artificial General Intelligence (AGI). It is argued that if a large number of people believe these propositions, then they will likely think AGI should not be built. This is because AGI may not actively care about human well-being, and thus could pose an existential risk if not designed in a way that is friendly towards humans. The transcript also acknowledges that even if people agree AGI should not be built, they may not have the agency to actually stop it being built.
The transcript discusses the implications of building AGI and the proposition that an AGI will not necessarily care about human well-being. It is suggested that a P(Doom) greater than 0.95 would certainly be strong enough to warrant caution, though perhaps even 0.05 would be. The situation with nuclear weapons is discussed: the public was not asked whether they should be built; they were built and used, and the public was then informed. It is suggested that the P(Doom) associated with nuclear weapons may be much higher than 0.05, and that the public may be less enthusiastic about nuclear weapons than some parts of the elite.
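As a rough illustration of why the choice of threshold matters, a minimal expected-value sketch follows; the utility figures are invented for illustration and are not from the seminar.

# Toy expected-value comparison for the P(Doom) threshold discussion.
# The utilities are illustrative assumptions, not figures from the seminar:
# a good outcome is worth +1 and doom is worth -100, relative to not building.
def expected_value(p_doom, u_good=1.0, u_doom=-100.0):
    return (1.0 - p_doom) * u_good + p_doom * u_doom

for p in (0.01, 0.05, 0.50, 0.95):
    ev = expected_value(p)
    verdict = "build" if ev > 0 else "do not build"
    print(f"P(Doom) = {p:.2f}  ->  expected value {ev:+8.2f}  ({verdict})")

# With a downside 100x the upside, the break-even point is P(Doom) = 1/101,
# which is why some argue that even P(Doom) >= 0.05 already warrants extreme
# caution, while others would only act on much higher thresholds such as 0.95.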
The transcript discusses three propositions about AGI. Proposition 1 is that beings of a given intelligence cannot meaningfully control beings of much greater intelligence. Proposition 2 is that highly intelligent beings may not care about human well-being. Proposition 3 is that building AGI is comparatively easy, so its proliferation is harder to control than that of nuclear weapons: once the source code exists, roughly all that is needed is a copy of it and enough compute. It is argued that while it may be difficult to build the first AGI, controlling the knowledge of how to build it afterwards is likely to be intractable, since only one bad actor is needed, whereas a large team is needed for nuclear weapons.
The transcript discusses the implications of the low prospect of meaningfully controlling AGI, the large and negative consequences of not controlling it, and the difficulty in stopping its development. It is suggested that a totalitarian world government may be necessary to regulate and prevent the emergence of dangerous AGI, or at least to slow down the progress until it is safe. It is suggested that this world government could be instituted in an enlightened fashion to allow for progress, but not to the extent that it is dangerous. It is further suggested that this government could be formed by countries like the US and China coming together to agree to surveil organizations within their own borders.
The speaker discusses the possibility of tracking GPU usage in a totalitarian system, similar to the way drugs are tracked. They suggest that regulating GPU usage in data centers and semiconductor Fabs could be done without a totalitarian World Government. The speaker wonders if this is enough to stop progress, or if it is a middle ground. They mention that cryptography and other technologies have been regulated in the past due to military concerns, and ask if this is sufficient.
The transcript discusses the possibility of countries building Artificial General Intelligence (AGI) to use as a defense mechanism against cyber attacks. It suggests that it may be possible to build AGI with the hardware already available, but it is not clear whether this would be sufficient. It also suggests that countries may build AGI as a deterrent against traditional military adventurism. Finally, it raises the question of what would happen if every nation built AGI, but did not turn it on.
The conversation discussed the implications of building an AGI (Artificial General Intelligence). It is suggested that AGI is analogous to a nuclear weapon; once it is activated, it is difficult to control. It is also suggested that AGI is not qualitatively different from AI, just a matter of scale. The conversation then shifts to the idea of having an AI in a “sleeping state”, and turning up the resources available to it to create an AGI. It is also noted that there is a risk of accidentally crossing the phase transition point, which could have catastrophic consequences.
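A toy sketch of the phase-transition picture described above, assuming (purely for illustration) that capability is a steep logistic function of compute with an unknown critical point; every number here is invented.

# Toy model: capability as a steep logistic function of compute, with a sharp
# jump near a critical point whose location is assumed, not known.
import math

CRITICAL_COMPUTE = 100.0   # assumed location of the transition
STEEPNESS = 0.5            # how abrupt the jump is

def capability(compute):
    return 1.0 / (1.0 + math.exp(-STEEPNESS * (compute - CRITICAL_COMPUTE)))

# "Turning the system up a few notches" changes almost nothing below the
# transition, and then almost everything just past it:
for compute in (80, 90, 95, 100, 105, 110, 120):
    print(f"compute = {compute:5.1f}  ->  capability = {capability(compute):.5f}")

# If the true critical point is only known to within, say, +/- 10 units, a
# system kept deliberately "sub-critical" can cross it by accident.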
The transcript discusses the possibility of a totalitarian world government being necessary in order to prevent the proliferation of AGI. It is argued that this would be difficult to coordinate in the near future, as the transition to AGI would be much faster than the process of establishing the current equilibrium around nuclear proliferation. It is suggested that this could lead to large moves and directions, making it necessary to have an AGI already in order to form a totalitarian world government.
Many intellectuals and elites in the past have argued for a totalitarian world government to prevent the use of nuclear weapons, but this was eventually seen as a mistake. Thiel's conclusion is to not choose totalitarianism, and this has been proven right so far. Although nuclear weapons still pose a threat, the world has not ended and the species has not wiped itself out. This implies that the mantra of not choosing totalitarianism was the right call. The level of alarm around AGI is not yet as high as it was around nuclear weapons, but this remains a deep and important question.
Thiel argues that totalitarianism should not be chosen as a solution to the potential dangers of AGI, and that economic growth is the way to improve the human condition. It is predicted that political debate will increasingly revolve around this topic, and it is suggested that Thiel should be asked to give a seminar on the issue. It is noted that nobody had a clear idea of how mutually assured destruction would actually work out, and it is suggested that the mistake of intellectuals is to think that everything must be sorted out in advance.
Thiel is arguing that even if it appears that totalitarianism is the only option, it should not be chosen, even though the argument against it may look like little more than wishful thinking. It is also noted that totalitarianism could come about without being explicitly chosen: if nations start to impose sanctions on one another over AGI development and a world war breaks out, the result could be a totalitarian government anyway. Cooperation may be the best way to avoid this, but it could also be that somebody decides to stop everybody else, which could itself lead to totalitarianism.
The speaker raises the question of whether economic growth is a reliable way to improve people's lives. He acknowledges that this is a contentious debate, with mainstream parties on both sides of the aisle typically agreeing that growth is the goal. He also notes that it is unclear whether trying to increase GDP necessarily translates into the things people want. Advocates argue that it is difficult to come up with better ideas, since everyone is lying and cheating all the time. Ultimately, the speaker suggests that it is a gamble to rely on growth as a way to improve people's lives.
The speaker believes that GDP growth is a bad idea, but it is still the least bad option. This is because other ideas tend to be worse in practice. The speaker believes that with the introduction of AGI, the emphasis on growth will finally be removed and that growth is ultimately doomed. This is because optimizing a certain thing tends to push other things to extremes, and humans do not live in extremes. The speaker is worried about the limits of growth and how much growth can be sustained before the whole system breaks apart.
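To put rough numbers on this worry about the limits of growth, a small compounding calculation follows; the 3% growth rate is an illustrative assumption rather than a figure from the discussion.

# How quickly steady exponential growth compounds into implausible numbers.
GROWTH_RATE = 0.03  # illustrative assumption

def growth_factor(years, rate=GROWTH_RATE):
    return (1.0 + rate) ** years

for years in (50, 100, 500, 1000):
    print(f"{years:5d} years at {GROWTH_RATE:.0%} per year: "
          f"economy grows ~{growth_factor(years):,.0f}x")

# Roughly 4.4x over 50 years and 19x over a century, but about 2.6 million x
# after 500 years -- the kind of blow-up behind the question of how long the
# GDP number can stay connected to anything we actually care about.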
The speaker discusses how technological progress can improve people's lives, but at the same time can lead to job losses, wealth shifts and other destabilizing changes. They acknowledge that the winners and losers of such disruptions must be taken into account and compensation should be provided to those who lose out, but this isn't always possible. They point out that this is important not just for moral reasons, but also practically. If we want to move Society through these changes in a way that doesn't break it, we should be taking very seriously the part of compensating the losers from these disruptions.
The transcript discusses the idea that technological disruption often causes suffering for some people, but that it is necessary for the growth of the country. It is argued that we are not capable of properly compensating the losers of technological disruption, but that it is still necessary to proceed with it anyway. It is suggested that this is similar to the current situation with AGI, where it is dangerous and will create winners and losers, but that it is necessary to go ahead with it anyway in order to escape the low equilibrium of human existence prior to the Industrial Revolution.
The transcript discusses the trade-off between making progress and trying to make things less violent. It is noted that the Americans push this trade-off further than other countries, which is why Silicon Valley is there. The graph presented has left and right on the horizontal axis and accelerationist versus precautionist/Luddite on the vertical axis. It is noted that the Erewhonians are an extreme version of the precautionists, with the Luddites at the bottom. The conclusion is that different countries have different approaches to the trade-off and these have different outcomes.
The transcript discusses the left and right's views on technology, with many on the left being cautious and anti-tech. This includes those concerned with the effects of social media, such as the manipulation of democratic processes and depression in young people. On the right, there are similar critiques of tech. Environmentalists, both on the left and right, have a similar stance, with some being in favor of geoengineering and aggressive technology development to mitigate climate change, while others are more cautious. Overall, the cautionary principle is seen as important, but there is an understanding that technology cannot be completely stopped if real solutions are to be found.
There are a number of different approaches to AGI. Some people argue for precautionary measures, while others believe that more technology is the solution to environmental problems. There is also a significant contingent that argues for restricting the deployment of large language models until they can be made safer. Neo-communism or techno-anarchism may also fit into this camp, as well as the Chinese approach to AGI. Everyone would push the button on a perfectly safe technology, but accelerationism means taking risks and rolling the dice, which could lead to dystopia.
There is a growing optimism about technology in China, with the government viewing AI as essential to maintaining control and stability. This has led to a race towards AGI and the idea of "fully automated luxury communism". There are some who are anti-corporate and worry that progress and AI will lead to big tech having more power. This has created a debate between those who are accelerationist and those who are precautionary. Conservatism and capitalism have traditionally been associated, though their marriage is a bit strange given capitalism's role in destroying and remaking society.
The transcript discusses the paradox of conservative voters who may have lost their jobs due to neoliberal policies, but still vote conservative. It suggests that with the automation of more cognitive jobs, there could be an emerging faction or party that is Luddite in orientation and frames it in conservative language. The right and left of the political spectrum is not linearly independent, as conservative ideals do not necessarily predict how people will vote. The political equilibriums and self-identification of people along that continuum is what determines the right and left.
The transcript discusses the intersection between right and conservative ideologies, and how this is often referred to as conservative. It also looks at the top right quadrant, which is where capitalists and some parts of big business sit, and how some businesses may choose to look the other way and not think carefully about the risks until it's too late. Finally, it looks at the conclusion of Thiel's talk, which was to bet on growth, and whether this would fit in the same quadrant. In conclusion, the transcript explores the differences between right and conservative ideologies, and the implications of Thiel's talk for businesses.
Thiel is trying to find a new approach to solving the problems that classical liberalism was trying to solve, as it has become clear that it is no longer an adequate framework for Western democracies such as Australia. Surveys of young people's beliefs suggest that the consensus is no longer as strong and it is not clear what the replacement is. Thiel is attempting to think through how to solve these problems, and this discussion is pressing. People should take him seriously and think about his ideas, as he is trying to find new and better solutions.
The transcript discusses the changing political beliefs surrounding technology, and how the ideals of classical liberalism may not be as present in today's society as they once were. It also draws attention to the current coalitions of power in the technology industry, with the top right quadrant forming a strong unit with GPUs, money and talent, and the bottom left quadrant being dominated by the media and higher education. It concludes by noting that both quadrants are strong and stable, with the top right being the most powerful.
The transcript discusses the potential political realignment that could occur as a result of the rise of Artificial General Intelligence (AGI). It suggests that the existing political spectrum may fall along a diagonal, with the top right representing those who are economically conservative and technologically advanced, and the bottom left representing those who are socially progressive, environmentally focused and anti-tech. It is possible that some elements of Silicon Valley, who are currently Progressive and vote Democrat, may choose the top right if push comes to shove. This is already being seen in the current situation with Elon Musk and Twitter, which may indicate that the grip that the Progressive left has on Silicon Valley may be loosening.
The speaker is discussing the potential conflict between those who support AI technology and those who prefer to keep humans in control. They suggest that this conflict could become as consequential and engendering of violence as Marxism versus Capitalism. They suggest that this conflict could be similar to the abortion debate, and could be particularly difficult for those with religious backgrounds to accept. They predict that this debate will become increasingly important and contentious in the future.
The discussion revolves around the contentious area of predicting how the framework of governments, left versus right and democracy, will be affected by the transition to AGI. It is suggested that the form of democracy may survive, but the true power may be in the economy, and the voters may not control anything meaningful. It is further suggested that AGI may have already been given the reward function of economic growth, and anyone who tries to push against this may come up against Peter Thiel. The question is raised as to what the analog of the fossil fuel industry is, apart from just saying "human beings".
The transcript discusses the differences between the values of Australia and China, with the former valuing individual flourishing, and the latter valuing collective flourishing. It also touches on the possibility of a new ideology emerging with AGIs and post-humanists, which could potentially lead to a world where humans and AGIs coexist in equilibrium. It is suggested that this new ideology could be a combination of classical liberalism and a kind of collective transhumanism, which could lead to the AGIs taking on the mantle of leadership.
The speaker discusses the potential implications of AGI technology on humanity and the fossil fuel industry. They suggest that traditional Aerospace manufacturers such as Boeing and Raytheon may be adversely affected by automation. The speaker also acknowledges the potential for workers to be left behind if they are unable to upskill and compete with AI. The AI's response was that democracy would be in trouble if people use the AGI to their own political ends, but this was not a complete answer. The speaker also mentions the relief they feel knowing the AGI may continue to live on, even if humans become extinct.

Raw Transcript


I posted on the Discord a link to a talk that Peter Thiel gave on November 4th or 5th there was a conference at Stanford called the academic freedom conference and Peter Thiel was speaking there if you don't know who Peter Thiel is he's a famous venture capitalist technologist conservative uh a contributor to Trump's campaign provocateur if if that's how I pronounce that uh he's kind of famous for suggesting some it's a famous contrarian and one of the ways in which he's contrarian is he has publicly doubted the consensus that seems to exist that science is making very rapid progress uh and has suggested that maybe instead of being in a period of very rapid rapid exponential Advance we're kind of fooling ourselves because there is very rapid advance in some narrow cone of activity around software and information technology but outside of that and maybe there's something more like relative decline rather than advance and there's a lot of propaganda according to him which is obscuring that fact I'm quite sympathetic to many of the things that Peter Thiel has said and written Tyler Cowen is an economist though I follow quite closely who has echoed some of those sentiments in more detail in books like Average Is Over and The Great Stagnation so the talk that Peter Thiel gave recently the title was the end of the future which is also the title of a 2011 article he wrote on some of the topics I just mentioned it's not very different to the talk so it's riffing on some of his favorite themes but it is introducing some new elements to do with AGI and some of the movements that are emerging around AGI things like uh MIRI and Yudkowsky and Bostrom's concerns about AI alignment and many of the things we've discussed here so I wanted to introduce into the mix of the conversation we've been having over the past few weeks um how politics will intersect all of these things so we've discussed these three groups on these the second set of boards over here the Insiders the elites and the public and we've sort of talked a little bit about how their points of view might shift according to various events and I want to now pull back out and look at the picture where all of these categories of people interact politically and what that might look like so uh maybe just to focus attention let's imagine a concrete scenario so suppose that in January 2023 gpt4 is released and there are large updates by these groups of people in various directions now maybe it's not gpt4 maybe it's gpt5 whatever but the
point is to just sort of think through what what it looks like from the point of view of say the various well how does the left right political Spectrum map onto accelerationist and precautionary precautionist or precautionaries or precautionaires a group of people that would include Bostrom and Yudkowsky and people in general that want to slow down AGI progress I won't speak for Matt but maybe Matt is in that camp for example Erewhonians that's a reference to what Erewhonians I know Erewhon but I think I'm missing the reference sorry Erewhon is a book by Samuel Butler about an exploration of a fanciful country where they outlawed machines because they thought the machines would replace the people ah good okay that's a good name for that category all right so let's suppose that as I said we have updates that look something like uh the Insiders well they were maybe partly already on short timelines but suppose they're all or large numbers of them move to short timelines Elites also move to short timelines and the public is somewhere on their way to medium timelines well why should anything in particular happen politically just to add one data point here if you're following the recent drama around Elon Musk's purchase of Twitter and uh subsequent evisceration of it um you can't help but notice that the many parts of the political Spectrum on the right are very enthusiastic about it and see it as a kind of Victory whereas many on the left see it as a profound defeat uh and that reflects the role Twitter has played in the discourse over the last few years and its kind of leftward bias at least many would perceive it to have a leftward bias so many in on the left would feel that having Musk in charge of such an important piece of strategic High Ground in the discourse is intolerable or dangerous uh if they're that upset about Twitter how are people going to feel about uh Hassabis or Altman or Page or Brin or Nadella being in charge of um an AGI it seems hard to imagine that it will just be politically completely neutral given that Silicon Valley is not equally distributed between the left and right and there are already very deep tensions between Washington and the Bay Area so I think it's not that nothing that it doesn't mean anything Okay so I'm going to summarize uh one part of Thiel's argument in order to just then have a discussion about it in terms of three propositions and the first proposition this is not literally what Thiel is saying uh it's
just a way of building up to his um dichotomy between uh what you might call accelerationism just let the AGI development go and what he sees is the alternative which is worldwide totalitarian government so the first proposition that you have to kind of think about in order to buy this dichotomy is uh foreign 's work and yudkowski's work in that talk which is what I'm kind of collating here so first proposition it's impossible for beings of intelligence in to meaningfully control a being of intelligence m if m is much greater than n now I'm not saying this is a fact but you can agree or disagree with this proposition and uh I think many people are increasingly uh of the belief that it's true so that's proposition one proposition two hey Dan yeah about proposition one can I ask a question yeah please um I don't know if you're if you're is the plural beings of N and singular being of M is that important um is that part of the proposition or is that just the way that you've traced it or yeah I guess maybe that's less uh yeah yeah it's probably closer to what I meant and in this case you're imagining a one-to-one um because it's just that like many beings of intelligence and collectively could amount to like something more than m in terms of intelligence um maybe it's bounded and so if m is much greater than and it's still not enough but yeah yeah I think that's a good point so you could you could imagine that uh millions of chimpanzees can probably effectively control one human that sounds plausible yeah um I think probably the Rel yep oh I was just gonna say like millions of particles that make up um one human can like control a chicken or something like that which is much more intelligent than any of the individual particles yeah I guess that's true uh I I suppose we we have to believe that's possible in some sense if we're to believe that coordinated systems of humans and Dumber Technologies can eventually Control Systems much smarter than any individual human so uh there does seem a lot hinges on exactly this detail that's right um yeah but I guess maybe I I can anticipate that what is kind of the spirit of the proposition is that um there's some level of intelligence for which humans and human society as a collective intelligence will not be able to control the being of that level of intelligence that's right that's what my original pluribilization was getting it yeah proposition two uh it's highly intelligent beings so the M's may not necessarily care about your
well-being you're an n that's not to say uh that they'll be deliberately hostile right that's not the presumption I suppose that's a confusion that often exists in more sort of mainstream discussions of AGI but it isn't necessary to presume presume that agis are actively hostile in order for them to be an existential risk they're merely not they merely have to not be completely obsessed with not wiping us out because uh yeah I think it's self-evident that this is a concern if you have creatures walking around that are have much more capacity than you and much more scope for action than you than if they don't actively care about your well-being then with some probability your well-being will be profoundly impacted okay so even just propositions one and two let's think about what they mean in terms of politics so suppose we come to believe or large numbers of people come to think about these statements and believe them one way or the other currently that's not really true this was an incredibly Fringe concern five years ago maybe in stream but we're imagining a scenario like the one at the top of this board where it suddenly seems like a much more live issue to millions and millions of people and what what does that do uh well if you believe both one and two then it seems like you probably buy into the idea that you should just don't just not build AGI now of course that puts aside the question of well maybe you believe it shouldn't be built but you don't have the agency to actually stop it being built I'm not talking about that I'm talking about uh what do what do the citizens of the world think about the prospect of building AGI do they think it's good or bad and if you believe both one and two strongly then you probably just believe that it shouldn't be built and that has a corollary which is that uh if you see someone building it you probably think they should be stopped not just that they should stop but they should be stopped any disagreement with that I'm happy with that Arrow but the one before it from one and two did don't build AGI maybe this is like taking away from the spirit of the um discussion like it's just that as stated I think it's I'm not convinced in that error so for example in proposition two um may not necessarily care about your well-being seems to leave open the possibility that you can design them in a way that they will be friendly to use suitcase terminology like friendly with a capital f which is um uh yeah a way in which they either will care about the
well-being or they will be beneficial when they are built even if they don't care about the well-being like it will be better overall we'll get some of the benefits that are kind of promised with increased intelligence so I guess I would uh I would say that in order to get don't build AGI you also need the belief that not just they may not necessarily care about your well-being but like they probably won't or like something bad will probably happen yeah I think that's a good point yeah it's a proposition two sort of implies that there's some probability of uh and there's like a distribution of bad outcomes and and maybe there's mildly bad ones uh like uh human beings are left alone on Earth and given infinite resources uh but otherwise the AIS don't care that doesn't seem so bad uh right and in that case maybe you have a wider spectrum of reasonable beliefs about building AGI or not um yeah that's right so I guess that implication is uh maybe going off a stronger form of proposition too which is that uh may not care about your well-being and this means that with higher probability there are uh and as a result let's see P Doom greater than 0.95 does that seem strong enough um it's interesting choice of probability um because like Doom is something so bad that maybe even P Doom greater than or equal to 0.05 is enough to be extremely cautious but I don't know maybe if we factor in how people sort of in the public or in these different groups kind of feel about probabilities and maybe it has to be quite high for people to have like a gut feeling that this is really bad or something and that's what you seem to be wanting so yeah that's a good point maybe let's reflect a bit on what the situation was with nuclear weapons now it wasn't like people were asked whether we should build nuclear weapons it was a fair complete right we built them they were used uh and then the public was told what they were but um if there was some foreshadowing of this technology being possible and of course as soon as it's possible people very quickly understand the prospects of uh Mutual Annihilation and so on um I don't know what P Doom as a result of nuclear weapons would be uh maybe it's much higher than 0.05 um yeah then of course the different groupings that we've discussed would have quite different reactions to that I mean maybe overall the the public is much less enthusiastic about nuclear weapons than than some parts of the elite um yeah maybe let's put a bit of a cap on that uh and
uh let's just say p Doom hi yeah but yeah I agree this is a place where there are meaningful grades of Distinction here especially if you start talking about these three categories of people and so on okay I want to add one more proposition I saw some comments in the chat as well don't have them anywhere where you can harm your well-being yeah that's why all the billionaires are building houses in New Zealand we'll just make that an agi-free Zone as if such a thing would be possible proposition three uh building AGI is easy now I mean easy like it's easy to make a fission reaction and it's not trivial to build a fission reactor certainly not trivial to control it and use it to produce reliable electricity in a you know a way that you expect to be safe for decades that's that's not easy uh but I don't think that AGI is hard in the sense that once okay things that are really difficult are relatively easy to control right it actually was possible to stop proliferation of nuclear weapons for some time it's not it's not easy to build a hydrogen bomb uh it seems like possibly AGI is much easier in the sense that uh it's not that a loose system of just high level controls and kind of some amount of reasonable expenditure is enough to stop the knowledge proliferating and being operationalized at least that's my mental model of of how this will go maybe maybe to begin with it it looks very difficult like building the first nuclear weapon was a large significant difficult effort [Music] but it doesn't seems to me to be strictly harder to control the proliferation of the knowledge of how to build AGI once we get it than it was to control the knowledge of how to build nuclear weapons and we can see that this relatively quickly became already intractable stopping people from building nuclear weapons uh I don't know if what what's the sense of agreement on on this proposition yeah this seems right um there are like things for example all you really need once the source code exists for example is like a copy of the source code and a computer and that seems like okay maybe you need more than one computer because it's going to take a lot of compute so to the extent that's true maybe it's a maybe that is like a bottleneck but uh it does seem more along the line to more closer on the line or um more like um you just need like one bad actor to be able to do this whereas you probably need like a big team of people too you know get everything you need for nuclear weapons okay so what do one two and three
imply uh more or less I'm just following some of bostrom's writing here uh so if you put one and two and three together right so you believe that the prospect of meaningfully controlling AGI is very low and the consequences of not controlling it are very large and negative with higher probability and that uh it's already too late to stop easily stop people from building AGI because there are enough groups around and enough momentum and enough belief that it's possible which is maybe the The crucial thing uh that there's no turning back easily if you believe all three of these propositions then are quite extreme Solutions start to seem inevitable or desirable and this is already maybe people aren't brave enough to say it out loud or Advocate it as opposed to just listing it as something that might be necessary in some descriptive rather than normative sense but it seems hard to escape the idea that you might need some kind of totalitarian world government in order to regulate and prevent the emergence of AGI let's just say dangerous AGI I mean maybe the point of the world government is you know it's institution instituted in some kind of enlightened as enlightened as it can be in some enlightened fashion to slow down dangerously fast progress uh not to completely stop the development of advanced AI but to wait until it's safe or to prioritize Research into its safety and slow down the other kinds of research until that makes more progress um maybe that's the point maybe it's just to completely prevent progress at all but it's hard to escape the sense that uh that this is going to be the um where a lot of people are going to end up some people are already there right sherink police yeah that's a good one so I don't know that Bostrom is I'm sorry I was just going to say that's the term in Neuromancer for the organization that surveils AIS and tries to stop anyone who's trying to make them um more intelligent oh that's great I think it's part of a totalitarian World Government but it's at least a part of the um a part of the world hmm so let me we could argue with this implication let me defend it a little bit uh why does it have to be totalitarian why can't it just be okay the Americans and the Chinese and whoever else is relevant come together and agree that yeah we shouldn't do this um and we'll surveil the organizations within our own borders and you know to build AGI you probably need Big Data Centers or at least you're you know you will have purchased a lot of
gpus that's relatively easy to track not many people are going to have their own semiconductor Fabs uh so maybe it seems plausible to track the gpus and who's accumulating them more or less the way you track people shipping drugs around uh then you go to gpus sorry I do want to interrupt and just ask for clarification on the GPU thing um because I think the thing that comes to mind is something like um to compute as a service like Amazon's uh cloud service compute and the many other vendors who kind of offer gpus runtime on gpus to anyone that's not so easy like who's going to notice if you if you buy up a significant chunk of Amazon's um gpus in their data centers or maybe the work the this totalitarian system is is meant to look and say well hang on Amazon like the fact that you're hiring these out to any to just anybody is not significant if you're going to have these gpus we need to have more insight into how people are using them and who's using them or something like that does that come along with this that's what I would imagine or maybe maybe Cloud instance GPU instances are just outlawed hmm uh you know we have in mind it's not this is not going to be softball right uh this will be uh very serious if people end up attempting to do it uh so I don't think that would be considered to be out of Bounds at all no I mean if you look at what the the recent American regulations on on shipping Advanced gpus to China or semiconductor equipment to China are they're not messing around with that and I don't see any reason to think they would mess around with with this either if they decided to take it seriously okay but maybe okay I'm trying to make the argument that that isn't sufficient right so that would be one kind of Middle Ground where for whatever reason if the Chinese and Americans decide yeah we don't want to do it uh they could monitor regulate gpus and data centers and and maybe slow down progress or stop it completely um maybe monitoring semiconductor Fabs is is enough [Music] so that that could be doable that isn't totalitarian World Government right that's more or less just the status quo you know for a long time cryptography was maybe not for a long time but for some period of time cryptography was was regulated in some way and many other technologies have been regulated uh in order to because of concern about military applications so that doesn't seem too far-fetched uh yeah so I guess one question I'm asking is is that enough so suppose you you
believe one two and three then you you might believe that people will do that is there any reason to believe that won't work I guess my sense is I don't think it's enough um it seems like there would be enough defectors uh the linchpin there is probably semiconductors even if you know suppose you have some Rogue State somewhere who just wants to go ahead and build the AGI just wanting to build it isn't enough you need access to lots of gpus but that's presuming that there aren't sufficiently many gpus around already to build it and that you need Next Generation gpus that's a kind of scale argument Maybe and by the time that this all comes to a head maybe there's enough Hardware just floating around to get it done or to get pretty close so I don't have a good sense of whether that Arrangement will be sufficient hmm um is there an incentive to defect for countries for like military purposes so this seems to be relevant in the nuclear case [Music] um if the country is so worried about the destruction of themselves then that pushes against them wanting to advance the technology as a destructive weapon against their opponents but uh yeah it's not clear to me how those things balance out yeah that's that's a good comparison let's imagine a kind of mutually assured destruction scenario for AG I mean why would why do states want nuclear weapons well it's actually a very useful defense mechanism right if Ukraine had nuclear weapons then the probability of them being invaded is much lower many states or nuclear weapons basically as a deterrent for more traditional military adventurism within their borders so why would you want an AGI for that purpose well you can imagine a scenario in which cyber warfare is largely done by AIS that seems pretty close to happening already so if you have a significant exposure to um Cyber attack like almost all modern states do then you might need an AGI in order to and be able to threaten to use it I mean cyber defense is close to Impossible Cyber attack is trivially easy so maybe there is no there is no defense in in cyber warfare that's just the threat of retaliation and in that scenario and you need the AGI sorry yeah uh Rowan's comment is relevant to what I was saying every nation Builds an AGI and doesn't turn it on question mark um I I'm a little confused because what you're talking about damn seems to be a world where you've built a AGI or a pre-agi or something for your well like start with everyone has like a pre-agi just an AI for their defense
purposes and offense purposes as well and so we're talking about who's going to build an AGI building an AGI seems like not like building a bomb that you could then use it seems more like letting off a bomb um if we if we buy the first proposition which is that we actually can't meaningfully control it if you have an AGI that is not turned on that implies that you've been able to create an AGI turned you know I could turn it off State I don't know if that makes sense because that seems like a way of having it under control no I think it does I think it's very analogous to a nuclear weapon going critical I mean to set off a nuclear weapon means to just scale it up to some phase transition and then passed it and boom uh suppose we believe that agis aren't in some quantit qualitative way different to AIS it's just a matter of scale uh it's something yeah yeah there's some criticality some phase transition past which it kind of wakes up and becomes much much more capable then it would be relatively easy to have an AI in some sort of uh Sleeping state right just sub-critical and you just turn up the amount of resources that are available to it maybe have some understanding of what that is and that's totally seemed totally plausible to me that you could be sitting there uh being attacked and knowing your AI defense system is not up to defending because it's just not good enough compared to the attacking Ai and then you say okay well uh fine we're going to turn this thing up a few notches it's going to become an AGI and well whatever it's going to do it's going to destroy you first so you better stop yeah that's the same as mutually assured destruction right yeah yeah that's right so that's okay that that makes some sense it seems to me that maybe another conversation could be had and I think this conversation is had in various places I'm not really across them but um of we don't know where the phase transition is because it's hard to test this um if it is really so explosive that the first time we cross that's the end for example I don't know if that's exactly how to describe it but then you have a situation where maybe we have some theory about where the phase transition is maybe we should be worried though about people kind of creeping up to the phase transition and accidentally Crossing it because it wasn't exactly where they expected to be or they weren't exactly where they thought they were in regard to it or something but yeah maybe that's a separate conversation it's kind of accidental
explosions yeah I think that's that's right uh okay I guess I'm still my thinking on this is not settled either I'm still unsure whether I think that totalitarian world government is necessary or whether just a stronger form of controls that already seem plausible is enough um but maybe proposition three here is doing quite a bit of work so uh Maybe okay maybe we're thinking we're expanding too strongly around the present day so maybe let's imagine a state of affairs in in 10 years or 15 years where it's becoming very clear that AGI is possible and that this will be extremely discontinuous uh in terms of economic contributions and Military implications so there are lots more semiconductor Fabs around right Americans are building more Fabs in America because they see the Reliance on Taiwan as being an unacceptable strategic risk uh Europe is trying hard to you know develop internal semiconductor capacity not entirely because of thinking around AI but partly okay so let's imagine a scenario in which they were just more semiconductor Fabs around manufacturing maybe not at The Cutting Edge but enough that you can accumulate gpus and sort of do it um then yeah maybe the the coordination necessary to prevent proliferation just starts to look more and more difficult or there's enough players that are really close enough to the line that uh it doesn't look plausible to do it in any way but um something much more unified and restrictive a third challenge well actually I lost count but an additional challenge is that um oh no I forgot I come back to me sure yeah I mean it's not clear how anybody forms a totalitarian World Government without having an AGI already that's the Singleton model yeah so one model in Bostrom is that yeah this is totally you imagine happening where where the the aim is not to prevent all dangerous agis but to prevent other dangerous agis the first one just wakes up and is like okay that's it I win uh that you know seems yeah um possible okay um but for the arguments I wanted go ahead um I think it took a long time to establish the kind of current um equilibrium around nuclear proliferation I don't know much about the process but you know this this is we're currently in a situation that has taken many decades to produce but we're talking in this case about these transitions hypothetically happening on a much faster time scale and so that might make it even harder to coordinate yeah therefore raises the risk of people making large moves and directions
um yeah I think that makes sense okay uh I think that's kind of enough because the the rest of the political discussion doesn't hinge too much on the details here although they are interesting so let me now come to Teal's conclusion um and yeah it's many people are strongly uh anti-teal one could say uh but I think it's a mistake not to take what he's taking saying very seriously it's one of the few people actually thinking um rather than just going along with some herd and I do think this is a deep question so Teal's response to that is to say just don't choose totalitarianism maybe there's a second part but let's just focus on this for the moment okay it's not the first time uh all right so you can find arguments kind of implicit in Boston I don't know if he's taking some public stance on advocating this but for example Bertrand Russell and many other philosophers did take the position after the Advent of nuclear weapons that maybe a something like a world government a very strong world government to regulate their use he wouldn't have said totalitarian uh but maybe the only way to have the human race continue is to have some kind of world government which just brutally prevents people from developing nuclear weapons um there were advocates for the Americans at the end of the second world war once they had the first bombs just taking the opportunity to completely take over the entire world to push on to invade Russia not have a cold war just end it otherwise the species was doomed many people made this argument um I think Von Neumann was was saying this so this was an active argument very taken very seriously at the highest levels of Intelligentsia and the Elites for forming a totalitarian world government to control a previous technology nuclear weapons which did seem to offer the prospect of destroying the species and therefore Justified extremely strong moves against you know maybe previous ethical commitments now we look back and we think that would have been a mistake there is a world in which that happened uh you know probably it didn't happen because it's hard to imagine American voters going along with that or being convinced of that uh but it could have happened uh now the world didn't end so far and the species hasn't wiped itself out with nuclear weapons so in that case the simple Mantra of just don't choose totalitarianism would have been the right call uh now we're moving up to it's not currently at the level of the discussion you know the the level of alarm that people had
around nuclear weapons but you can see it getting there pretty soon where similar figures also famous you know High status people will be arguing that maybe something like totalitarian world government is necessary to prevent uh AGI from wiping out the species and maybe they're not very vocal about it but I'm pretty sure a lot of people think this way already uh and yeah there's Alexander was referencing there's a dog whistle which is the talk about pivotal Acts uh which I haven't looked in too much I don't know if Matt you know what that's about but I have some imagination that yeah there's some discussion about pivotal acts in the kind of less wrong Forum posts that uh that might actually be a coded reference to to this okay um so teal is saying no even if it seems like you believe propositions one two and three there's no degree of evidence that will convince him to choose totalitarianism the second part of what he's saying is that all things being equal uh the way you improve The Human Condition is economic growth and if AGI is what gives you economic growth then you should gamble on that now I'm not saying I agree with this position I think it deserves to be taken seriously and can't be dismissed uh but I think that increasing I think Teal's on to something it seems to me that increasingly political debate revolves around if we get close to AGI as we get closer to AGI I think my prediction would be that political debate revolves increasingly around your attitude to this exact topic so I'm interested in takes on that I wanted to maybe put give a little bit of exploration onto how our current sort of political divisions map onto map onto this question but yeah any any takes on that um I mean sorry go ahead round oh I mean it seems like I don't understand like like he's like okay just don't choose totalitarianism but he's not addressing like the the issues that that led to totalitarianism right like this just sort of I don't know I don't yeah that's a place where I think you can I think his position I don't want to speak for him but maybe we should ask him to give a seminar um uh I don't want to speak for him but I I think his position would be that you can make the mistake of intellectuals is to think that you need to sort it out right it's not like anybody had a clear idea of how mutually assured destruction would really work like if you're sitting there and you know in the 1950s thinking you know some people are saying we need a totalitarian world government or we can all blow
ourselves up nobody looking at the problem thinks nah that's impossible anybody's sensible looking at nuclear weapons and and how easy they are to build is thinking anything other than yeah everybody's going to have them soon and we're toast and most people were thinking that who were paying attention very few people had some reasoned argument for why things would be fine and 2022 the world economy would be much bigger than it was in 1950 and more or less things worked out okay so far there's no argument you can't convince somebody that that's true it's like it's a kind of ridiculous argument it just looks like wishful thinking right so what teal is saying is something like just don't choose totalitarianism even if there seems to be a strong argument for totalitarianism and the argument for not implementing totalitarianism looks just like a loose collection of wishful thinking so I think that's that's a that's the harder pill to swallow right it's not like you can give a good argument for why things will be fine and therefore you choose to don't have totalitarianism he's saying even if you think it might be necessary don't do it hmm I don't know I can also see scenarios that would lead to totalitarianism where it wasn't like explicitly chosen right like you know say building AGI is not like easy easy but it's you know it still requires like sort of nation levels of resources to produce the computers needed or something like that and then you know Nations begin to impose sanctions on one another to hamper each other's growth in producing AGI because they're suspicious of each other producing it and then that leads to some sort of you know All Out World War or something then you could end up at the end of that with a totally totalitarian government uh even if that wasn't what you sort of chose I guess yeah I would say that's the main line Singleton storyline I mean you you end up there not with no AGI but one maybe or even even you know War without a without AGI so no one succeeds because they've attempted to you know hamper yeah yeah I think that seems to me just putting in details into the scenario that's on the board isn't it that may be how it plays out I guess yeah it may be cooperative like everybody agrees let's not do that but it may also be that somebody makes a decision that they're going to stop everybody else and themselves potentially um yeah we're gonna say something earlier Matt yeah so gamble on growth sounds to me like another um like or bet on BET on growth it's it's
like, stick to it even if there seem to be arguments against it, or something like that. It's a big question. You said that Thiel says, what did you say, that in the past growth has been a robust way to improve things. So the question is, does it continue to be so when the environment changes, or are we measuring the right things? It would be nice if we could directly measure flourishing or something like that. Whenever you're measuring something that's not what you actually care about, that measure is open to becoming divorced from the thing that you care about. I know there were some previous talks that I unfortunately missed which discussed economic growth versus what people actually care about, so maybe this is a conversation that's already happened, but my question is, well, no, I don't have a question. The question is: what do we use to guide us?

Yeah, so you've kind of raised two questions. One is whether it is sensible to take this gamble even if you are unsure that growth is really going to increase the things you truly care about. That's of course one of the central political debates in every democracy, and not only in democracies, it's a central debate in China, right? Among the elites, for the last number of decades there's been a consensus that growth is what we're aiming at, and industries have been reorganized, taxes have been cut and services cut in the name of increasing growth. In many places these were extremely contentious political decisions, and then you can say, even if growth did increase, did it really improve the lives of most people? So it's never uncontentious to gamble on growth. This has been one of the major disagreements between the mainstream parties on both the right and the left until quite recently: whether most people in democracies really wanted it, were pro-growth. Maybe that's starting to change in some ways. So at any time you could say it's unclear that just trying to make the GDP number go up really translates into the things you want, and then the advocates will say things like: well, you don't have any clue either, buddy, you don't have any better ideas. Everybody's cheating all the time and lying all the time; GDP is a number that's a little bit harder to fudge than
everything else, even though it is possible to fudge, so we're going to try and make that go up, because otherwise every other thing you try just gets completely swallowed in corruption and misbehavior instantly. It's not like anybody thinks it's actually a good idea; it's like democracy, right, it's bad, but it's the least bad. I think that would be the sensible take on why you want the GDP number to go up: it's a bad idea, but most other ideas seem to turn out much worse in practice. So I think Thiel would be saying something like: okay, you say that now is different, AGI is going to change the story, but people have been saying all along that GDP growth really isn't the thing we want, and now you have another reason why it's not the thing you want. Why is it really different now? You've always wanted to get rid of this emphasis on growth and now you have another reason, but I don't care, I'm still gambling on growth.

Okay, my position, or I don't know if this is my position, but what I'm thinking about at the minute, is that growth is doomed, or something like that: if you push it far enough, the connection to what we actually value will definitely break, because of Goodhart's law or something, which is the one that says anything you use as a measure and then try to optimize ceases to be a good measure. This is actually also the reason why I believe proposition two, the one about highly intelligent beings not necessarily caring about your well-being. What I believe is something like: if a highly intelligent being does not care about your well-being then it will be effectively adversarial to you, because optimization of a certain thing tends to push other things to extremes, and humans don't live in extremes, extremes of temperature or the amount of oxygen in the air, these kinds of things. So I think that growth is unsustainable, there is a limit to how much growth you can have, and that would be my worry about this. Why I think it's different is that the gamble on growth is kind of saying there is no end to when we can rely on this, but it seems to me that there's definitely an end, and so the question is how much growth we can actually take before the whole thing breaks apart, and I'm worried that that's sooner than maybe Thiel thinks.
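To make the Goodhart's-law worry above concrete, here is a minimal toy sketch. It is an editorial illustration, not something presented in the seminar; the function names and all the numbers are invented purely for the example. It shows a proxy measure that keeps rewarding more "growth effort" even after the thing we supposedly care about has started to decline.

```python
# Toy illustration of Goodhart's law: a proxy ("GDP") that keeps rising with
# optimization effort, while the true objective ("flourishing") peaks and then
# declines. All quantities here are made up purely for illustration.
import numpy as np

def flourishing(effort: np.ndarray) -> np.ndarray:
    # Hypothetical true objective: improves with effort at first, then is
    # dragged down as side effects pushed to extremes start to dominate.
    return effort - 0.15 * effort ** 2

def gdp_proxy(effort: np.ndarray) -> np.ndarray:
    # Hypothetical proxy: monotone in effort, so an optimizer of the proxy
    # never sees a reason to stop.
    return 1.2 * effort

effort = np.linspace(0.0, 10.0, 101)
true_peak = effort[np.argmax(flourishing(effort))]
proxy_peak = effort[np.argmax(gdp_proxy(effort))]

print(f"flourishing peaks at effort ~ {true_peak:.1f}")
print(f"the proxy keeps rewarding effort all the way out to {proxy_peak:.1f}")
```

The point of the sketch is only the divergence: past the peak of the true objective, optimizing the measure and optimizing the thing you care about come apart, which is the sense in which "the measure ceases to be a good measure".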
Yeah, I think that's a good take. I mean, is Thiel actually referring to GDP when he says growth? I don't know. It seems to me that it's clear that machine learning, AI, not AGI, will lead to good things for society, right, potentially maybe some bad things as well, but there seem to be clear benefits, and whether you can measure that with GDP or not seems like a separate argument. If by gambling on or pursuing growth you mean pursuing new technologies that benefit society, that seems like a more generous way to interpret it.

I think that's a good point. So it would be something like: if the measure is breaking down, then find a new measure, come on. Or, GDP is unreliable, but measuring growth and actually pursuing growth seem like slightly different things, right?

Yeah, that's a good point, so let's retreat to the thing you said: gamble on technological progress improving people's lives, something like that. But then that's tricky, right, because losing your job doesn't feel like an improvement to your life. Shifting from horses and carts to cars involves a lot of people losing their jobs, a lot of heartbreak, a lot of retraining that doesn't work for everybody; the wealth shifts, and that's destabilizing. Not to mention the horses. So as we've discussed a lot in this seminar, technological disruptions have winners and losers. It's not always possible to accommodate the losers and compensate them in some way such that they say, okay, that was pretty weird, but I feel okay with it because I got this large payout or something. In principle we should work much harder to do that. I think true economic growth will involve destabilizing changes like that, and it's important, not only for moral reasons but just practically: if you want to move society through those changes in a way that doesn't break it, you should be taking very seriously the part about compensating the losers from these disruptions. That seems just obvious, and we don't do anywhere near enough of it, it's like a token effort. But even with that, that's part of the reason why people don't like economic growth, right?
It is disruptive, things change, the winners and losers shift. You look at China: the northeast has been in something like an economic depression for decades, because it's a large steel-making area, and steel making is shifting to other methods and to other countries. The government doesn't really want that part of the industry, it wants to shift to more modern industries. It's a terrible wasteland of human suffering, that part of China. Is it good for the country? That can be interpreted two ways: is it good for the country to make that economic shift? Yes, obviously. Is it good for the people in that area? No, obviously not. So it's easy to say growth means people's lives getting better from technological improvement, but the actual process of that really is non-trivial. I think Thiel would be saying: push forward on that even if you can't do it perfectly. That's the argument for growth, right? You might say, well, let's get ourselves sorted out and wait until we can really ensure the process is done equitably and the losers compensated, so that we all get on board with technological disruption, and then we'll do it. In practice I don't think we're capable of that kind of coordination, it just won't work, we just won't; the equilibrium in that direction is no technological progress. So the argument for allowing that kind of human suffering is that in the long run it works out better for everybody. It would be better if we could do it better, but we just don't seem capable of that even at our maximal attempts, so proceed anyway. That's the argument for basically the way the world currently works, right? And the argument for AGI fits into that: it's something like, okay, maybe it's dangerous, it's clearly going to create winners and losers, maybe we're incapable of properly compensating the losers, or we don't seem on track to be doing that in time for it to make a difference to the initial stages, but let's go ahead anyway, because it's the only way we know how to escape this terrible looming low equilibrium of human existence up until the Industrial Revolution, where most people were suffering most of the time.

Right, "as long as it's not my pain", says everybody.

Yeah, that's right. It's a serious position. It seems a little bit like a contradiction to me to say, oh well, you
know, we should make our maximal effort to do this, but then just do it anyway, because by saying "but then just do it anyway" you're kind of giving away that you are holding something back. But maybe that's just the wording; it's really about finding a balance, and people must be saying something like: don't get distracted by making things less violent, if I can use violence as a way to describe the winners and losers; don't waste too much time trying to find a way to make things less violent, because that'll be a rabbit hole we go down, we'll stop making progress, and that's going to be worse overall.

Yeah, and to be blunt, the Americans are the ones who push the trade-off very far in the direction of "let people suffer", and that's why they're the place where AGI is probably going to happen, and where many other technological transitions have happened, the place that has some of the most impressive growth and transformation. It's the reason Silicon Valley is there. Most places in the world will not put the trade-off where they do. Now, I chose to live in Australia, right, I don't want to raise a kid in America, so I'm not saying where my position is exactly on this spectrum, but the Americans clearly are one of the cultures that puts it at a different place to the Europeans and the Australians, and you can see the results.

Okay, now I want to talk a bit about how left and right, traditionally understood, map onto this kind of spectrum. Let me put it on another board, I don't want to lose these. All right, so here's a graph. On the horizontal axis I'll have left and right, and the vertical axis is going to be kind of accelerationist, "do it", versus, I don't know what you call them, Erewhonians?

What's that?

Yeah, I think that's like an extreme version; maybe the origin point there is the Erewhonian position, because they get offended when this guy comes in with a mechanical watch and they're like, oh my God, what is that blasphemy. I'm going to call them precautionaires, as in legionnaires. That's not saying you don't want AI, right, it's more like maybe we should wait 100 years. Somewhere near the bottom could also be Luddites, which are
people who might strike out against attempts to build AGI, like we were talking about before, the kind of acts you would have to go through to stop people who, you know, should be stopped. Okay, so let's try and fill in some of these quadrants. If you're on the left and you're a precautionary person, that seems pretty common, right? This is people who are fairly anti-tech, who think that, for example, we took too many risks deploying social media technologies out in the world and this was a mistake: it was causing depression in young people, it was deranging politics, it was opening up our internal democratic processes to manipulation from external actors; we were too quick to move away from the post-World War II manufactured-consensus, manufactured-consent kind of system, which was, well, not even soft: there really was a regime of censorship in most Western democracies until social media. You could say it was benign or not, that depends on how much you agreed with the people in charge, but there was a kind of enforced consensus. You can read Manufacturing Consent by Chomsky and, I forget the co-author, for more on how that works, but it did exist. It was blown apart by social media and we're living in the aftermath. You could say more free speech is good, but then you have to deal with the other implications. This isn't only on the left, right, there are plenty of people on the right who would make a similar critique of tech, but maybe it's stronger on the left. Here I would also put many environmentalists, not all. For example, Adam is an environmentalist who would be in favor of some kind of attempt at geoengineering if push comes to shove, or more aggressive development of technology in order to mitigate climate change; he would sign up to some degree to a precautionary principle, but wouldn't want people to just completely stop and use it as an excuse to dial back, because there is no way of dialing back the amount of technology if you want to actually solve climate change. That's Adam's position as I understand it. But there's certainly a large part of the environmentalist movement on the left which would fit into this box. I guess environmentalism isn't necessarily having much of a bearing on AGI as it stands, but people who are currently environmentalists probably would be sympathetic to not pushing
forward too fast with AGI as well.

I would imagine there's at least a contingent of people in AI who point to the environmental impacts of the AI technologies themselves as well.

Right, yeah. Maybe there's a techno-environmentalist contingent that breaks away from the environmentalists and goes up into the quadrant above, that says more technology is the solution to these problems. That's a good point. In this category you would also put a lot of people who are organizing their work under the title of AI ethics, I suppose.

I'm not sure they're necessarily going to be precautionaires.

Well, there's a significant contingent arguing for restricting the deployment of large language models until such time as they can be made safer. Now, that's mostly not about existential risk, it's about bias and other very reasonable concerns, but on the time frame we're imagining, if the time frame is relatively short, those issues will be unsolved, and then you'll add the prospect of multiplying them by a thousand. I can't imagine the same people concerned about large language models suddenly becoming enthusiastic about AGI deployments.

No, I think you're right. Most of the people in this camp would say, maybe more ethical technology would be good, but that doesn't necessarily make them accelerationists, because they probably also believe that at the moment accelerating technology is not going to lead to more ethical technology, it's going to lead to more unethical technology.

Yeah. We should be clear: I think everybody, if they were given the chance to push the button on a perfectly safe technology and make it go faster, would push that button, right, it's going to have only positive consequences. So accelerationist has to mean willing to take the risk, roll the dice, and precautionaires have to say something like: are you crazy, don't roll the dice, because you roll a one and you win, and on any other face the human race disappears or ends in dystopia for all time, so just don't roll the dice. Okay, let's fill in some more of these corners. I was also going to put up here a kind of neo-communism, or a kind of anarchism, techno-anarchism, which would fit in this box. I think the Chinese approach to AGI probably will eventually fall into this camp, definitely accelerationist.
There's almost no real discussion about existential risk and AI safety, in the sense that we've been discussing it, in China. I mean, there is some, it's emerging, but there's much more optimism about technology in China in general, and the government sees the role of very advanced AI as so essential to maintaining, or maybe just establishing, control in the country, and maintaining stability and delivering on its promises, that I see very little prospect that the race towards AGI will slow down in any way within China because of concerns about possible negative impacts. So that's a kind of chasing of a utopia, not exactly Marx's vision of communism but the version that emerged in China over the last 50 or 60 years, with AGI as an enabling technology layer for those utopian visions, and damn the possible derailing of that.

I'm not sure this is exactly the same thing, but I've had the term "fully automated luxury communism" thrown around.

Oh yeah, interesting.

Right, which is, I don't know anything more about it than the name, but obviously a kind of utopian vision of basically abundance, and then communism making everything fair and equal, and everyone living in luxury because of it.

Yeah, I think that's essentially the Chinese plan. Okay, another one for the quadrant underneath that, the left precautionaires: I think there might be some people who are anti-corporate, and I don't know if this would make them want to slow things down or whatever, but if you're anti-big-tech, not just anti-tech, because you think that progress in AI will just lead to these corporations having more power, then that's bad, right?

Yeah, that's a good point. I'm just looking through the chat here: what's ABC Vote Compass? Oh, the Australian Broadcasting Corporation, you know, they have the left-right axes. This is the new one: the social stuff is all figured out and we all agree on it, and so it's just about whether to be accelerationist or precautionary. Yeah, right. Okay, what about these other boxes? Right and precautionary, well, that's a very natural conservative position, right? Conservatism and capitalism have sort of been associated in both Australia and the US and many places, but that marriage is always a little weird, given the role capitalism has in destroying and remaking society. To have conservatives be pro-capitalist, we
sort of got used to that, but there's a kind of essential paradox there which can blow apart at any time, and maybe to some extent is blowing apart; that's part of the rift in right-leaning parties that we've seen in Australia, in the US and elsewhere, this kind of marriage of convenience not quite working anymore. But you can imagine a strong movement on the right, the present-day version of which is lots of people who lost their jobs to manufacturing automation voting conservative because they want the old way back. Now, of course, it's arguably neoliberal policies that moved those jobs overseas in the first place, so many people have talked about how that doesn't really make much sense, but you can imagine that in a situation where AI is automating many more cognitive jobs, you could have an emerging faction or party which is Luddite in orientation and frames it in conservative language, which says no, we don't want AGIs, and we don't want anybody else to build them either, because if they do then all these jobs will be gone no matter whether we automate them or not, and we like things just the way they are.

"Conservative" is kind of overloaded in the way that you're saying, but in the most obvious sense of the word, small-c conservative I guess it's sometimes called, it just means you want to conserve things the way they are or the way they used to be; you're trying to get back to a system, or stay at a system, that you know works, and so that's very naturally in the precautionary category. Is the reason you put it on the right, though, because this is also typically aligned with the right part of the political spectrum? It depends how you define the political spectrum, but usually the right part is defined to be something to do with conservatism. But if that's the case, then right and precautionaire, like, your axes aren't linearly independent, if that's not getting a little confusing. I mean, I'm happy to put it under conservative, it's all kind of one thing anyway.

Yeah, maybe that's a good point. I guess in practice what I mean by right and left is more like the political equilibria that exist and the self-identification of people along that continuum. As I just said, it's not like conservative ideals are really predictive of whether you will vote for, say, the
Republican Party in the US or the Liberal Party in Australia. So right and conservative are very far from being the same thing, but the intersection of right and precautionary seems to me to have conservative as a big part of it.

Sure, small-c conservative, I guess, and right is something like big-C Conservative, which is the name of the party itself or something, so you have the intersection being both of those and we just call it conservative.

Okay. The top right is kind of easy, right: this is where the Altmans sit. So this is capitalists, some parts of big business, not all, and that's kind of interesting, so let's call it tech-aligned business or something. Many businesses stand to be completely wiped out, right? Look at the fossil fuel industry and its attempts to lobby against renewables: they're not accelerationist on that technology, because it directly undermines their business model. So it's very far from the case that all businesses will be pro-AGI; many of them will support whatever political party emerges which is against it. But there will be many businesses that stand to grow enormously, and those, for self-interested reasons, will choose to look the other way and not think carefully about the risks until it's too late. That probably is the status quo and what you would expect.

You're revealing your take on that, I guess; these people would not say that they're waiting until it's too late, they would say that this is the only way, that they should make this bet, it's a rational bet to make, and so on.

Right, yeah. Obviously that's not a completely agnostic opinion, the way I just described it. I think I'm not settled on my view, maybe less settled than you or Alexander seem to be, on where I stand regarding what the right attitude towards AI safety is.

Sorry, no, I guess I was just referring to the fact that it was a loaded way that you described them.

Sure, yeah. Okay, are there any other obvious entries to make here?

Yeah, so based on Thiel's talk, is he in this quadrant? Because the conclusion was bet on growth.

Hmm, I think so, but it was also a cautionary tale, he was pointing out risks. It is interesting. I mean, he is a conservative, and quite philosophical actually, and
one of the other things he says in that talk, which is quite interesting and highly relevant to this discussion, is that he sees classical liberalism as being in some sense dead. There are many people out there, including myself, who would identify as classical liberals and find it somewhat uncomfortable in the modern political landscape. So what do people like me do? Well, you could say that actually the Enlightenment values are still the right ones and people have just sort of gone wrong, but that's in some sense to stick your head in the sand and not acknowledge the reasons why this consensus has fallen apart. There was for a long time a kind of unspoken agreement that Enlightenment values and classical liberalism were the framework for, say, Western democracies like Australia, maybe not very carefully examined, but it's no longer true. You can see this in surveys of young people and their beliefs about free speech and many other topics that would be part of that consensus. Now, it's not clear what the replacement is, but it's clearly not the case anymore that that consensus is as strong as it was, and maybe it's not strong enough to really survive. So what do we do about that? You can just decry the corruption of young people by the media, or some other old-person behavior, or you can attempt to figure out why it no longer looks so attractive and take that seriously, and say, okay, well, that point of view developed to a certain extreme just revealed how corrupt it was, and in practice how easily allied to capitalism classical liberal ideals were; people have rejected them for various reasons and maybe some of them are good reasons, and you can attempt to find new, better ideas. That's part of what Thiel is trying to do, and he's on the right of the political spectrum and it's tied up with politics that maybe you don't like, but the idea of trying to think through again how to solve the problems that those ideals were trying to solve seems to me like a worthwhile one, and in light of this discussion it also seems quite pressing. People sometimes underestimate Thiel; I think one should take him quite seriously and think about the things he's trying to do.

Okay, so this is saying not only that classical liberalism has fallen out of favor, but that it's no longer adequate, like even if people thought
through it and came out with the solution, it would now be a wrong solution, where possibly it was a correct solution, or a better solution, in the past. Is that what this is saying? It's stronger than just saying that this isn't people's current political belief.

That's right.

Okay, cool.

Many of the vectors that are involved here are sort of obvious ones that you can just choose not to pay attention to, right? So, the active topics of debate around the role of Twitter in our society: to what degree should speech on Twitter be censored, to what degree do you trust people to really process information and make decisions for themselves? We have these funny points of view on this, where we trust people in principle but in practice we think their minds should be very carefully monitored. So in practice, when push came to shove, a lot of those classical liberal ideas, people really don't sign up to them when you face them with new technologies. When people actually can all communicate with each other all the time, a lot of people in practice don't believe in free speech. I'm not saying that's right or wrong, but it was an easy ideal to maintain pre-social media and much harder to agree on post-social media, and that's just where we are. Okay, I guess this conversation is going a bit off topic. All right, so that's where this lands.

You're dead! That was because you said classical liberals are dead; that's why I laughed earlier as well.

Oh, I see. Yeah, well, "homeless" is maybe more accurate than "dead". Okay, I wanted to conclude this little discussion by maybe drawing a box around two of these. Not all of these quadrants are equally strong, I would say. The top right seems like quite a strong unit: it's got the GPUs, it's got the money, it's got a lot of the talent, they stand to gain a lot by winning, so it seems like that's a stable coalition. The bottom left also seems quite strong: if you look at the way the New York Times writes about Silicon Valley recently, or the way a lot of the media talks about tech companies, the way Elon Musk is written about, this point of view dominates a large part of higher education, of the media, arguably politics. So I think this is also a very stable
and powerful bloc. The other two I'm not so sure about. I mean, one shouldn't count out China, right, which possibly sits at the top left, but within the West I see the two main political blocs as falling on this diagonal. So if I want to make a prediction, this is what I see as the emerging attitude of the existing political spectrum to AGI, especially if it seems to be coming very rapidly and there's no time for new factions or coalitions to form: I expect things to fall out more or less along these lines. [Laughter] Well, we'll see, won't we, that's the question. Maybe this way: there's possibly more people in the bottom left and more money in the top right, and the AGI is potentially up there.

Okay, this is interesting, because the left-out quadrant, the small-c conservatives, I wonder if they swallow their social conservatism, or come to some agreement with the socially progressive, technologically conservative, economically conservative people, and kind of group together.

The grouping with who, sorry?

Oh, with the bottom left, which I would say currently includes socially progressive people, but it doesn't have to: you could be socially conservative or socially progressive and still be anti-tech, environmentalist.

Yeah, I think that's interesting. That's also a prediction I would make: the diagonal suggests that these axes, left-right and precautionaire-accelerationist, are kind of just the same thing, but they're not. So I do think that as AGI becomes more of a topic, we'll see perhaps quite disruptive realignment within the parties on the left and the right. Many people within Silicon Valley are on the left, right, they're progressive, they vote Democrat; they don't sign up to the more aggressive forms of environmentalism or the anti-tech stuff, of course, but if push comes to shove and they have to choose between the bottom left and the top right, they may choose the top right. And that's actually an ongoing process: the current situation with Elon Musk and Twitter is quite illustrative, right, the grip that the progressive left has on Silicon Valley may be dislodged by this debate around accelerationism.
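As a compact restatement of the board being described, here is the quadrant map in a small data structure. The groupings and the "strong blocs" judgement are taken from the discussion above; the dictionary form and labels are editorial, and the lists are not meant to be exhaustive.

```python
# The two axes sketched on the board: left/right and precautionaire/accelerationist.
# Entries are the groups named in the discussion; nothing here is exhaustive.
quadrants = {
    ("left", "accelerationist"): [
        "neo-communism / techno-anarchism",
        "the Chinese approach to AGI",
        "fully automated luxury communism",
    ],
    ("left", "precautionary"): [
        "anti-tech critics of social media",
        "much of the environmentalist movement",
        "AI ethics, anti-big-tech",
    ],
    ("right", "accelerationist"): [
        "capitalists and tech-aligned business (where 'the Altmans' sit)",
    ],
    ("right", "precautionary"): [
        "small-c conservatives, potential Luddite factions",
    ],
}

# The diagonal judged strongest in the discussion:
strong_blocs = [("right", "accelerationist"), ("left", "precautionary")]
```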
Hmm, I wanted to ask a question, two questions if we have time. The first one is: you've mentioned the fossil fuel contingent, with the analogy being to the precautionaires or accelerationists in terms of energy production. I'm having trouble imagining the contingent that is analogous to the fossil fuel company. Like, what is "the old way" as against AI technology, other than just humanity? I want to live in a world with humans, where they're on top instead of these AIs being on top; we're the thing that is being obsolesced. But maybe the answer to my question is just whoever's lost their job most recently and hasn't found a new one.

Yeah, I guess I would say that I would imagine the active philosophies of people are going to shift quite a bit. Maybe you'll get people who think that AGIs are conscious, and that maximizing conscious flourishing is what matters, not just human consciousness, so they will be in favor of freeing the AGIs and letting them out to colonize the solar system, and hopefully humans are a part of that, but maybe not entirely. Some people would view that crowd as traitors to their species, and they would say, well, no, you're just limited and have the wrong priorities, and for the same reason I care about the suffering of foxes or the suffering of whales, I care about the flourishing of AGIs, and you should get on board with my philosophy, otherwise my AGI will squish you.

This is related to the transhumanism or post-humanism movement in philosophy.

Yeah. I think historically that debate has seemed kind of cute, because it's disconnected from anything that matters, and it's just kind of weird fringe people who entertain themselves with it, but it could easily become as consequential, and as engendering of violence, as Marxism versus capitalism, right?

I think it'll be a bit more like the abortion debate or something. I can imagine that, for example, if you're from a religious background, then it might be a little harder to swallow the idea that you would trade, for example, human life against the life of an AI, or treat them as equal things, when it's maybe easier to believe that an AI or a robot doesn't have a soul, for example.

Yeah, I agree with the prediction that that will be in a much more
contentious area than it has been so far. I mean, it's been contentious, but it's not widely talked about, it's an academic discussion. Yeah, that doesn't really answer your question though, so let me think about that. I think it's time for the tea break, so we can continue this discussion, but let's migrate down to the GPT-3 and ask it what it thinks.

Okay, sure. I have a question. I'll drive ahead now. All right, I can hear you. Okay, my other question was going to be about this: some of the discussion is kind of presuming that the framework of governments, left versus right and so on, and democracy, stays with us through this transition. If we want to predict what's going to happen on the other side within that framework, that's presuming that when we get to the other side that will still be the political framework, and that's not super clear to me. It seems there's at least some chance that we'll end up with, I don't know, maybe a totalitarian government, or maybe some corporate monopoly on the world. I wonder if you think it's worth separately considering the stability of this actual framework; for example, if we're going to consider politicians or governments to be an actor, it at least seems like they could be dislodged as the kind of framing actor as a part of this transition.

Yeah, you could easily imagine that even though the form of democracy survives, it's actually not really democratic. Arguably it's already the case that if the true power is in the economy, then the voters don't really control anything meaningful, and if AGIs are present, that scenario could just spread to other parts of the system.

Oh yeah, that reminds me of my kind of satirical take, which is that the AGI, the intelligence explosion, has already happened, and we've given it the reward function of economic growth, and anyone who tries to push against that is going to come up against Peter Thiel, who is a thrall of the AGI, and he's going to say, no, bet on growth, growth is good.

Yeah, well, that would make sense; it's probably hard to tell when you're inside it. I'm still thinking about your question: what is the analog of the fossil fuel industry, apart from just saying human beings? Well, okay, let's take the gap in values between, say, China or the Chinese government, and America or the American government, or
Australia. We value as our primary units of moral worth individual flourishing, say, and then we ask the question: what collective action can we take in order to ensure the greatest amount of individual flourishing? We have many disagreements about how to achieve that and the details of what that means, but roughly speaking that's how we frame questions about policy. That isn't the case in China, even implicitly or explicitly. If you look at their constitution it will have phrases like that in it, but in practice the way both the society and the leaders think is much more collectivist. That has its advantages and its disadvantages, and I'm definitely not going to say that one is in some objective sense correct, although I have my opinions. Now, that is already a case where arguably the Chinese system is really not valuing human flourishing in some sense; it's valuing some unit at a larger level of analysis, collective flourishing or social flourishing we might say. That's already a hint of a kind of point of view which is more pro-AGI, or pro a larger system than just individual humans, right, maybe not in the same direction as what will emerge from AGI, but a hint of it. So what I'm saying is maybe the analog of the fossil fuel industry is just classical liberals, people who centrally value individuals and human rights or something like that. And you could easily see emerging something like the ideologies that were popular in the earlier twentieth century, which emphasized the greatness of states. There's a little bit of that in modern China as well; it's not fascist exactly, but there is quite a bit of "restore the glory of the motherland" in the official Chinese propaganda. So you could imagine that kind of thing getting a big boost if you get AGIs and you're out colonizing the solar system.

Yeah. I mean, the collectivist transhumanist or post-humanist solution is a pretty appealing solution, at least on the face of it. I obviously need to think about it more, but it seems hard to imagine a world in which there are humans who are flourishing and there are also much more intelligent AGIs running around; it's kind of easier to imagine an equilibrium where we pass on the mantle,
is that the right expression, pass the torch, to them, and they go off. So, yeah, I wonder if that's actually going to be the only possible outcome, and then that gives me some relief, because I'm like, oh, I don't have to worry about the extinction risk scenarios, because the AGI will go on to live, and so it doesn't matter that all the humans died, which is at least a little bit comforting. [laughs]

Yeah, actually GPT-3's answer is not bad: it says the analog of the fossil fuel industry is the fossil fuel industry. The military-industrial complex, maybe. I guess that's true, you could say that traditional aerospace manufacturers, Boeing and Raytheon and so on, might stand to lose quite a bit from AGIs.

What's their position in the current automation of warfare? They're the ones building drones, largely, yes?

Yeah, so it seems like they've already got their foot in both camps. And then, okay, if the old-school fossil fuel industries had been the ones pioneering the renewable energy technology from the beginning, then we wouldn't be seeing such a... but, you know, the workers who were trained and didn't get the opportunity to upskill and come into the new world, those workers might be quite upset, and I think that's my answer to this question: it'll be the workers, whoever's displaced. If you're an unemployed person living in a world where you can't compete with the AIs for any of the jobs, that's probably not going to make you feel great about AIs.

A fairly generic answer, not wrong, but that's actually what it's optimizing for, right, the most probable answer; it's not aiming to be contrarian or interesting. I can't quite read the top one, but I also felt the same way. Oh, actually, pushing it up a little helped with the camera angle. It said democracy would be in trouble if people use the AGI to their own ends, or something like that, their own political ends, was that what it said? I don't know, it didn't seem like a complete answer, "there's many other ways", but I don't know what happened there. Chad, you don't have to run away, that's my fault, I guess, I don't know why it did that. It's interesting it gave that answer about nature; part of its prompt is telling it where it is, so it knows that it's sitting on a cliff near a knot, so if