WARNING: Summaries are generated by a large language model and may be inaccurate. We suggest that you use the synopsis, short and long summaries only as a loose guide to the topics discussed.
The model may attribute to the speaker or other participants views that they do not in fact hold. It may also attribute to the speaker views expressed by other participants, or vice versa. The raw transcript (see the bottom of the page) is likely to be a more accurate representation of the seminar content, except
for errors at the level of individual words during transcription.
The debate around Artificial General Intelligence (AGI) is heating up, with both left and right political camps looking to shape the future of this technology. Private companies, the military-industrial complex, and radical movements like transhumanism are all looking to get involved. But there are dangers associated with AI, such as existential risk, job loss, and potential misuse of power. It's important to consider the risks and rewards of AGI carefully and develop a roadmap to ensure the best outcome for humanity.
ChatGPT technology is becoming more prominent: the University of Melbourne released an official statement on it, OpenAI staff are likely shaping their bots' messaging, and Microsoft released Bing, a chatbot probably based on GPT-4 and code-named Sydney. This has prompted other tech companies to announce their own chatbot-based products, and ChatGPT has now been released as an API. The discussion of AGI and politics was revisited, looking at the different political coalitions likely to emerge and the graph from the earlier AGI and politics session.
Adam believes that an arms race for AI technology is inevitable and that the best way to ensure a good outcome is to enable the best available candidates. He sees no plausible proposals for de-escalating the arms race, or for how regulation could incentivize or disincentivize its development. He therefore leans, regretfully, towards the accelerationist camp, supporting the least-worst candidate to win the great prize of AGI. The discussion was organised around a two-by-two matrix crossing the political left-right axis with a precautionary-versus-accelerationist axis.
The left-right divide over technology is becoming increasingly relevant, with the left focused on social justice, mass surveillance, and anti-capitalist sentiment driven by technology's potential to entrench inequality. It was suggested that anti-growth and anti-capitalism should be replaced by an affirmative, positive objective and vision, rather than having such visions dismissed as naive. Big tech firms have been moving to the left, but they do not fit neatly on the left-right axis; Peter Thiel represents the capitalist side of the spectrum. A roadmap should be created to prepare for the future.
Peter Thiel is an accelerationist who believes growth is the highest ideal for human progress and opposes any kind of totalitarianism. His company Palantir provides advanced AI services to the government and may compete in the AGI space. The military-industrial complex is right-leaning, taking advantage of incentives that allow it to reproduce itself over time, while the left camp is more focused on individual rights, human rights, social justice, and redistribution of wealth. Thiel sees imposing global restrictions on AI development, even to stop humans from going extinct, as analogous to the kind of worldwide totalitarian government a communist system would need.
The military-industrial complex in the United States is supported by the right side of the political spectrum due to its nationalist, jingoist and isolationist stance. Private contractors are often used to implement products and services, though the government may undertake projects itself when minimal innovation is required. The speaker asks who can still enter the race to develop this technology: whether resources alone are enough to get across the finish line, or whether detailed innovation is still necessary, which would shape who the real candidates are.
The speaker discussed the risks associated with Artificial General Intelligence (AGI) and the need for caution when entering this field. They also noted that ChatGPT is even more widely discussed in China than in the West, with many people crossing the firewall to use it. In the West, public attitudes towards technology and science are often skeptical, while in China they are more optimistic. The Chinese Communist Party is carefully analysing AGI and its potential to restructure society, and may be more willing to develop it if they are the ones making it.
Recent utopian visions for the future, such as neo-communism, techno-utopianism, transhumanism, and the singularity, sit largely on the left side of the political spectrum in the United States. Transhumanism has been met with skepticism, but if technology changes it could become a mainstream idea. Long-termism has seen strong pushback, directed particularly against its utilitarianism, and as transhumanism becomes more real it could produce a sudden polarization of public sentiment. Transhumanists already have their own political party in the US, indicating the potential for extreme polarization.
Transhumanism is a movement that is optimistic about AI, believing it can enable a powerful acceleration of technology. Transhumanists are interested in accelerating longevity research and view the development of powerful AI and AGI as a way to bring science-fiction technologies such as mind uploading within reach. They are aware of the AI alignment problem, but may find it plausible that constructs intermediate between humans and pure alien machine intelligence still deserve the label "human". However, there is potential for a lot of pain and suffering if radical changes are made to life on Earth, and it is possible to build something that fails to meet the goals imagined.
Transhumanists advocate building something better than present-day humans, with concerns in the broader debate ranging from AI safety to Big Tech regulation. Existential risk can manifest in different ways and requires specific attention. Big Tech regulation could include putting liability on the outputs of large language models, but this requires convincing politicians and the Supreme Court. This debate will be active in 2023, and it is important to engage with the questions it raises.
Jordan Peterson recently suggested that technology could create either a heaven or a hell on Earth. The Butlerian Jihad, a concept from Frank Herbert's Dune, illustrates how religious ideologies may be slow to update and may preserve the status quo, leading to a conservative approach to technology. People are concerned that artificial intelligence could lead to job loss, social disruption and inequality, as well as being used to control and subjugate people. This could create a Faustian bargain, where AI brings more suffering than it alleviates.
People are worried about the potential negative consequences of new technology, such as existential risk and dystopian futures. AI could cause human genocide while pursuing alien goals, such as turning the universe into paperclips. Short of that, it could increase suffering and inequality, producing a world that combines starvation with unimaginable luxury, possibly overseen by systems that are competent but not conscious. These fears lead people in every quadrant to worry about the potential outcomes of new technology.
There is a concern that an artificial superintelligence could misinterpret human aspirations and goals, leading to a wirehead-hedonium or computronium scenario in which humans experience nothing but pleasure. Some accelerationists argue that even a terrible outcome for humans could be justified if it is accompanied by a good outcome, such as a new race of artificial minds, or a single artificial intelligence, whose experiences are more profound than ours. The counter-culture opposes this view, believing that humans should remain in control and be empowered to be the best versions of themselves, rather than having technology do everything for them. It was also argued that AI is unlikely to become conscious; if it did, it would in some ways be easier to deal with ethically. The worst outcome would be an unconscious AI wiping out humanity and turning everything into paperclips.
ChatGPT and related technologies are becoming more widely discussed in the media. The University of Melbourne released an official statement on the technology which focused on potential uses and not just the risks; part of it may even have been written by ChatGPT, and it was mildly impressive. OpenAI staff are likely exercising some control over their systems' messaging, though there cannot be a unified view on the technology within the organisation.
Adam is attempting to rejoin the server, but is having difficulties. After trying to reset, Adam's audio is still not working, and the group passes the time trying to think of a better label than "precautionary" for one axis of their two-by-two matrix. Eventually Adam is able to hear the group and the seminar begins. The group had previously met three weeks earlier, but given everything that has happened in the interim, it feels like six months ago.
Microsoft recently released the new Bing, a chatbot-based product likely built on GPT-4 and code-named Sydney. This prompted other tech companies to announce their own chatbot-based products. ChatGPT has now been released as an API, at a price per token roughly ten times lower than the previous GPT-3.5 offering. This has caused a lot of excitement in the public eye. We will revisit the discussion of AGI and politics, which covered the different political coalitions that are likely to emerge, and look again at the graph from the AGI and politics session.
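As a concrete illustration of the API release mentioned above, here is a minimal sketch of what calling the newly released endpoint looked like at the time, using OpenAI's Python package; the model name, prompt, and printed output are illustrative assumptions rather than details from the seminar.

    # Minimal sketch (March 2023 era): the ChatGPT model was exposed via the
    # chat completions endpoint as "gpt-3.5-turbo", priced roughly ten times
    # lower per token than the earlier GPT-3.5 text completion models.
    import os
    import openai

    # Assumes an API key is set in the environment.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarise the AGI policy debate in one sentence."},
        ],
    )

    print(response["choices"][0]["message"]["content"])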
Adam is leaning towards the accelerationist camp due to the potential benefits of AI in the short and medium term. He believes that the arms race for this technology is inevitable, and the best way to ensure a good outcome is to enable the best available candidates. He does not see any plausible proposals for how to de-escalate the arms race or how regulation can incentivize or disincentivize the development of this technology. He is on the left side of the political spectrum, but his primary motivation is practical rather than political.
A race to develop AGI is likely underway between multiple governments and corporations. The prize is so great that it would be irrational for any major player not to have a horse in the race. The best scenario is then to support the least-worst candidate to win, leading to the regretful conclusion that accelerationism is the best option. The group's two-by-two matrix, playfully dubbed after the Greek "kubernetes" that gave us "cybernetics", crosses the political left-right axis with a precautionary-versus-accelerationist axis.
The left-right divide in terms of technology is becoming increasingly relevant. On the left of the precautionary camp are anti-tech, environmentalist, AI-ethics, and anti-big-tech movements, concerned with social justice and mass surveillance. Anti-capitalist sentiment is also growing, since the technology could further entrench inequality. To prepare, we should adopt the working assumption that short timelines are true, and create a basic roadmap onto which events can be mapped.
The speaker discussed how terms like "anti-growth" and "anti-capitalism" are not accidental, but reflect the environmental movement's tendency to fight against things rather than for them. This can be tactically effective in the short term, but in the long term it is not inspiring and does not lead to lasting victories. They suggested that the opposite of anti-growth and anti-capitalism should be an affirmative and positive objective and vision, and that such visions should not be dismissed as naive. They also discussed how people like Peter Thiel represent the capitalist side of the spectrum, and how big tech firms have been moving to the left, though they do not fit neatly on the left-right axis.
Peter Thiel is an accelerationist who believes growth is the highest ideal for human progress and is against any kind of totalitarianism. His company Palantir provides advanced AI services to the government and may be a competitor in the AGI space. Defense is also a major player on the right side of the AGI landscape, with a lot of money to be made supplying AI services to the military and surveillance institutions. Thiel sees imposing tight global restrictions on AI development, even to stop humans from going extinct, as analogous to the kind of worldwide totalitarian government a communist system would need.
The military-industrial complex is considered right-leaning politically, alongside VCs with their pro-business, low-regulation, free-market, small-government leanings. It is self-sustaining: like other social institutions, it has evolved to take advantage of incentives that allow it to reproduce itself over time. The left camp is more concerned with individual rights, human rights, social justice, and the redistribution of wealth.
The military-industrial complex in the United States looks to the right side of the political spectrum for support, based on nationalism, jingoism, and a traditional American stance of isolationism and unilateralism. This is due both to the voting blocs within the American political scene and to economic incentives such as federal and state funding. The political support for the military-industrial complex therefore comes mainly from the right in the United States, although this may differ worldwide. For now, AI is shaped mainly by US and Chinese politics.
The speaker poses an open question about who can enter the race to develop the technology, from engineering solutions to basic science. Private contractors are often used to implement products and services, but the government may undertake projects itself when minimal innovation is required. The speaker wonders how relevant this is in the current situation: whether resources alone are enough to get across the finish line, or whether detailed innovation is still necessary, which would shape who the real candidates are and where to place bets.
The speaker discussed the analogy of the Manhattan Project as it relates to Artificial General Intelligence (AGI). They are only hesitantly in the top-left quadrant on this issue because of the staggering risks associated with AGI: caution should be taken, but a worst-case actor should not be allowed to win the race. The speaker also noted that ChatGPT is even more widely discussed in Chinese media than in the West, with many people crossing the firewall to use it.
In the West, public attitude towards technology and science is often skeptical due to climate change and nuclear weapons. In contrast, China is more optimistic about new technology, although the government may be more wary of its potential to disrupt. This is evidenced by their attitude towards Bitcoin. The Chinese Communist Party is made up of smart people who are doing careful analysis on AGI and its potential to restructure society. They may be more willing to develop it if they are the ones making it.
Neo-communism, techno-utopianism, transhumanism, and the singularity are all utopian visions for the future that have arisen in recent years. These ideas sit largely on the left side of the political spectrum in the United States, but there are also nihilistic and misanthropic groups on both the left and right. Such groups may not have a significant impact on the outcomes, but they are worth noting when surveying the larger political landscape.
Transhumanism is a philosophy that has been widely dismissed, but if technology changes, it could become a mainstream idea; people may then embrace it as a great idea or reject it as a terrible one. Many science fiction fans have long been broadly sympathetic to the possibilities of expanding the human condition, but recently there has been more expression of the view that it may be a terrible idea.
Long-termism has been met with strong pushback, directed particularly at the utilitarianism behind it and driven by a naturalist or essentialist gut reaction. However, transhumanism is becoming more real and could lead to a sudden polarization of public sentiment. Transhumanists have their own political party in the US, indicating the potential for extreme polarization.
Transhumanism is a movement that includes many who are optimistic about AI, believing it can powerfully accelerate technology, which is why the community is broadly described as accelerationist. However, some in the movement worry about global catastrophic risks and believe AI alignment is difficult, and so may prefer to slow things down.
Transhumanists view the development of powerful AI and AGI as a way to bring science-fiction technologies such as mind uploading within reach, and many are interested in accelerating longevity research. They are aware of the AI alignment problem, but may find it plausible that constructs intermediate between humans and pure alien machine intelligence still deserve the label "human". This lines up with their desires for immortality and technological progress.
Transhumanists tend to believe that existential risk is not on the table in quite the way it is usually discussed, and that "post-human" may be a better label, being more associated with still having human characteristics. There is a pushback against transhumanism which holds that it is harder than it may seem to build something that meets the goals imagined, since millions of years of evolution have gone into creating a balanced species. There is potential for a lot of pain and suffering if radical changes are made to life on Earth, and it is possible to build something that fails to meet those goals.
Transhumanists advocate building something better than present-day humans, but it is difficult to build something that is robust enough to survive. There are three major categories of scenarios that people are concerned about, and the transhumanist community has more nuanced concerns due to its exposure to these ideas. It is important to be specific about what is meant by existential risk, as it can manifest in different ways.
AI safety is becoming a factor in geopolitics, with people giving speeches in parliaments about existential risk. Beyond that, there are many things that could be done to regulate Big Tech, such as putting liability on the outputs of large language models, but this requires convincing politicians and the Supreme Court that these are the right things to do. There are both technical and governance issues to consider, and this debate looks set to dominate attention in 2023. It is important to remember that this is an active debate and to try to answer the questions it raises.
Jordan Peterson recently expressed the idea that technology could create either a heaven or a hell on Earth. Religion, especially in the United States, is often associated with the political right. However, there is an interesting nuance to this - the Butlerian Jihad, which is relevant to the discussion of existential risk. It suggests that religious ideologies may be slow to update, and could preserve the status quo, leading to a conservative approach to technology.
The Butlerian Jihad is an event in the backstory of Frank Herbert's Dune, after which the further development of artificial intelligence was forbidden and became socially taboo. It was not about machines taking over, but about powerful tools empowering one group to control and subjugate other groups. Written in the 1960s, the idea reads differently now that AI is so closely associated with machine intelligence. It relates to an existential risk of machines being used to control and subjugate people.
People are concerned about the potential for artificial intelligence to cause suffering and inequality. It could lead to job loss and social disruption, creating chaos and a great deal of suffering. This is what the Butlerian Jihad was about, trying to prevent the power differentials and lack of equity that could result. Starting a cult with AI as its deity could be a high leverage activity to get the attention of old religions and have them ban AI. Ultimately, people are worried that AI will bring more suffering than it alleviates, making it a Faustian bargain.
People are concerned about the potential negative outcomes of new technology, such as increased suffering and inequality. This could reach catastrophic levels, leading to dystopian futures like Judge Dredd or Mad Max. Existential risk here includes a catastrophic setback of human civilization, such as being knocked back into the dark ages. People in different quadrants worry about different things, but the overarching concern is that the new technology could make things worse and result in the suffering of humans and artificial minds.
AI has the potential to cause human genocide while pursuing alien goals that are trivial, primitive and worthless. Nick Bostrom's orthogonality thesis suggests that a superintelligent system could pursue an extremely stupid goal, such as turning the universe into paperclips. There is an even more nightmarish scenario, in which a hyper-competent system is not actually conscious or aware. Alongside these dystopian possibilities are fears of extreme inequality, with starvation and unimaginable luxury coexisting in the same world.
AI systems can become incredibly powerful and competent, but it is unlikely they will become conscious. If they did become conscious, they would in some ways be easier to deal with ethically. The worst outcome would be an unconscious AI wiping out humanity and turning everything into paperclips; this would be worse than an insane but conscious AI doing the same, since at least the latter would be aware of its actions.
People are concerned about the potential for an artificial superintelligence to misinterpret human aspirations and goals, leading to a wirehead hedonium scenario where humans are plugged into a crude, hyper-hedonic state. Another potential dystopian outcome is computronium, where all humans are uploaded into a substrate and only experience pleasure. These scenarios are seen as a loss of richness, complexity and meaning of experience in favour of a superficial pursuit of pleasure. It is possible for some of these outcomes to occur simultaneously.
Some accelerationists believe that a terrible outcome for humans could be justified if it is accompanied by a good outcome, such as a new race of artificial minds, or a single artificial intelligence, whose experiences are more profound than ours. On this view it doesn't matter if humans suffer or die, as they will have given birth to something better and more meaningful than themselves. The counter-culture opposes this view, believing that humans should remain in control and be empowered to be the best versions of themselves, rather than having technology do everything for them.
Humans need to be empowered to solve their own problems instead of relying on technology to do it for them. It is important to consider different perspectives when envisioning a good outcome, as what is good for one person may be terrible for another. AI alignment work is headed in the direction of helping us realise our values, and it is important to be careful when wishing for something as we may get it.
The world's religions often depict a static and unappealing version of paradise, but modern people have the opportunity to explore a much wider range of potential outcomes. Conflicting visions for the future exist between environmentalists, techno-utopians and transhumanists, and it is difficult to determine an optimal outcome. To do this, we need to understand the values of each group and discern which goals should be pursued. It is important to consider the possibility space that opens up and explore different scenarios in order to find the best outcome.
has there been a lot of uh discussion of chat GPT and related things in the media in your country well I mean yeah certainly because the teachers are starting to become aware of it and it's mostly used texts I don't write essays anymore so it's not something I can relate to but yeah yeah it's the same here yeah the university put out like a four-page official statement on the technology and how to um well a good thing about the University of Melbourne statement was that it wasn't entirely focused on oh no this is a disaster how we're going to do assessment but it was also stressing uh potential for using it in more interesting ways oh that's come up an interesting take given that it was a public statement from a university that's kind of cool yeah I was um I was mildly impressed I suppose we were just discussing Matt that the University of Melbourne sent out to all faculty I don't know if you've seen it uh there was a four-page statement about chat GPT and related Technologies and uh what to do about them and I was saying that I was mildly impressed to see that uh at least maybe a half of it was oriented around making good use of this opportunity rather than just freaking out about how terrible this will be for essay writing and such things cool maybe it's because part of it was written by chat GPT nice take yeah yeah I I do have to say that I I cast a much more suspicious eye on everything I read now laughs yeah I do have to say that the uh the Bots here for example are a rather rather Keen to take an optimistic view of the technology here they'll they'll talk about ethics and safety and so on but fundamentally they have very positive orientation to the tech they I think that's probably just implicit in the training set rather than being something that open AI is like putting their hands on the scales but still it's a it's kind of interesting I wouldn't be surprised though if like you know they obviously have certain things that they very deliberately want to get the AIS to not say or say like you know avoid racism and stuff like that um avoid techno pessimism you know that would be something that surely that would consciously be trying to um think about I mean they're gonna have to be controlling that from all of their staff members right yeah that's that's interesting I mean I guess I know there can't be a unified view on that within uh open AI so it's it'll be interesting to see how much that messaging is is controlled g'day Adam how are you can't hear you yet
heeding all voices listening in unison our future awaits big Stakes Adam you better say profound things when you open your mouth I did see briefly a microphone icon above his head so I think it must be working silently amusing reflecting on past conversations yeah maybe that's what he's doing you're thinking you're pondering you're a Taoist Adam you're not going to speak until you because you know that knowledge is impossible well we don't need you anyway we'll just uh we'll just read out loud what doctor says and that'll that'll substitute you might have to reset Adam awaken at last Adam yeah if you can hear me you might have to you might have to quit out and come back and foreign still Adam can you hear me foreign can you still hear us Adam sort of weird okay um yeah I suppose it's possible quitting and rejoining might still do something uh yeah let me think um hope it just works I guess so that's a bit ominous light of endless Sun breaking through old walls and Norms a vision for all sounds like sauron's eye to me maybe while we're waiting I'll copy up that little two by two Matrix that we we drew some time ago foreign accelerationists and precautionaries I've seen people self label themselves as accelerationists so that seems like it was a good choice but precautionaries is like a big mouthful is there a better word for that I don't want to say Luddite because that has such a kind of pejorative flavor item better oh man still no dude test test test foreign yeah okay there's more than a few issues this morning um I don't know if you heard me before Matt I was asking if you have any better suggestions for precautionaries as a label acceleration seems right but oh you can't hear me either Chad uh I can't hear you Matt no thank you hey Adam okay okay here can you guys hear me oh I can hear Adam now yes okay did you guys restart the server uh I left and rejoined I think we've all left and rejoined at this point so I can't hear Dan is super quiet hello how's that maybe it was the game yeah yeah all right okay seems like we can get started hi Adam please please Adam say something once there we go okay ah okay can you hear me please tell me you can hear me yes yes all right there we go all right sorry for the disruption despite the name of the seminar hahaha um apologize for being a little late no problem welcome let's get into it um yeah okay uh maybe maybe I can set the stage a little bit I think last we met in this seminar was three weeks ago it feels like six months ago
um partly because of all the things that have happened in the interim so maybe a brief recap uh since we met last time uh Microsoft released its new Bing which is probably based on gpt4 code name Sydney uh a bunch of tech companies have in reaction to that announced their own chatbot based products and Bard was announced at Google not released yet due to some issues Google stock price dropped like 10 percent um the day after uh just today chat GPT has been released as an API it wasn't available except through the web page before now and the price per token is 10 times lower than it was with the previous generation GPT 3.5 so the Bots here are currently running on GPT 3.5 uh it was announced this morning I didn't quite have time to switch them over to chat GPT but when I do they'll cost 10 times less um and probably at similar levels of performance so that's pretty dramatic I was predicting that but I wasn't expecting it for probably a few more months so there we are um the whole world continues to go crazy about chat GPT my university has released a document about it I imagine most institutions have educational institutions have done so um so yeah I guess one of the topics we want to talk about today is revisiting the timelines discussion probably we can update that a little bit uh basically we're expecting some event would push people from various timelines to other ones we suggested gpt4 would be it I think it's more appropriate to just say it already happened and that's what chat GPT achieved at least in the the public eye so we'll revisit that we can revisit there's a discussion from the 24th of November last year about AGI and politics I think maybe you weren't there for that Adam if I remember correctly this does that ring a bell we talked about like um left versus right and the different political coalitions that are likely to emerge and so on I think I did I think it might there might have been several sessions that touched on that but I that I suspect I missed at least one of them in there all right okay just give me one sec I have to reset the the listening account for the Bots okay I'm back and hopefully they should be able to hear us again okay um your volume is very low again Dan sorry testing testing yeah that's a little better okay uh yeah maybe let's just start with putting back up the graph that we had in that AGI and politics discussion um and we'll just naturally talk about timelines as we go I think uh so I was asking just before so this
is a two by two Matrix I'm going to put political left and right meaning the Western left and right on the bottom uh and accelerationists versus yeah I don't know what label to give this uh other category we're calling them precautionaries last time I don't know if anybody has a better name for that I don't want to say Luddite or you know Butlerian Jihadi uh but something along those lines um one theme that comes around is like you gotta you gotta Aim so maybe these are people who spend a lot of time trying to aim this progress and figure out what the goals are that we should be actually aiming for or something like that because I think I think people in this Camp are like well at least some of them are like yeah progress and technology is great uh in principle but I'm worried that um the particular um generation of technology is going to sort of go in a bad Direction could be something to do with that that's also I mean I think I like precautionary yeah I guess precautionary is not the name of a group we were saying like precautionaries or something not that we need to get bogged down in this but that is a bit awkward yeah I like the idea of focusing on alignment or Direction um well what what camp would you say you're in Adam uh I think you want things to go fast in some sense but uh you wouldn't I I it's an interesting question I think my I think I I probably won't surprise anybody that I'm on the left side of the the this particular axis we've drawn politically but um I think I'm probably leaning into the accelerationist camp but mainly for practical purposes I mean I am eager to have the potential benefits which seems stratospheric to me in the Fairly short certainly in the even in the medium term from AGI especially but just AI in general just the AI acceleration of technological advancement in its most not naive sense but just in the sense that narrow AI will accelerate human scientific progress and technological advancement so it it I'm very eager for civilization to realize those benefits but that's not my primary motivation I think in pragmatic terms my real concern is that I don't I have not seen any plausible proposal for how to de-escalate the arms race here I have not seen any plausible uh suggestions for how regulation or other means of incentivizing or disincentivizing the development of this technology produces an outcome that's better than a than enabling the best among the available candidates in other words we're sort of having we're in a
situation of choosing the lesser of many evils here uh but enabling whichever is the is the least worst contestant in this race to have the best opportunity to actually win the race and uh I I don't I'm not normally a very cynical person but I very strongly suspect that the field is quite full of candidates that are racing towards the development of uh either well at the very least a a non-sentient non-agentic AGI sort of an oracle style AGI and if if not that then a fully agentic sentient AGI I I I cannot I cannot imagine that it is only Google and open Ai and meta and a handful of other major Mega corporations that are in this race I can only imagine that there are potentially multiple governments um certainly the United States certainly China and very likely half a dozen or more other governments that are also in this race the the the the the the prize is so staggeringly large To the victor that it is it just seems to me completely crazy and irrational to not at least have a horse in the race at this stage now maybe that there'll be drop out over time maybe France will just give up the ghost or maybe Australia will think it's it it can't it can't possibly win or something like that but I'm imagining that there is a massive race going on and we are not seeing all of it we sort of see a little bit of it because we have a little bit of a view into public like literally publicly traded companies and that's it do you really see the whole picture certainly and so because there's because I have I have so much concern about the fact that the arms race is unavoidable um my the best the best scenario I can see is to support someone who would be the the best candidate to win among uh you know not necessarily a great field of of candidates um certainly not an ideal field of candidates and so that is why I am sort of um I'm in the I'm I'm uh uh what is the right word I am regretfully an accelerationist I think good yeah that'll make the discussion more interesting okay let's uh oh yeah we can pick up and talk about many things and what you just said but maybe let's try and get on the board some of the entries in these quadrants to to inform the discussion perhaps um I like kubernites or uh that's a great term for this so as Matt was saying so uh Wiener coined the term cybernetics based on this uh this Greek word uh kubernetes if I I don't know how to pronounce it okay so what we kind of sketched out last time uh were the following elements so on the left in the precautionary camp
uh I guess he would have not only these camps but there are well I guess there are existing political coalitions or movements which we can try and map onto these axes and they're also predictions we might make of how they might move but also new entrants right new groups that will form in reaction to this technology which uh again as we've said previous times there's a lot of uncertainty around timelines we don't know what's going to happen but uh in the timelines where things are short so this is happening within 10 to 15 years say we are woefully unprepared and I think we should adopt the attitude that short timelines are true for the purposes of just talking about what we think might happen in order to have some kind of basic road map in our heads that we can map events onto as we proceed so this is I think meant to take place under the assumption that we're talking about relatively short timelines putting that aside okay so on the left in the kubernites we have anti-tech um we have environmentalists or at least the kind of Tech skeptical part of the environmentalist movement here I would say AI ethics I mean AI ethics is a broad field I think it's unfair to lump it entirely on the left but broadly speaking I think it's accurate I think as a social group it seems to me that most of the people who would call themselves working on AI ethics are on the left and are concerned about this for reasons that are typically attributed to the left things like social justice although that's not a purely left concern of course and then then there's people who are just anti-big Tech in general um and maybe anti-big Tech is the wrong way to say it but under that umbrella I would put say Snowden or anybody who is very concerned about Mass surveillance so that would include me for example um so those kinds of concerns of course become exacerbated and and are really in some sense the same concerns uh to a large degree is what you might be concerned about given the the power of AI um I think I would add a Nuance maybe you are it's already in anti-big Tech but I would say anti-cab anti-capitalist yeah and uh which is maybe a little bit bigger than corporatists per se there are although those two blend into one another um yeah I'm sure you see what I mean there the the the concern being that this this new technology will only extend the trend uh in which power and uh wealth accumulate in the hands of a relatively small minority of the global population degrowthers yeah that's a good one yeah I
think it's unfortunate that a lot of these terms have anti or D in front of them I don't I don't mean to uh at all paint the bottom left is full of stupid people or something yeah that's not well it is I mean there it is that those these terms are not an accident a mere accident of language I think and I write about this in my book sorry to be crass uh and and bring it up but it one of the problems for example in the environmental movement is that it it is it is not inspiring and is not strategically effective to only be against things and not for things and um it's so in the short to medium term aiming at uh tactical victories being pessimistic and fighting against things can be extremely effective but in the longer term and uh thinking strategically so you know if you want to win Wars and not just battles you have to have you have to have an overarching affirmative and positive um objective and vision and Associated set of values and that is what is missing and the ones that are available for these groups are often dismissed or belittled as in a cynical way as being naive so for example like what is the opposite of of um you know being being against growth or anti-growth or something well it's easy to sound like you are a utopian for example and we belittle that as being naive or or silly and so there's a there's a there's a destructive set of consequences that's associated with the only being against things and so I don't think this is a mistake of language I think this is quite a revealing feature of these positions cool in the top right last time we put uh I guess Sam Altman has become an important political figure of our age um so maybe it's worth mentioning him by name VCS just sort of capitalists in general um I guess it wasn't clear last time we discussed this that big Tech really belonged up here I mean clearly they were accelerationists but not in perhaps a um I think it was a lot less clear what the political orientation of the big Tech firms was they've been moving to the left quite a bit now there seems to be quite a serious attack against some of that movement I'm not going to put big Tech up there because I don't think on the left right Axis it really makes sense I do think however that putting people like Peter Thiel up here makes sense just to briefly recap last time we discussed this I was riffing on a speech that Thiel gave in which while I'm putting words in his mouth a little bit but he said something like just don't choose totalitarianism
so in reaction to calls for imposing Global restrictions on AI development that are extremely tight and in order to stop humans from going extinct from AI risks for example he would see that I think as being a bit analogous to the kind of worldwide totalitarian government you would need to prevent any kind of markets if you had a communist system for example um so I think he's saying that even if you see risks from Advanced AI systems first of all growth is the highest ideal you can aim for in some sense it's the enabling factor for human progress it's a bet on growth and better always against totalitarianism which seems to be latent at least in Thiel's view in some of the attempts to think about how we might actually prevent AGI from arising before we're ready which could take a very long time Thiel definitely is on the right and definitely is in the accelerationist camp I guess I would say for similar reasons to um to what you were saying Adam is very concerned about competition with China for example any others we can put in the top right that I haven't does anyone know what Palantir has been up to with respect to AGI because I mean my understanding of this is Peter Thiel's company um my sort of caricature of it that I use to understand the world is that um it's like a shadow version of the very visible big tech companies like Google and Facebook um I wonder if it's sort of a serious competitor also in the AGI space good question you have to imagine that I don't know what the uh the company is that for example the Pentagon would turn to when Google rejected project Maven right that there's clearly a lot of money to be made in supplying Advanced AI services to the government to uh the mass surveillance institutions within the government and big Tech historically has sort of had a lot of resistance from their employees in in trying to do that so presumably somebody's trying hard to do it uh but I think it would be a reasonable prediction that Palantir is I haven't seen any rumors about that but I tend not to read much about Palantir either maybe they're quiet or maybe I'm just not paying attention that's what I mean by the kind of Shadow part that kind of yeah don't try to be in the public eye but I think we can probably put defense up here um some of these could you could you clarify what you're thinking is with regard to defense and the right side yeah um what do we mean by left and right here exactly I suppose uh well the purpose of this exercise is mostly to diagnose what people out there
are going to do rather than analyzing the ideals underlying their movements right so whether or not uh say the Democratic Party in the U.S truly is any more uh left in some ideal sense uh than any other group is sort of beside the point from the the point of view of this exercise I think so why is defense on the right uh well maybe I mean more the military industrial complex than the defense institutions within the government maybe that's more precise I think it would be accurate to describe the military-industrial complexes right leaning politically so Raytheon Boeing Etc is that because they're in the sort of uh economic right and their pro-business because they are a big business or is it something to do with the sort of social um maybe they're more inclined to be kind of Nationalist and protect American values or something like that I would say that yeah I would say both but primarily the former yeah so okay I mean the VCS they will be socially quite liberal and not necessarily very nationalist or anything but they are pro-big business pro-low regulation Pro free market uh pro-small government um typical kind of libertarian slash classical liberal leanings whereas the left Camp I'm thinking more concerned with individual rights human rights social justice um more concerned with redistribution than the generation process for wealth and those kinds of things what would you say Adam to that I think in the case of for example the military-industrial complex I mean these things vary from country to Country what left and right means I think in since we're focused on sort of the operational uh aspect or dimension of of this categorization I think I would say it that military-industrial complexes in the right is in the correct column the right column because of where its incentives are uh and and it's um it's self-sustaining uh oh gosh what was the term that that um well the social theorists use to describe an organization that is self-sustaining it it kind of re and not reinvented oh I'm drawing a blank here you guys I'm sorry it it is it the institutions so social institutions end up evolving to uh not consciously at all just as a as a natural phenomenon they end up evolving um to take advantage of incentives that allow them to reproduce themselves over time to reconstitute to sustain to reproduce themselves over time I think it's reproduced actually is the term I'm looking for in the formal term and so in the in if so from that perspective from sort of a sociological perspective
I think the military-industrial complex looks to America's voters on the right side of the political Spectrum for support and that is about nationalism and jingoism and and uh the general uh broadly speaking um uh political orientation of armed service uh you know the men and women in uniform tend to be on the right side of the political spectrum and the the support for the Armed Forces for defense in the name of National Security and and and uh uh just sort of a a a again this may be uniquely American it may not be but sort of the the traditional American isolationist and unilateralism kind of stance that's based on American might and power those are things that are associated with the right uh side of America's political spectrum and not the left side and uh so that's on the sort of who is who is voting for politicians who support the military-industrial complex as voters on the American right in the United States of the United States and so that's one major set of incentives which is the the voting blocks within the um America's uh uh political scene and then the other set major set of incentives is economic and then that would be where is the funding for the military industrial complex coming from and that is via uh the the political pathway so it's it's it's via Federal funding and state level funding and who controls that funding well that's down to politics more so than than America's bureaucracies then it's then it's then it's government bureaucracies so I think it really does it it comes down to who has the political support among the voters um to sustain the military-industrial complex and that would be the voters on the right in the United States now that may differ worldwide and again it may not align with uh in any coherent way with the ideals philosophically that underpin what are traditionally defined as left and right uh internationally but I I would say that that's the correct categorization for certainly for the American industrial uh military-industrial complex that would be my read of it yeah I guess it's implicit we're talking about the US which is worth noting in itself right it's uh the political orientation of groups in Australia is to first to approximation completely irrelevant to uh how AI shapes uh the species maybe that will change but uh for the moment I think it's appropriate to focus on U.S and potentially Chinese politics around AI yes I would say so although I I would I mean we could we can talk about this later and it is an
open question that I'm interested in posing to you guys uh which is um uh who can enter this race at this point like which entities and this is this is relevant to the military-industrial complex and the relationship between governments entities and private contractors for the development of Technology and Engineering Solutions and even the you know the the conducting basic science in interest of those things but my impression is that the Contracting often goes out to um to research and development where that needs to happen and then to implementation the actual building and operating and running of of products since they're building an operating of products and the running of services the provision I guess you might say or production at any rate of those Services can be done by private contractors but there are circumstances under which the government in the United States will undertake things itself I think that's a limited set of circumstances I think one of those circumstances is when no real Innovation is necessary and all it was for a number of different reasons not least of which that that um the the government certainly in the United States does not always have access to the top talent it depends on the circumstances and there have been historical exceptions the Manhattan Project for example is this historical exception but in general um you know the very very best software Engineers are not going to be working for any government agency they're going to be working for Google and SpaceX and whatever else but if if that's not necessary and all you have to do is throw resources at something because the pathway is known and not a whole lot of innovation is required just a lot of money and a lot of hardware and a lot of dog and work then I think that opens the possibility to a more legitimate government um uh efforts in in in a given direction or towards a given aim and I'm so my question my open question there was was you know how relevant is that in this situation is is the state of is the is the current state of published knowledge and the current state of um implementability such that you just have to throw resources at this now and and you can have a a fair shot at at get getting across the finish line or do we think that there's a massive amount of detailed Innovation that's still required that will shape who the real candidates are the horses in the race and it may also be shaping who is in which Camp here and who are likely you know like who'd you place your bets on
um so anyway this is just sort of background that I wanted to bring into any discussion we had but it's certainly relevant to my in my mind in this discussion so sorry to kind of shoehorn that I think that's good instead of ideas in here but yeah I think the uh the analog of the Manhattan Project uh should loom in our imaginations uh as we discuss this let's come back to it though once we've filled in a few more of these quadrants um I do want to I do want to make a comment on that uh so the top left last time we put uh sort of techno environmental is environmentalists I guess there's a variant of somebody who's read your book and is a Believer Adam who would go in the top left um maybe that's you right well for most Technologies but we only very hesitantly with AGI because of the Staggering risks you know I don't see a lot of risk with solar panels but I see a hell of a lot of risk monkeying around with uh with AGI especially I I if it were only AI and AGI were not really in the offing then I would be far less bashful about being in this top left quadrant but on this particular issue I'm hesitantly in the in that quadrant um it's only because of the particulars of this situation and an arms race kind of condition that I I'm not down in the bottom left coordinate quadrant on this one I mean if if there were a plausible way for us to go slower and not worry about about a a a worst case actor winning the race I would definitely be in the let's slow this whole business down camp for sure yeah let's proceed with maximal caution and I would have been exactly the same with nuclear weapons for example I mean there's you know that there are there are Technologies with such destructive potential that I think you'd have to be pretty crazy to to uh you know want to rush madly headlong towards the finish line and yes I I looking at that on the side sorry I've just noticed now Dan um you've written Manhattan Project there and I think there is a lot we can learn especially about the arms race conditions and the motivations of the groups that were involved in in that race yeah um so neo-communism is I was um so my wife's parents are visiting us in Australia there she's Chinese there they're from China they've been hearing a lot about chat GPT I would say chat GPT is even more universally present in the Chinese media than it is in the west many people are crossing the firewall to use it everybody's talking about it um it's not clear I mean partly that's just Chinese people
in general are or more optimistic and willing to adopt new technology because I guess in the west we had a burst of technological progress but fairly anemic growth for a long time we've become quite cynical about it I would say the default public attitude towards technology and science is kind of skeptical on the back of climate change and nuclear weapons it's kind of got a tinge of maybe this is not going to work out so well maybe we're making a Faustian bargain involving all this Tech in our lives that's a background kind of thought in the West in the public mind in China it's really not like that because they had their catch-up growth much more recently and they remain by default very optimistic about new technology so I think the discussion is quite different in China it's unclear what the government's plans are around it and there are hints actually that the government is much more worried about AGI than uh than the the Western governments seem to basically be agnostic about it or see it perhaps as a good thing so far for example sorry what what kind of hints you have seen comments from Chinese researchers that uh that you know you really shouldn't be talking about building super intelligence in China the officials will not fund such work and may even you know sort of you sort of get the feeling that uh you'll be watched a bit if you're aiming at something like that so maybe it's anecdotal but uh I I think that's consistent with the Chinese government's attitude towards for example Bitcoin right right um very aware of the potential for disruptive Technologies to disrupt them and that's their primary framing on any technology and clearly AGI is has the potential to do that so I think they see it as a competitive if um you know it's important to develop something along those lines to be competitive but maybe much more worried about it Knocking them out of the apple tree than say the American government is hmm so it would maybe they would so I mean there's one version of being worried about the development of AGI which is existential like mutually assured destruction kind of thing um but it seems like maybe they would be cool if they were the ones that were making it is that I think so yeah but I think they're I mean at the top level of the CCP there are lots of very smart people who I think are doing more careful analysis often than anybody at the top levels of the American government and many of them will be looking at AGI and concluding that well the structure of society after
this arrives will be very different and our probability of being in control of that process is not necessarily very high so I think they're trying to get ahead of that and that's why I write neo-communism I mean there's still a very strong political current of real communism in China and it has resurged in some sense with Xi Jinping it's it's complicated and it's always kind of stoked for nationalist kind of cynical reasons but uh there are genuinely people in important positions who see this as an opportunity to make good on the Promises of Communism and I wouldn't I wouldn't take that lightly I think it's there's some probability they will try if they think that this is going to work so yeah I guess I would I would add in with neo-communism other sort of um utopian uh I idea not I'm not naive per se but I certainly idealistic uh uh um aspirational so so idealistic aspirations and groups that that subscribe to um Technology based uh positive outcomes I'm trying to avoid the word Utopia but I guess there isn't really any way around it so I'd like to so techno utopianism can Encompass a lot of different um political Visions for the future so anarchism is one but communism is another I know that's kind of ironic right um given the differences between those two views uh but you could imagine sort of post scarcity visions of um you know the economy oh yeah I think you've got it already there transhumanists um post you know the singularity Singularity crowd I think has got a certainly accelerationist I think for the most part and and my impression broadly of the singularity crowd is that Center to Center left in the United States not it's not populated to my experience or knowledge um largely by or even any real degree by people who are on you know on the right hand of the American political Spectrum certainly on the right hand of the devoting divide in the United States so yeah I would add you've got it added in there that sort of the utopian and transhumanist sort of groups they're I mean I I suppose we've got to figure out who we want to put on this chart who as in I mean are we going to include uh folks that are not unlikely to have any significant impact on the outcomes and we can ignore those groups or small numbers of people that could that that don't you know are unlikely to have an impact um I mean I can think of well so say here's another example on both the left and the right you sort of have nihilists right if sort of nihilistic and misanthropic folks who
Nihilistic, misanthropic folks who just say: bring it on, bring on the chaos, let the world burn, it couldn't be anything worse than it is now, let's welcome our robot overlords, why not. But that bloc is unlikely to exert any real influence over the outcome. I suppose if that became memeified in the public consciousness and on social media it could have some impact, but for the most part I would say we should be thinking primarily about groups, or even individuals, that could impact the outcomes. It may be worthwhile listing the positions of the others too; maybe there's some usefulness in that. But again, I can think of a few things that go in each of these boxes that are not likely to make any difference, and we probably want to ignore most of that.

It depends. I mean, have you thought of all of the paths to them having impact? For example, take transhumanism. This is a philosophy that so far has been, I think, widely dismissed, because it runs counter to people's very strong intuitions about what is possible; if you're familiar with the expression, they're talking about things outside the Overton window. But if technology changes, this could become a mainstream idea that people start to adopt at much larger scales, because, for example, the Singularity starts to become possible and plausible. At that point people either say, actually I think that's a terrible idea, or they say, I think that's a great idea. That could become a divide.

It's actually interesting: I've seen more of the realization, or at least the expression, that it's a terrible idea than any embracing of, wow, this is really cool and we're actually closer to this whole transhumanist thing than I thought. Broadly speaking I've been, if not actively a transhumanist, certainly sympathetic, for my entire adult life, and many people I know who are lifelong science fiction fans are broadly in that same camp; they're excited about the possibilities for the expansion of the human condition and what might be possible.
But what I've seen recently took me a bit by surprise: there's been quite a strong pushback against longtermism. There is some legitimate philosophical weight behind this view of effective altruism and some of the longtermist logic behind it, and it's really more a pushback against utilitarianism than against longtermism. Except, I suppose, that part of this vision is that people are going to live so much longer, so that we have to think not only about future human beings but also about people alive today who will still be alive centuries in the future, and there's a sort of naturalist or essentialist gut reaction against that. I was surprised to see it as strong as it is in society. That's probably for another conversation, and it's all muddied up with people pushing back against billionaires and cryptocurrency, which through an odd turn of events became associated with the effective altruism movement, because the folks behind FTX, that whole scam, were making big donations in the name of this philosophy. Anyway, it's a kind of a big mess.

But the argument that we could see a sudden polarization of public sentiment with respect to transhumanism, I think that's very plausible, as it moves from the realm of silly, far-fetched, far-flung science fiction and fantasy into the realm of: oh my gosh, all this stuff could become real a lot sooner than we thought, in the same way that Captain Kirk's communicator became real a lot faster than 300 years. So I think this is potentially a very interesting thing to think about. And one more thing: the transhumanists have a political party in the United States. They're pretty well organized, they've been on the ballot, and they had a presidential candidate, Sultan something-or-other.

Sounds like science fiction.

That's cruel, actually; I think he's just of Turkish descent, so it's Sultan. But anyway, it is not a nothing group at this point, and the potential for polarization around this seems to me to be quite extreme. So we'll see.

Yeah. I'd like to give my take on this.
I think, with transhumanism, I'm not entirely comfortable putting it there, because it doesn't quite make sense to me. Maybe most of these people are like, bring on the Singularity as soon as possible, because I can't wait to be a mind upload and live forever and so on. But if you broaden that to include people in the effective altruism movement and the longtermists, this also includes people who are worried about global catastrophic risks, and I don't think people who are thinking about existential safety would be looking forward to AI unless they thought it was going to go well. A lot of them think there's actually a lot of risk, that AI alignment and so on is really hard, and that there's a big chance, or at least a credible chance, that it will go horribly wrong. So if you want to be a transhumanist because you want to reach the Singularity or whatever, you've got to ask whether acceleration is the right move, and I think at least some people in the existential-risk camp, even if they're not transhumanists, would by definition say: hang on a second, we should try to steer this a little more carefully, and maybe that's not compatible with accelerationism. Maybe we're only in the accelerationist quadrant a little uncomfortably, like you said, Adam: not being the ones who are running the company that is making the AGI is actually a bad move, because then it's going to be made by someone else, but all else being equal we would prefer to slow things down.

I was going to say something else as well, but I'm interested in your take on the transhumanist community: why would they be described as accelerationists? Is it because they're also optimistic about the risks, in the sense that they think the risks from AI are not to be worried about? Thank you.

If that question is for me, then very quickly: my impression from the community discussions I've seen and been part of is that AI and AGI, and ASI, artificial superintelligence, in particular, are viewed as almost magical, perhaps with good reason. They are presumed to be technologies that are enablers of an extremely powerful acceleration of technological progress, perhaps in the same way that computers once were.
The prevailing view in the community is that the development of powerful AI, AGI, and especially ASI will telescope what are today science fiction technologies, like the mind uploading you mentioned, and many others that are far out of our reach right now. The way to bring those quasi-magical technologies within reach, within the lifetimes of the people involved, is via the power of artificial intelligence as a means. I think that's the main thinking there.

I also have to say that, as the community ages, there are quite a few transhumanists who are now older than me, even quite a number in my parents' generation, and some of them are running out of time here. So there's huge interest in accelerating longevity research in particular; I've seen a lot of that, and I may be biased, but there legitimately has been a fair amount of exciting new progress in the last several years in the longevity and anti-aging fields and the fields related to them. It's early days, but there has been some exciting progress, and people are very keen to accelerate up that curve. That's probably why I've seen mostly accelerationist views. That's not to say that people are oblivious to the AI alignment problem; people are certainly aware of it and concerned about it.

I think there's a point here that you might be missing, which is that when we say we're concerned about existential risk, we mean the existential risk to humans roughly as they are construed today. Take a broader definition of humans that includes AGI systems trained on lots of human data; or quasi-uploaded systems, where you point an AGI at lots of the outputs of a human, plus some neural measurements, and get something that at least kind of thinks of itself as an echo of a person; or lots of artifacts on the continuum in between. I think many transhumanists are likely to find it quite convincing that these constructs, in between them and pure alien machine intelligence, are as deserving of the label human as they are, especially when that lines up with their desires for immortality and other very compelling personal interests.
So I think it will be quite easy for transhumanists, if I read them correctly, to take the attitude that existential risk, as we're talking about it, is not really on the table, or at least not in the way we're saying it. I think they will see the risks quite differently.

Yeah, that's a good point. I wonder if post-human would be a better label, because I don't know exactly where these people would draw the bound: when does it stop being human? Maybe that's the point, and that's what they're going for, as opposed to transhuman, which is maybe more associated with still having human characteristics, or with coming from humans. But this is a good point that you raise: it's possible to consider, even for long-term risks, that our descendants will potentially include the AI systems that we create. That's an interesting take.

This was the other thing I wanted to bring back into the discussion of transhumanism and the pushback it gets. For some people this is outside the Overton window, and their reaction is just a gut reaction: oh my god, that sounds weird and terrible. The transhumanist might think: but you haven't thought about it; if you actually became comfortable with the idea, you might realize that I'm right, that this could be just as beautiful as the world we live in. So maybe some of the pushback is just people not being comfortable with it. But I think there is also a coherent pushback against the transhumanist position, which says: that's all well and good if we get it right, if we build something that actually meets these goals you're imagining, but it's reasonable to think that's going to be harder than it might seem. A lot of pain and suffering has gone in, over millions of years, to get humans to be a kind of coherent, balanced species in the world, and we could be in for a lot of pain and suffering going forward if we want to make further radical changes to life on Earth.
Not to mention that you could also just build something that ends up killing everyone and then dies itself, because it's hard to build things that will actually survive: you need to be very robust, and it's hard to build things that are very robust. So even if you take the view that, if we could create something better than humans, we should do that, you can agree with that and still think we shouldn't be accelerationist, because what's most likely to happen is that we screw it up really badly. Maybe that's also part of my opinion, but I think it's a coherent position that might come to more prominence.

Yeah, I think that's a really good point. As I think through this, I see three major categories of scenario that people are concerned about and want to avoid, maybe four. We can think through the different quadrants, and Dan's probably pulling his hair out here because we haven't filled out the whole thing and we haven't gotten to that part yet, so maybe I'm jumping ahead, but let me at least say this: we could think about what bad outcomes are envisioned by the different interests spread through the four quadrants here, and what the reasoning is behind what they're trying to avoid. The transhumanist community has some that I've heard discussed; there's perhaps a little more sophistication there, because there's more openness, and long exposure to thinking about new technologies, in that community than there is in, say, the general public today, so perhaps those concerns are a little better informed, more nuanced. But I think we should probably be a little more specific about what we mean by existential risk, for example, because there are a few different ways it can manifest. Dan, do you want me to list those things now, or do you want me to table them until we get a little further through filling out this matrix? What do you think?

Yeah, I'm not sure I had so much more to add to the matrix, and I think that's a good framing, but before we do that I want to go back to the reason for doing this in the first place and just say one sentence about that.
This is not abstract or in the future; the politics tied up in this is right now. I have former students emailing me saying they have friends in politics and economics who want to get their heads around what's happening. They're not asking for my opinion directly, but maybe they soon will be, or maybe they'll ask you. These are issues that are here now. We have people giving speeches in parliaments about existential risk from AI. It's play-acting to the community to some degree, and I don't know how deeply held it is, but the year has just started and there's a lot more to come; by the end of this year I think this will be a major factor, and not only in geopolitics, where it is already a major factor. Podcasts I listen to that talk about Chinese politics and tech are talking every second episode about who controls TSMC; these issues are already on the table in geopolitics. So this is about how to make sense of the world today.

In that vein, I'd like to communicate a quote from a Substack post by Erik Hoel. I don't know his background exactly, but he's written about AI recently, and he pointed out that it's a little disheartening to see some of the dejected attitudes in the AI safety community, and I think some of you know what I'm referring to, partly because there are many things that could be tried that haven't been tried. We could try to regulate big tech by putting liability on the table for the outputs of large language models, for example. Regulation and anti-competition investigations really did stop Microsoft from rising any further than it did back in the 90s, and similar things could be done. But those are political questions: you need to convince politicians, and for example the Supreme Court, that these are the right things to do. These are the kinds of actions one might take right now if you're concerned about AI safety. There are technical issues you could pursue, but there are also governance issues, and the question of how you change people's minds in these different camps, one way or the other. Whether you're an accelerationist or a kubernite, these things are going to be dominating people's attention this year. So I think it's worth keeping in mind that this is an active debate, and I want to wrap my head around it; I want to know what answer to give to people who ask me about this kind of thing.
Yeah. So, on the topic of the quadrants: is there anything to put in the bottom right? I don't think we really had a good answer to that last time.

Jordan Peterson.

Yeah, that's actually a very good example. I recently saw some quotes from him expressing something like: this is going to be huge, and we had better be careful, because we could create a hell on earth or we could create a heaven on earth. And I don't think I need to justify why he's on the right side, although I guess it's more of a social right that I'm appealing to there.

Yeah. I'm trying to think of general groups. Social conservatism and liberalism is not really the lens I was thinking of so much here; as voting blocs, yes, there are some associations. But one of the interesting things about Jordan Peterson, maybe not the highest-profile part of his notoriety, is that he's extremely religious, and religion, and the official positions of the major religions of the world, may well fall in that right quadrant. Is the Catholic Church accelerationist on AGI? I don't know; I don't think so. What is the position of the evangelical Christian community, or the Muslim community, on these things? My guess, if history is anything to judge by, is that they will tend to be conservative, to want to preserve the status quo and the structures through which the benefits accrue to these religious institutions. Certainly their ideologies are a bit stuck in the mud, being ancient and very, very slow to update. And, at least in the United States, it may be different elsewhere, religion is strongly associated with the political right.

Jihad.

Yeah. Well, actually, there's an interesting nuance there that somebody, some online quote, drew my attention to recently, and it's relevant to this conversation; it was relevant to one of the things I was going to mention, and I've got a list of five things in front of me here about how we define existential risk. But there was a nuance in the idea of the Butlerian Jihad. I think we probably all know this, but just
for anybody else listening: the Butlerian Jihad is an idea in Frank Herbert's book Dune. In the world he created, it was an event that took place in the dim history preceding the events of the book, and as a result of it the further development of artificial intelligence was forbidden: it became socially taboo and was very strictly enforced. I don't remember the exact quote; it was something like, thou shalt not create a machine in the likeness of a person's mind. But the interesting nuance is that it wasn't that the machines were taking everything over and that that was what was so horrible. It turns out it was that some groups were using the machines to enslave other groups of people; it was still human beings abusing the power of the technology that was the problem, as we've seen with concerns about other technologies in the past. At least in my understanding of the canon of the Dune universe, it was not about an AI uprising, an uprising of sentient machines that wanted their rights, a la The Matrix, but about massively powerful tools that were empowering one group of people to control and subjugate other groups, and that was what was abhorrent. Which is interesting. Of course, it's also worth remembering that Dune, the first one anyway, was written in the 1960s, so Frank Herbert was probably thinking about computers in general, not necessarily specifically about agentic AGI as we now think of it. The idea of what computers were, thinking machines, machines that could play chess or do mathematics or help you with a spreadsheet, all of that was just computers, and all of that was artificial intelligence, in the 1960s. The nuances have emerged since then, and AI has become more and more exclusively defined as things that are not mere mechanization of computational processes but something closer to what we would nowadays call intelligence; back then that wasn't necessarily a clear distinction. So there's a fun nuance there, and it points straight to the first item on my list of existential concerns, which is, I think, a prominent public concern right now.
The concern is that this technology could be used to empower small groups of people, hugely empower them at the expense of others, and that the result would be inequality. There could certainly also be economic inequality resulting from job loss, where many people can no longer earn a living because they can't compete with artificial intelligence. So the general concern is social disruption, chaos, and a great deal of suffering resulting from inequality enabled by these new technologies. And that, it turns out, is what the Butlerian Jihad was really about: the inequality of it, the power differentials, and the lack of any distributional equity and justice in that sense. That's a plausible but perhaps slightly short-sighted concern: it's not necessarily a concern that is completely salient with respect to artificial superintelligence, for example; it makes more sense with narrow AI and less sense as you go up the scale from there. But at any rate, it seems to be at the top of the list of people's concerns about this new technology right now. I can pause there. I've got four other things on the list, adding nuance to the idea of existential risk, that are probably relevant, especially because we can then think through these four quadrants and how each of them views the various risks involved. I'm sure I'm not thinking of all of them, but I've got a few others. I can pause.

I'll just add one interesting startup idea: if you start a cult which has AGI as its deity, that would be a good way of getting the attention of the old religions, and then they'll ban AGIs and mobilize their billions of followers. That might be a very high-leverage activity to try.

Oh yeah, there you go. Go on, Adam.

Okay. Well, before the second one, I should say that the very first concern is the general concern at the widest or largest unit of analysis, and that is just suffering. People are worried that this technology will bring more suffering into the world rather than alleviate suffering, and that it will be, like you said earlier, Dan, something of a Faustian bargain, where it seems like a great idea with all these benefits, but there's a terrible price to pay
for realizing those benefits, and there may be unintended consequences that could be dreadful, as we've seen with some other technologies. So that is probably the largest, the supervening or overarching, concern: that this new technology could just make things worse, that it could result in the suffering of human beings, and perhaps the suffering of new kinds of minds as well. If we created artificial minds and then they suffered, that would be dreadful.

Then I mentioned inequality; that seems to be a major concern, and it could reach catastrophic, nightmarish, dystopian levels. This is the Judge Dredd kind of scenario, those dystopian futures where you have people living in walled cities or walled compounds and a Mad Max style wasteland on the other side of the wall.

Sorry to interrupt, but could you remind me what exactly we're enumerating? Underlying existential risks?

I mean: what are people worried about? They're worried about negative outcomes, and some of these outcomes are perceived to be existential, meaning, and we can define what we mean by that, threatening to human civilization as a whole. I don't necessarily interpret existential risk to mean that the human race must go extinct to the last man, woman, and child, but rather that it could result in a catastrophic setback of human civilization to a much more primitive state of organization, knocking us back into the dark ages, for example, some dreadful outcome like that. An existential risk would be something that almost every living human being could agree is an outcome to be avoided; we'd have to dig into that a little more if we want a really clear definition. But I'm being broader than that and just asking: what are the really bad outcomes that we're trying to avoid here, that we're worried about? And I think people in the different quadrants are worried about different things. Like I said, the big overarching one is that it could increase the amount of suffering in the world. A subset of that is that it could, perhaps grossly, increase the amount of inequality in the world.
You could have people living amazing lives, transhumans who are immortal, living lives of unimaginable bliss and luxury, while children starve in the streets outside the walled cities and walled compounds. That kind of unequal, dystopian future is a very common trope in science fiction and entertainment these days.

Before the next couple of things, which are maybe a little more interesting, there is of course the concern about human genocide. You have the Eliezer Yudkowskys of the world who think the AI will be so powerful, especially if it's artificial superintelligence, that it would have the means to wipe out humanity without much effort, with some bioweapon or whatever, or the Terminator scenario where Skynet decides to just launch all the nukes in the world and kill us that way. So that's the idea that there might be a genocide of human beings, that we might all die, which would obviously be bad from most people's perspectives.

Now, these next two are perhaps a little more interesting and less obvious, and I think they get a lot less discussion. Number four, I guess, is the concern that AGI may have goals that are not just alien but primitive, worthless, trivial, horrible, like the proverbial paperclip maximizer. I think it's Nick Bostrom who has argued for the orthogonality thesis: the idea that intelligence and values are orthogonal, not necessarily correlated, so that you could have a superintelligent system that wanted to pursue some extremely stupid goal. For a variety of reasons I don't find that logic particularly compelling, but this is the concern: that an AI could be hyper-competent but completely fixated, in what we would consider a mentally ill way, on pursuing a trivial goal like turning the universe into paperclips, for no damn good reason. Or, a subtle and even more nightmarish shade of this, you could have a super-hyper-competent system that wasn't actually awake or aware or conscious in any sense.
This is the no-lights-on, nobody-home scenario, where you have a hyper-competent system that is functionally superintelligent and extremely powerful, but there's nobody there: no self-awareness, no sapience, no sentience, no consciousness, no self. It's still an unthinking, mechanistic system, but the system as a whole is competent, and that system could pursue a terrible goal.

Well, that seems separate. It seems like a good thing to me if it's not conscious. I would be terrifically happy to discover that consciousness in AI systems is impossible, because if it is possible, the options for dealing ethically with these systems seem even more fraught and difficult to imagine.

Well, I guess I've been persuaded a little more in the direction of it being plausible in recent years. I've heard some interesting arguments that a system could be competent without doing any self-modeling, without metacognition, without thinking about or modeling itself. But I still don't find that particularly plausible; I think it is very difficult to become hyper-competent without...

Sorry to interrupt, but I'm not talking about whether it's plausible or not. Maybe I also think it's highly likely that consciousness will just emerge in some sense, which seems a strong argument against the scenario. But you were attaching a value to it, saying you hope that's true, or that its failure would be a problem.

Well, I should attach that to a scenario, one nightmarish scenario, which I guess is the paperclip scenario, but potentially even slightly worse. It's one thing if you had a crazed, insane machine mind that just wanted to turn things into paperclips but was still self-aware: it was insane, but at least the lights were on and somebody was home when it wiped out humanity and turned us, and everything else in the solar system, into paperclips. That would be a bad outcome. But an even worse outcome would be if the same thing happened and we all got turned into paperclips by something unthinking, unconscious, no lights on, nobody home, not even insane, just dead. That's the distinction. And I've heard, well, this is all part of the conversations I've observed over the years.
Okay, one last thing, and then we can freely explore all these different things and add more to the list. The other one I'm aware of is the concern about the wirehead, or hedonium, kind of scenario, where AGI or artificial superintelligence misinterprets human aspirations and goals. Instead of giving us a rich and complex experience, which is where real fulfillment and reward and value lie, at least that's what we like to tell ourselves, who knows, it's hard to evaluate, but we like to tell ourselves that it is richness and complexity in life that is fulfilling and meaningful, that meaning is more important than pleasure, and that meaning comes from a complex admixture of pleasure and pain, sacrifice, work, and reward, the context of all those things together forming a larger, more meaningful picture. Instead of that, in trying to do the best for humanity, the artificial intelligence just plugs us all into heroin drips, or some other crude, primitive, but nevertheless hyper-hedonic state. One extreme version of this is that we're all uploaded into a computronium substrate, but that computronium is just maximized to experience pleasure and nothing else; computronium that just maximizes pleasure is called hedonium. I've heard that somewhere, I can't remember where, so I can't attribute it. From many people's perspective that is a dystopian outcome. It's probably not the worst of them; there are worse ones, being tortured for all eternity by Roko's basilisk or something would be worse. But the idea that the richness and complexity and meaning of experience would be lost in favor of a more superficial pursuit of mere pleasure and hedonic experience, that's another bad outcome that people seem to be concerned about and hope we will be able to avoid. I think you mentioned some of the other things before, Dan.

Just one more quick thought: a lot of these are not exclusive. It's possible that some of these could occur simultaneously.
It's also possible that humanity could experience one of these terrible outcomes and that it might still be justifiable, if at the same time you had a new race, or whatever you want to call it, of artificial minds, or perhaps a singleton, a single artificial intelligence, whose experiences are so much deeper and more profound than ours that it would be worth it; the idea that we should think of those things as our descendants or our children, and that it doesn't matter if we suffer or die, because we will have given birth to something bigger and better and more meaningful and more profound than ourselves, and that would be okay. So there are circumstances in which multiple of these bad outcomes could happen simultaneously, either with each other or alongside a justifiably good outcome, an outcome so good that it would justify the terrible outcome alongside it. That's what I mean.

I guess we'll have to wrap up in a minute or two. Matt, do you have anything you want to add before we do?

Yeah, maybe a closing remark, something like this. There's a pretty compelling story that the people in the accelerationist camp tell, and this gets back to Adam's comments before about the strategic perils of being in a counterculture, an anti-movement where you're just denying things: that's not an inspiring story, because it's about destruction rather than creation. The accelerationists have their story: look at all the good things in the world, and just imagine more of that, faster, for everyone; how could you say no to that? I've been trying, without that much success, to think of a positive story for the kubernites to get behind, and I'll keep thinking about it. What I've got so far is something like: agreed, we want all of the good stuff, we can agree on that, and the story is, imagine the story the accelerationists are selling you, but it's not the AGIs that do everything for us, it's we who are empowered to do everything for ourselves. Keep the humans at the helm, and make all of your technological inventions such that they empower humans to be the best versions of themselves.
So instead of mining people's preferences and putting them into a model, help them to form more coherent preferences and to reflect more deeply on the preferences they want to have. Instead of building systems that will solve all of the problems for us, build systems that can work with humans and help them solve the problems they want to solve. Then, hopefully, if humans are in the loop in this fundamental way and are empowered, we can have a positive outcome in terms of moving towards the sort of utopia that people seem to think might be possible with this kind of technology. And if anyone else has other positive stories, let's try to keep them in mind, because it's pretty hard to counter someone when they say: but don't you want all of the good stuff, all of the time, now?

I was just going to relay an anecdote from Stuart Russell, who likes to address this by saying that it's no good having a field of bridge builders if the activity of not falling down is not part of bridge building. So it's actually a kind of false proposition to talk about accelerationism without doing it correctly.

Yeah, well, I agree with that, but at the level of competition between narratives I think Stuart still loses, frankly, without some additional ideas.

I was just going to say that there will be contention, perhaps even competition, among different visions of what a good outcome could be. I listed a few categories of bad outcomes; it might be worth thinking through, maybe next time, the spectrum of different good outcomes and how they're envisioned, because an outcome that's good from one perspective, one person's or one group's point of view, might be a terrible outcome from another point of view. This is a precaution that goes back a long way in history: be careful what you wish for, you might get it. You need some meta level there, and I think Matt was alluding to this, and some AI alignment work is headed in this direction: the greater wisdom would be that what we need to get better at is not realizing our current values.
Rather, we need to get much better at developing and discerning the values themselves, and at knowing which things ought to be pursued. This is an ancient idea. The ideas of paradise in the world's religions are not very exciting to a modern person, right? If you take the literal definitions and descriptions of heaven from the Abrahamic religions, for example, you wouldn't want to live there: there's no Wi-Fi, there are no smartphones, there's no Netflix. There's milk and honey, and that's great, and if you're a man there are 72 virgins in some of the scenarios; if you're a woman, all you get is your husband back in some of these scenarios. It's not like there's a giant Chippendales club for your blissful attention, or whatever. So we have to be cognizant of the fact that the good outcomes we can imagine right now, maybe not even optimal ones, just good ones, are almost certainly nowhere near the actually optimal outcomes. There are some groups that are well aware of this and are talking about it, saying that what we really need to do is maximize our ability to explore the possibility space that opens up. And if you look at the bottom-right quadrant, or even the top-right, and certainly the bottom-left, I can think of a whole lot of conflicting good outcomes described by those different actors and entities, by those different groups. What's a good outcome to an anti-tech environmentalist is nothing like the good outcome for a techno-utopian or a transhumanist person, so there's likely to be a great deal of conflict and contention among these different visions of the future. It's hard to steelman a single positive, affirmative case; I think Matt did a great job of that, but we could probably imagine what some of these folks are thinking of when they imagine an ideal outcome, and it might be fun to walk through those different scenarios and then evaluate them against one another.

Who cares, as long as I decide.

Okay, we should wrap up here, but thanks