WARNING: Summaries are generated by a large language model and may be inaccurate. We suggest that you use the synopsis, short and long summaries only as a loose guide to the topics discussed.
The model may attribute to the speaker or other participants views that they do not in fact hold. It may also attribute to the speaker views expressed by other participants, or vice versa. The raw transcript (see the bottom of the page) is likely to be a more accurate representation of the seminar content, except
for errors at the level of individual words during transcription.
AI is revolutionising education, but carries risks. Companies such as Apple and Facebook are in competition, but with a referee who can step in. AI has the potential to improve lives and economies, but there are concerns about bias, control and social safety nets. Chinese authorities have cracked down on the tutoring industry in response to its corrupting influence and the extreme pressure placed on young people. Issues such as regulation and control of AI must be addressed to ensure its beneficial use, and public awareness is essential.
AI is likely to have a major impact on educational and public institutions over the next 5-10 years. It may be difficult not to opt in to the progress AI brings, but many people may come to feel that they no longer understand what is going on. There is a concern that even if AI is aligned with humans, it may be difficult to navigate the oncoming storm of the Singularity. Stephanie's response, a dread of getting one's hopes up at the possibility that life could get much better for those with a serious problem or pain point, may be more common than initially thought. Turning off AI would make those who opt out uncompetitive and could lead to millions of people starving.
People often fear the possibility of improvement in their lives due to fear of dashed hopes. Images of institutions with lives of their own, such as Kafka's bureaucracies, Frankenstein's monster and Leviathan, have long been discussed by sociologists and postmodern intellectuals. Systems are often seen as oppressive and controlling, and AI involvement in competition-based institutions could be terrifying. AI systems such as adversarial networks are becoming increasingly advanced, raising the prospect of two AGIs or super AIs fighting each other. Companies such as Apple and Facebook are also in competition, but with a referee who can step in and stop the fight. The future of AI and AGI is uncertain, but it will have a huge impact on our lives.
There is a debate between those who want to press ahead with AI and those who want to slow down due to potential risks. AI has the potential to revolutionise planned economies, but there are concerns both about AI being unaligned and uncontrolled and about it being deployed by people in dangerous ways. Google researcher Timnit Gebru highlighted the impacts these systems can have on minorities and women, such as bias in automated systems and melanoma scanners not working as well on black skin. This raises the question of who should decide when a model is released and which impacts are acceptable, as there is a lack of social safety nets to protect those affected.
Deploying AI systems has the potential to bring economic value, but raises policy questions about balancing the benefits and downsides. It has been argued that these systems could exacerbate existing inequities and injustices. To address this, it is important to be conscious of potential pitfalls and avoid prejudiced and bigoted language models. One view is that decisions should prioritize long-term growth and productivity, as this would ultimately solve the current problems, though this reasoning can become a cop-out. Raising public awareness of the risks is an important step.
Infrastructure projects in the US and Australia have become slow and costly, partly due to well-intentioned processes such as environmental impact assessments, which can also be used to oppose new technologies. Politicians turn such issues into political footballs, creating inefficiencies that let people carve out niches in the existing system, benefit from the status quo and gain power. AI is currently an exception: it is unregulated because governments lack the competence to regulate it, which allows rapid progress but also potential negative impacts.
Xi Jinping has implemented a 'Crackdown on Tech' since the start of the Covid pandemic, including the departure of Jack Ma and the halting of the Ant Financial IPO. Western reporting has portrayed this as a move away from free markets, but many Chinese people have a negative view of Ant Financial's practices. WeChat is so pervasive that it functions as the de facto operating system of China, giving Tencent a large influence over the economy. These events raise difficult questions about when technology passes out of our control and when its costs are worth bearing, but our current capability for having these discussions is lacking.
China has cracked down on the tutoring industry because schools were profiting off their control of the education system and because of the extreme pressure put on young people. The government has attempted to address these issues with extreme measures, and the effects in the education system are indicative of what more pervasive AI may bring. Intelligence is part of perceiving the world, and stratifying a nation of 1.4 billion people into IQ classes has had destructive effects on human life. Jack Ma's attempt to be more powerful than the government could tolerate led to a wider crackdown on tech companies.
AI systems are likely to have a major impact on institutions over the next five to ten years, including educational and public institutions. There is a concern that even if the AI is aligned with humans, it may be difficult to navigate the oncoming storm of the Singularity. The idea of 'riding the tiger' suggests that it may be impossible not to opt in to the progress that AI brings, but many people may start to feel like they no longer understand what is going on. For many people, that point may already have been passed.
The speaker discusses the potential consequences of AI and the complexity of the system it would be embedded in. They note that it would be difficult to turn off, as doing so would make humans uncompetitive and could lead to millions of people starving. They also mention Stephanie's response to the situation: a dread of getting one's hopes up at the possibility that life could get much better for those with a serious problem or pain point. They suggest that this response may be more common than initially thought.
People often dread the possibility of improvement in their lives due to fear of dashed hopes. Technology disruptions often bring good news, but people don't want to hear it because it seems too far away. Images of institutions with lives of their own, such as Kafka's bureaucracies, Frankenstein's monster and Leviathan, have been discussed by sociologists and postmodern intellectuals such as Foucault. Systems are often seen as oppressive and controlling.
Institutions can develop a life of their own, reproducing themselves and achieving a homeostatic state. Artificial Intelligence (AI) involvement in zero-sum, competition-based institutions, such as markets, could be terrifying, with the potential for conflict between powerful entities. This is analogous to two predators fighting near you: frightening for bystanders, much as children are terrified to see parents fight, or children and women to see physically more powerful people come into conflict.
AI systems such as adversarial networks are becoming increasingly advanced, and the idea of two fighting AGIs or super AIs is both frightening and fascinating. A better analogy is war between nations, where the fear of "no holds barred, no rules" violence is real and terrifying. Companies such as Apple and Facebook also fight each other, but this is much less worrying because there is a referee who can step in and stop the fight. If AGI is an independent, autonomous agent that answers to no one but itself, it is much scarier. It is unclear what the future holds for AI and AGI, but it will certainly have a huge impact on our lives.
AI has the potential to revolutionise planned economies, potentially giving traditional markets a run for their money. Markets can be inefficient and fail outright, as in the US health insurance industry, and AI-powered planning is reportedly being discussed at high levels in China. It is easy to conflate the different kinds of AI risk: the risk of systems being unaligned and uncontrolled, the risk of dangerous deployment by people, and the 'riding the tiger' dynamic of sustained growth. The question is whether the latter is actually a concern, and to what extent.
There will always be a trade-off between progress and the negative impacts of large-scale machine learning systems. This is a contentious issue, as it is unclear who should decide when a model is released and which impacts are acceptable. Timnit Gebru, a researcher at Google, highlighted the impacts these systems can have on minorities and women, such as bias in automated systems and melanoma scanners not working as well on black skin. This has fed a debate between those who want to go faster and those who want to slow down or stop the development of such systems, especially given the lack of social safety nets to protect those affected.
Deploying AI systems has the potential to bring about significant economic value, but raises policy questions about how to balance the benefits and downsides. This was highlighted by the case of Timnit Gebru, a Google researcher who was studying the potential social impacts of these technologies. It has been argued that these systems could exacerbate existing inequities and injustices, rather than making the world a better place. There is no clear way of weighing the benefits and detriments, but raising public awareness of the risks is an important step.
The speaker suggests that in the long term, decisions should prioritize growth and productivity, as this would ultimately solve the current problems. However, in the short term, it is important to be conscious of potential pitfalls, especially if they are avoidable and avoiding them doesn't come at a huge cost. The speaker cautions that waiting for a 'Rapture of the Nerds' is essentially a more literal promise of an afterlife, and suggests that prejudiced and bigoted language models should be avoided where possible.
Infrastructure projects in both the US and Australia have become costly and slow to complete, partly due to well-intentioned processes such as environmental impact assessments. Such processes create 'hooks' that can be used for ulterior motives to slow progress or oppose new technologies, such as Bitcoin. Caring about impacts can thus have unintended consequences, with the hooks becoming live items in a political game and infrastructure becoming costly and slow.
Politicians often turn issues into political footballs, balkanising their constituents and creating inefficiencies. This allows people to benefit from the status quo and slow down progress. David Graeber wrote about how this happens at an individual level, with people carving out niches in which they can survive within the existing system; there is a lot of incentive to do so, as it allows them to gain power. AI is a contrasting case: there is no regulation, as the government is not competent enough to provide it. This means that negative impacts may occur, but it also allows for rapid progress.
AI and other technologies can be so productive that 'the number going up' trumps everything else, raising the question of whether the poor should simply suffer their lot for the greater benefits the technology delivers. This poses difficult questions, such as when the cost becomes worth it and when the technology passes out of our control. These questions are easy to ask and hard to answer, and politics is the means we have for figuring them out. Unfortunately, our current capability for having these discussions is not at its peak.
Xi Jinping has implemented several signature policy moves since the beginning of the Covid pandemic, including a "Crackdown on Tech". This has included the departure of the founder of Alibaba, Jack Ma, and the halting of the IPO of Ant Financial, a spin-off of Alibaba. Western reporting has portrayed this as the Chinese government stepping back from its commitment to free markets, but many Chinese people have a negative view of Ant Financial's predatory practices. WeChat is so pervasive in China that it functions as the country's de facto operating system, with large control over the economy.
Tencent controls WeChat, which is used for payments throughout China and is a huge source of data and economic power. Jack Ma, founder of Alibaba, fashioned himself as more powerful than the government could tolerate, leading to a wider crackdown on tech companies. Out-of-school tutoring classes were made illegal in principle, though in practice people continue to attend them. This was a response to the increasing power of tech companies and their abuse of it, with people looking to the government to act.
China has cracked down on the tutoring industry for multiple reasons. One is that schools had stopped teaching material in their regular classes, assuming students would learn it in the out-of-school classes, a way of profiting off their control of the education system. Another is that the level of pressure on young people has become so extreme that it amounts to child abuse by Western standards and is destroying the mental health of generation after generation.
The effects already visible in China's education system are indicative of what more pervasive Artificial Intelligence (AI) may bring. The government has attempted to address the issues with extreme measures, but these have been carried out in a self-destructive way. Intelligence is part of perceiving the world, and attempting to see through a nation of 1.4 billion people in order to stratify it into IQ classes has had catastrophic effects on human life. Once we gain the ability to see ourselves at a higher granularity, the effects of that understanding may be similarly destructive.
The speaker discusses how Chinese people are stuck in the exam system, since opting out dooms a child's prospects, and how AI and new technologies could make this much worse. The time frame for this to become severe and cause a large amount of suffering is around 20 years or less. The concern is that this could lead to a truly dystopian dynamic in which an entire generation of people is thrown into a Hunger Games-like scenario. The speaker then asks whether anything can be done to prevent this from happening.
I'll throw out a couple of things that I guess we've discussed briefly via email or that we might speak about um there's a few ways in which institutions are going to be affected by the increase in capabilities of AI systems over the next five to ten years that might be an interesting topic I can you know I can already see very clearly what some of the impacts on my own institution will be um I suppose the other institutions that might be relevant are what besides educational institutions governments um there are public institutions there are also the effects on the sort of large companies but that's maybe a little beyond my area of expertise we also discussed briefly by email this idea of uh riding the tiger as one of the ways in which it might be difficult to stop bad outcomes so AI safety as a discipline is in my view quite rightly focused on the scenario where you build a very capable AI system and unintentionally it becomes agentic so agent-like and becomes very capable and if it's not aligned to the outcomes for people may sort of by default be very negative Extinction or you know relegation to kind of reservations on planet Earth where humans uh a novelty item these kinds of things besides that concern there's a set of concerns that are around well even if it is aligned in the sense that it doesn't wipe humans out still The Singularity may be very tough for meat bags to navigate or even understand what's happening and we're likely to see before we get to anything completely incomprehensible the the oncoming storm of that uh phenomenon so they're riding the tiger idea was that we might sign up for transformative economic progress and other forms of progress because it'll just seem impossible to not opt in right we'll have these systems they'll I mean if things go as I expected many people expect that at some point over the next 10 years will start to rack up higher rates of economic growth you know two then three then four then seven then eight then at every moment it'll just seem like these tools make us more and more productive and it will but at some point each individual person will start to feel that they have no idea what's going on anymore and that that point probably has already been passed for most people right most yeah I was going to say I think a lot of people I mean I know for example my wife I mean so lavender definitely feels like what's going on in AI is uh interesting but uh because she doesn't understand it she finds it intrinsically
scary not in the sense of like a physical threat but just the unanticipated consequences uh they're just scary because they're unknown and we'll quickly reach a point where this is true for all humans uh even the brightest human will just have not in the sense that I mean we already don't know what's going on right nobody really understands the market even people who are players in large governments don't really understand what the government is doing most of the time uh but we've sort of gotten used to those I mean I suppose Kafka is the harbinger of all things uh and we've sort of accommodated ourselves to those systems but those effects could become orders of magnitude more pronounced than that's the tiger that complex system in which we have no idea what's going on and it'll be very difficult to practically turn it off because uh suppose we're embedded in this system where there's AI running the economy and uh making us much more productive to turn it off would be a to make yourself uncompetitive compared to others and B you know maybe just a million people will starve next week if you start turning things off um as we've seen with the world globalization and the market for foodstuffs even without a lot of AI in the system once it becomes more productive it's just very difficult to go back and even if we could do it practically the coordination required to get everybody to agree to do that may just be beyond our ability to to coordinate right um yeah so that's any thoughts on that yeah yeah a few things um a bunch of things let me let me just drop a few things in here just random thoughts that come to get mine and then maybe you know maybe something larger um on the on your last Point um so just you mentioned lavender um one one different response that's I I would have been hard for me to anticipate I think ahead of time is the response that Stephanie's having which is that um it's because of our situation with Kai but what was Stephanie's um uh maybe maybe there are more people in this position then we that I realize or or whatever but at least part of it is that um there seems to be this this uncertainty but at least in principle some probability of of amazing progress and solutions to problems that are horrible today like like that there's a it's possible that life could get a lot better if your life is terrible and there's the possibility that life could get better or if some part of your life is terrible if you have some massive problem or pain Point
um source of suffering in your life um and there's the possibility that that it can could improve I think a lot of some people's response to that possibility is something that's not quite fear but it's sort of dread to even think about it so you like like you don't you dread to get your hopes up and you want to you want to avoid contemplating or thinking about the possibility because it's just too it's too painful to dwell on it and to to get your hopes up and then maybe they'll get dashed or they won't you know you just you just don't know I'm I think I'm I'm I don't struggle with that so much personally I'm I'm pretty darn optimistic and and um about things and uh uh I figure if undermines it is terribly wrong with the present I guess right yeah I mean you're right that that's that too absolutely it's it's this this living in the future or living for the future does um it does in a way undermine your you know your the the the the the the meaning and the value of the now so there's this I think that that um some people have that response to all of the technology disruptions that my team talks about that you know they're even even though the the the brighter future is not that far off even though it's you know it's potentially just a decade or two away I mean that's a long time for people who are really suffering and they just sometimes people just don't want to hear the good news it it because it's just it either there's a fear it won't it won't benefit them or they'll somehow miss out or it's just it's too far away and that you know um they don't want people sort of pollyannaing or is that the right word just being optimistic and cheerful about stuff um because it's it's painful uh uh to think about so anyway there's there's that's one thing that comes to mind um uh oh well that's almost like a random thought but I don't know how much relevance it has one thing um however getting to the main your main question I think which is AI institutions and in particular um I mean the Kafka institutions one reason why I think um uh and I think I mentioned this in yeah didn't I say Frankenstein's monster and Leviathan and those these examples of these are you know those are older ideas and I think um I forget they're kind of Weber and other sociologists talked about bureaucracy in different ways and and um I don't know if you come forward to the postmodern era you know you get folks like um uh Foucault and I don't know the other these these other intellectuals talking about uh you see how systems
sort of end up with a life of their own um institutions can end up with the life of their own at some unit of analysis some higher than the individual person uh you can have institutions you know sort of uh uh evolving under pressures to have um motives and uh you know at least dynamics that kind of push them in certain directions that force them to behave in certain ways and and I think this different language has been used to describe this over you over the years Marx and other sociologists and and philosophers have used this term um reproduce so institutions will sort of develop this characteristic they will they you know the ones that survive it's almost it's perhaps a little bit darwinian but the institutions that persist over time are those that succeed in reproducing themselves um over time they they sort of in reproducing is again it's an old-fashioned word but I think today we would probably call it something like sustain um themselves or uh you know achieve some sort of um uh homeostasis some sort of homeostatic state with uh you know anyway the the overarching idea being that that these systems develop a bit of a life of their own and that there is a Kafkaesque you know negative Dimension to thinking about that okay so here's the here's the concern that I have which is that once AIs get involved any of the institutions and in particular I'm thinking markets as an institution but there are other institutions as well any institutions that are zero-sum um where competition is involved could get very ugly because competition that we don't have any control over that's being that's being um you know undertaken by super intelligence and super capable entities is going to be absolutely [ __ ] terrifying um and I I really mean that I mean I don't know if you've ever seen like like you know I the only the closest thing I can think of is like two big predators or two big animals fighting each other near you like to like if if two lions or whatever get at the zoo get cranky with one another I know I've seen a few things like that it is so scary when you know I think I think one thing that that I remember being cautioned about this um that for children it's very terrifying to see the parents fight and for um for children and for women it's very very frightening to see men fight um because it's this you know this these are these are physically more powerful entities that are are in conflict with one another and as a bystander it's it's frightening very frightening Prospect
okay well so multiply that by like a billion and that's what two fighting AGIs or super ASIs or just SIs and it's just super intelligent AI systems that are not General fully general or whatever that's pretty frightening right but but we already have the beginnings of that I mean don't we have adversarial networks there's a big approach to making improvements and so on so just why do you think that would be more scary than watching I mean Apple is doing its best to take Facebook out currently right it's decimated its ad revenue and it's succeeding this is a fight at the scale of uh very large well-funded institutions but I don't think it strikes most people as particularly scary unless you're in the leadership team at Facebook um I think that I think a much better analogy uh and where it is frightening is between nations because companies fighting each other they there's always sort of a sense in the background that yeah we can pull the plug on all this you know we could just send in the Army we could just you know the president could make a phone call we could talk you know these companies yeah they're big they're rich but we could we could you know uh Apple and Facebook fighting and if we wanted to we could we could you know someone could step in and stop the fight there's a referee and the referee could just step in and stop this it's a boxing match it's not it's not two people really trying to murder each other and that's a very different I think situation than two Nations that are at war in two Nations at war is genuinely terrifying for people everybody for onlookers for the people obviously involved everything because there's no I mean that's it you've crossed you've crossed you've crossed into No Holds Barred no rules violence is on the table you're no longer you know engaging in conversation that is that is is is a genuinely frightening situation so I think war is maybe a better uh a metaphor here it again you know the whole thing being that AGI is something that we wouldn't necessarily have control over now if we have control over it well then it's just a super effective um you know uh competitive system Enterprise whatever but if it's if it's an odd you know independent autonomous agent that doesn't answer to us or to anything but itself then yeah that's pretty scary um so anyway that's that's one thing um uh uh another question another another possibility that comes to mind is um I do wonder if uh I do wonder if we will see I think you might have mentioned this maybe Dan in
the past um I know some people on my team have brought it up uh there's this idea that we could ever return to planned economies um and that and that and that a planned economy with AI behind it could give markets a run for their money because markets are far from perfect they're just better than the planned economies we've managed in the past um but markets are not perfect markets are you know they're you know without good regulation they're pretty they're pretty terrible in many instances they you know they run off the rails and uh you know monopolistic you know Monopoly duopoly and you know other Dynamic Dynamics can be very inefficient and you don't get the benefits of you know the wonderful benefits of functioning markets you get you know the dysfunctional markets that are either you know um well they're inefficient up to some point and then at some point we end up just going and failed and market failure is a big problem you know there's some countries have it worse than others we have I would say in the United States market failure and health insurance and maybe Health Care More broadly certainly in health insurance is utterly failed Market in my mind but um I'm biased obviously because my personal experience with it now my family but um uh at any rate it's it's you know there are markets that are uh that are not producing optimal social or economic results never mind even you know the other the other externalities like environmental uh you know results or political results or so forth and so I I what I wonder is if uh you know and China comes to mind obviously if a planned economy um powered by AI really worked then you know it could give market economies a run for their money and then what would what might the implications of that be yeah it's being seriously discussed in China at at high levels it doesn't surprise me yeah yeah it's so yeah those are sort of some initial thoughts on this General topic and on on in response to what you said um as a you know crude first pass at this what what do you think yeah I suppose um I wonder whether it's it's easy to conflate the different kinds of risks from AI right so on the one hand we have the risk of it being unaligned and uncontrolled and dangerous and and so on all being deployed by people in a way that's dangerous but this kind of riding the Tiger this seems to me basically what sustained economic growth looks like so I I guess a question I have is that is it really is this actually a concern and to the extent it is a concern is it
a concern that needs to be in some sense uh deliberately overlooked so I think an issue that's going to arise in in a much more compressed time frame than what we saw during the Industrial revolutions was no doubt what's coming will make many people deeply uncomfortable and eliminate their jobs and and whatever social safety nets get put in place will be maybe delayed or not exist uh and there will be a constituency in democracies pushing in both directions to go faster and to to slow down or stop and those considerations will I mean you can see actually the political lines forming already it's very interesting so there's I don't know if you followed the Saga with um Timnit Gebru at Google um so remind me yeah so Timnit Gebru uh a researcher in I suppose you could call it what I mean you could call it safety of large-scale machine learning but not in the sense of AI is taking over the world but just the impacts on minorities and women and uh of systems that are trained at say Google to on certain data sets that have bias in them and then releasing those models on the world so that your interaction with these automated systems become skewed in various ways that are subtle and distributed and have impacts on people that are hard to understand or perhaps very blatant is is the case with you know well-known examples of say melanoma scanners for skin cancer just not doing a good job with black skin or language models having opinions about whether women can be doctors and this kind of thing um so Timnit Gebru was part of a group who maybe head of the group I forget within Google who was studying these kinds of impacts on people uh social impacts of deep learning systems at scale and she had a disagreement with Google about some of her work and and uh was ultimately fired essentially I think so this particular case is not so relevant to what I want to say but uh her message and uh her view on the impacts these systems are likely to have and who controls them which is the ultimate question here right so who gets to decide when a model is released and who gets to decide which impacts are acceptable in the name of progress um this is going to be a a very contentious issue because it won't be the case that we'll ever get these systems to have no negative impacts right or at least it's impossible to imagine a world in which unless there's very strong regulation this trade-off just isn't that is just placed at zero or infinity or whatever right there will always be
uh the economic value of deploying these systems will be so high that some downsides will be viewed as tolerable and where that lever is set will be like a key policy question for the remainder of Our Lives so I guess it's a I don't know personally where where I would actually put this setting um because yeah like you were saying earlier there are there's no constituency for the problems that people don't think will be solved but will in fact be solved so there are many medical issues that will get solved with high probability if these systems just become very capable but those people won't get a say in probably and whether these systems go faster or slower who will get a say are the the groups who stand to lose immediately are in the short term and uh yeah so I guess I'm that's one of the issues when I when I look at say stories around Timnit Gebru or the concerns around impact social impacts of these Technologies uh it's very hard to get a sense of where the right place to stand is I think there are plenty of people just kind of want to release everything and then see what happens and then there's a position where you just kind of want to stop or at least um I'm not saying that's Timnit Gebru's position and I have a lot of respect for her work and Rachel Thomas is someone I know who's working on similar topics but yeah I don't know if you have any thoughts about that yeah I I'm not super familiar with that specific case I have read about it in the news that I wouldn't have placed the name but they that the um an important Google person I think um with academic credentials is is um was trying to study this and then and then uh and and put some things into practice at Google and then you know it was it was um I know for a variety of different reasons perhaps you know it's partly partly substantive and I think partly personality clashes and some some you know some moral and grandstanding and a few other things and it ended up turning very sour and uh uh there was a acrimonious um parting the ways there and and but it I think it's done uh my understanding is it's done a good job of raising public awareness of the risks at least in the short term that some of these technologies have at exacerbating existing uh inequities injustices that we just the world was full of rather than making the world a better place and um so I mean I I land to similar in a similar place I think to where you are which is that I'm not exactly sure how to weigh the benefits and the detriments
in the short term so my call the qualification that I would put on there is that I'm not sure how to you know sort of sort of land on a on a um uh you know a way to make good choices or decisions about this in the short term in the long term in the long term uh and by long-term you know I guess I guess it gets us out to singularities you know that Horizon and we just have to throw everything you know we just flip the table so I guess we're not worrying too much about the long term honestly because it's too hard to even think about but yeah in the long term I mean you know you if if if it's it's hard to avoid conclusions like you know on a longer time Horizon do whatever maximizes growth or maximizes productivity because you're going to solve all the other problems if you do that and you're going to solve them sooner rather than later and so just put up with problems whatever they might be in the short run because you'll get to the finish line of every problem that we can think of right now being solved or solvable um and uh you know so so but that's sort of a cop-out and you can you can default to that and end up in every sort of uh Cult of you know almost it's almost like a you know waiting for the the you know it's been mocked waiting for the Rapture of the Nerds basically the whole Singularity those uh magical ai's just gonna swoop in and save the day and solve all of our problems or it just gets conflated with the position of the the poor suffer what they must right so yeah but well that's what it's always been in the past and and you know uh the the promise has always been in the afterlife well now it's a problem it's a promise of afterlife but it's not like literally a different you know a separate universe or world it's just the world not now but at some point in the future and so you it's just it's literally it's actually a bit more of an afterlife it's a little more liberal literal maybe a little more plausible but it's the same thinking it's just just wait just hang in there and everything will be fine and just you know the poor stuff for what they must exactly so I I uh I think obviously we need to we need to um we we need to uh we need to be cognizant we need to be sensitive to these potential pitfalls especially if they're needless right if this is if these are avoidable problems and avoiding them doesn't come at some obviously enormous cost which I don't see how they would I mean if you if you why put out why put out a a prejudiced bigoted um language model if that can be avoided
and if there's no downside to it um and now I guess people could say well the downside is you slow down you lose your Competitive Edge blah blah blah I don't know um I I'm waffling here I don't have a great uh let me put it a little bit of a precise example to you and see if if you have anything to say about this uh Australia like the US has become incapable of doing infrastructure in a reasonable time at a reasonable cost and you couldn't you could point to that and say ah corrupt bureaucrats and they're just stupid but when you look closer actually the problem is much scarier than that it's actually pretty well intentioned a lot of the time right there's just endless studies about potential impacts and there is corruption and there is you know too many people in some of these offices and you know there's incentives to not do anything and evade responsibility etc etc but a lot of the Slowdown really can be attributed to well-intentioned but ultimately mistaken long term I mean over the long term negative sort of expected value processes uh to take one that sort of coming up now uh you could you could worry about the climate impact of Bitcoin and blockchains right and you could say uh well it takes a lot of energy to run this blockchain and that generates greenhouse gases and you know there's we can't afford any more of that so uh maybe maybe we should be against Bitcoin and you have even the president of the United States coming out and saying things like this now when Biden says it or uh Janet Yellen says it if I get that name correctly um or you know these figures in finance say it you know that they have ulterior motives for being against Bitcoin right uh it's got nothing they don't care about those environmental impacts but they think they can mobilize support against something they view as threatening by by using that right and I think one has to be aware that anytime you add these hooks into a system which where you say this is a legitimate reason for going slow or being against the technology or stopping those hooks become live items in a game right and they will be deployed for reasons that you would not wish them to be deployed so I think this is an underappreciated consequence of of paying attention or caring about impacts is that you know any given thing that leads to the subway in New York costing you know a billion dollars a meter or whatever it costs or similar infrastructure disasters in Australia many of the individual parts of that they seem reasonable but when you do a
kind of cost-benefit analysis of the number of years of people's lives that are wasted because of bad infrastructure or many other things you wouldn't actually go for that outcome but you're in sense opting for it implicitly by just never saying no to things that slow down or stop processes or just being sort of generally anti-progress so I'm curious I mean what are the heuristics I mean we've become very bad at that as a civilization in the US Australia I think you'd agree with that but what are the heuristics that you can deploy against this creeping into things I mean in the case of AI it's just simply that it's not regulated at all it's going so fast the government has no idea what's happening or how to regulate it they have no competence to do anything so in some sense it it's negative because there will be negative impacts and no regulation but while that lasts it's also free from just being you know rare grass studied to death yeah I mean that's this is the thing right you're you're talking about a specific specific type of you know um politicization where where issues of a certain it's a form of politicization where topics of a concern become a political football as we say here in the in the U.S you know they just become they become something that you can um you know you can execute some political machinations over you can balkanize your constituents over you can you know make promises or allocate resources in the name of etc etc and um uh it carves open a space for uh within this sort of Kafkaesque machinery for inefficiency to hide for people to for parasites to latch on I say parasites but I mean all of us so it it carves out the space for each of us um and some of us more than others um to to benefit from a status quo and to the extent that we slow things down or we you know we prop up an existing system or bail out something that would otherwise fail or we can do something that's inefficient we're we're basically doing writ large the the you know the um was it David Graeber who who wrote about bullshit jobs that a lot of people in bureaucracies you know um so this is at the individual unit of level of analysis not at the poll system where you know not so never mind the whole system having a life of its own people find a way to to carve out a niche where they can survive in that ecosystem and um there's a lot of incentive for doing that and and there's even more incentive I think among politicians and so forth for opening the the oh for
opening up those niches for people to to um step into and occupy and I don't think any of this necessarily has to be conscious at all it's just that this is the way you know the systems um uh roll basically they they emerge and then perpetuate themselves with these sorts of features so to get to your question which is how do we you know what's this is there is there a Saving Grace on uh uh for AI or for other technology on top of this and my first I don't have a great answer to that I don't pretend to know but my first instinct was uh number go up which is this phenomenon you may have heard of which is it you know it it and I think that's what you're saying at least partly which is that you know it doesn't matter if the new technology is so productive or you know performative in some way it just it makes such a big difference it it out competes uh it delivers it it just works whatever the the the overwhelming performance metric might be that we're interested in paying attention to if that number really goes up well then it trumps the the rest um and you know that is itself and perhaps a good thing and a bad thing if if the number going up is something that's really important um then you know well uh then I guess yeah the we end up in a situation where it's should we let you know should the poor just suffer their lot in order for the for the you know for the greater uh Glory um that's brought by this amazing technology that can deliver on some particular dimension of of benefit or utility or whatever um but I don't know I mean it's it's it's it's it's tough It's when does it cost to become worth it you know when does it when does it turn into a Frankenstein's monster and it's out of our control those you know those these are these are easy questions to ask and impossible ones to answer right so it's it's a i i yeah I mean these are these are all great questions but they're there's there's there's there's nothing but grappling and I don't know if any Clear Solutions or whether they exist even in principle let alone I mean are we going to are we going to grab them I don't mean to be defeatist about this fascinating thinking about this it is a bit because I don't expect yeah encouraging um I suppose well it's the kind of answers that politics is designed to to figure out uh by contention and disagreement and argument and it's unfortunate maybe that uh the state of our capability for having those discussions is is maybe not at its peak uh yeah maybe we could spend a few minutes
talking about uh what's happening in in China which is quite interesting I don't know if you've followed the the last year or two of the kind of Crackdown on Tech and so on in China um I I haven't I haven't I'm interested to learn and and there are yeah but no I haven't been following it closely enough but um I know that I know in the abstract that major measure things have happened uh since the beginning of covid and that um anyway so I'm Keen to get your take yeah it's quite interesting there's been a number of signature policy moves from Xi Jinping uh two of them that maybe they're mentioning in this context are the what's why they call the Crackdown on Tech which took the form of uh there were several high-profile parts of it uh the founder of Alibaba Jack Ma left his position I don't know what his current relationship is to the company but Ant Financial which was a spin-off from Alibaba its IPO was derailed uh like days before it was planned to happen as a result of Investigations DD they were planning to list overseas and that was also derailed I don't know what the state of that is now these were results of so I guess from the West the reporting is often something like it's in Bloomberg and it comes from people who maybe had Investments or wanted to make investments into China in these companies so it's sort of like bad Chinese government is stepping back from its commitment to free markets and so on but if you talk to Chinese people they're like yeah [ __ ] Ant Financial these guys were bastards uh so Ant Financial it's one of its most popular products were basically they were Loan Sharks but you could apply for a loan on your phone so you can apply for a small loan to buy a handbag or you know some other product and then you have high rates of interest to pay it back but they were really pushing people to take out these loans because of the network of other apps they have and quite predatory and you know really people were people had a very negative view of their social impact and the government at some point just took notice of this and acted also Jack Ma was walking around basically saying he was above the financial Regulators so a lot of these new era finance companies in China they have a control over the economy that's maybe if you're not paying attention you would find surprising given the reputation of the Chinese government for having their their you know sort of tight grip on things so you must know that WeChat is kind of the operating system of of China uh
everybody pays for everything with WeChat cash yeah my impression is that that's what Elon Musk wants to turn Twitter at least partly into uh the west and yet that's his interest in the platform it's my that's my impression is that he wants to return to x.com days and his original vision for x.com and later PayPal was an entirely new payment system you know and and WeChat is sort of the closest realization of that of anybody yeah so I'm familiar with WeChat all right yeah to the extent that if you go to a tourist site in China the the Beggars want you to pay in WeChat they'll have QR codes next to their next to their spots that are on cash wow they don't do anything with cash that's amazing nobody walking past them has cash anyway it was like switch to a QR code or die basically uh okay anyway so that means that the uh Tencent controls WeChat uh and that's a huge source of I mean they have all that data and of course some of it goes to the government but it's never quite clear how much but in any case they have a huge they have huge leverage over the entire economic system as a result of that um movement of people onto apps to a degree that is much further than anywhere else in the world as far as I'm aware and Alibaba and uh Ant Financial had similar desires to be a linchpin of the economic system and and take that power but in China I mean in the US I guess businessmen kind of see themselves as apart from politics apart when they need to apart from when they need to Lobby the government or I mean they they certainly interfere in politics but they're viewed as somewhat separate activities but this barrier is much more permeable in China where every Super success successful Chinese man wants to be the emperor right so Jack Ma kind of fashioned himself as being more powerful than could be tolerated by the the government and and so he fell and that was part of a wider Crackdown on Tech sort of you know those examples give you an idea of why right so the tech companies had increasing power in the economy and control over people's lives but abused that power in various ways and people were looking to the government to act and so they did uh and then another very impactful policy change has been around education so you may have seen that out of school classes were supposedly made illegal in China this was a huge industry and a growth industry lots of people worked as tutors outside of schools and these were shut down in principle in practice people are still going to them they're
just the out of school classes they have a sign on the door saying we're shut but everybody just goes inside and everybody knows it but in principle the the sector I mean the sector has taken a big hit um okay so what's this got to do with the topic of our discussion sorry just real quickly what was the motivation behind cracking down on the tutoring industry just just I mean or what was the alleged the uh the ostensible rationalization for that yeah there's several um all this two or three of the the most obvious ones one is that uh yeah China's really different man okay so a lot of schools basically stop teaching in their regular classes and just assumed you'd learned it at the out of school classes and they made most of their money from the out of school classes so you would have the teachers the same teachers who were supposed to be teaching in say the public school they would work in the evenings at the tutoring School and there they would teach you what you actually need to know and in the actual classes you would come in so uh friends of Lavender's their children you know I've heard stories from people about this the kid goes into the classroom the teacher says okay so I assume you know how to do fractions right everybody learned fractions they didn't teach fractions it's just you're supposed to have learned it in the out of school class okay so you have to go to the out of school class otherwise you're behind and then you have to pay so this is a way of of uh Public Schools profiting off their control of the education of young people and this was very widespread um so there's like incredibly corrupt disgusting things like this going on that's one reason uh just a general like the amount of money in the out of school classes was corrupting the education system to a very large degree and if this had been going on for a long time but had reached a point where people were really angry about it and secondly uh the level of pressure on young people in China has become so extreme that I think it's it's almost like child abuse maybe that's being yeah maybe that's too cowardly it is child abuse it's very clearly child abuse by any Western Standard what the the degree to which children are subjected to high pressure high stakes examination uh and even the most well-intentioned parent can't Shield them from that it's really it's horrible uh it like it is destroying the mental health of generation after generation you know most Chinese students will tell you this if you ask them about it and to
have people in my generation in our generation raising their kids putting their kids through this like it breaks them in half to take a happy young kid and send them to school like it's really it's horrible and so that's heartbreaking to hear geez and it's become so horrible that people are just really desperate and so the government is trying to do something about all these issues somewhat ham-fistedly and you know five years too late and therefore it has to be very extreme and so on but you can understand why they're doing it like with many things Xi Jinping does you can sort of see a rationale but the execution is it's just like so I mean both of these policies have been carried out in a way that's kind of self-destructive I would say okay uh yeah it'll come back to the to the topic um yeah I think especially the effects in the education system uh I think indicative of things we will get to experience as a result of more pervasive AI so maybe this is a topic we can flesh out a bit later but what what is intelligence for in in the first analysis right so where did where did nervous systems come from well you first evolve an eye and then you need to hook it up to your motor system to move away from a predator right you you first get the sensory organ but then you need to process that sense data to form action so the the intelligence is part of just being able to perceive the world right you need intelligence to perceive because you have to filter out what matters and what doesn't and as it currently stands our world is very complicated in a way that we can't really perceive in that sense so it's just there's lots of complexity and we can't understand it so we just we don't try you can see the effect though of attempting to try so if you attempt to see through a nation of 1.4 billion people to figure out who's intelligent and who's not and then just play out what that means once you have that data it means generation after generation of millions of children being just subjected to high stakes testing to figure out how to stratify Society into IQ classes to figure out who the elite are and it's a terrible terrible burden once you commit yourself to seeing clearly the effects are catastrophic on human life and I I do worry that once we are I mean surveillance is an aspect of that but more generally once we acquire the ability to see ourselves as a culture as a as a world more clearly at very high granularity the effects of acquiring that understanding will be yeah
it will be very hard to turn away I mean it's very hard for Chinese people to get out of that equilibrium right it's hard to opt out of the exam system you just kind of Doom your child to having no prospects there's lots of equilibria like that that once we see more clearly we have a risk of just falling into I think so I guess that one one for our for these conversations probably one thing we ought to ask ourselves for any of these sort of big what is it a meta question a question about the questions we're asking one thing one or maybe it's just a criteria um that we ought to apply is uh uh on what time Horizon does this does this problem emerge then manifest and and ultimately as does that overlap with the you know the limit of uh of the Horizon that we've set from where we can have meaningful um we can do some combination of having making meaningful projections and and you know surmising a little bit about the future that we can we can you know predict with some plausibility you know what what the future state of the world might be or the future state of any portion of it um and then uh you know is is does the problem become severe does have the potential to become severe enough and by severe I mean does it have the potential to cause a a very large amount of suffering within that time frame and so my my uh this so This sounds I mean again I'm just learning most of this I'm very very ignorant about China um and and uh it's shameful but and I'm learning most of this for the first time as you can imagine and and so it's a it is a a gruesome reality already this is not some sort of what you're describing the situation is not something that may come to pass but but a current part of the status quo that's awful and it sounds like the question is well you know will Ai and these new technologies make this much worse and cause much more suffering in this in the near term and before the foreseeable future basically before we can get all this stuff fully solved um and so I I'm taking that to mean you know we're on the order of 20 years or less um and uh so it sounds like it sounds like your concern is that on that time scale you know things could go from pretty awful to truly nightmarish by um uh you know ticking an entire generation of people and throwing them into the Hunger Games kind of you know um even more into some sort of horrible uh uh dystop truly dystopian you know dynamic there um and then of course the following question is is there anything that we could do to prevent that to push back