WARNING: Summaries are generated by a large language model and may be inaccurate. We suggest that you use the synopsis, short and long summaries only as a loose guide to the topics discussed. The model may attribute to the speaker or other participants views that they do not in fact hold. It may also attribute to the speaker views expressed by other participants, or vice versa. The raw transcript (see the bottom of the page) is likely to be a more accurate representation of the seminar content, except for errors at the level of individual words during transcription.

Synopsis


AI and AGI are becoming increasingly prevalent in everyday life, with potential applications ranging from self-driving cars to military strategy. Andrew Yang's proposal of a Universal Basic Income is gaining traction, and the expected timeline for AGI is shifting from long-term to medium-term. AI is already used in a variety of applications, and its disruption of financial markets would be one sign of a shift to short timelines and of the need for preparation. Now is the time to act and to be aware of the implications of machine general intelligence.

Short Summary


The speaker discusses the differences between Artificial Intelligence (AI) and Artificial General Intelligence (AGI), particularly in relation to collective intelligence. It is suggested that a single generally competent system differs from a collection of narrowly competent systems in the closeness, and the latency, of coordination across its domains of competence. It is argued that organizations of agents can be 'dumb' in some ways yet still achieve complex goals, and the possibility of regulating the development of general AI models is discussed. Whether an agent needs the capacity to recursively model itself in order to solve problems in novel situations is raised as an open question.
The transcript discusses how the industrial revolution, with its emphasis on production efficiency, stripped away the generality of intelligence that more complex tasks require. Now, however, big tech companies require a more general intelligence, returning to something like a pre-industrial environment. A thought experiment is proposed in which Google or Alphabet assembles a collection of narrow AI competencies to pursue a novel goal, and questions are raised about whether a managerial layer is needed to facilitate the coordination. Finally, the cutting edge of robotics combines large language and vision models with a human-written piece of glue code, which may eventually be replaced by an agent trained using reinforcement learning.
The speaker discusses the difficulty of solving hard problems and the responsibility of executives to design incentive systems that maximize success. The discussion then turns to timelines for general AI, which have shifted from long to medium, with the belief that it will arrive in two to three decades. Elites face a different situation from tech insiders: their positions of power and influence are contingent on public approval, which forces a cautious approach. As a result, publicly stated estimates consistently err on the side of caution and so tend to understate the risk.
Andrew Yang was a candidate in the 2020 US presidential election who proposed the idea of Universal Basic Income. His campaign was not successful, and his message was arguably ahead of its time. Different timing, however, could have shifted the conversation on technology and policy responses, and elites are beginning to register the move from a long-term to a medium-term timeline. Progress in robotics is likely to accelerate and to be reflected in GDP numbers, and other potential signs of the shift include the emergence of political figures with significant platforms. This could be a chance for entrepreneurs to instigate a shift and move the Overton window.
AI is becoming increasingly prevalent in everyday life, from self-driving cars and electric vehicles to voice-activated assistants such as Siri and Google Assistant. It is anticipated that AI will become more interactive and capable in the near future, providing convenience for everyday tasks. This shift in technology is expected to move people to medium timelines, and there is a need to market AI products in a way that does not trigger existential angst or fear of the unknown. Star Trek is a cultural touchstone that can help differentiate AI from AGI and suggest that such technology need not drastically change the world.
AI has been applied to a variety of tasks, from controlling drones to running nuclear simulations, although its real-world military applications have so far been more limited than the dystopian scenarios science fiction has long depicted. AI is used in drones for functions such as auto-stabilization and automatic target tracking, and could potentially provide an unbeatable strategic advantage in military planning if a large proportion of forces became robotic or automated. AI has also been applied to algorithmic trading, but has yet to deliver consistent outperformance in the financial sector, although some hedge funds and individuals are attempting to use Transformers for trading. Broader public awareness of AI would help society make better investments and decisions.
The transcript discusses how the visible entry of AGI into financial markets could indicate a shift to short timelines and disruptive behaviour. Three stages of awareness are suggested: first an intellectual understanding, then an internalization of the idea that changes behaviour, and finally a total shift in world view and values. It is argued that people can be aware of the idea of machine general intelligence yet not act until they see others taking it seriously. The elites in Washington are increasingly shifting to short timelines, potentially leading to an arms race for AGI and the nationalization of hyperscalers. Training and preparation are necessary to keep such a transition orderly rather than chaotic.

Long Summary


The speaker is discussing the differences between Artificial Intelligence (AI) and Artificial General Intelligence (AGI). He brings up the concept of collective intelligence: the coordination of multiple narrowly competent intelligences into a generally competent whole. He suggests that what distinguishes a single system from such a collection is the closeness, and the low latency, of coordination across different domains of competence. He uses the example of a human being, who can develop competences in widely varying areas that nonetheless remain closely coordinated.
The speaker discusses the concept of general artificial intelligence and the possibility of a single system, or a single effort, accumulating competencies under one organization's control. They suggest that this could be a collective intelligence coordinated closely enough, as in the big tech companies. They ask what a program, project or effort to gather narrow AIs under one roof would look like, and how it might differ from the collective intelligence humans have achieved. They conclude that the standard definition of general intelligence accounts for this through its focus on goals.
An agent's general intelligence is defined by its ability to achieve a range of goals in different environments. This involves breaking down goals into subtasks and executing them, as well as integrating the results. The current model of AI is based on powerful foundation models trained on a lot of varied data, and the integration of these sources is what makes them so capable. This paradigm is unlikely to shift as it has proven so successful.
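To make the "decompose, execute, integrate" picture above concrete, here is a minimal illustrative sketch in Python. Every name in it is hypothetical; it describes no real system, only the shape of the idea.

```python
from dataclasses import dataclass

# Illustrative sketch only: a goal is split into subtasks, each subtask is
# delegated to a narrow competence, and the partial results are integrated.
# All names here are hypothetical.

@dataclass
class Subtask:
    kind: str        # which competence this subtask calls for
    payload: str     # what that competence should do

def decompose(goal: str) -> list[Subtask]:
    # A toy decomposition; a real agent would plan this hierarchically,
    # in time and space, as described above.
    if goal == "prepare a meal":
        return [
            Subtask("shop", "buy ingredients"),
            Subtask("cook", "cook the ingredients"),
            Subtask("serve", "plate the dish"),
        ]
    return [Subtask("unknown", goal)]

# Narrow competencies, keyed by the kind of subtask they handle.
SKILLS = {
    "shop": lambda t: f"done: {t.payload}",
    "cook": lambda t: f"done: {t.payload}",
    "serve": lambda t: f"done: {t.payload}",
}

def achieve(goal: str) -> list[str]:
    # Decompose the goal, delegate each subtask, integrate the results.
    return [SKILLS.get(t.kind, lambda t: f"no skill for: {t.payload}")(t)
            for t in decompose(goal)]

print(achieve("prepare a meal"))
# ['done: buy ingredients', 'done: cook the ingredients', 'done: plate the dish']
```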
The speaker discusses the possibility of regulating the development of general AI models in order to ensure safety. He questions the need for an agent to have the capacity to recursively model itself in order to solve problems in novel situations. He contrasts this with the example of a hive of ants, where each individual ant is not a general agent, but the collective can still achieve complex goals. He also mentions the possibility of a human collective, where each individual may still not comprehend the tasks being executed by the group, yet the group is still able to achieve its goals.
The transcript discusses collective intelligence and its differences from individual intelligence. It is argued that collective intelligence, as it is normally described, is comprised of individual intelligences, or agents. It is also argued that organizations of agents can be 'dumb' in some ways, failing to exhibit agentic properties. The puzzle is how to make sense of collective intelligence, which is no longer general and can be dumber than the individual intelligences that make it up, yet can also be more capable than them.
Andrej Karpathy, in a recent podcast with Lex Fridman, discussed the Codex model, which underlies the Copilot coding assistant on GitHub. He speculated about how one model could propose code while a second model specialized in finding bugs, with a third model checking for compilation and execution. An overall manager model could integrate all of these outputs, which may become a reality quite soon. This could be one trajectory toward the emergence of artificial general intelligence.
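As a hedged sketch of what such a pipeline might look like, the loop below passes a specification through a proposer, a bug-finder, a checker, and a manager that integrates their feedback. The interfaces are invented for illustration and correspond to no real API; the models are passed in as plain callables.

```python
from typing import Callable

def coding_pipeline(
    spec: str,
    propose: Callable[[str], str],                  # model that proposes code
    find_bugs: Callable[[str], list[str]],          # model specialized in finding bugs
    passes_checks: Callable[[str], bool],           # checks compilation and execution
    revise: Callable[[str, str, list[str]], str],   # manager model integrating feedback
    max_rounds: int = 5,
) -> str | None:
    # The manager loop: propose, critique, check, revise, until every
    # component model signs off or we run out of rounds.
    code = propose(spec)
    for _ in range(max_rounds):
        bugs = find_bugs(code)
        if not bugs and passes_checks(code):
            return code                # all checks pass
        code = revise(spec, code, bugs)
    return None                        # no agreed-upon solution found
```

Whether such a manager loop stays hand-written or is itself learned is exactly the question the following paragraphs turn to.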
The transcript discusses Drexler's CAIS (Comprehensive AI Services) model and how it relates to general intelligence. It is suggested that the line between a general intelligence and a specialized system may blur over time, as more tasks are automated by such models. The speaker then reflects on the idea of 'dumb companies', where employees feel like cogs in a machine, and how this relates to the efficiency of the company; the coordination and systems design of the company is crucial to its outcomes. Finally, the speaker reflects on how this connects to the early days of the Industrial Revolution.
The industrial revolution saw a conscious effort to take away the agency of workers and dehumanize them by reducing each to the performance of a single task. This allowed production to be scaled efficiently, but removed the generality of intelligence needed for more complex tasks. Fast-forwarding to the present day, big tech companies now require a different kind of intelligence, returning to a situation more akin to the pre-industrial era, in which agency and generality of intelligence are valued. This traces a strange circle, with similar objectives pursued in very different ways.
In this thought experiment, Google or Alphabet assembles a large collection of narrow AI competencies to pursue a novel goal. It is suggested that, at present, humans would coordinate the tools and combine their outputs. It is then asked whether an executive or managerial layer will eventually be needed to facilitate the coordination and make decisions, and whether such a layer would be distinguishable from an AI agent.
The cutting edge of robotics combines a large language model and a large vision model to form a robotic brain. The language model is given a task and interprets it, while the vision model maps the real world into the language that the language model understands. A human-written piece of code then glues them together. This integration component may eventually itself be learned using reinforcement learning: with the vision and language models pre-trained and fixed, an agent would be rewarded for its ability to coordinate and integrate them to achieve a goal.
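A minimal sketch of that glue code, assuming a pre-trained vision model that describes the scene in text and a language model that plans the next step; all interfaces here are invented for illustration.

```python
def control_loop(task: str, vision_model, language_model, robot, max_steps: int = 100) -> bool:
    # The hand-written integration component: map the world into text,
    # ask the language model for the next step, act, and repeat.
    for _ in range(max_steps):
        scene = vision_model.describe(robot.camera_image())    # world -> text
        step = language_model.next_action(task, scene)         # (task, scene) -> next step
        if step == "done":
            return True                                        # model judges the task complete
        robot.execute(step)                                    # next step -> motor commands
    return False                                               # gave up
```

It is this hand-written loop that could eventually be replaced by a learned policy, rewarded for achieving the goal.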
Reinforcement learning is a difficult learning problem because a robot may have to take thousands or tens of thousands of actions before receiving a signal that says whether it performed the task correctly. Since deep reinforcement learning has not been making rapid gains, researchers are returning to these problems armed with large pre-trained models: one component understands the instructions given, another knows how to see, and a loop learned on top of them performs the instructions. This is similar to how a human works toward a goal without knowing how to achieve it, through trial and error and incremental progress.
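To see why the learning signal is so sparse, consider a schematic episode loop (illustrative only; `env` and `agent` stand in for any reinforcement learning interface):

```python
def run_episode(env, agent, max_steps: int = 10_000):
    # Thousands of actions may pass before any feedback arrives: the one
    # reward at the very end is the only signal the agent learns from.
    obs = env.reset()
    trajectory = []
    for _ in range(max_steps):
        action = agent.act(obs)
        obs, done = env.step(action)
        trajectory.append((obs, action))
        if done:
            break
    reward = 1.0 if env.task_succeeded() else 0.0   # a single bit of feedback
    agent.update(trajectory, reward)                # credit assignment over the whole episode
```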
The difficulty of achieving goals varies: some tasks offer a clear, continuous gradient of progress, while others require large leaps. This is a hard problem for both humans and machines, since it demands careful design of incentive systems and loss functions. Deep reinforcement learning systems nevertheless learn to translate sparse rewards into signals that are informative for the components of the algorithm, and ultimately machines will learn this part of the problem as well.
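The contrast between a continuous gradient of progress and large leaps can be pictured as the difference between a shaped and a sparse reward function. The toy example below is not from the seminar; it simply illustrates the distinction.

```python
import math

GOAL = (10.0, 10.0)

def sparse_reward(pos: tuple[float, float]) -> float:
    # "Big leap" regime: no gradient to follow; the agent gets nothing
    # until it actually lands on the goal.
    return 1.0 if pos == GOAL else 0.0

def shaped_reward(pos: tuple[float, float]) -> float:
    # "Continuous gradient" regime: every step closer is rewarded a little,
    # which is what careful incentive design tries to provide.
    return -math.dist(pos, GOAL)
```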
The speaker discusses how executives have the responsibility to translate external signals into internal incentive systems in order to maximize success, and how reward systems can become corrupted through hidden incentives and game theory. The speaker then talks about the difficulty of solving problems that have never been encountered before, and how executives must make guesses and explore possibility spaces. Finally, the discussion transitions to the advertised topic of short, medium, and long timelines, and of insiders, elites, and the public.
The discussion concerned the views of elites on the timeline for general AI, which have shifted from long to medium, with the belief that it will arrive in two to three decades. This was compared to the recent shift in elite opinion on UFOs, where it is now a mainstream view that something is going on. The Overton window has moved a long way in both cases, but for AGI it is not quite there yet.
The elites are in a different situation than tech insiders because their positions of power and influence are contingent on public approval. As a result, they are unable to espouse certain beliefs publicly and are forced to be conservative. An example of this is the climate science community, which has been very cautious in its estimates and only stands by claims for which the evidence is overwhelming. This leads to a consistent pattern of underestimating the risk, as the community always errs on the side of caution.
Predictions for climate change over the last 20 years have only ever gotten worse, never better, precisely because the consensus is so cautious. The public needs to grant permission before the Overton window can move, but it does not yet seem to be taking AGI seriously on any time horizon; until it does, very few elites will stick their necks out. Andrew Yang, for example, was received positively, but it is unclear how much of an impact he had.
Andrew Yang was a 2020 presidential candidate who ran on a platform of problem-solving, particularly around automation and Universal Basic Income (UBI). He was a breath of fresh air for many, but he was not successful or famous enough to make it past the first big obstacles; had someone like Bill Gates run on the same platform, he might have gone further. Yang was in some ways a tragic figure: his message was ahead of its time, and he was not the right person to bring it to the public. His campaign showed how major parties shift their positions, or the Overton window shifts, through the emergence of entrepreneurs who float a trial balloon to see if the public will follow.
Andrew Yang had a successful track record as a technology person, but his message about Universal Basic Income was not well received. If a more seasoned politician had delivered the same message, it may have been received differently. However, Yang dropped out of the race just before the pandemic hit and the US government began sending out checks to a significant fraction of the population, ranging from $500-$2500 per month. If Yang had stayed in the race, his message about UBI would have been more relevant and resonated differently.
Different timing of the recent election could have produced a better-positioned candidate and a shift in the conversation around technology, automation, and policy responses like universal basic income. However, we should be wary of attributing too much of the direction of human history to individuals. There may be an opportunity for entrepreneurs to instigate a shift, and the conditions seem right for new figures to emerge who could move the Overton window for the public and elites. It is interesting to consider whether anything can be done to further this process.
Elites are beginning to register the shift from a long-term to a medium-term timeline, influenced by figures such as Eric Schmidt and Tim Gabriel. Robotics is likely to accelerate, which could be reflected in GDP numbers and so become something elites care about; visible progress in manufacturing robotics and the emergence of political figures with significant platforms are other potential signs. The public may eventually pay attention to all of these, but initially it is the elites who will be more aware.
Elites are likely to pay more attention than the average member of the public to breakthrough treatments and progress in new materials, such as carbon nanotubes; AI is already being used to engineer exotic quantum states with novel properties. Military applications may also be a trigger for public attention. Realizing the potential of a new technology, and having individual experiences with it, are key moments for people to understand its impact on their lives. The speaker assigns roughly 50% probability to a significant minority, or even a majority, of people moving to short timelines within the next decade.
Technology has been transformative for previous generations and will continue to be so. AI is already at work in self-driving cars and electric vehicles, and can now generate art and music. Self-driving cars are likely to be transformative experiences for people, and generative art may be equally transformative for those who encounter it.
The metauni community often uses pictures in conversation, and there is a class on category theory being taught by Billy and Will. Billy gave a talk on Monday and shared images that he had generated to illustrate a point. This type of experience is new and is like waving a magic wand. It was anticipated by Star Trek in an episode where a child used a tool to carve anything in their imagination; the Enterprise crew remark on this magic-wand-like device. Now we have similar capabilities, and this type of experience is becoming more common.
People are already familiar with the convenience of digital calculators and of voice-activated assistants such as Siri and Google Assistant. This technology is still developing, and the next generation is expected to be more capable and to interact with users in a more natural manner. That will be a transformative experience for most people, and especially consequential for policy makers and the military. Once available to the public, it will make tasks such as driving a car much easier.
The transcript discusses how people's perception of AI will change when they can interact with it in a natural way, and how this shift in perception will move people to medium timelines. It is discussed how AI can be useful in many ways, such as self-driving cars and personal assistants, but this does not necessarily trigger an existential angst or fear of the unknown. It is mentioned that Star Trek is a cultural touchstone which shows the distinction between AI and AGI, and how marketing for AI products should not make people feel that the world is going to be drastically changed.
The original 1960s version of Star Trek gave the ship's computer a janky voice, but by The Next Generation in the late 1980s and early 1990s, the main computer was fully conversational. It was powerful, but not self-aware or sapient, and this distinction was explored through characters such as Commander Data. People accepted the distinction, just as they accepted other technology such as the transporter, even where it didn't quite make sense. People are unlikely to be concerned about the difference between a powerful but non-sentient computer and a self-aware, sapient one until the latter is actually created.
The transcript discusses the potential implications of advanced AI technology in military applications. It mentions the use of drones in conflicts such as those in Ukraine and Azerbaijan, and how AI is increasingly used in drones for functions such as auto-stabilization and automatic target tracking. It also notes that science fiction has long depicted the potential dystopian military applications of AI. The conclusion is that it is not clear when a decisive moment will occur that shifts the elites to a short timeline.
AI has been used for a variety of purposes, from controlling drones to running nuclear simulations. Public awareness of AI's potential to affect policymaking has increased, and AI could have a profound impact on military strategy. AI could be used to plan deployments of existing military assets more efficiently, potentially giving an unbeatable strategic advantage. This would likely require a large proportion of the military forces to be robotic or automated.
AI has been implemented in various fields, such as automated defense systems and algorithmic trading. Google reportedly developed a system that was too successful to deploy, but so far no AI has been able to consistently outperform the market. Deep RL algorithms are being used to try to create successful investment companies, but these systems often fail in unpredictable ways. Overall, AI is advancing in many fields, but it has yet to provide consistent outperformance in the financial sector.
AI is not necessarily good at trading, but some hedge funds and individuals have tried to make Transformers do it. Renaissance could be using 20 Transformer models for trading. There is an incentive to stay in stealth when it comes to AI applications, but it is beneficial if the public becomes aware of the imminent arrival of AGI. Even if we are wrong about it, the public's awareness would lead to different investments and decisions that would be to the good.
The transcript discusses the advantages and disadvantages of short timelines, and how they can be both beneficial and disruptive. It also highlights examples of technologies considered too dangerous to release into the wild, such as weapons technology and bioterrorism applications. It is suggested that people who believe in short timelines should take out a huge mortgage and invest it in land, as land may be the only scarce resource left in the future. It concludes that if elites shift to short timelines it will be a complete mess, though perhaps transformative and chaotic in a good way.
The speaker discusses how the introduction of AGI (Artificial General Intelligence) into the finance market could be an indication of short timelines and disruptive behaviour. They also suggest that this could lead to two or three different stages of awareness, with the first being an intellectual understanding and the second being an internalization of the idea that impacts behaviour. Finally, the third stage could be a total shift in world view and values. Surveys could be used to measure the population's acceptance of the idea.
People may believe that machines can be generally intelligent by 2030, but this does not necessarily mean that they will act on this knowledge. It is important to distinguish between awareness and action. An example of this is when people are in a room with smoke, they may be aware of it but not act until they receive social permission or an alarm prompts them. This is also applicable to the idea of machines being generally intelligent, as people may be aware of it but not act until they see others taking it seriously and changing their life plans.
The transcript discusses the idea of an alarm that catalyzes a shift from mere familiarity with facts to internalizing them deeply. It is suggested that, as with a fire alarm, training is necessary if people are to move calmly and in an orderly way. The nationalization of hyperscalers is raised as a provocative possible event if elites shift to short timelines. It is suggested that this trajectory may not be left to the free market, and that some form of coordination of producers, in which profits are still allowed, may be necessary to prevent chaos.
The elites in Washington are increasingly shifting their views to short timelines, leading to the potential of a nationalized war production board that would direct how things go. This could mark the beginning of an arms race for artificial general intelligence. If this were to occur, the government could take control of production of key hardware, such as GPUs and chips, and draft mathematicians for the software side of things. This could be done under the guise of some other policy or initiative, potentially with a plausible smoke screen for the real reason for it.
In this scenario, the US government takes control of chip makers to win an AGI arms race, reminiscent of the Cold War: a zero-sum attitude without the positive vision. There is also a microeconomic model of this shift in opinion, with venture capitalists funding AI-first startups and young people making their bets on them. This is what an elite shift in opinion looks like: people don't change their minds, they die, and a new generation arrives.
New generations of Elites are making bets on short timelines in various areas, such as biotechnology, which will enable them to gain influence and power. The US federal government is notorious for participating in and enabling the military-industrial complex, with a large budget devoted to defense and military applications. A lot of this spending is done through private contractors, with some research and production done by government agencies and employees. It is likely that this is still occurring today.

Raw Transcript


should we get started yeah um I was wondering there was one thing that I thought of over the course of the week that I if it's a if it's all right I just wanted to go back and just elaborate on real quickly yeah um maybe just ask a question about it which is um uh so we we if we we tried we put a little bit of time into uh defining Ai and AGI we didn't necessarily you know create anything definitive there but we've talked through some ideas um and we distinguished a few things there we distinguished super intelligence from from artificial intelligence and artificial general intelligence I uh shared my own personal sort of three uh category schema which is um narrow artificial intelligence uh uh General artificial intelligence that isn't Sapient or you know isn't agentic it isn't self-aware and and and um and then a sort of the classic artificial intelligence idea of um artificial general intelligence that is sapient is self-aware and and um does act like an agent with motives and goals and so forth okay um well one other thing that we mentioned in the context of that conversation very briefly was uh Collective uh intelligence or group intelligence I wanted to return to this just circle back to this very quickly because there was this this uh this question of what it what exactly is it about coordination across domains of competence that distinguishes sort of a Singleton uh you know a single system versus a collection or Collective so what is it that makes a a single uh intelligence that's generally competent different than a collection of narrowly competent intelligences and it seems like it seems like you you it seems like the the um close coordination across different domains of competence is sort of implicit in an agentic Sapient general intelligence like a human being for example um because you know to whatever extent an individual human being develops competencies and they can range very widely obviously um there's there's close coordination across those so you know I mean I I know how to drive a car and I can walk and I can ride a bicycle and I can swim and I can type on a computer um but all of those things are competences or or domains of um uh well yeah I'm I'm broadly speaking intelligent and competent in those different ways uh but what makes that different than you know one person who knows how to drive and that's it one person who knows how to swim one person who can use a computer and um it seems like it has something to do with coordination latency uh you know
that that sort of thing so this there's there's that uh that there's that that's my sort of the first observation and I guess it's also an open question for uh but more on observation really that's open for Comet then then to just follow up on that very quickly and add to it um uh I think I would then point to the possibility of that second category you know a general artificial intelligence that's that's that is capable in many domains the question I would ask there is um uh what what would we expect a an effort or a single either a single system or an effort to to accumulate competencies under one one organization's control let's say what would that look like in other words even if you didn't have sort of a a Singleton in the human sense or in the agentic recipient sense um you could still imagine a uh a generally competent system in other words A system that has lots of individual um uh competencies and it could effectively be a collection of narrow intelligences or competencies so it could be a collective intelligence but if it were coordinated closely enough if we were sort of um uh if there was some effort made to interconnect and coordinate uh uh those those capacities that maybe could happen just organizationally and here I'm thinking of you know the the big corporations like Google and and and and apple and others the big tech companies but what would a program look like what about an effort what would a project look like organizationally it was trying to basically gather collect as many of these uh ai's narrow AIS these as many of as these as these narrow competencies as possible basically all Under One Roof and then you know coordinate them um and what would what would that look like and and if it were not an agent if it were if it were just uh uh you know would it look different than the collective intelligence that humans achieved through you know large organizations or multiple organizations across you know say an entire industry or an entire economy or does this look like something closer to an agent but still not quite and what anyway I hope I'm being clear here because it occurred to me that there may be a new and separate category that we should talk about or think about that might be relevant that's already happening and um it may have a novel role to play in sort of the the intelligence ecosystem as it were [Music] yeah I think on the first point the the term and the definition of general intelligence that I think accounts for this is the focus on goals
So when you say that an agent is generally intelligent if it's able to achieve a wide range of goals in diverse of our environments so a goal is a high level thing like prepare a meal and that is made up of many subtasks and I think it's implicit that a big part of what it means to be generally intelligent is the ability not only to decompose a given goal into subtasks and structure them in some sort of hierarchy in time and space and then executing each individual subtask and integrating them I suppose you could view us even in our own intelligence maybe delegated to subunits things like physical movement is below our level of conscious attention you could say that's a subtask and within the task of moving my arm is is many further sub-tasks that are kind of automated from my point of view uh but that's right I think general intelligence is about achieving a wide range of goals and achieving goals has as an important and perhaps decisive component the ability to deconstruct goals and um and then execute them and kind of bring the results together yeah I think that's right um yeah I think we may have talked in the past Adam about the CAIS model from Drexler so CAIS um what does that stand for collection of AI systems perhaps or services so this is an alternative model of a AI enabled future where sorry I think it's comprehensive AI comprehensive uh that makes more sense yeah um yeah so this is a model where there are many more narrow AI systems that are integrated in a way that the integration itself maybe isn't a highly automated maybe there's more humans in the loop or it's in some sort of Market or anyway it's not like a single server that's uh kind of passing signals with high bandwidth between its components um this seems in my view to be basically already a historical somehow so the way uh the way that modern AI systems are achieving breakthrough performance is by building on top of very powerful Foundation models which are trained on a large amount of very varied data often across different kinds of modalities like text or images uh and it's the sort of integration of those different sources and the scale that makes them as capable as they are I'm not sure it's uh so there's such a power to the general training and then fine-tuning on top of that that I mean maybe the Paradigm will shift but uh it makes so much sense and has worked so dramatically that I find it very unlikely that there is a future in which that kind of CAIS model apart from some sort of top-down diktat
on safety grounds that you shall not build general AI models that are you know not uh somehow narrowly tailored and therefore safe absent that kind of top-down regulation it seems highly unlikely to me that that's going to be a realistic path forward um but yeah I I guess on that on that final point you you maybe are talking about something like emergent agency where maybe there's no like conscious agent but a system behaves as though it has agency uh like you could say a corporations maybe already behave that way is that sort of what you have in mind yeah I I mean the the I suppose they I suppose they do have something analogous to agency but I wasn't really thinking my mind wasn't really going that direction I was really reaching more towards what the this this uh case um model is that that was where my mind was going um and uh uh I I guess the the the I guess the question that that really emerges out of that is um to what extent the achievement of goals that are that require generalization in other words the achievement of goals in the context of of significant novelty um uh to what extent that requires something more than just a a a a collection of uh separate competencies in in a number of narrower domains um and what I mean by that is is I guess this this just gets to this question of is it necessary to be an agent is it necessary to to have um uh to have some capacity to to recursively model oneself in the context of all of one's goal seeking in order to solve you know some types of problems in in situ in some situations presumably uh uh those problems or achieve some goals um in in some situations that are characterized by a lot of novelty and uh uh I guess that's an open question um could we contrast here say a collection of humans working together collaboratively each one of which is a general agent but which is kind of fine-tuned to some specific task for reasons of optimization and division of labor as compared to say a an and colony where maybe each individual ant is is not a general agent arguably there is no real General agent there perhaps uh but certainly any individual ant isn't capable of really comprehending the tasks that are being executed by The Hive and just does its little part you could say that's like a narrow intelligence deployed as part of a collective in a way that achieves goals that no individual in the collective could really even apprehend uh a human Collective each individual maybe still in the large organization you don't
really understand what the hell is going on but somehow in principle it seems more tractable that an individual could get the aim and maybe could execute it just very slowly or more poorly but somehow there's a there's a difference between these two kinds of agglomeration would you agree absolutely I think that that's a crucial a crucial uh distinction and and it is I think one it's it's a problem embedded in the definition of collective intelligence that that at least as far as I normally see it described and discussed it's collections of human beings so it's collect it's collections of general intelligences or generally or well yeah I mean or or another way to think about it is it's collections of agents and um uh uh and it's it I suppose well it's kind of ironic isn't it that you can have you could have say if you take for example a company or Corporation um you can have a corporation that this gets back to your early question earlier question does this does do agents do large organizations um have agentic properties and and it's a little bit ironic it seems that you know that to even be asking that question it seems like there are some circumstances in which you could argue that organizations of agents that don't behave in an agentic way like the the collective intelligence seems sort of sort of um dumb in a sense even if it's got you know a number of of uh uh you know even if it has a collection of competencies because of its of its the the the humans it's comprised of the organization itself can be pretty dumb not very self-aware maybe make bumbling along maybe making some mistakes maybe pursuing goals that don't really make any sense that none of the you know uh that that a properly intelligent agent wouldn't necessarily agree to pursue and so forth so it's it's it there's there's yeah there's some strange things going on with with collections of intelligences and um it's one reason why I wanted to kind of I didn't mean to derail the conversation that we we really want to have today but um it this is I am genuinely confused about how to make sense of this because it seems like you've got you know at different levels of organization you have um you know you have you have general intelligence and then you put a bunch of those together and you get a collective intelligence but somehow it's not General anymore and it isn't an agent anymore and it can be Dumber in some key ways but then possibly on the flip side you could have something more like this case model
where you're putting a bunch of dumb things together and maybe generality emerges and something more like agentism emerges maybe that's a possibility it just seems like like I I'm I really am confused as to what the rules are here it seems like any and all combinations are possible and we're seeing all sorts of different things maybe that's maybe that's the lesson to draw as they you know you there is no strict strictly speaking a rigid hierarchy from which you know uh you know above which certain things emerge and Below which certain things don't exist it seems like it's maybe more fluid than all than that um but I find this a bit confusing as a basis from which to draw any conclusions about the emergence of of uh artificial general intelligence of either the non-agentic non-sapient kind or the agentic sapient kind um because we look at Collective intelligences and they just it seems like you know and then it goes yeah so I think I think that's right I think that's indicative of what the uh what the Finish Line looks like is it just kind of disaggregates into a mess that it's hard to tell when we're going to cross that line or not so there's here's one scenario that uh Andrej Karpathy just did a podcast with Lex Fridman that's quite interesting I found the first I don't know how long it is maybe two hours or so uh the early part maybe the first three quarters I didn't find very interesting but there's a quite interesting exchange near the end uh so Andrej is talking about the Codex model which you can use on GitHub to a sort of co-pilot uh codex is a closely related thing co-pilot on GitHub which is a assistive coding model that you can helps you write code being very widely used now it's not perfect but people find it useful there's similar internal projects in Google and elsewhere uh and he it's just a single model that looks at your code and proposes extensions to it and you know you write comments in natural language and it attempts to write code that achieves them and so on but Andrej was speculating about how well you'd have one model that's proposing code and then another model that's more specialized in finding bugs and that would pay attention to the outputs of the first model and then you might have a third model that is uh checking to see if it compiles correctly and executes to do the desired job and then an overall kind of manager model that will integrate the outputs of all of those and that does seem like a plausible trajectory that will happen quite soon
I'm not sure that's exactly what the case model has in mind but maybe it's uh kind of conceptually similar and that makes a lot of sense it's you get a lot of advantages to training on a large data set but there's still value to specialization and in that kind of system when would it cross the line to being a kind of general intelligence well uh if it's just limited to code then you could say it never will but at some point maybe there's a mix of these systems communicating adjudicating one another's outputs in some complicated way and it's never quite clear at you know what when this barrier of in general intelligence Falls it'll just be kind of clear that within some fairly broad domain like coding with cross some threshold to be you know maybe it won't be like Tuesday it's not there and Wednesday it is there it'll be like on Tuesday there are five percent of all the programmers in the world basically offload a top like set of tasks to this kind of model which they would have previously hired an assistant to do and so there's like basically it's kind of General because it's replacing a general agent which is a human but only within the task of programming but still that's a very broad task and so on so there'll be this kind of like creeping generality that probably it's you know by the time it's here people will be used to it for some uh we won't be a decisive moment I suppose because my impression um that's really interesting um I just wanted to go back to what you were saying Adam about like dumb companies it's just it just sounded really quite ironic and funny that we talk about um you know if you if you have a job that you don't really like you might speak about it as if you're like a cog in a machine so it's I wonder if like somehow when a kind of company is like you know functioning in a dumb way it's kind of not really using the sort of General intelligent capabilities of the people who make it up right so you you feel like it's somehow working more like a computer like you like you feel like you're a kind of bit or something or you know just a little mechanism like you're just doing your little thing um you don't really see what's going on and um so I I suppose that yeah the the um the the way that they put the coordination and the sort of systems design is really crucial in terms of um how what like what the outcomes are yeah that's really interesting from a historical perspective too right because in the in the sort of early days of the Industrial Revolution and very famously
you know a century or more into the industrial revolution with with Henry Ford you know being the quintessential example of the in the the um assembly line where human beings were just reduced to performing a single task you had a deliberately conscious effort to take the the agent agency out of the human beings really dehumanize them and alienate them from their own labor and and or alienate them as in the process of the division of labor and um uh yeah the the that's that's fascinating that that I suppose there were significant advantages to to especially trying to scale production in an organization to removing the um uh uh the agency from the individual uh constituents but I suppose that that there was there was we were effectively trying to build a machine you know I guess the fancy way of saying it would be some cybernetic machine but we're trying to basically build a machine in the in the form of an organization to to to achieve you know some goal to perform some function assembly assemble automobiles in this case um and uh uh you were having human beings perform very mechanized very mechanical functions uh that didn't honestly didn't they were they were specifically designed to be sort of minimally intelligent um uh uh yeah that's a really really interesting way of looking at it um you know uh just from a historical perspective and then now you know it depends on the organization of course but if we then fast forward another century and we think of the big tech companies you know when you're trying to engineer software or do something that's that's very intellectually um challenging you know maybe you maybe that they're uh we there's there's a different set of because it's a different set of challenges and different kind of requirements of intelligence that we're asking um then we return to this to a a situation ironically it's a little more analogous to you know the pre-industrial era where we hope each person is exercising some of their agency and using their generality of intelligence to to bring that value to the service of the organization's goals as a whole which of course is like what you know that's what you would do if you were if you were an apprentice building carriages as opposed to you know you know and learning every task and solving every problem that came along and helping with the the family business as opposed to just yeah right twisting one twisting one wrench on a production line right that's weird it does a weird Circle that's it's being traversed there
somehow it's wild I think I think you might think too highly of programming work but yes it may not be that different to working on a production line uh I think in practice at least at some level I I do have one more thought um uh and that is so so just bear with me here with this with the thought experiment but imagine Google or alphabet you know or apple or any of these big companies that's working on this I'm thinking Google and deepmind um so say they they are assembling a large collection of uh narrow AI competencies and let's say that that collection becomes quite large you know dozens or hundreds or thousands of narrow competencies and we can fast forward in time of you know a few years to whenever this thought experiment makes sense but is is there does there come a point where um you you need a you need you need a a a higher level not a high level what's the right way um a um well really a managerial or executive uh function that who's whose job is to choose among the narrow AIS for how to accomplish some task and to coordinate their activities so say you have a novel goal that comes along and I know this is this will sound a little bit agentic um but say you have a novel of all that comes along you want to solve it and you think you sort of have the capacity to do that with you know with the toolbox of AI narrow AIS that you've got but you know you've got to pick and choose you've got to kind of coordinate them a bit you've got to get them to do the right things you've got to get them to talk a little bit to one another so that the findings and results out of one then are you know output out of one are input into the other and so forth and right now it seems like if you were to take advantage of narrow intelligence in an organization human beings would be doing that process they would be okay well I use this tool here and that tool there and that tool over there as much as we do with any other tool and then you sort of you know outputs from water inputs to another and you get your results that you you accomplish your goals that way and I'm wondering what at what point do we create another layer uh a coordinating or managerial or executive kind of function layer that whose job it is to facilitate that coordination to make decisions and to you know why are things wire things up together um uh and uh how many well okay well that's the first thing and then I suppose the second thing is when and where and how does that cease to be distinguished distinguishable from an agent
[Music] yeah maybe you'd be interested to know how the Cutting Edge of Robotics looks so The Cutting Edge of Robotics as far as I understand it is that they take large language models and large Vision models they're trained separately and then uh those various demos of this I think from Google and elsewhere so these models are kind of together joined to form the brain of the robot who's you give it a task for example go get me a drink from the fridge the language model is then sort of given this sentence and asked to figure out what it means and and how those ingredients combined together to solve the problem to perform the task but the the robot has to see in navigating the real world so the vision model is kind of mapping those objects in the real world onto the internal language that the language model understands and together they're coordinated by at this point a human written piece of code that glues them together and that together forms the kind of agent uh so yeah that is what you're describing is the present day of how cutting-edge robotics Works um there's no reason to think it that a handwritten integration component won't at some point also be learned the intermediate intermediation layer would be the kind of internal language that both models have sort of learned to communicate in in this case the the dictionary of tokens that Transformers understand but yeah that hasn't happened yet I think but certainly you would expect that layer to itself be learned at some point so I find that very plausible here what what kind of data would that layer be learning from do you imagine well we're trained on it would be trained on so you could imagine a kind of reinforcement train learning Loop where the the vision and the language model components are pre-trained and fixed and then you would have an agent whose job it is to kind of coordinate those two so sort of translate from one to the other and and use that coordination in order to achieve some goal then you would give it rewards based on whether those goals are achieved or not in simulation most likely so that would be how you would train that outer loop by reinforcement learning so you'd have these kind of components which may have something in common they may have been jointly trained to some extent uh but you would you would yeah reward an agent for their ability to communicate or facilitate communication among those components and integrate them to achieve some kind of goal interesting okay cool that's interesting because I mean when
you're describing that then it sounds like a lot how I imagine the human brain to work yes um and like in the case of a robot we're dealing with a kind of a kind of system that has a kind of body so it's sort of easy to imagine how you know like if you wanted to train it to move boxes around the factory you can just get it to do that thing um and you you have some sort of reinforcement process and um eventually it learns to do that but um yeah I suppose maybe it's a little harder for me to imagine how that would happen [Music] um you know I suppose you need to define the sort of generality of the sort of the problem domain that you would be looking at but um yeah I mean I suppose it's the same principle isn't it yeah it's a source a question of difficulty so reinforcement learning is a very difficult learning problem because I take the example of a robot so if the task is fetch me something from here or there or move a box from there to here that's made up of many individual subtasks and the reward is very sparse so you might have to take thousands or tens of thousands of actions before you get a signal that says you did it correctly or not which is why this early progress and things like alphago with reinforcement learning really didn't go very far so deep reinforcement learning has not stalled exactly but it's not the place where rapid gains are being made which is why now people are returning to these problems but with the Arsenal of these large pre-trained models that are much easier to train so it's much easier to train just the piece that understands your instructions and maps that to some kind of internal language and the piece that knows how to see and then have like a separate Loop which on top of those can then learn how to perform instructions that you give it and that seems to be the model that uh people are moving forward with very rapidly in robotics so yeah it does sound a lot like I mean it it it just intuitively it it like Owen said it really does strike me as as uh concordant with how I imagine my own brain working um uh you know the the the yeah I guess the the I suppose this is an older domain of you know reinforcement learning's been around a while I'm sure this is well trodden territory but um uh you know when a person has a goal and doesn't know how to achieve it there's a lot of trial and error you know it's pretty clear when you you know um when you don't succeed in a goal uh it's it's sometimes less clear uh whether you're making incremental
progress towards successfully achieving a goal or not I mean I mean sometimes that's clear for certain tasks you you can tell oh I I am inching closer to being successful at this but there are other tasks where you know you're you're you're it's really hard to tell if you're getting if what your if your strategy is working at all like is it is it you know in other words I think there are there's a difference between between goals to which there's a clear continuous gradient of you know that you can Traverse and then there are goals where it's it just it just seems like the you know you have to take big leaps across obstacles um uh you know and and um you know the actual success May lie a few peeks away and may not even be visible at all from where you are and I'm I'm not completely clear how um how that maps to you know to to machine learning and human beings seem to be able to do some of those leaps with the with the help of imagination and you know dogged trial and error and so forth but very rarely no I mean you're talking about incentive design and the whole problem of management and organizational persistence and this is some of the hottest things that humans do right no no we we are terrible at this uh yeah this is a very hard problem and likely to be one of the key places where human are contributing to AI systems until it becomes completely automated specifying loss functions and how to provide reward in a way that's informative you know when when we're inside organizations this is what we interact with right our incentive system what gets me a promotion there's a list of stuff tick or not uh who designs that well whoever designs that incentive system the loss function that we're kind of optimizing for if we're careerist conformist people uh that that loss function determines you know everything uh very very careful design is necessary to engineer the outcomes that that you think are desirable so yeah this is a hard problem I don't it's hard for people it will be hard at the beginning for machines uh But ultimately they will learn this part as well as is already I mean that's kind of what's happening when you provide sparse rewards to a capable system so um internally deep reinforcement learning system is somehow figuring out how these sparse rewards like win or loss translate into signals that are more appropriate for the components that make up the algorithm we don't understand how that translation really works but that's clearly what's happening at some level
inside it's it's learning how to you know in an organization it would be something like the executives their job supposedly is to translate increases in sales like that signal what what's responsible for that signal how do you translate that success or failure into actions internally how do you map the external signals onto the internal incentive system that's um you know why they're supposed to earn the big bucks I presume yeah supposed to it I fully believe that there's deep competence in uh doing that yeah okay there's a lot of that competence is is well I mean this is another funny thing is that we're you know we reward systems get get corrupted and you have you know multiple layers of reward and hidden incentives and you know all the game theory kind of good fun stuff um and you see it in all you know big corporations Wall Street is famous for it for you know rewarding all the sorts of the wrong things and yeah um uh but yeah you know how do you how do you solve a problem that you just that you've never encountered before that's really hard that you don't that there isn't a clear sequence of steps that that will get you to success you know I mean you know yeah this is very you know like how do you do research how do you do r d on a new uh you know to develop a new uh to build create new knowledge or to develop a new product or service and and figure out how to actually do that successfully it's super super difficult and a lot of you know a lot of leaps of Faith involved and and um I guess yeah leadership Executives and men as well they are making guesses uh you know hoping that that we'll I guess ultimately you're exploring some possibility space right and and uh as you as you've Bumble and flail around blindly in the dark and try to remember what you where you've been and what you've encountered just trying to make progress but yeah and sometimes it's tough for heart problems super super duper tough um anyway yeah shall we segue into the um advertised topic yeah sure sure uh okay let me just take a look at this board and see where we were up to uh Adam could you move over here so I can get the old cam looking at this uh all right so we were talking last time about three kinds of timelines short medium long uh and three groups Insiders Elites by which I have in mind non-insiders so not tech people not people directly involved in building these systems or funding them but the uh the players who will seek to uh well yeah I guess there's a list there I'll
And then there's the public: everyone else. We discussed last time, fairly extensively, the view of the insiders, which arguably, for some subset, has already moved to short timelines. We finished up talking about elites, who have maybe shifted from long timelines, or agnosticism, towards perhaps medium timelines: thinking that we'll have general AI in something like two or three decades, as compared to hundreds of years. That's where we left off. I'm not sure if we have more to say about elites; I think we want to spend some time talking about the public category. Did you have more you wanted to add to the discussion from last time, anybody, about elites and the medium timeline? I think we finished talking about the chip war, and I see a question, "why not more?", so maybe that's a good leaping-off point. The shift in elite opinion towards something like a medium timeline is somehow very recent, and still isn't really consciously stated. I think it's a kind of latent view that you can read in action, but that maybe hasn't reached conscious thought in the collective intelligence of the political and economic elites. I thought of one point of comparison that's maybe interesting, which is UFOs, or UAPs, or whatever socially acceptable acronym people use. That topic has gone through a very interesting kind of phase transition over the last few years. It's not that there's really new information (there's some new confirmation from more reputable sources, but it's not like there's actually that much new to know), yet there's clearly a big shift in elite opinion, and in willingness to state that opinion. Now it's kind of the mainstream view, as you'll read in the New York Times: intelligence officials will say it, military officials will say it, there are congressional hearings, and the line is that there's definitely something going on that isn't weather balloons. They don't say it's little green men, necessarily, but there's something going on, and it's not crazy to talk about it anymore. So it's not that the facts on the ground really shifted over the last few years; it's just that the Overton window has moved a long way. I think we're not quite there yet with AGI. If you were to stand up and write in the pages of the New York Times, in a very straightforward way, "get ready, the machines will be here in 20 years", maybe that would still be viewed as a little fringe.
I don't know, would you agree with that perception, Adam?

Yeah, I think that's correct. I think the elites are in a different situation than the tech insiders, in that their positions of leadership, authority, power and influence, across different measures and dimensions, are at least in some sense contingent upon public approval. So what the layer of elites is able to espouse explicitly is contingent upon some sort of public social sanction, and there are certain things that remain taboo, or that will likely invite ostracism and ridicule and so forth. So even if the elites hold private beliefs, they aren't able to espouse them publicly, and so they aren't able to act on them in an overt manner. We've seen this in other domains; disruptions provide instructive examples. We talked last time about how the cultural, political and economic elites can be informed by knowledge from insiders, but then the social pressures force them to be conservative, to adhere to a conservative consensus rather than going out on a limb and risking their reputation. There are lots and lots of examples of this; we mentioned some last time. One I'll mention again very briefly, as a reminder, is the climate science community, which is notoriously conservative in its estimates. There's a great fear of being alarmist in that community, because of the political attachments to the issue. And so for the last generation, 15 to 20 years or more, the climate science community has been very cautious about making claims, and will only make, and stand by, claims for which the evidence is overwhelming. As a result there's a consistent pattern that emerges, which is that the delta, in other words the derivative of the projections going forward, only ever gets worse. Because the consensus only defends claims for which the evidence is overwhelming, and the incentive is to avoid alarmism, that consensus only ever underestimates the risk.
It always errs on the side of, not optimistic or pessimistic claims exactly, but less climate change (that's the phrase I'm looking for): less climate change and less impact, rather than more climate change and more impact. As a result, if you look at the delta, the change over the last 20 years, the predictions for climate change only ever get worse, and the scenario that emerges is usually as bad as, or worse than, business-as-usual or the worst-case scenarios. It never seems to go in the other direction; there never seems to be a shift to "oh, it's not as bad as we thought, guys". That never seems to happen, and that's because of the nature of the consensus and the incentives around it. You described some other examples last time. So this elite layer is severely constrained in terms of its incentives and the consequences around being completely forthright. Not honest versus dishonest, just forthright: fully disclosing. And disclosure, that's a word related to UFOs and UAPs, right? I think there's something called the Disclosure Project that's about getting the government to admit things. At any rate, this is definitely a pattern, and I think historically we've seen that the public has to, on some level, grant permission to move the Overton window, so that certain conversations are allowable, certain claims can be made, and more possibilities can be entertained seriously. Right now it does feel like the public is just not near that point with the short AGI scenario. My perception, and again this is subjective, is that the public is not even close to that yet. Maybe the public is inching from long towards medium, and a little bit of this perhaps is helped by science fiction, Hollywood stuff here and there, but it's all still treated as if it's a long, long way off: a remote possibility, just science fiction. The public does not, to my eye, seem to be taking this at all seriously on a ten-year time horizon. And until the public does, I think we should expect very few elites to stick their necks out on this.

How was Andrew Yang received? Andrew Yang was a presidential candidate, was that in 2016? Am I getting the name correct, Adam?
You know, it's Andrew Yang, and it was 2020's election.

It was 2020, okay. So he was campaigning partly on a platform of caring deeply about automation and its effects, and he was explicitly talking about a universal basic income, UBI. I guess I'm interested in your take on how his message was received, but more generally I'm reaching for the point that elite opinion seems to shift partly through this kind of political entrepreneurship, right? You'll have figures who stand up and float a trial balloon, to see if the public will follow them out on a limb and see something as important that maybe they didn't see before. That seems to be how major parties often shift their positions, or how the Overton window shifts: you get these entrepreneurs who crash and burn 90% of the time, but sometimes, when the time is right, they'll be the first to notice, because they won't crash and burn, and then things happen. So actually, one of the things I wanted to put up is that among the harbingers of the elite shift from medium to short would be the emergence of entrepreneurs of that kind. Maybe I'll do that while you give your reaction.

I think that's a great point; I completely agree with it. In my mind, Andrew Yang was sort of a tragic figure, a very unfortunate, terrible loss, certainly for the United States, if things had been just a little bit different. The tragedy with Andrew Yang specifically is that he was one of the first people in a long time to run on an entirely pragmatic platform, really about problem solving. He's not an engineer, but he brought that kind of mindset to the task of policy making, which was an enormous breath of fresh air for a lot of people. Not for everybody, but for a lot of people. The thing that was tragic about this particular example is that he was simply not the right person to have that message. He just wasn't quite successful enough, or famous enough, or experienced enough, in any of the key ways that would have put him over the first big obstacles that he encountered. He did remarkably well considering he had minimal experience and notoriety. If, for example, Bill Gates had run on that identical platform, with the identical messaging, he would have gone a lot further.
He has an unassailable track record as a technology person; for all of his flaws, and yes, we can criticize him and so forth, the American public would have looked at someone who was monumentally successful in technology very differently than someone like Andrew Yang, who just didn't have an impressive enough reputation. Similarly, if a very seasoned, very famous politician had come along with the exact same message. And I'm not even talking about somebody super famous like Obama, just somebody with a lot of political experience, maybe somebody who'd been a contender at some point in American politics; we have people like that in every election cycle. You could think of someone like Pete Buttigieg, who is a reasonably successful candidate, probably in the running for a presidential election in future years. If someone like him had come along, someone who could be taken entirely seriously as a politician, with the same message, it might have landed entirely differently. And then the last piece that makes Andrew Yang's case tragic, I think, is that he dropped out of the race only a few weeks before the pandemic really got going, before the first wave of the pandemic emerged. If he hadn't dropped out, his message about universal basic income would have resonated completely differently, because we basically did have a UBI in the United States for about a year, or something very close to it. The federal government started sending out checks to a significant fraction of the U.S. population. I don't know the exact numbers, and they weren't equally sized checks; they ranged from about $500 to about $2,500 per month, depending on circumstances and family and so forth. I received some of those checks; they really were going out to a wide range of people with different incomes. The original plan was for a few months, and then it got extended multiple times, and I think the program ended up being a year or more long. If Andrew Yang had stayed in the race another month, all of that messaging about a universal basic income would have been relevant in a completely new way.
The timing of it was just tragic. So it could have gone very differently: there was an opportunity there for a better candidate, and maybe a little bit more luck, to completely change the conversation about technology and disruption and automation, perhaps even about AGI, and certainly about policy responses like universal basic income. But that opportunity seems to have been missed, and we haven't had a new candidate like that; who knows if we'll get one. But I agree that there is potential for entrepreneurs to occasionally give the entire system enough of a shock that it really does instigate a shift. Having said that, I think we do need to be very wary of the heroic-figure theory of history, where we attribute the trajectory of human history to the key actions of individuals. We probably do too much of that. I'm not saying it never happens, but we instinctively point to individual actions by individual people, and we attribute, let's say, heroic causation, and I think we probably do that too much. There's a bit too much anthropomorphizing; more of it is system inertia and system behavior, and less of it is about individuals really making the entire system shift. But having said that, I don't dismiss that it happens at all. I think it does happen; we just probably over-attribute the direction of human systems to the actions of individuals. So anyway, that's my general take on it. And bringing it back to this conversation: I absolutely agree that we seem to be in a fertile time for one or more new figures to emerge, who then gain traction in shifting the Overton window of the public, and thereby permit the Overton window of the elites to move, supported by the authority of the experts at the insider level. The timing and conditions seem right. And I suppose it's interesting to ask the question: is there anything that we can do to advance this process? In other words, is there some campaign of awareness creation that could be mounted that would be beneficial to humanity, in making that shift happen sooner rather than later?
Well, I work for an organization that believes that's possible, and many universities nominally believe that too.

I suppose they do, if they could only achieve such things. Could you move over to the other board so that I can take a look at this? I wanted to add another name into this mix as a kind of prediction, so that in a few years you can congratulate me on how clever I am: Timnit Gebru, who I've mentioned before. This would be my prediction for a political figure to emerge on this topic within the next few years, with a wide platform and a significant voice in how these things turn out. We don't have to talk about her again, but, looking up a bio before this meeting, she was on Time's 2022 list of the 100 most influential people, for whatever that's worth. So maybe that's already happened, but this is someone you will hear more of, I think. Eric Schmidt, the former CEO of Google, is also very influential behind the scenes. I don't know to what degree you'd attribute the L-to-M shift among the political elites in the US to him specifically, but I think a big chunk of it could go to him. I think he's very active in government and military circles, explaining what's about to happen. I haven't paid much attention to him, but I know there are some books that go into more detail about that; he seems to be another figure to watch. Okay, so that's one of the signs of the medium-to-short timeline shift for elites. I wanted to just rattle off a few things that I think might make a difference, or punctuate the equilibrium a little bit, just as a kind of conversation starter. I think progress in robotics, which is likely to accelerate partly for the reasons we were discussing earlier, could matter: as more general robotics moves out of the sci-fi column into the column of things we actually see out there in the world, particularly in more varied forms of manufacturing, this could show up in GDP numbers and be something the elites care about. All of these, I suppose, could also go in the column of things that the public will pay attention to, but to begin with, it's not as if the average member of the public cares so much about what goes on deep in the bowels of the manufacturing system.
Not in the way that political elites, a congressman or whatever, will pay attention to what's happening in their districts, and what moves the needle on employment figures and so on. So you could imagine elites caring about this more than the average member of the public, for sure. Major breakthrough medical treatments, I think, are another thing that's going to cause people to pay attention. Progress on things like fusion, which is perhaps already happening; it's a little hard to tell what's hype and what's not. New materials is an easy one to forecast: we're going to see a lot of progress in novel materials. I don't know specifically whether it will influence the yields on things like carbon nanotubes, but that kind of thing, where you're getting new materials at scale that enable new kinds of applications, will be one of the signs that things are accelerating. On a similar note, I'm reading quite a few papers making use of AI to engineer new and exotic kinds of quantum states, highly entangled states with novel properties. That won't influence the economy, I would think; at least on short timelines it won't make much of a difference, but it will be one of the research areas that gets people excited over the next few years. And I suppose there are military applications. What other triggers do you think exist, Adam, or anybody else? Because according to our reading, this hasn't happened yet, right: elites, or some significant minority or a majority of them, moving from medium to short timelines. Short meaning within 10 years of today as the median prediction, the point estimate; that puts 50% of the probability mass in the next decade, to be clear. So a short timeline doesn't mean it's definitely happening by the end of the decade; it means it's almost surely happening within the next two decades. You could be more aggressive than that, of course. So, what other triggers are we missing from this list?
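(Illustrative aside, not part of the discussion: a toy calculation of what "median 10 years" can imply. The lognormal shape and the spread value below are purely hypothetical modelling choices; nothing in the conversation commits to either.)

```python
from math import erf, log, sqrt

# Assume, purely for illustration, that the arrival time T (in years) is
# lognormally distributed with median 10, so mu = ln(10). The spread
# sigma = 0.5 is a hypothetical choice.
mu, sigma = log(10), 0.5

def p_within(t):
    """P(T < t) for a lognormal: standard normal CDF of (ln t - mu) / sigma."""
    return 0.5 * (1 + erf((log(t) - mu) / (sigma * sqrt(2))))

for years in (5, 10, 20, 30):
    print(f"P(arrival within {years:2d} years) = {p_within(years):.2f}")
# By construction, 0.50 at 10 years; with this spread, roughly 0.92 by
# 20 years, which is the sense in which a short timeline means "almost
# surely within the next two decades" for a moderately concentrated
# distribution.
```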
The team I'm in was talking about some of this, not purely related to artificial intelligence per se, but as a general phenomenon: when is there a moment of realization for an individual person, of really grokking the reality of a new technology, realizing that it's going to make a difference in their life? A lot of times this goes back to individual experiences of exposure to some new capability, right? For many people, we can remember the first time we had a smartphone in our hand, what that felt like, or the first time we took a picture with a digital camera, or the first time we actually got online, on the internet. In previous generations, people could remember the first time they flew in an airplane, or the first time they rode in a car, and they just went: oh wow, the world will never be the same now, and my life is recontextualized completely by the experience of this new technology. So we could ask: what technologies does AI touch that are likely to create an experience like that? A very obvious one, to my mind, is one form of robotics, which is self-driving cars. Once a person gets into and takes a car ride where there's not a human driver, and it's a pretty seamless and fluid experience, I think that's going to be transformative for public awareness. It's possible to take these kinds of rides in a handful of narrowly geofenced and weather-permitting kinds of areas; you can technically do this in a few places, but we have not hit the critical mass of that yet. Presumably that moment will come, and some large number of people will have that experience. I think we're already there with electric vehicles, where enough people have ridden in or driven an electric car and just said: oh wow, this is the future of cars, all cars are going to be electric now, I get it. So electric vehicles are an example of the same sort of phenomenon, and I think self-driving vehicles will do that for the artificial intelligence component. The ability to create your own art and music, I think, will also be transformative. We're sort of just getting there now, so not many people have had this experience, but once it becomes more widespread, and we realize that each of us can generate this content, I think that's likely to be quite transformative. People who have had that experience have come away thinking it's really amazing, and so I suspect that's the case. And, oh gosh, I had one other thought here; it'll come back to me in a minute. What was my other example? It's slipped my mind at the moment; it'll occur to me in the next five minutes and I'll bring it back up.
Just on the note of DALL-E and generative art: it's interesting the way it's already become a part of the metauni community, actually. There are four or five of us who have access to it, and pictures are often thrown into conversation, just casually. It's hard to think of an analogy, really. I guess that's how memes work in general, but it's not that you easily produce those; here you can just be chatting, and then make one of these images and throw it in. It's quite interesting. And there's a class running at the moment on category theory, taught by Billy, one of the people doing the most behind the scenes to make metauni work technically, and Will, who's a PhD student of mine. In the talk he gave on Monday, as a way of illustrating one of the conceptual points, he'd noodled around a bit with DALL-E and made some images, which he then shared. It's a new texture in human experience, and it's very interesting to watch unfold.

Well, it is. In a way it's like waving a magic wand; it's pretty amazing. And as with many things, it was anticipated by Star Trek. There's a Star Trek: The Next Generation episode with some interesting social commentary about it. It's not a great episode, a largely forgettable episode, but there is a moment where the crew of the Enterprise encounters a culture, I don't remember the exact details of it, in which a child is using a tool that allows them to carve, out of wood, anything in their imagination. Anything they can picture in their mind, they use this tool and it just carves it for them. This child effortlessly carves a dolphin, and the Enterprise crew criticizes this as cheating, basically, because it doesn't require the development of any skill; it just bypasses that. There was some interesting social commentary around that. And it's funny, of course, that this was 30 years ago, in the early 1990s, maybe the late '80s, I don't know exactly which season of The Next Generation it was, but it was fairly prescient. Now here we are, and we have very, very similar capability. I think in the episode it was literally a tool that looked like a magic wand; the child just pointed it and it carved this thing.
And that's a bit what it feels like, for people who've used it. I suppose, metaphorically, people must have felt like that the first time they had a digital calculator, right? You just magically produce these results, effortlessly. And people spoke derogatorily about how that crutch was going to hobble people intellectually: "you're not always going to have a calculator in your pocket". Of course, now we do, and it probably won't be long before they're better integrated, closer to our brains. But at any rate, it does remind me of that. And I did think of my other example, and it's this: I think when people engage with a genuinely useful personal assistant, that will be a transformative experience. My understanding is that this is quite close, in lab environments, now. You can use Siri and Google right now, voice-activated assistants, and they can do quite impressive things; with your typical "OK Google" kinds of questions and commands, you can get amazingly good results. It's extraordinary what capabilities are there. But the quality of the interaction is not quite there: it's not fooling anybody that this is something that might pass a Turing test, right? But I don't think we're that far off. The next generation, if it were rolled out based on the new language models and the things that are being demonstrated in labs right now, which probably require too much horsepower to deploy to millions of users simultaneously... those are coming, obviously. And I think that when you can properly have a conversation across a wide range of topics, and ask questions in a fluid, natural-language manner, and the personal assistant can keep track of a complex chain of thought and multiple ideas, and not get lost, and remember things you said two minutes ago, which it can't do right now, that's going to be the transformative experience for most people, when they encounter it for the first time. For the policy makers, yes, fusion and military applications, those are going to be profound.
For the public, it's when they get in a car that can drive itself, and when they can talk to Google in a completely natural way and have a conversation where it's not making dumb mistakes, it's remembering what you said a couple of minutes ago, and it's basically following the plot. In my mind, it won't take that many people having those experiences to really shift the Overton window, to shift the perception from the long to the medium to the short timelines.

I'm curious. I guess we're jumping to the public version, and that's fun. I can see all of those things moving people to medium timelines, but I feel like the takeaway from all of those things has a rather different feel to it than what short timelines feel like, because all of those things sort of feel like "computers, just better". A car that drives itself, or generative art, or a personal assistant: it feels like Star Trek, I guess, but the AI in Star Trek is not really an AGI. The computers in Star Trek are just the kind of things you imagine computers doing if computers were really good and could actually do the kinds of things you naively expect computers to be able to do. I don't think it's automatic that that shifts you to really believing in AGI, as opposed to... I'm not able to put a finger on the distinction, but I feel like it raises your estimation of pervasive, very useful computation, but doesn't necessarily trigger a kind of existential angst, a fear of the unknown, of the alien mind about to change the nature of your social and economic reality. Those things seem separable, and they will likely be separated, in the sense that if you're making self-driving cars or any of these other things, you don't want your marketing to make people feel that your products are the cutting edge of a blade that will slice their world in half.

Yeah, it's interesting. Sorry to lean on Star Trek as a cultural touchstone and harbinger, but it was interesting that both the original series and The Next Generation, and I presume the subsequent ones, which I don't know as well, always distinguished the computers, which were, as you said, very sophisticated. The main computer on the Enterprise: anybody could talk to it at any time, and it was pretty conversational.
It had a janky voice in the original 1960s version; I think, for the benefit of the viewing public, it would have been a bridge too far to get them to believe the computer could speak completely naturally. But by The Next Generation, in the late 1980s and early '90s, the main computer on the Enterprise is completely conversational. There was always the idea, though, that it wasn't sentient, wasn't an agent; it did not have any goals of its own; it functioned basically as an oracle. It was very capable, as you said, but there was always a distinction between that and an AGI that was genuinely self-aware, sentient, alive. A lot of hay was made about that with, for example, Commander Data, and there are some fantastic episodes covering the question of personhood and that sort of thing. But there was always a distinction between those two ideas: between a general intelligence that was self-aware and sapient, and one that wasn't. And I suppose my three-category schema is just borrowed; I should probably be citing Star Trek rather than any of the academic papers or popular books or anything written recently, because I'm sure it goes back to that. It's funny, now that I think about it. Having said that, even as a kid I always found that a bit ridiculous, and I lumped it in with other things that were just conceits of the needs of storytelling, like the technology of the transporter: it doesn't make sense, and half the things in the show don't make sense in the context of that technology. A lot of things make no sense at all if you have perfect matter-energy conversion and assembly and disassembly and so forth, but you just accept it as a conceit. And so it never made sense to me that a computer could be powerful enough to do the things the main computer did on the Enterprise, and yet not be self-aware, and not completely transform everything about society. So I suppose that until we have it in front of us, you're probably right: there's no reason to expect, naively, that people will be concerned about this. They will simply accept it in the same way that we accepted the distinction in old episodes of Star Trek.
Computers can do these things, but they're not really alive, they're not really self-aware, they don't really have priorities and goals, so they're not going to change things that much. I suppose the cynical view is that position: that the public won't shift all that much. I don't know. And at the policy-making level, we should talk more specifically about the military applications, because drones are only one; I can think of a number of others, and those can have a profound influence on the elites.

Yeah, let's do that. I think one of the most noticeable outcomes of the Ukraine conflict, actually, is the way the discussion around this has shifted. Before that conflict there was the earlier conflict in Azerbaijan, if I'm getting that right, where drones played some limited but potentially decisive role. I haven't paid enough attention to form a judgment on that, but it was certainly described that way by some observers. And there's a lot of activity in the Middle East, the attacks by Israel on Iranian targets, for example. Of course, drone warfare is at this point decades old; the U.S. military has been making extensive use of drones all over the place for a long time. But these are largely remote-operated, and automated to some degree, but not, I would say, very large consumers of advanced AI, in my understanding. Even for small drones, though, with auto-stabilization and automatic target tracking and the ability to move over long distances and correct for wind, there's an increasing amount of AI going into these systems, and eventually they will be largely automated. This seems to be really picking up pace over the last few years. But it's not clear to me that there's going to be some decisive moment there, in terms of shifting the elites to a short timeline.

Well, one thing that's happened is that science fiction and popular entertainment have done a pretty good job, for a very long time, of imagining some of these dystopian military applications of artificial intelligence. Again, Star Trek, of course: the original series had episodes, and The Next Generation definitely had episodes. There are a couple of quite funny ones in the original series. There was one where the Enterprise encounters a world where all the conflict is fought by computers.
The casualties are just reported as the results of the simulations, and the populations go and incinerate themselves accordingly, so that they don't get the infrastructure damage. Crazy, but hilarious, and funny social commentary about the absurdity you could get to if you were letting AI fight your wars. The Next Generation had an episode about drones, where a drone program was demonstrated and then wiped out the entire planet's population. The Enterprise crew gets there, and they finally realize that to shut down this drone system they have to agree to the sale; so they say "okay, we're sold, we'll take the system", and then of course the demo shuts down. And then, famously, there's WarGames, the movie where the AI is running the Cold War nuclear simulations. So there have been some very famous treatments of this, and I think public awareness is there, and it permits the elites to think about this stuff seriously. In my mind, robotics and drones, yes, those are important, but I'm thinking of the AI applications more in strategy and planning, and in actual command of things. That strikes me as something that has the potential to have quite a profound impact on policymakers. Because if you have AI that can do the planning and the war fighting, figure out where and when and exactly how to deploy things efficiently, and actually succeed, basically playing StarCraft but in real life, in a way that no human possibly could, well, you'd be unbeatable, right? That would be an enormous strategic advantage, if you could not be defeated except by another AI strategizer.

I think that kind of presumes that a large proportion of the military force is itself robotic or automated. I don't have a good mental model of how AI planning really makes deploying existing military assets much more efficient. I mean, is it really the case that an AI would just be much better at placing your battleships and your aircraft carriers?
Maybe, if there are a lot of automated systems on those platforms, like automated defense turrets or some kind of drone defense umbrella. Once the complexity of the systems becomes very high, I can imagine there are combinations that just aren't intuitive to a human, which makes sense. But my mental model is that you need a lot more complexity and computation and automation in the battlefield itself before the planning part really gives you an advantage from AIs. I don't know what you think about that.

Yeah, fair enough. But that's quickly approaching, of course. Oh, one thing occurs to me that we haven't mentioned, and it's an interesting example: finance and trading.

Oh yeah.

I know that there's been algorithmic trading for a long time. I remember dabbling in those ideas with friends in college, long ago, twenty-plus years ago, and it was definitely something people were thinking about. Algorithmic trading has become, obviously, a big deal; it's very popular. But what hasn't emerged, apparently, is an AI that just mops the floor with everything and simply wins, and I'm actually a little surprised by that. I heard some rumors that Google had developed some approaches that were shockingly successful, too successful to deploy, because they would have been dangerous to actually implement (I'm trying to avoid using the word "disruptive"). I don't know if that's true or not, but frankly I'm sort of surprised that it hasn't happened.

Well, I personally know somebody who's employed to write deep RL algorithms for a Googler who's moonlighting as a hedge fund, more or less; he wants to start his own investment company, and he's paying a friend of mine to write the AI. It's moderately successful, but like all these things, it will fall over in some way and lose all your money. From talking to a few people who are in the field: many people can come up with very impressive demos, especially on historical data, and maybe even on some limited live deployment, but then it's the out-of-equilibrium, very sharp transitions that lose all your money.
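(Illustrative aside, not part of the discussion: a sketch of exactly that failure mode, with entirely synthetic data. The return model, the trading rule, and all the numbers are invented; the point is that a rule tuned on one regime bleeds money when the regime flips.)

```python
import random
random.seed(0)

def returns(n, momentum):
    """Synthetic daily returns with autocorrelation `momentum`
    (positive: trending regime; negative: mean-reverting regime)."""
    out, prev = [], 0.0
    for _ in range(n):
        prev = momentum * prev + random.gauss(0, 0.01)
        out.append(prev)
    return out

def backtest(rs):
    """Naive trend-following rule: hold tomorrow whatever sign today had."""
    return sum((1 if today > 0 else -1) * tomorrow
               for today, tomorrow in zip(rs, rs[1:]))

in_sample  = returns(2000, momentum=+0.3)   # the regime the rule was tuned on
out_sample = returns(2000, momentum=-0.3)   # a sharp transition to reversal

print(f"in-sample PnL:     {backtest(in_sample):+.2f}")   # comfortably positive
print(f"out-of-sample PnL: {backtest(out_sample):+.2f}")  # the same rule bleeds
```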
That's the problem with any trading strategy, of course; I'm not saying anything original. And it's not so clear that algorithms that learn from large quantities of data and see signals in them should be expected to be particularly good at trading; trading isn't only about extracting signals, right? So I'm not familiar with any public examples of highly advanced AI being used, but I assume they're out there, and there are probably a hundred of them, so it's not as if anyone has a decisive advantage. But I can assure you that plenty of hedge funds and individuals have tried their hand at making Transformers do trading, for example; there are published papers about it. I guess nobody's made it work decisively. Although who knows where, say, Renaissance makes its money at this point; it could be 20 Transformer models doing the trading, and that would absolutely not surprise me at all.

This raises another question, which is that in many of these applications there is an incentive to stay in stealth, right? There's an incentive to be opaque rather than transparent. The motives and incentives are to keep information from the public, and that runs directly contrary to the process we were talking about, of the public becoming aware sooner rather than later, as a benefit. At least, I think that's implicit here, and we can ask the question explicitly. Implicitly, I've been assuming that it's better if the public realizes the imminence of AGI (at least, it seems imminent to me) and shifts at least from the long to the medium scenario, if not from the medium to the short. And I think that would be beneficial even if we were wrong about it. I actually think it would be beneficial anyway, because we would adopt a different posture, and make investments and decisions differently, in ways that would overwhelmingly be to the good. But that's an assumption; we haven't defended it.

That's because we're naive, idealistic dinosaurs, Adam, who actually believe in the Enlightenment and boring old stuff like that.

Yeah. But we obviously have to admit that not every example of a trigger is something for which there are incentives to be as transparent as possible.
For self-driving cars, obviously, there are, and for these other sorts of services that would be deployed to market and would be very profitable as soon as the technologies are working. But for some of them, obviously military applications, some of the finance applications, and maybe other ones we can think of, it would be very advantageous to be secretive about this, to keep it as either a competitive advantage or a strategic advantage or something like that. And one could also imagine examples of technologies that are simply too dangerous to allow out into the wild at all; there have been ample examples of that: nuclear weapons technology, and some biotech, the bioweapons and bioterrorism applications.

That's a good example, yeah. I hadn't thought about synthetic biology; that's another area which is already seeing rapid gains from this kind of technology. Maybe it's worth distinguishing slow, steady progress here, because slow, steady progress almost by definition can't shift you from medium to short timelines: short timelines means ten years or so, and that's not slow, steady progress. So let's be clear about how radical, transformative, disruptive and crazy short timelines are. Since we've been talking about it for a long time, and we've sort of gotten used to the idea, it's easy to underestimate just what a storm it will be if elites, let alone the public, shift to short timelines. It will be a complete mess. Maybe transformative good, chaotic good, but a big mess. It will be very significant if people shift to short timelines; it means prices will do crazy things, right? Expectations. Okay, to take one example: I have a 30-year mortgage. What the hell is that about? Why is anybody getting mortgages anymore? Go get a mortgage for a five-million-dollar house; in 10 years, well, there's no such thing as money if you believe in short timelines, who the hell knows what money is worth. There's some sense in which, if you believe in short timelines, you should probably go out and get a super huge mortgage and invest it in land, because land will be about the only scarce thing left, and you'll just make megabucks by doing that.
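(Illustrative aside, not part of the discussion: toy arithmetic behind the mortgage remark, with invented numbers. A fixed-rate mortgage is a fixed nominal payment, so if incomes or asset prices grow fast under a short-timelines scenario, the relative burden of the debt collapses.)

```python
# Standard fixed-payment (annuity) formula for a hypothetical mortgage.
principal, rate, years = 1_000_000, 0.05, 30
payment = principal * rate / (1 - (1 + rate) ** -years)
print(f"fixed annual payment: {payment:,.0f}")

# Compare the year-10 burden of that fixed payment under different assumed
# growth rates for incomes/prices: business-as-usual versus the kind of
# growth a short-timelines believer might expect.
for g in (0.02, 0.10, 0.30):
    relative_burden = 1 / (1 + g) ** 10
    print(f"growth {g:4.0%}: year-10 payment feels like "
          f"{relative_burden:.0%} of today's")
# At 2% growth the payment still feels ~82% as heavy after a decade; at
# 30% growth it feels like ~7%, which is the (heavily caveated) sense in
# which "go get a huge mortgage" follows from believing in short timelines.
```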
So, stuff like that: it's really not a small deal to believe in short timelines; all the things downstream of that belief are very disruptive. I think finance is a good example. So what would be a kind of sudden indication that stuff is getting real in finance? It might be something involving the hyperscalers, by which one usually means Amazon, Google, Facebook, Microsoft, and to a more limited extent Apple: people with thousands of GPUs at their beck and call. If someone cracks it, and with a thousand GPUs you can make predictable returns at very high rates on the market, and we see that happening, I think that would certainly move elites to short timelines. If it can be done in finance, in markets, which are viewed with such reverence, then people will conclude that it's going to happen everywhere else. That seems like a pretty good indicator. Maybe we're already there, actually, but certainly you could expect that before it arrives there will be indications that take the form of strange market behavior. I think that's a good example.

Yeah, I think that's a good example too. And I think we should probably distinguish two, maybe, stages or degrees of awareness. At least when I model this in my imagination, I see two degrees. It's one thing to entertain an idea seriously, but still only intellectually; it's another thing to so fully grok an idea, onboard it, and internalize it that it actually has an impact on your behavior. And then maybe there's even a third degree as well, where it totally changes your worldview, reshapes all your priorities and all your values, changes how you see the entire world, and really alters your behavior. So there might be a tail at the end of that second degree that almost qualifies as a third category. But I can definitely imagine, when I think about this in the models in my mind, the public sort of declaring that they get this. You could have a preponderance of the population, the majority of the population, saying so, and it could show up on surveys.
You could survey people: do you think there's a 50%-or-greater chance that machines can be generally intelligent by 2030? I bet if you ran that survey right now, the percentage of the public agreeing would be pretty high. But that doesn't mean people really take it seriously enough to act on it. And I don't think that's a meaningless distinction; it's not a distinction without a difference. I think it's actually a big deal. The one does obviously lead to the other, I think, but the world doesn't go completely bananas until people are really so serious about this that they're acting on it. And I don't know how large the number of people seriously entertaining the idea intellectually has to be before you get some critical mass of people taking it so seriously that they're altering their life plans as a result: not putting money into their retirement accounts anymore, but instead partying like it's 1999. I think that's worth distinguishing; it feels like an important thing to think about.

Yeah, I agree. I guess the purpose of a fire alarm is a good analogy in this context. I never thought about fire alarms like this before I saw it written down, but it makes so much sense when you think about it: the kind of social permission to care about a thing that is given by alarms. Apparently, and who knows whether to trust these kinds of experiments ever again, there are supposedly experiments where you sit people in a room and put smoke under the door, and they won't do anything about it until there's a kind of social permission to be freaked out by the smoke. People can be aware of something, and I'm not excluding myself from this (maybe I would care about smoke, but you could take some other example): people can be aware of a thing and intellectually believe it's significant, but not make the connection to behavior until there's some kind of prompt where you look around, and everybody cares and is paying attention, and then suddenly the whole crowd is running out of the room, because they heard the alarm, and that's what you do when there's an alarm.
So I think that's right: when we're talking about these kinds of decisive events and making these lists, I'm thinking of them as serving that alarm purpose, which catalyzes the shift from being familiar with the facts to finding them deeply salient.

Yeah, I think I agree with that. And, just to stretch your metaphor of the fire alarm, I don't know if this is useful or not, but when the alarm goes off, under some circumstances... I mean, the whole reason you do training, and do fire drills, is so that you get up and walk calmly in the right direction, you know where the exits are, and it's reasonably orderly. If people don't do any training, or they get caught unawares in some strange circumstances, they just panic: they get up, scream, start running in circles, complete chaos, don't know what to do, don't know which action to take. They can recognize that it's an emergency, but: okay, now what do we do? So, yes, see what I've written on the board: "the AGI variety of humans move calmly to the exit". That's our purpose in life, Adam. We're going to the exit; we'll be dignified, at least.

That's right, that's right: maintain your dignity at all times.

Yeah. Let me see if there's anything I wanted to add on the elites moving to short timelines. We mentioned this before; I don't know how much we want to get into it. I was about to mention things to do with Taiwan and military conflict, but actually something more interesting occurred to me earlier, which I've now forgotten. No, it's gone. All right, let's go on with that. I mentioned nationalization of the hyperscalers, or Google, as a kind of provocative possible event. I guess in America you're not allowed to call it nationalization; you say "production guidelines" or something. In the Second World War, I'm not aware that many industries were officially nationalized, but there was the War Production Board, and a lot of coordination of producers, and something like an allowed profit. I don't know to what degree things were really directed, or whether it was nationalization in all but name; you may know better about that. But I can certainly imagine that something like that happens as a result of elites shifting to short timelines. I can't imagine that the trajectory will be left to the whims of the increasingly disliked nerd barons.
If the elites in Washington shift their views to short timelines, I think there will be some kind of move to, if not nationalize, then have something like a War Production Board that directs how things go.

Yeah, that seems very likely to me. And we have talked a little bit about what we would expect here: does the increase in repatriation of key cutting-edge AI-related technologies, hardware especially, mark the beginning of a genuine arms race, an explicit arms race, towards the prize of artificial general intelligence? Is what we're seeing now what we would expect to see? We can easily imagine a war-production footing being mounted if the Overton window shifts any further, or continues to shift, in the direction of the imminent arrival of this technology in the short scenario. I would very much expect that sort of thing to occur. One question we might ask is this. Let's assume that seizing control over the production of key hardware, so basically GPUs and chips and that sort of stuff, is necessary for victory for, say, the United States, if it decided: okay, it's on, we've got to win this, and we have to seize control of production. What would that look like? Could we predict in advance the actions that would be taken, if it were not explicit, under some other plausibly deniable guise, some other policy, some other initiative? What would we expect here? It would be fun to try to imagine the plausible smokescreen reasons, or excuses, that a government might give if it didn't want to give the real reason. So if the government decided to suddenly do some massive procurement...

A draft for mathematicians, there you go.

That's more on the software side of things, obviously. But what might we expect? What would it be done in the name of?
If it were not explicitly done in the name of winning the AGI arms race, what excuses could the US government give for deciding to purchase some substantial fraction of the chip makers' output, and taking control of it? It would be something, yeah.

I think it's pretty blatant in the more recent chip regulation; they're sort of just saying it out loud. I don't know why they would bother hiding it. I suppose you could look to the Cold War again for a very good example of that; I feel very much like history is repeating itself there. The positive vision of "let's go to space, let's go to the moon" was coupled to this very zero-sum attitude of "let's bankrupt the Soviet Union". And it's almost the same for AI so far, but without the positive-vision part. The chip regulations are clearly intended to basically say to China: okay, well, you want to keep up? Then you'd better spend a big chunk of your GDP trying to keep up. It's not that anybody thinks that's impossible, but the costs imposed on the Chinese system are viewed as severe enough that maybe it will disable them, or something. So the positive-vision part is missing. You're right, that's an interesting observation: you'd expect that hole to be filled at some point soon with a kind of national effort, to build AGI to solve climate change, or build AGI to end aging, or build AGI to, I don't know, achieve some progressive priority.

That's interesting, actually. Oh, sorry to switch back, but I remembered the thing I wanted to say, which is that it's worth giving a kind of microeconomic model of what this elite shift in opinion looks like. I'm privy to conversations among some of my friends who are in areas like pharmaceuticals, hearing about their career decisions, and there are all these shifts going on, where a lot of venture capitalists are putting up lots of money to fund new biotech startups and new pharma competitors that are AI-first. Isomorphic Labs, from DeepMind, is one of them, but there's a huge zoo of them arriving. And the young people, and some of the older people, but a lot of young people, are all making their bets on these new approaches. This is kind of what an elite shift in opinion looks like. You know the old adage that people don't change their minds, they just die, and a new generation comes along.
I think new generations of elites just win by making bets on the new things, and what we'll see is those younger people moving into positions of larger influence and power as we move forward on this timeline. Maybe it's incorrect to say that people will shift their minds to short timelines. It's more that, in any given area of human activity, say biotech, there are large, well-funded groups of people out there right now who believe in short timelines. I know this for a fact; I know some of them. They are making their bets right now, and those bets will pay off or not; but when they pay off, those will be the people running the show, very rapidly, or they'll be co-opted, or in any case they'll move to the center of the frame. So maybe it's not actually accurate to think about it as the elites changing their minds. It's more that the bets are laid, they're on the table out there, or will be shortly, and the winners will step to the front as we move closer, if in fact short timelines are the true timelines. We don't know.

Yeah, I think that's a reasonable supposition. What I wonder, then, is a couple of things. One is that, in the case of the United States, the federal government (and the state, regional and local governments to a much lesser extent, but especially the federal government) is notorious for participating in and enabling the military-industrial complex. This is quite a familiar term to most folks, and it's certainly a key feature of the United States federal government and our national economy. A massive portion of the federal budget is devoted to defense and military applications, and a lot of that is very opaque; famously, thousand-dollar toilet seats and things like this, to hide where the spending is actually occurring. But a lot of that spending goes to private contractors. Some research is done by the government bureaucracy itself, by government agencies and institutions and government employees, and even some production is done by government agencies and employees, but forever, including throughout the Cold War, a lot of it was done on a contract basis: private companies were contracted. And I suppose there's every reason to believe that this is occurring now.