WARNING: Summaries are generated by a large language model and may be inaccurate. We suggest that you use the synopsis, short and long summaries only as a loose guide to the topics discussed. The model may attribute to the speaker or other participants views that they do not in fact hold. It may also attribute to the speaker views expressed by other participants, or vice versa. The raw transcript (see the bottom of the page) is likely to be a more accurate representation of the seminar content, except for errors at the level of individual words during transcription.


Synopsis

Automation is disrupting the way human labor is used, with robots and deep learning tools taking on jobs that require basic visual reasoning. Tesla is developing a humanoid robot for production, Amazon is running competitions to automate sorting tasks, and self-driving cars are becoming a reality. Deep learning combined with improved engineering is enabling the mechanization and automation of loosely defined tasks, from warehouse robots and robotic vacuums to Deep Reinforcement Learning systems that control robots. Governments are investing in robotics, and gene-based machines and nano machines could eventually revolutionise the way we work.

Short Summary

The potential for human labor to be disrupted by automation is difficult to assess, with many tasks requiring specific machines and algorithms to perform them. Automation and manufacturing are heavily intertwined, and the production line is set up so that inputs to the machines are of a very definite form. Automation tends to be simultaneously overestimated and underestimated, so it is important to consider the steps between the present and the future of automation. The speaker is looking for a more reliable view into the world of machine learning, narrow artificial intelligence, automation and robotics, to get a better sense of what is plausible to expect in this domain of technology and how it could affect labor and capital over the next 10-15 years.
Manufacturing is becoming increasingly automated with the use of machines and deep learning tools to automate tasks that require basic visual reasoning. Amazon is running competitions to further automate sorting tasks using cheap cameras and training data for robots to accurately sort objects of different sizes and shapes. Self-driving cars are also becoming more prominent, with implications for transportation, and Amazon is a leader in the adoption of these technologies, using mobile robots to navigate and carry out tasks.
Automation of tasks perceived as repetitive and boring, known as toil and drudgery, is an area of interest. This does not include intellectually demanding tasks such as those of doctors, lawyers and architects. Tesla is developing a humanoid robot which could optimise production by narrowly defining and controlling the scope of tasks, inputs and outputs, and by mechanising them. By contrast, a recent experience with a pizza vending machine was poor: rather than the robotic kitchen it appeared to be, it merely reheated pre-made frozen pizza. Automation of tasks that require intelligence, such as vehicles driving in the world, is more disruptive and can save humans from dangerous and boring toil.
Robotics is a complex field with scepticism towards optimistic projections. Baxter was a humanoid robot meant to automate processes for companies without the resources to hire engineers and roboticists, with an iPad face and arms that could be guided to do tasks. To save time and effort, it may be wise to focus on humanoid robots that can fit into existing setups, rather than introducing robots with new criteria and having to readjust the manufacturing operation. This could be a more efficient and cost-effective solution.
Automation is well established in software, with Apple's Automator as an example, and is becoming possible in the physical world. Tesla's robot is not yet available, but people remain optimistic about its potential. Robotics technology is not as advanced as expected, with few companies producing standardized robots. Tesla could potentially make a great robotic arm and torso, but the cost of custom-making a robot for a specific task may be too high. An alternative is to use general platforms, such as humanoid forms, wheeled robots with a flat surface on top, or robotic waiters, for varied tasks like folding clothes, unboxing items, running cash registers, mopping floors, and opening and closing shops.
Deep learning and improved physical engineering are enabling mechanization and automation of tasks that are not precisely defined. This is leading to robots in warehouses and robotic vacuums, and to limited but not task-specific platforms reaching scale. The progress is driven by the combination of deep learning and physical engineering: deep reinforcement learning, neural networks, language models and pre-training. In five years' time, this progress is likely to have advanced significantly.
DeepMind recently released a paper showing that pre-training may be possible for deep reinforcement learning agents. This is demonstrated by an agent being trained on thousands of games and then transferring to a game with different rules. DeepMind also posted an article about fusion, which reminded the speaker of the science fiction movie Passengers. In the movie, a computer was damaged when a shielding failed and the computer was struck by an object travelling at high speed. DeepMind's article suggests that AI may be able to solve the same problem, indicating that the premise of the movie may have been more realistic than initially thought.
Deep Reinforcement Learning (DRL) has traditionally been seen as a toy tool only used for games, and Geoffrey Hinton was not optimistic about its potential. However, it has the potential to revolutionize robotics, and OpenAI has demonstrated its ability to control robots. To make this possible, deep RL must be practically useful, large-scale pre-training must work, and the agents must be able to control robots. Tesla is attempting this with their Dojo supercomputer system, which is designed to generate simulations from real-world data. If these three elements come together, deep RL could experience a step change in effectiveness.
Robotics is being used to create broader mechanization in factories, with deep learning teams and dedicated hardware teams working on this. Investment from the government is being put into this area, with the potential for the technology to be academically proven in the next few years. Farm robots could be a good fit for tasks that require visual reasoning and sorting, and have the potential to reduce the need for human labor in agricultural work. The speaker is looking for linkages between the food, energy, transport and automation sectors to see where robots can be best used, with the agricultural market being surprisingly well suited to automation.
Technology has the potential to cause disruptions, however markets and infrastructure can quickly emerge to support it. Genetically modifying plants to signal various things is an example of this, showing how technology can open up a space of possibilities. The Australian government is enthusiastic about this, as it could help increase the productivity of the farm sector. Logistical details need to be worked out, but these challenges can be solved through market forces, government incentives and social and cultural pressure.
The mapping of the human genome has enabled progress in biotechnology and the potential for multiple domains to be improved simultaneously. However, there is currently no robust way of reading the state of a cell or providing input to it, apart from PCR tests and mRNA vaccines. Gene-based machines are being used for mundane tasks such as cleaning up mining sites, but nano machines are not yet capable of anything useful. Synthetic biology, and a better understanding of existing gene-based systems, could lead to a future of genuinely capable nano machines, but the Eric Drexler vision of atomic-level manufacturing remains a distant dream.
Deep learning has enabled control of complex systems without needing to understand the underlying principles. This could be applied to synthetic biology, allowing for complex behaviours to be programmed into cells with an API. This runs contrary to traditional science, which relies on simplifying explanations and principles to gain power and control. Algorithmic trading has been successful in understanding markets, suggesting machine learning approaches could be used to control complex systems.

Long Summary

The speaker is interested in exploring the potential for human labor to be disrupted by automation. They are unsure of the state of the technology and its potential for cost improvements over the next 10-15 years. They are looking for a more reliable view into the world of machine learning, narrow artificial intelligence, automation and robotics. They would like to get a better sense of what is plausible to expect in this domain of technology and how this could affect relationships between labor and capital.
Automation and manufacturing are heavily intertwined, and the potential for machine labor and automation is difficult to assess. It is not a binary process, but rather a gradual one with many tasks that require specific machines and algorithms to perform them. The production line is arranged so that the inputs to the machines are of a very definite form. Automation is simultaneously overestimated and underestimated, so it is important to consider the steps between the present and the future of automation.
Machines are being used in manufacturing to automate processes and reduce human labor. Deep learning tools are now being used to automate tasks that require basic visual reasoning, such as sorting objects on a conveyor belt. Amazon is running competitions to develop deep reinforcement learning to further automate sorting tasks. These systems use cheap cameras and accumulate training data to allow robots to quickly and accurately sort objects of different sizes, shapes and textures into appropriate boxes.
Manufacturing is becoming increasingly automated, with startups developing technologies to automate sorting and other processes. Amazon is a leader in the adoption of these technologies, using mobile robots to navigate and carry out tasks such as fetching items from the back of the warehouse. Self-driving cars are also becoming more prominent, with implications for transportation. These scenarios often include assumptions about the capabilities of self-driving technology, making it an area of focus for many.
Automation has been attempted to save time and money, however a recent experience with a pizza vending machine was poor. It was not a robotic kitchen as expected, but a fancy vending machine that microwaved pre-made frozen pizza. This is not the sort of automation that is disruptive as it does not require sophisticated reasoning, pattern recognition or navigation. Instead, it is tasks that require intelligence to accomplish, such as vehicles driving in the world, that are disruptive. Such automation can save humans from dangerous and boring toil.
Automation of tasks that are perceived as repetitive and boring, known as toil and drudgery, is the speaker's keen interest. They are not talking about automating intellectually demanding tasks like those of doctors, lawyers and architects. The speaker also discussed the idea of Tesla developing a humanoid robot, which could potentially optimise production in two different ways - by narrowly defining and controlling the scope of tasks, inputs and outputs, and by mechanising them.
Humans have optimized production around their labor, building factories and processes that assume people will do the tasks. To save time and effort, it may be wise to focus on humanoid robots that can fit into this existing setup, rather than introducing robots with new criteria and having to readjust the manufacturing operation. This way, robots can benefit from the optimization already done for human labor, such as tasks designed around two arms, five fingers and two eyes (two cameras, on a robot). This could be a more efficient and cost-effective solution.
Robotics is a complex field, and Rodney Brooks, a famous roboticist and founder of Rethink Robotics, is often sceptical of optimistic projections for robotics. Baxter was a robot designed for companies looking to automate, but it did not succeed in the market. It was meant to be a humanoid robot, with an iPad face and arms that could be guided to do tasks. It was designed for companies that don't have the resources to hire a team of engineers and roboticists to automate their processes, but still need to convey and build something.
Automation of this kind already exists in software: Apple's Automator is an example, and in the early 1990s a friend and the speaker had an idea to do this operating-system-wide, but eventually decided it was too difficult. Tesla's robot is not yet available, and people have argued that it may be a lagging indicator of the transition to automating intelligence. People remain optimistic about its potential, though the robot does not yet exist.
Robotics technology is not as advanced as one might expect, with few companies producing standardized robots. Tesla could potentially make a great robotic arm and torso, but the cost of developing a custom-made robotic form for a specific task may be too high. An alternative is to make the best of the standard forms available, such as humanoid forms, wheeled robots with a flat surface on top, or robotic waiters. These general platforms could be used for a variety of tasks, such as folding clothes, unboxing items, running cash registers, mopping floors, and opening and closing shops.
Robotic platforms are becoming increasingly efficient and affordable due to a combination of deep reinforcement learning and improved physical engineering. This is enabling mechanization and automation of tasks that are not precisely defined. Examples include robots running around in Amazon warehouses, and Roomba vacuums. In five years' time, it is likely that limited but not task-specific platforms will reach scale, such as Pieter Abbeel's company. This progress is being driven by the combination of deep learning and improved physical engineering.
Deep learning has been used for decades in robotics, but recently more money and resources have become available, leading to a convergence of multiple threads of technological advancement. This includes deep reinforcement learning (deep RL), which has seen milestones in robotics, as well as neural networks and language models based on transformers. Pre-training has been a key discovery, allowing expensive training of a large model to unlock rapid progress in computer vision and language models.
Pre-training is an important approach in modern large scale deep learning, where a large data set is used to train a model with a generic task such as predicting the next character. This approach has been applied to sentiment recognition and computer vision, but has not yet been successful in deep reinforcement learning. DeepMind recently released a paper showing that an agent can be trained on thousands of games and then transfer to a game with different rules, indicating pre-training may be possible for deep RL agents.
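The pre-training recipe described here, generic training on a large corpus followed by continued training on a small task-specific one, can be sketched with a toy character-bigram model. The corpora, the bigram model and the function names below are invented for illustration; real pre-training uses large neural networks, but the train-then-continue-training shape is the same.

```python
from collections import defaultdict

def train_bigrams(text, counts=None):
    """Count character bigrams; pass in existing counts to continue training."""
    counts = counts if counts is not None else defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    """Most likely next character after ch under the current counts."""
    followers = counts.get(ch)
    if not followers:
        return None
    return max(followers, key=followers.get)

# "Pre-training": a large generic corpus teaches broad statistics.
pretrain_corpus = "the quick brown fox jumps over the lazy dog. " * 100
model = train_bigrams(pretrain_corpus)

# "Fine-tuning": a small task-specific corpus updates the same model.
finetune_corpus = "robot arms sort objects. robot arms sort objects."
model = train_bigrams(finetune_corpus, model)

print(predict_next(model, "t"))  # generic statistics still dominate: 'h'
```

The point of the sketch is the shape of the workflow: the expensive generic phase is done once, and the cheap task-specific phase builds on it rather than starting from scratch.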
DeepMind recently posted an article about fusion, which reminded the speaker of the science fiction movie Passengers. In the movie, a computer is damaged when a shielding fails and the computer is struck by an object travelling at high speed. The premise was that it required a lot of computing power to contain the fusion reactor. In retrospect, this idea may have been more realistic than initially thought. DeepMind's article is evidence of this, as they are attempting to use artificial intelligence to solve the same problem.
Deep reinforcement learning (DRL) had a reputation for being a toy only used for games, and Geoffrey Hinton was not optimistic about its potential. DRL is technically difficult because the agent must be trained to take actions in an environment, and past experience becomes outdated as the agent's ability to play the game improves. It is expensive because the agent must generate its own training data, as opposed to the fixed data sets used in other forms of machine learning. It is difficult to learn a function that is conditional on the current agent, which is constantly changing.
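The point that the agent generates its own training data, rather than learning from a fixed data set, can be seen even in a minimal example. The two-armed bandit below is an invented toy, not any system discussed in the seminar: the agent's own (changing) behaviour decides which experience it collects, which is the root of the non-stationarity problem.

```python
import random

random.seed(0)

# A two-armed bandit: arm 1 pays better on average, but the agent
# does not know this and must find out by acting.
def pull(arm):
    return random.gauss(1.0 if arm == 1 else 0.5, 0.1)

values = [0.0, 0.0]  # the agent's current reward estimates
counts = [0, 0]
for step in range(500):
    # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = max((0, 1), key=lambda a: values[a])
    reward = pull(arm)  # experience is generated by the agent itself
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running mean

print(values[1] > values[0])  # the agent discovers arm 1 is better
```

Early experience here (drawn mostly from whichever arm the naive agent happened to favour) says little about the behaviour of the improved agent, which is the small-scale version of the "past experience becomes outdated" problem described above.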
Deep RL has the potential to revolutionize robotics, with OpenAI showing that it can be used to control robots. To make this practical, three elements must come together: Deep RL must be practically useful, large-scale pre-training must work in Deep RL, and the agents must be able to control robots. Tesla is attempting this with their Dojo supercomputer system, designed to generate simulations from real-world data from their fleet. If all three elements come together, a step change in Deep RL's effectiveness could be seen.
Data is being collected from various sources, including simulations and a humanoid robot. Companies are creating their own super computers to take advantage of this data. Robotics is being used to create broader mechanization in factories, which could transform manufacturing. Deep learning teams and dedicated hardware teams are working on this. Farm robots could be a good fit for tasks that require visual reasoning and sorting, although this is limited to a specific environment. Investment from the government is being put into this area, and the technology could be academically proven in the next few years before it is scaled and deployed.
Robots have the potential to reduce the need for human labor in agricultural work, such as identifying weeds, determining when fruit is ripe and picking berries. This could be a large market if a robot could be developed that is durable and cheap enough, powered by cheap solar power. It is possible that the agricultural market could be surprisingly well suited to automation, as it does not require a robot to be overly intelligent. The speaker is looking for linkages between the food, energy, transport and automation sectors to see where robots can be best used.
Automation is causing labour disruptions, and soon genetically modified food plants will be able to emit signals to indicate their state of health, which will be captured by drones and processed. This could lead to robots being able to come and spray fertilizer when needed. The Australian government is enthusiastic about this, as it could help increase the productivity of the farm sector. Logistical details need to be worked out, but these challenges can be solved through market forces, government incentives and social and cultural pressure.
Technology has the potential to create disruptions, however the necessary infrastructure and markets to support it often emerge quickly due to the availability of capital and people's motivation to profit. This is why it takes some time for disruptions to roll out, but unless there is a fundamental obstacle, these challenges are usually solved. Genetically modifying plants to signal various things is an example of this, showing how technology can open up a space of possibilities.
Technology is increasingly being used to outsource routine calculations, but gene-based manufacturing systems do not have access to external computation. This is likely to change soon, with gene-based machines taking on mundane tasks such as cleaning up mining sites, and eventually nano-machine tasks. Synthetic biology, and a better understanding of existing gene-based systems, will enable animals or bacteria to be used in manufacturing systems. This could lead to a future where nano machines are able to do anything.
Progress has been made in the field of biotechnology, with the mapping of the human genome complete. This has enabled the potential for multiple domains to be improved simultaneously, and for technology to converge. However, there is currently no robust way of reading the state of a cell or providing input to it, with PCR tests and mRNA vaccines being the only exceptions. There has been no real progress in the development of nano machines, and the Eric Drexler vision of atomic level manufacturing remains a distant dream.
Deep RL can be used to control the behaviour of cells in a similar way to how it is used to control a tokamak, a fusion reactor. With careful engineering and domain expertise, a deep RL system can be created that is fast and simple. This could allow for cells to be programmed in real time, rather than having to be reprogrammed in advance. This could potentially lead to a paradigm shift in synthetic biology, allowing for complex behaviours to be programmed into cells.
Biology is complex and cannot be understood through a reductionist approach. An API for cells could be developed, where signals are sent and a desired result is achieved without needing to understand the gene regulatory networks. Deep learning has already thrown out the need to understand how large neural networks work, showing that understanding is not necessary for control as long as there is rapid feedback and learning.
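The "control without understanding" idea can be sketched as a model-free feedback loop: the controller never sees inside the system, only its output, and steers purely by rapid measure-and-adjust feedback. The `cell_response` function below is a made-up stand-in for a black-box cell or reactor, and the gain constant is arbitrary.

```python
# A black-box "cell": we can send it a signal and read one output,
# but the controller has no model of its internals.
def cell_response(signal):
    return 3.0 * signal ** 2 + 1.0  # hidden from the controller

target = 13.0  # the output we want the cell to produce
signal = 0.5   # initial input

# Model-free control: nudge the input opposite to the observed error.
# No understanding of the quadratic inside is ever used.
for _ in range(1000):
    error = cell_response(signal) - target
    signal -= 0.01 * error

print(abs(cell_response(signal) - target) < 0.01)  # converged to target
```

This is the weakest possible form of the idea (a scalar proportional-feedback loop), but it illustrates the claim in the paragraph above: with fast feedback, control can be achieved without a reductionist account of the system being controlled.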
Machine learning approaches may be able to provide control over complex systems, even if the underlying principles are irreducibly complex. This runs contrary to the traditional approach of science, which relies on finding a deep level of simplifying explanation and principle to unlock power and control. That approach has been successful in the past, but economics and other complex systems may be too complex to be understood in this way. Wall Street has pivoted away from econometrics and academic attempts to understand markets, instead using algorithmic trading with success.
Machine learning can make progress with complex systems without unlocking a deeper level of explanation. Neural networks and brains contain parameters whose individual values carry no meaning: many different settings produce the same predictions. The same is true in string theory, where many combinations of parameters produce the same observable universe, so predicting their individual values is not possible.
Nature has evolved certain mechanisms that work to perform a function, but the content of that mechanism is irrelevant. It is a level of analysis at which it is meaningless to dig into it, because it is an accident and does not help to predict or understand anything. Machine learning can be used to do what nature did, by fitting to accidental details and then dealing with the higher level details that make a difference.

Raw Transcript

all right what are we doing um well i said a quick note over i hope it's not um like a boring topic or anything i'm open to doing whatever you want obviously um but i was laughing that i'm sort of you know i'm starting to run out of fundamental stuff we can you know can always dive into the details of some some specific uh you know a sector that's likely to be disrupted or whatever we can we can get down in the weeds and if if that's something that folks are interested in but if we're keeping it at sort of a high level um an overarching conceptual level then i'm afraid i only have so many ideas um to really talk about but one that we haven't you know sort of fully explored but we've touched on it a few times and one that i'm really interested in understanding better is this um the potential for human labor to be disrupted by automation because my team this is something we've shied away from you know doing a proper research dive into but i think it's likely to be as fundamental or more fundamental than the disruption of energy and transportation and food that we've already looked at and i see it as sort of an extension of the of the um you know the sort of ongoing um information uh and information technology disruptions that have sort of you know um continued to occur overlapping with one another again and again over the last you know over many decades now so um but it's it's that is this is something where i don't feel like i have a very good handle on the state of the technology it's real trajectory its potential for cost improvements um for you know um uh you know would basically just what the what the next 10 to 15 years really looks like from a proper expert point of view like you guys have um as far as machine learning and and um narrow artificial intelligence and automation and robotics really goes um you know it's easy to get a misleading view of that just reading um uh you know non-scientific publications and um uh you know just paying attention to sort of 
the the the discussions that people have that i don't in other words i don't feel like i have a very reliable view into that world yet and i'd like to um uh i'd like to to get a better sense from you guys of what we really think is plausible to expect over the next 10 to 15 years in that domain of technology and then if we want we can sort of think okay well between here and um and there what does this what do relationships between labor and the rest of society especially capital i'm thinking of but even you
could think of you know other things that labor is part of or relates to um institutionally or culturally or whatever we can talk about about um how we think that might change and not necessarily just leaping forward to the end point that's the temptation right oh well let's just imagine a future where everything's automated you know that's that's that's easier to do it's been done a lot i'm more interested in what i think is the harder part which is what are the steps what are the incremental steps actually look like between here and some you know um some uh moment in time at some point in the future where where a great deal of of what is currently undertaken by human uh drudgery and toil is is has been successfully automated i'm interested in the steps between here and there not necessarily just envisioning the end um picture although that's perfectly fine of course um so anyway that was the sort of thing that i was hoping to talk about starting with this you know getting a handle on on where we really are with machine learning machine intelligence machine the potential for machine labor um and automation and uh what you know what you guys with a properly expert view really think the next 10 to 15 years looks like there yeah cool is that fair enough yeah sounds good i'm just thinking about where to start um i guess it's maybe this is true of many trends but it's both simultaneously over estimated and underestimated in some sense right so yeah uh it's it's difficult to talk about uh maybe uh maybe i'll just list some of the things i'm tracking personally in the domain of automation and manufacturing um which of course is heavily automated already so maybe we should be clear about what kind of automation we we really mean um so and it's it's maybe not so clear that it's a binary right where it's like a certain kind of automation now and then something will happen and it becomes a different kind of automation it's it's not really like that it's more like there are 
many tasks i mean you take a typical factory there will be some tasks that are people have formulated specific machines to perform them you know like to make pringles or whatever there's some very specific it's basically an algorithm in physical form that somebody spent a lot of time figuring out and then they built it and then it makes many copies of a given thing but the production line is arranged so that the inputs to that machine are of a very definite form so there's a certain density of very
standard objects going into it and they're controlled in speed and they're checked for defects before they enter the machine and exit the machine because the machine is like has very low tolerance it's really like an algorithm that accepts you know a certain kind of input and if it's not exactly that kind of input then it just will break or not do anything um so however that automation process can be so valuable that you know we've arranged a large percentage of human labor to be checking the inputs to such machines checking the outputs from such machines and then you know basically plugging such machines into one another you know whether that's circuit board assembly or you know maybe you know i don't want this to be a monologue but here's a and if you think about machines have do a lot of the manufacturing already right where there's it's kind of some percentage of the you know the humans are sort of in between the the machines in a typical factory checking and doing certain sort of integration tasks and it seems to me like the the place which is most vulnerable to automation in the short term are the the kind of sorting and integration tasks that require some kind of basic visual reasoning that is now within the scope of deep learning tools so most famously i mean amazon runs competitions on this which have in recent years been uh what's the name of the startup um coherent i think i can check but this is um this is peter beals i think i've got this right there's a deep reinforcement learning is starting to have an impact on manufacturing we have fairly cheap cameras combined with deep reinforcement learning which accumulates training data at scale so across many installations of a robot to perform things like sorting so amazon is very interested in this right so for the logistics facilities they have many objects and you want to put them on a conveyor belt there are you know they're different sizes and shapes and textures and you want some sort of robotic arm to 
just pick them out and put them in an appropriate box for sorting and delivery or whatever right and that's a task that until recently you'd need a human to do because humans would be pretty good at that and at some speed there's objects coming along a conveyor belt and you tell the human put all the i don't know frisbees into this box and all the books into that box uh but because they're of uneven i mean a book is a hard thing to define right so you know they can be of many widths and many sizes and come
in many orientations and so on but that's the kind of fairly routine component of manufacturing which is now being automated there's several startups doing that i don't know how i mean i don't know enough about manufacturing to know what percentage of manufacturing activity is sort of vulnerable to that stage of automation but um that's what's happening right now i guess okay that's that's really useful that's interesting um yeah it's seeing behind the scenes you know where this where these uh machine learning technologies are really being put into practice would be very interesting i need to do more reading basically more study to see where these these things are going into effect so that's a great example in the sorting in amazon warehouses i have heard about um of course automation with mobile robots you know these little carts that effectively that drive around amazon yeah and um i think they do you know reasonably reasonably sophisticated tasks of navigation pathfinding and avoiding one another and obstacles and i think that's it it started out fairly rigidly um you know uh defined you know what they were able to do they're basically running on rails and very you know quite limited um well i'm quite limited i don't know what the right word is algorithms or or you know you're basically just running procedures that were fairly simple but i think those have gotten more sophisticated um over the last few years and now it's it's possible to really just you know tell the robot to go get something from the back of the warehouse and it will just go and manage to get make its way over there and find it and bring it back kind of thing um so i guess i i sort of forgotten that amazon was was a real leader in the adoption actual adoption of these technologies here at this early stage i think they they acquired the startup one of the big ones that was making those devices yeah i spelt this guy's name wrong yeah there's a good podcast actually i highly recommend it um which 
It's called the Robot Brains Podcast.

Robot Brains, okay, got it. So again, to give a couple of examples, maybe to help with some context here: obviously we watch the self-driving, autonomous-driving technology space pretty carefully. That has major implications for transportation and the work that we've done there; some of the scenarios we've analyzed have assumptions about self-driving technology built into them, so it's something we're obviously paying attention to. But another thing I've seen recently, and actually just had quite a poor experience with, I have to say, is that there are a lot of attempts to cash in on this idea of automating certain things. I recently bought a pizza out of a vending machine. Huge mistake. Just awful. The problem was that my expectation was that this machine was actually making pizza, and it turns out it isn't; it's not a robotic kitchen, which is what I was sort of led to believe. This is a startup; they've got a storefront in Ann Arbor here, so we went over to check it out, we got a pizza, and it was absolutely terrible. But it isn't that machines are making the pizzas: the pizzas are made off-site, by people, and frozen, so it's just a fancy vending machine that microwaves, or whatever, warms up a pre-made frozen pizza. I don't mean to rubbish those efforts completely, although I still have a literal bad taste in my mouth from that, but that's obviously not the sort of thing we have in mind as being disruptive; it doesn't fit the criteria. It really is this idea, like you were saying, Dan, where there's some set of tasks that, for lack of a better word, required intelligence in order to accomplish: tasks that require some sophisticated reasoning, visually or via other pattern recognition, that require navigation, maybe modeling the behavior of other agents in some constrained or limited way to avoid getting into collisions or other trouble in some real-world space, so presumably
some three-dimensional space. A lot of these things are typified by what vehicles have to do if they're going to successfully drive out in the world, but one can imagine many work environments where, even from a human perspective, the work is quite repetitive and boring, and sometimes dangerous; it's just toil, but it still requires a lot of brain power, a lot of real thinking, in the same way that driving does. It's really that sort of automation that, obviously, is where my real keen interest is.

Let me repeat that: what kind of automation is where your keen interest is?

Basically what I just described: the automation of tasks that from a human perspective seem simple and boring, tasks that are basically toil and drudgery. I'm not talking at this point about automating the so-called professions, doctors, lawyers, architects, those sorts of things; not about automating what human beings perceive to be intellectually demanding tasks, but about automating what human beings perceive to be repetitive and boring tasks. Obviously a very large number of jobs in the global labor market are like that, and that's what I'm very interested in understanding. This is not to say that automation across all of these couldn't occur simultaneously; perhaps some of the professions are more vulnerable in some ways. But it's the toil-and-drudgery automation in particular that I'm quite interested in. So that's what I meant by that.

And let me add one more thing here, put one more idea on the table to think about. It occurred to me a while back; actually, it occurred to me when I first saw the announcement about Tesla developing a humanoid robot. I had a mixture of feelings about that, at first obviously some skepticism and so forth, but I think it has potential, and I'm taking the idea, the prospect of it, seriously. This is the idea that occurred to me: we've done two things in optimizing production, so basically manufacturing, but
maybe even production more broadly, production of services and so forth, but especially manufacturing, just making stuff. We've optimized that, at least to my mind, in two different ways that are relevant here. The first way is what you described already, Dan, where we have very narrowly defined, constrained, and controlled the scope of a set of tasks and its inputs and outputs, so that it's amenable to mechanization; perhaps that's a better way to think about it than automation. You can use machines that are dumb but can do the same thing over and over again very precisely, with great strength and great speed, faster or stronger than human beings, but too dumb to do anything but the same thing over and over. So what you have to optimize is some very tightly defined and limited activity. That's one kind of optimization we've done, and it's been a great success. The other thing we've done, and this is a hypothesis, but it seems fair to me, is that we've optimized production around human labor. We've set up processes so that everything is optimized given the assumption that human beings are available to be part of the system, whatever the manufacturing operation is. We've built all these factories and facilities and processes with the assumption that there are people who can do whatever needs to get done that today only people can do; I think that's kind of a truism. So what I'm wondering is whether it isn't perhaps really quite smart to focus on a humanoid robot that can just drop right into the role of humans, because we've already optimized so much around humans, literally around the human physical form in many factories, and certainly around the tasks that a person can do with two arms and five fingers on the end of each one, two eyes, two cameras on a gimbal that can move around, that kind of thing. So I'm wondering whether it makes good sense to design robots to fit into what we've already optimized, rather than go the other route, which is to
introduce a robot that's been optimized by some other criteria and then redesign and re-optimize all the rest of the manufacturing operation around some new form of intelligent labor agent. Do you see what I mean? I don't know if I'm doing a good job of describing the idea, but traditionally I've always heard: why would we make robots that look like people? Maybe that's not optimal; why wouldn't we build them on wheels, why wouldn't we give them five arms instead of two, et cetera; why be constrained by what already exists? And one good answer may be that we've built just about everything in the economy to fit together very well with the human form, so maybe we don't have to do much re-optimization if we just roboticize the human form. Again, it's just a hypothesis; I don't know if it holds a whole lot of water, but it's something I've been thinking about.

Yeah, I think I'm a bit skeptical of that, simply because the complexity of getting a humanoid robot to work is really extreme relative to getting more task-specific things to work. Rodney Brooks, who's a famous roboticist, is often quite scathing about the more optimistic projections for robotics, and I think there's some value in that. He's not only a famous academic; he started several companies, Rethink Robotics being one of them, and I think the company he founded is the one that built the Roomba; I might be wrong about that.

Yeah, so that's one of the few robots to actually survive being turned into a product, right?

Yeah, that's true, that's a good point. And obviously there's nothing human-like about it; it's not a robot walking around behind a vacuum cleaner.

Yeah. And they designed, I think it didn't quite succeed in the market, but there was a robot called Baxter, which had a kind of iPad face, and you guided the arms to do things. It was meant to be exactly what you're suggesting: it was designed not for enormous companies but for companies that could benefit from automation. There's clearly a market there, where if you're a very large company and you can afford to spend ten million dollars to automate the production of some little widget, then you can do it; but if you're, let's say, medium-sized, maybe medium-sized is
appropriate, and you still have some resources: you can drop twenty thousand dollars on a machine, or a hundred thousand, I don't know how much it cost, but maybe you can't afford to hire a team of engineers and roboticists to automate your process. And maybe your process is, in a sense, not that complicated: you've got a conveyor-belt setup and you need to sort something, say; I'm not a manufacturing person. So they designed this robot, Baxter, which you would train on what to do: you'd move its arms with your arms and show it what motions to make. A bit like Automator on the Mac, or I guess there's a Windows equivalent, where you operate your apps, click around, record it, and then play it back. That's a kind of automation, but you could hope to do it in the physical world; that's kind of what Baxter was.

It's funny, I'm not a Mac user, so I'm not all that familiar with that. You can record those sorts of things in individual apps; in the Microsoft Office environment you can work with what are called macros, which do that a little bit. Funny, I remember having the idea to do this operating-system-wide, a sort of background application that would record and replay across all input and all programs. I remember thinking of that idea with a friend in the early 1990s and wondering how possible it would be; we looked into it and decided it would be totally possible, and then I completely forgot about it. The last time I thought of it was at least 10 or 15 years ago. It's funny you mention it.

Apple's had some version of that for quite a while; the latest is Automator. I'm actually running it right now on the machine that's recording: it's pressing "w" every 15 minutes to walk forward so it doesn't get logged out, which is a very sophisticated automation. But you could imagine a version of that in the physical world. The Baxter robot didn't really succeed in the market, though. I don't know if that says anything, but people like Rodney Brooks have been very dismissive of that approach and have argued that you almost need to solve every problem in AI to make something like that work, so it's like a lagging indicator of the transition to automating intelligence, rather than something you'd expect near the beginning. I think that makes sense.
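That record-and-replay style of automation, whether it's Automator macros or guiding Baxter's arms, boils down to capturing a timestamped sequence of actions and playing it back verbatim. A minimal sketch of the idea (all the names here are made up for illustration):

```python
import time

class MacroRecorder:
    """Record a sequence of actions, then replay them in order.

    This is the Automator/Office-macro idea in miniature: the recorder
    doesn't understand the actions, it just stores and repeats them,
    which is exactly why it's mechanization rather than intelligence.
    """

    def __init__(self):
        self.steps = []  # list of (delay_seconds, action, args)

    def record(self, action, *args, delay=0.0):
        self.steps.append((delay, action, args))

    def replay(self, sleep=time.sleep):
        # Reproduce the recorded pacing, then fire each action.
        results = []
        for delay, action, args in self.steps:
            sleep(delay)
            results.append(action(*args))
        return results

# Usage: "press w every 15 minutes" expressed as a recorded macro.
pressed = []
recorder = MacroRecorder()
for _ in range(3):
    recorder.record(pressed.append, "w", delay=15 * 60)

# Replay with a no-op sleep so this demo finishes instantly.
recorder.replay(sleep=lambda s: None)
```

Baxter was, in effect, this same loop with robot-arm poses as the recorded actions, which is why it breaks down as soon as the world deviates from the recording.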
One should always be a little cautious, though; there's the old saying that if an old expert says something is impossible, it will be done quite soon. So maybe one shouldn't be completely closed-minded about it. Maybe this is off-topic, but the argument I've seen for being optimistic about the Tesla robot is simply the fact that such robots don't exist. It's a strange situation with the robots people use to, say, train deep learning models: there are very few companies making standardized robotics like that which you could just buy and work on. It's kind of strange, right? You might think there should be heaps of people making standardized robot arms that you can go and buy and try to train to do something, but that's really not the case. So probably Tesla ends up, maybe not making a whole robot body, but maybe they could really make a great robot arm, or a great robot torso, something like that. I don't know.

Yeah, I guess that brings an important dimension into it. Even if it made sense to have a custom-made robotic form for whatever specific operations you were undertaking, manufacturing or whatever else, imagine a retail operation: you're selling clothes in a shopping mall, say. What does the robot need to do? It needs to pick up clothes, fold them, put them on the shelves, unbox things, run the cash register, mop the floors, open and shut the shop, all that sort of stuff. One can imagine that there might be some non-human form that would be more optimal for those functions than the human form, but would the cost of developing that entire form be worthwhile if you could just buy a Tesla Bot, and that Tesla Bot could do a pretty good job of it, because all the shelves and everything are already set up so that human workers can do it? So I guess there's this factor of: will there be a market for the production of lots and lots of different customized, specialized, purpose-made robotic forms? That seems to me to be one attractor basin in the possibility space. And another attractor basin is that instead everybody just figures out how to make the best, in whatever
their situation is, of some standard form that's available, maybe one or a handful of standard forms: one of which presumably might have some humanoid elements to it, and another might be some wheeled thing with a flat surface on top that you can put stuff on. I'm thinking of the robotic waiters we've seen in science fiction movies forever, basically a tray on top of wheels. One can imagine a handful of general platforms that everybody else then designs their processes around, as opposed to: I've got this process, I need a special robot for it. But that's an assumption; I could see it potentially going both ways, especially if the design and production of robots becomes much more efficient and affordable. I don't know.

Yeah, I think that's right. I think we're most likely to see some kind of limited but not task-specific platforms reach scale at some point, probably within five years or so, and I would bet on something like what Pieter Abbeel is doing with his company.

So why do you say five years, Dan?

Okay, here's some of the background knowledge going on in my head; if you look at the second board, I'm going to sketch something. There are a couple of ingredients you need, and I think mechanization versus automation is a good way of talking about it. What Covariant and similar startups have are robotic platforms that are able to do a fuzzily defined task, but it's still a specific task: maybe sorting some specific classes of objects, or looking for faults in some item. So it's not a general task, but it's also not as precisely defined as old-fashioned manufacturing automation would require. The fuzziness there is possible because of things like deep reinforcement learning; that's the same technology that's in AlphaGo, for instance, or AlphaStar. And then there are, of course, improvements in the physical engineering of the robots; I don't know that there have necessarily been a lot of breakthroughs there, maybe there have. The combination of these two seems to be driving some of the progress. Things like the robots that run around in Amazon warehouses, I think, don't really have any deep learning
in them, or maybe very little; maybe there's a little bit of computer vision, but it's more old-school. There's a lot of very capable AI that was developed for robots decades ago and hasn't changed much, and that's what the Roomba uses, and it works very well for what it's doing. So one has to be a bit careful: a lot of the progress in automation is more like Amazon reaching a large enough scale to invest a proper amount of money in technologies that were already well understood for decades but that nobody had really tried to deploy, versus really new technology that people are making use of. Because the money available at scale for tech companies keeps increasing exponentially, money is being made available to both sources of innovation at the same time, so it looks like a single trend, but it's in some sense two.

They're related, of course. That's an element of our theory of disruption, where we have the convergence of multiple threads of technological advancement. Sometimes one of the threads can exist for a very long time before it converges with some other thread, and then circumstances too: availability of capital is one, market demand for a potential breakthrough product is another. So yeah, that's all quite familiar.

Yeah. So, one of the reasons to be kind of bullish about deep RL in robotics: it's very difficult, and people have been trying it for as long as I've paid attention to the field, so deep RL for robotics has been a thing for at least six or seven years. You would have seen the Rubik's Cube demonstration that OpenAI did. So there have been some milestones in applying deep RL, which is a particular kind of deep learning, to robotics. But I think there's probably a trend happening in deep RL that, when translated to robotics, will potentially unlock rapid progress the way we've seen happen in computer vision. That was the first place where deep learning took off, as a result of neural networks themselves but also the large training data set, ImageNet. And then arguably the next big step for deep learning, which we're in some sense in the middle of, was language
models based on transformers, a particular kind of architecture: things like GPT-3.

GPT-3, yeah, that's the one I've read about the most in the news.

Yeah. And the phenomenon there is not just the architecture; it's also the practice of what's sometimes called pre-training. The real discovery is not so much the architecture as the fact that you can pre-train one very large model. It's very expensive to train: you train it on something like all the English on the internet, or something approximating that, a really large data set, and you pre-train it with a very generic task, like predicting the next character. It's not task-specific; that's what people mean by large-scale pre-training. It's a generic, unsupervised task: you're not asking it to predict whether a given piece of text is happy or sad, or any other specific task. You pre-train a very large model like that, and then you can use it for many specific tasks, or even fine-tune it for a specific task; sentiment recognition, predicting whether a review is happy or unhappy, would be one example, but there are many others. That's now a very important approach in computer vision as well; this idea of pre-training is a very important one in modern large-scale deep learning. Now, to date this approach hasn't really worked in deep RL; people are starting, and there's been some recent progress. But what you'd really like, the holy grail of deep RL, is to have this kind of language-model moment for deep RL, where you have some enormous corpus of tasks an agent might want to do. In the case of robotics this might be, say, 600 million years of simulated stuff an agent might want to do: every task available in a human home, driving, and so on. You just invent heaps of tasks, and then you train a large model to do all of them, or at least you train it to do something you think is upstream of being able to do any one of those tasks, whatever that might be, maybe predicting what happens if you take a certain action. And then you hope you can just fine-tune that single, very expensive model to do many specific tasks.

Interesting.

And I think we're kind of on the cusp of that; there have been some results in that direction, some of which you may have seen.
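To make the "predict the next character" idea concrete, here is a deliberately tiny sketch: a bigram character model "pre-trained" on a generic corpus with no task labels at all, which can then be queried for predictions. (Real pre-training uses transformers and vastly more data; this toy only illustrates the shape of the generic, unsupervised task.)

```python
from collections import Counter, defaultdict

def pretrain(corpus):
    """'Pre-train' a bigram character model: for each character,
    count which character tends to follow it. There are no labels
    and no downstream task; the only objective is next-character
    prediction, which is what makes it generic."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next_char(model, prev):
    """Most likely next character after `prev` under the model."""
    if not model[prev]:
        return None  # character never seen during pre-training
    return model[prev].most_common(1)[0][0]

# A generic, unlabeled string stands in for "all the English on the internet".
model = pretrain("the quick quiet queen quoted a quaint question")
```

Fine-tuning corresponds to nudging a model like this toward one specific mapping (say, review text to happy/unhappy) after the generic statistics are already in place, which is far cheaper than training from scratch.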
DeepMind's, for example: maybe we discussed it, I can send you a link, but about a month ago there was a paper out of DeepMind where they trained an agent in thousands and thousands of games, not any specific game, and observed that a certain kind of generalized learning and exploration behavior emerged, which could then transfer to a game the agent had never been trained on, with literally different rules. That's an indication that this kind of pre-training is possible for deep RL agents at sufficient scale, and very few companies can afford that kind of scale. Actually, I'm not sure how expensive that particular experiment was, but to do it for something approaching useful for robotics would be extremely expensive. But, okay, just today DeepMind posted something about fusion; I don't know if you saw it.

Yeah, I saw that. Actually, it's interesting: there's a science fiction movie, a mediocre one, called Passengers; I forget who's in it, Chris somebody.

Oh yeah, I've seen that movie. They're asleep and then one of them wakes up, or something.

Yeah. One of the premises is that the ship's computer gets damaged: the ship gets struck by some object as it's traveling at high sublight speed, its shielding fails, and the computer gets struck. One interesting thing in the premise was that it apparently required a huge amount of computing horsepower to figure out how to contain the ship's fusion reactor, so the systems are failing because the computers are struggling to keep up with maintaining this damaged fusion reactor. It seems a little silly, but then I thought, you know, that's actually not a bad idea: it may be that you need some very sophisticated, extremely high-speed, intelligent-response kind of system that's watching and responding in real time to contain a very complex and changing reaction. And I was aware that that was a real challenge in fusion reactor systems: how do you make adjustments millisecond by millisecond? Anyway, surprisingly prescient; science fiction sometimes gets lucky with things, and of course as soon as I saw the
article today, I thought: oh, that's interesting, that wasn't such a goofy idea after all. In some ways it seems a little silly in the movie, and you accept these things as part of the premise, you suspend your disbelief, but it is funny when in retrospect things turn out to make a little more sense than you originally thought. Anyway, yes, I did see the article this morning.

Yeah. To be clear, that's not, as far as I know, an example of the large-scale pre-training I'm talking about. I think deep RL, even within the deep learning community, had a reputation for being a kind of stupid toy thing that would only work on games: real grown-ups use deep learning in computer vision or language, and it's only the weirdos at DeepMind who do deep RL and dream that one day it might be useful for something.

Was there a specific reason reinforcement learning was not seen as having as much potential as other forms of deep learning, or machine learning generally, if there's a simple answer to that?

Yeah, that's a good question. Actually, Geoffrey Hinton was kind of famous for being not so optimistic about deep RL. I think it's partly a technical reason. In deep RL you're training an agent to take actions in an environment, but what it's learning from is its past experience, and that past experience becomes outdated as soon as the agent improves. If your ability to play a game improves, the distribution of games you experience is different: you're now experiencing amateur games, whereas before you were experiencing really stupid games where both you and your opponent, which maybe is also you, are just playing random moves. What that means is that you have to generate your own training data as you go, as opposed to something like computer vision, where you have a fixed corpus you're training your network against, a mapping from images to yes or no, whether they contain a fire truck or whatever, but it's fixed. So there's a sense in which you can spend the money once to get the data set and then train many networks against it, whereas deep RL is related but different, and it's an economics difference that is decisive. That's why it's
so expensive to do deep RL: you have to keep generating your training data. You can't just train; you have to generate the data, you actually have to simulate the world the agent is in. That's why it's really expensive to train an agent to play StarCraft or Go: the problem is not just to learn a function, it's to learn a function that is conditional on the current agent, and that agent is changing. So there's a sort of recursive dynamic that creates some...

That sort of explodes the data requirement a bit, is that what's going on?

Yeah, that's right. You trade computation for data; that's how people often talk about it.

Sure, that makes sense.

Whereas the data cost in the case of supervised learning, like ImageNet or language, is something you can spend once to get a big data set, in deep RL you first spend compute to create the data, and then you have to keep spending; you can't spend once and be done. That's why it wasn't obvious that the fine-tuning or large-scale pre-training approach could work in deep RL: it's as if you'd have to go out and collect all the English on the internet, but then you'd have to go and do it again as soon as your agent got a bit better at English, and then get it again, and that's not really feasible even for people with deep pockets. But those are two slightly different issues. There's the question of whether deep RL will ever be practically useful, and the fusion thing just settles that; that's over, it's useful. The second question is whether large-scale pre-training can work in deep RL, and there are hints of it, and I think there's no reason to bet against it, so I bet that it's true. And then there's a third ingredient, which is whether you can actually use these agents to control robots, and OpenAI has shown you can do that. What they did has many caveats, and there are reasons to say they hyped it a bit too much, et cetera, but fundamentally it works. So if you put those three things together, I think the stage is set for a kind of step change in how well deep RL can work, with suitable simulators and suitable devices built specifically for it.
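The "generate your own training data" point can be seen even in the simplest possible RL setting. In this toy two-armed bandit sketch (all names illustrative), there is no fixed dataset: each round the agent acts under its current policy, and the reward it observes becomes fresh training data, so the data distribution shifts as the agent improves.

```python
import random

def train_bandit(true_means, episodes=2000, eps=0.1, seed=0):
    """Toy illustration of RL generating its own training data.

    A bandit agent estimates the value of each arm via epsilon-greedy
    exploration. Contrast with supervised learning: the 'dataset' here
    is produced by the agent's own (improving) behavior each round.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(true_means)
    pulls = [0] * len(true_means)
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the current best estimate,
        # occasionally explore a random arm.
        if rng.random() < eps:
            arm = rng.randrange(len(true_means))
        else:
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        # "Simulate the world": sample a noisy reward for the chosen arm.
        reward = true_means[arm] + rng.gauss(0, 0.1)
        pulls[arm] += 1
        # Incremental average: update the value estimate from fresh data.
        estimates[arm] += (reward - estimates[arm]) / pulls[arm]
    return estimates, pulls

estimates, pulls = train_bandit([0.2, 0.8])
```

As the agent improves, it pulls the better arm far more often, so the experience it trains on late in the run looks nothing like the experience it started with; in games like Go or StarCraft that same shift is why the data must be continually regenerated at great computational cost.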
I mean, maybe I'm naive, but this is sounding a fair bit like what Tesla is talking about. They've just made this Dojo supercomputer system, which is specifically designed not just for the various machine learning work they want to do but, and this is a big part of their description, for simulation. They want to take the real-world data they're getting from their fleet, and they've got a lot of data pouring in from there; and then a second, parallel thing going on, as far as data goes, is simulation, creating a lot of simulated data. My understanding is that they're designing their own supercomputer system to do that, the combination of those things and some others as well, I guess. And then you've got this humanoid robot. It sounds like they're putting the pieces together to take advantage of what you're talking about, or at least it fits. I don't know, maybe, maybe not.

Yeah, maybe. It just seems like jumping straight to the full humanoid robot; I don't see how that path connects in the short term. But their deep learning team has some of the best people at this kind of thing, and they have dedicated hardware teams and so on, so I'm not betting against them. Maybe I'm being a bit cowardly in my predictions, but I'd bet on what people like Pieter Abbeel are doing: robots that are not quite general-purpose like a Tesla robot, but also not extremely narrow like current mechanization in factories. That domain where you have a bit of a messy task in a factory and you need a robot for it: I think people will be creating not general-purpose but broader-than-current robotics using things like deep RL, and this, I think, is going to work. Even in and of itself, well before anything like the Tesla robot, which I think will take much longer, that kind of technology will already transform manufacturing. I can't see how that doesn't happen, and soon. In the next few years it will be academically proven, and then it's a question of moving it out into deployment and scaling it, and I don't know
how long that will take but i feel pretty sure the wheels are turning in many places preparing for that not least in china where this is a huge area of investment from the government so yeah you said sorting and visual reasoning and maybe in a limited you know not very broadly generalized environment but in an environment that's fuzzy one of the things that occurs to me that might be a good fit setting the economics aside but a good fit for the task would be farm robots you
know things like identifying distinguishing weeds from crops and identifying when fruit is ripe and should be picked and just rolling along and picking berries when they're ripe and right now obviously we rely on human labor and it certainly qualifies as drudgery and toil to do those sorts of tasks and there are a large number of people in the world not nearly as many obviously today in 2022 not nearly as many people today work on farms as a percentage of the human population as once did our farms are very productive compared to what they were a hundred years ago or more but it's still a large number worldwide and so that's a very large potential market if you could develop a robot that was cheap enough and durable enough i mean if you had a $10,000 robot that didn't require a huge amount of maintenance to keep it running and could run for i don't know five years or something like that and you could power it with electricity from cheap solar power that would start to be very attractive compared to human farm labor especially if it was effective in other words assuming it actually worked reasonably well and so we may be surprised at where i mean it's easy to think okay well it's gonna be in an amazon warehouse and it probably will and it's easy to think it could be in some other sort of perhaps more obvious applications but we may be surprised at where the good fits are for these things and i think farms and agriculture will be a surprisingly good fit you don't need to be too smart but you have to be smart enough and it sounds like this early stage would be a good fit for that and anyway it makes me think of that sorry i
should say one more thing which is i'm sorry to interrupt you but just one more thing one of the things i'm always on the lookout for of course as you might imagine is where is there a linkage between the three sectors of disruption and the disruptions that we're already imagining you know the food sector the energy sector transport and this one the one we're talking about now automation so i'm kind of thinking and looking for those like where would there
be a connection between this labor disruption fueled by automation and food and energy and transportation and so forth so that is probably the main reason why i thought of that but then it seems like it's a good fit so yeah here's another one in the crossover that you might not have considered so i think soon we will be genetically modifying food plants to basically give logging data so that a wheat plant will be emitting various kinds of signals which indicate its state of health and how its growth is going and if there's any particular problems you know plants signal each other if there's an infestation or they have some disease but i think quite soon those signals will be available to us wow and we'll be capturing them by things like drones that fly over fields and then processing them so that basically plants will be able to say come and spray some fertilizer on me and then the robot will come and do it that's holy that's amazing yeah you're right that one was not on my radar but it certainly is now holy crap okay but yeah actually the australian government's quite enthusiastic about that i mean not surprisingly we have a large agricultural sector and that's something they really want the big brains in australia to be working on helping to improve the productivity of the farm sector by exactly the kinds of things you're talking about yeah it seems to me like there's just a lot of latent stuff to be done where the business models need to be worked out and how do you license the ip and who repairs the robots and where do you build them and all that stuff needs to be sorted out but like there's a lot of stuff you can do right now well i don't mean to sound either flippant or intellectually lazy about it but in our experience looking at both the current big disruptions we're seeing starting to
unfold now and then the ones we see throughout history those details you know logistics and so on they're real and they are legitimate concerns but they tend to you know in a big noisy global competitive economy those challenges are the sorts of things that tend to get solved as a consequence of just good old-fashioned market forces and government incentives and social and cultural pressure you know when you put all of that stuff together that is
sort of a where there's a will there's a way kind of thing once the key new technologies do emerge i hesitate to call them breakthroughs but once the key technologies emerge and maybe as i mentioned convergence before the right combination of a few technologies comes together it sparks a fundamentally new capability and then that opens up this new space of possibilities of what can be done and what can be achieved the things that are necessary to support that growth and that shift those things really do tend to work themselves out unless there's a real fundamental constraint that just bottlenecks and sidelines the whole thing and that does happen it's not very often but it has happened in a few instances but for the most part we do manage to get those challenges solved things like logistics and supporting supply chains and value chains there's always hiccups there's ups and downs and we see that in solar for example and whatnot but in general those things tend to get worked out so i don't want to be dismissive about it and we do pay attention to it but having said that i wouldn't bet against this technology on the basis of feasibility in the sense of putting the infrastructure in place building the supporting markets like maintenance and installation and upkeep and waste management associated with the new technology all of those things do seem to quite readily emerge and there's a bit of a lag and that's probably why it takes some time for the disruptions to roll out they don't happen instantly but unless you can identify something that seems like a fundamental obstacle that stuff does
tend to get sorted out and not surprisingly quickly i mean it's amazing what the availability of capital and the motivation of people to multiply it and profit will do to really get people busy as beavers trying to sort this stuff out so anyway it's fascinating that plant idea is gonna be rattling around in my brain now i'm sure this idea of genetically modifying plants to actually signal various
you know data about their status that's unbelievable yeah holy moly very cool dan you said you had a hard stop i noticed it's been an hour do you need to take off oh that actually got cancelled so no i don't need to okay so no problem well i didn't mean to interrupt you go ahead please and finish your thought there yeah i guess you could kind of think about the way in which we're symbiotic on machine intelligence currently right we have various interfaces and we outsource a lot of routine calculations to machines you could imagine i mean if you think about the agricultural economy as kind of like a gene-based manufacturing system right that's out there processing inputs and doing stuff for us in a very different way to our metal-based manufacturing system but the gene-based manufacturing system doesn't really have access to external computation in the way that the metal-based manufacturing system does but that seems like something that will change pretty soon in various ways i don't know exactly what that looks like but thinking about manufacturing and automation i think there's a form of automation that we're probably not thinking about as much as we should which is a sort of biological automation where maybe we used to use animals for a lot more mechanical tasks like using horses and donkeys to pull stuff and animals used to be a big part of our manufacturing system and now maybe they're not so much but as you've pointed out with your precision fermentation story animals or bacteria or whatever will re-enter the manufacturing system in a way i think that's true more generally outside of food right whether through synthetic biology or just understanding existing gene-based systems better we're likely to fit animals or i'm not sure it's appropriate to really call them animals but let's say gene-based machines we're likely to fit
those into a lot of other tasks that we don't currently think of whether it's cleaning up mining sites or maybe somewhere on the path towards the kind of really far out nano machine that does anything kind of future but there's probably lots of mundane things that in the short term will get done using progress in that area yep i think that makes very good sense and it may well be i mean i think there's some reason to believe that eventually
you know those sort of general purpose nano machines are physically possible and will appear eventually the whole eric drexler vision of atomic level manufacturing and proper science fiction style nanobots that kind of thing i have not really seen the needle moving there i could be wrong but i have not seen any signs of any real progress there recently certainly in the last five years or something like that but where we have seen truly spectacular breakneck-paced progress is in biotech so how many years are we out now from the mapping of the human genome about 20 and i suppose then plus with all the computation and all the rest of it the convergence again of multiple streams of technology it seems like we're now really in this space where biotech just has the potential to make some quite extraordinary leaps and bounds improvements in a number of different domains simultaneously all of which will have the potential to amplify and cross-fertilize one another so i think what you're suggesting seems to be very plausible yeah the problem right now is something like the input output doesn't work right so i mean there's an amazing physical computer inside a cell that makes proteins and does all sorts of things we kind of vaguely understand what it's doing but we don't know how to program it or even read from it i mean there's no way for it to really send output to our computers the way we do it currently like with pcr tests and such things is we can measure the expression of various genes by chopping up the cell and matching little bits of mrna with some other little bits attached to our machines and counting them and that's
how we tell what's going on inside cells by destroying them and doing that i guess there's other means i mean i'm really pretty ignorant in this area but i don't think we have a very robust way of even reading out what the state of a cell is in that sense so that it can send signals out maybe optogenetics is a kind of exception there and we don't have a good way of providing input either maybe mrna vaccines are kind of the breakthrough there but that seems like you know maybe that's very hard but it's not
there's no fundamental obstacle to that right and if you can have robust in and out channels that are real time well then why should controlling a cell be a hundred thousand times harder than controlling a tokamak right and if you can do the latter with deep rl well why shouldn't you be able to do the former with sufficient effort and resources so i don't see why that's impossible and i don't know how long it's going to take but it seems pretty plausible to me right the pieces are there you just need to improve the biology a bit and maybe you don't even need i mean the deep rl system that controls the fusion reactor is actually pretty old-school deep rl it's like LSTMs i mean they're very simple networks because they have to run really fast right i haven't read the paper but my superficial impression is the innovation there is like with a lot of what deepmind does very careful engineering and thinking about the problem and domain expertise it's not like some genius step that nobody's ever seen before was necessary so yeah why not control cells that way i think that's coming certainly it's just a matter of time and once you have that well anything you can get a cell to do you can do and that's quite a bit yeah astonishing i wonder how that scales to tissues and it's interesting because that strikes me as being quite a fundamental you know a proper paradigm sort of shift in things frankly i had not really even considered that what you were talking about was possible but i think it makes very good sense now you explain it my impression had always been that synthetic biology would have to basically reprogram a cell in
advance you know either genetically or epigenetically or both reprogram it in advance and then produce one or more cells depending on what you needed but your programming and your interface would be a single time in advance you have to specify everything in advance you have to basically install some complex programming to get the cell to behave a certain way in advance and then you do it one time and then you set the cell loose and it
does its thing it never occurred to me that a completely different way of doing it would be to interface in real time non-destructively with a cell and then just instruct its machinery to do things on a sort of as-needed basis and one could imagine sort of very generic you know not blank but just very generic cells that you then plug into an interface and get them to do whatever thing you need done at that time as opposed to having to program thousands of different kinds in advance for different tasks i'm trying to think of a useful metaphor here i'm trying to figure out how to make a lego metaphor out of it and not really succeeding but anyway you could think about it as an api right yeah yeah of course an api for the cells where you can abstract i mean i think the point is and this is maybe the big change that needs to happen in biology it seems to me is i think people thought of biology as being like physics where you need to understand the basic atoms and then you build up your understanding in order to do things but that seems kind of hopeless i don't know like these networks are fundamentally complex and they're like holographic the information isn't in any particular part it's not actually subject to the reductionist approach maybe but that doesn't mean you can't control it as long as you have rapid feedback and learning right so maybe we never understand how the actual gene regulatory networks inside cells are working together in this very complicated way to produce a given result maybe we don't need to we just need to learn how to build an api where you send in a certain signal and a certain thing you want happens maybe you don't care what happens inside that's a difficult pill to
swallow i think for the reductionist view on science and traditional computer science but people in deep learning have thrown that out the door from the very beginning right they don't care and that's a thing that really bugs a lot of people right that we don't know how large neural networks work we have no idea probably what this network that controls the fusion reactor is doing clearly it's figured something out we don't know do we care i mean as long as it works and we you
know constrain it so it doesn't blow things up even if it fails who cares right maybe it's okay maybe that fusion system is just fundamentally so complex that that's the only way to control it and if that's true then well you could just sit there being sad about it or you could just be happy you can control the damn thing and maybe that's where biology ends up i don't know yeah you're right that the black box element of success in machine learning does ruffle feathers right i mean it does run a little bit contrary to what science has been organized around for the last 400 years which is find some deep level of simplifying explanation and principle that then unlocks power and control through that conceptual insight that something may appear complex but when you unlock its secret simplicity of operating principles then you can control it that's been a successful marriage between science and technology for centuries but if you have something that is what's the right term computationally irreducible or just irreducibly complex then maybe it's hopeless maybe for biology and social and economic stuff it's fundamentally hopeless to try to think you're going to unlock basic principles i mean i have to say sorry this is a number of years ago now but when i was in grad school doing my econ courses and i looked at econometrics content like textbooks and papers a lot of that just seemed like this desperate hope that maybe we would be able to formulate something useful to say about economies with a few simple equations and most of it just seemed complete rubbish even if there was a grain of truth in it it was just going to be completely lost in the
noise of a real system and yet it may be that you could have a machine learning approach that ultimately ends up being a black box but that really works and it's funny my impression is that wall street pivoted completely away from academia and econometric analysis to try to really understand what was going on in markets and instead just brute forced its way to successful algorithmic trading
you know with machine learning basically and that seems like yet another example of where like what you're saying if it works who cares right one might imagine maybe it is possible to ultimately unlock some deeper level of explanation unless it's excluded in principle by some fundamental irreducibility of complexity or something like that maybe some super intelligent agi will actually be able to properly understand these things in the way we find satisfying really quote unquote understand them but one can imagine that we could make a staggering amount of progress in a way that's fundamentally opaque and we may just have to come to grips with that you know yeah i think there's a kind of machine learning point here which is worth appreciating that i sort of notice also in string theory so string theory is kind of stuck for various reasons but one of them is that if you have a model there can be many parameters that don't directly affect the predictions so those parameters could have many values and give the same predictions right so a neural network is like that your brain is like that right you could have exactly the same behavior by tweaking certain parameters in your brain or in a neural network which only appear as a product let's say like x times y so if you doubled x and halved y you'd have the same outputs for all conceivable inputs because x times y is the same so that means that the individual values of x and y don't have any meaning it's pointless to try and explain them because there's no explaining them because they don't have any definite values right so that's what we mean when we say a neural network is singular or degenerate and many systems are like that your
brain is like that so that means that actually there is no predicting those values all you can predict is certain sets of possible values that correspond to the truth and physics is unfortunately like that too where if string theory were true you still couldn't work out the values of the parameters in it because there are many combinations that would produce the same observable universe so then in terms of explanation well either there's a version of the model where every parameter
corresponds to something observable so it has a definite value you just can't work it out maybe but in principle it's definite or maybe the universe just has these hidden parameters that contribute in some way to explain what we see but there's no predicting them and no explaining them well they're not quite the same thing i mean maybe you can still understand the underlying structure even if you couldn't put values to the parameters but maybe that's the kind of ultimate limit to explainability right if the things you observe have inner workings that are inaccessible then there are many possible models that fit what you observe and there's no getting beyond that maybe so i could easily imagine that i mean in evolution nature has evolved certain kinds of mechanisms say at the gene level that work and perform some function but the content of that mechanism is irrelevant it's just some accident that it arrived at after you know 600 million years of trying some particular thing worked to do a particular job maybe that job is like creating a repeating pattern you know like zero one zero one zero one and at some point nature hit upon some trick to do that but the content of the trick you're wasting your time trying to understand that right that's a level of analysis at which it's meaningless to dig in because you could spend an entire career understanding every detail of how that's implemented but nature doesn't even care right history could have been completely different where the mechanism was completely different and the entire observable history of life on earth would have been exactly the same so then there is no meaning to the nature of that system right it's a distinction without a difference right the information you
gain by knowing what's going on there is just useless it literally doesn't help you predict anything or understand anything it's just an accident and how can you tell which details in biology are of that form and which are actually distinctions with a difference well the approach with machine learning might be something like let the learning algorithm do in a sense what nature did right it fits to the thing that's just accidental and then you worry about dealing with the higher level thing which actually makes
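The "x times y" degeneracy discussed above can be made concrete with a tiny worked example (an editor's illustration, not from the seminar): if a model depends on its parameters only through their product, two different parameter settings with the same product produce identical outputs on every input, so no amount of observing behaviour can distinguish them.

```python
# Worked example of a "singular" / degenerate parameterization: the model
# depends on (x, y) only through the product x * y, so (2, 3) and (4, 1.5)
# are behaviourally identical -- the individual values of x and y cannot
# be recovered from any observation of the model's outputs.

def model(inp, x, y):
    return (x * y) * inp  # parameters enter only via their product

inputs = [float(t) for t in range(-5, 6)]
outputs_a = [model(t, 2.0, 3.0) for t in inputs]   # x * y == 6
outputs_b = [model(t, 4.0, 1.5) for t in inputs]   # x * y == 6, different (x, y)

identical = outputs_a == outputs_b  # a distinction without a difference
```

This is exactly the situation the transcript calls "singular or degenerate": only the set of parameter pairs with x·y = 6 is determined by the data, never the individual values of x and y.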