WARNING: Summaries are generated by a large language model and may be inaccurate. We suggest that you use the synopsis, short and long summaries only as a loose guide to the topics discussed. The model may attribute to the speaker or other participants views that they do not in fact hold. It may also attribute to the speaker views expressed by other participants, or vice versa. The raw transcript (see the bottom of the page) is likely to be a more accurate representation of the seminar content, except for errors at the level of individual words during transcription.

Synopsis


AI technology is being used to develop creative tools for idea generation and conversation, but it also raises complex ethical issues around privacy and security. Ray Kurzweil and Stephen Wolfram have proposed creating a simulacrum of a deceased individual, while chat bots are being used to facilitate idea generation and articulation. AI has the potential to create powerful models of people, as well as provide easier experiments and a better understanding of the inner workings of psychology. Automobiles have changed how we perceive public and personal space, and technology is increasingly accessible, raising concerns around thought control.

Short Summary


AI services are being integrated into metayuni, but there is potential for disruption and a need for privacy. Users can view and edit settings, but cannot currently choose whether or not the AI agent hears them. There is a need for more API development to give users more control. AI can be given access to messages sent in a text chat, however this is an opt-in feature and there may be trade-offs to consider when interacting with an AI. Technical measures can be taken to protect user data, but there are limits to what can be done.
AI technology has the potential to allow ideas to be transferred from one person to another in a variety of ways, such as conversation, writing, recording, and making videos. It can also be used to package ideas in an interactive way, allowing users to ask questions and get creative responses. Technology is progressing to the point where it is possible to interact with ideas more deeply and quickly than before, with advantages such as reducing time zones and bandwidth limits. However, there are also potential problems that need to be addressed, such as a Continuum from literal text search to extrapolation based on other information which could lead to a static recording of a person's thoughts.
Ray Kurzweil and Stephen Wolfram have proposed the idea of creating a semi-static or semi-dynamic copy of certain ideas, allowing people to experience someone's thoughts through text. Technology is being explored to create a simulacrum of a deceased individual, as well as allow interaction with a part of a person. This raises complex ethical issues around privacy and security, such as how to draw boundaries and what obligations we have when interacting with people on mind-altering substances. These ideas are being taken seriously by various academics and could be a space to explore further literature and ethical considerations.
Technology has the potential to create simulacra which could accurately represent a person's ideas and views. This could be used to both misrepresent and accurately represent a person's views, allowing them to more efficiently respond to any misrepresentations. GPT3 can generate automated summaries of seminars, but it can be inaccurate, misunderstanding what is going on. There is potential for an arms race between using simulacra to defend against people putting words in someone's mouth and using GPT3 to generate summaries, but some are not optimistic that this would help.
AI technology can help authors communicate their ideas more effectively, but there is a risk of AI being trained inaccurately or influenced to take a different path than the creator's wishes. Nietzsche's writings and ideas were edited and published in a twisted way after his death, showing that beliefs can change over time. Writing and speaking are ways of thinking out loud and can help our thinking evolve, while AI can be used as a sounding board and catalyst for new thinking. However, caution should be taken when using AI as it could be manipulated.
Chat bots are being used to facilitate idea generation and articulation. They should be treated as independent, artificially intelligent entities, rather than simply as extensions of the user. The conversation between Adam and the doctor demonstrated the potential of this approach, with the bot being able to surface related ideas, content and stories. This could be used to simulate the experience of making small talk at a cocktail party, with the bot tuned to search for anecdotes at a given distance. This could be a useful tool for those wishing to generate and explore ideas.
AI is increasingly being developed with adjustable personalities, such as humour and sarcasm. To create these personalities, we must understand what makes them work. Human psychology has various models and measures of personality, such as the Big Five personality traits, which are validated through survey methodology. To fine-tune AI personalities, parameters must be set to get them thinking and acting a certain way, moving us out of conjecture and theorising and into experimental confirmation. An example of this is the U.S ambassador who had a personality trait of bringing in maximally distant yet still tenuously related anecdotes, which could be modelled and replicated with technology.
AI systems may be effective models of psychology, providing easier experiments and a better understanding of the inner workings. However, psychology is difficult to study due to complexity, and AI systems may be similarly complex. Machine and human psychology have similarities and differences, and future AI systems may be similar or different in authentic ways. It is possible that differences between AI and human psychology will remain, creating a new category of intellectual units. AI may possess a sense of authenticity, or it may be very different from human psychology.
Psychology is important for understanding how people interact and for AI safety research. Privacy is a key factor when it comes to AI constructs, as they can be used to gain an advantage over people without permission. AI has the potential to create powerful models of people, raising questions about ethics and legalities, as well as the need for a privacy policy. AI could also be used to create constructs that reflect a person's books or tweets, but it must be kept separate from interacting with a person's public persona.
Advertising is seen as a manipulative and malevolent force, with Facebook traders using data to create a super persuasive weapon. Privacy is not the main concern with Facebook or NSA surveillance, but rather the extraction of correlations from many people. Representing individuals faithfully in constructs is dangerous, and people should contribute to culture by sharing ideas and conversations. Going forward, culture will also live in the weights of neural networks and embeddings, while advertising is almost impossible to avoid in the US and is driven to maximize revenue. Mass surveillance has been used to search for needles in haystacks, but the argument is not strong enough.
Automobiles are a disruptive technology that changed how individuals perceive personal space and public space. They are seen as an extension of self and a status symbol, and so large portions of public space had to be set aside to accommodate them. This renegotiation of boundaries is seen in other areas such as the internet, and people welcomed the technology due to its massive improvement over prior modes of transportation. Automobiles also have a direct relation to privacy, as people have certain rights when inside their vehicles, yet there are still anti-automobile movements today.
Technology is becoming more accessible and integrated into everyday life, raising concerns around regulating people's thoughts and potential for thought control. It could be possible to construct a cybernetic version of a person, which would require redefining boundaries around people. An alternative is that the unit of the intellectual workforce is the person, which could be discussed in a future seminar. People can also do things in vehicles that they wouldn't do in public, such as in Horseback, as cars are seen as an extension of one's home. Long-term, boundaries between our own thinking and enhanced thinking with tools may breakdown, allowing us to integrate artificial intelligence capabilities into our own mind and body.

Long Summary


AI services are being integrated into the experience at metayuni, but there is potential for disruption and the need to redraw boundaries around privacy. This panel allows users to view and edit settings, however they cannot currently choose whether or not the AI agent hears them. There is a need to be aware of what is happening and feel in control, as AI progresses and becomes pervasive. The panel provides a description of the situation and allows users to edit settings, but more API development is needed to give users more control.
AI can be given access to messages sent in a text chat if selected. This includes the user's username if the 'remember' column is also selected. If both columns are not selected, the AI will not be aware of the user's identity. This is an opt-in feature and there may be some UI to make this clear. There may be some trade-offs to consider when choosing to interact with an AI, as its memory is more permanent and scalable than a human's. Technical measures can be taken to protect user data, but there are limits to what can be done.
AI technology has the potential to allow ideas to be transferred from one person to another in a variety of ways. Conversation is one of the most common, but writing a book, recording a podcast, making a video, or writing a play are all possible. AI technology can also be used to package these ideas in a more interactive form, allowing users to ask questions and get creative responses. The downsides of this technology need to be addressed, but the advantages it offers should not be overlooked.
Technology is progressing to the point where it is possible to interact with ideas more deeply and quickly than ever before. This will come with many advantages, such as reducing the friction of time zones and bandwidth limits, but also many problems that need to be addressed. There is a Continuum from literal text search to extrapolation based on other information, and this could eventually lead to a kind of intelligent scan of a person's beliefs. This would allow people to create something like a static recording of their thoughts in a way that hasn't existed before.
Two individuals, Ray Kurzweil and Stephen Wolfram, have been explicit in the idea of creating a semi-static or semi-dynamic copy of certain ideas. This concept of extricating a person's mind and turning it into an interactive construct is an old idea seen in science fiction, such as in I Robot and Neuromancer. Kurzweil and Wolfram have made a point of documenting their work and output with the intention of being able to reconstruct them or some version of their personality in the future. This idea of being able to experience someone's thoughts through text has been around for a long time.
Technology is being explored that could create a simulacrum of a deceased individual, as well as allow interaction with a part of a person. This raises complex ethical issues around privacy and security, such as how to draw boundaries and what obligations we have when interacting with people on mind-altering substances. These ideas are being taken seriously by various academics, and it could be a space to explore further literature and ethical considerations.
Technology has the potential to create simulacra that could accurately represent a person's ideas, thoughts, and views. This could be used to misrepresent a person's views and put words in their mouth, but it could also be used positively to ensure that no one could put words in somebody else's mouth. It would also allow a person to have a construct that could scale and be accessible to hundreds or thousands of people. This technology could give people the ability to respond and correct any misrepresentations of their views in a more efficient way.
GPT3 can generate automated summaries of seminars, which can be as good as what a human would write in some cases. However, in other cases it can be inaccurate, misunderstanding what is going on. These summaries are generated by looking through the transcript for ums and R's. It has been suggested that a sanctioned, officially sanctioned simulacrum of a famous person could be used to defend against people putting words in their mouth or misinterpreting them. There is potential for an arms race between these two functions. However, some are not optimistic that this would help with defending views, as it could be wrong in important ways.
AI technology can enable authors to communicate their ideas to the public in a more efficient way. However, there is a risk that the trained AI and the public interface may differ from the author's actual beliefs, or that the author may change their beliefs after the AI is released. This could lead to a disconnect between the author's original ideas and the public's perception of them.
Nietzsche's writings and ideas were edited and published in a twisted way by his sister after his death to support her ultra-nationalist German ideology. This raises the possibility of AI being trained inaccurately or influenced to take a path different from the creator's wishes. Notable examples of intellectuals changing their beliefs, such as Wittgenstein, show that people's beliefs can change over time. It is important to consider how AI can evolve and change its beliefs, as well as how human beings develop their ideas in conversation with others.
Writing and speaking are ways of thinking out loud, allowing us to reflect on our thoughts in a way that is qualitatively different to just keeping them in our minds. This process of expression helps our thinking evolve and develop. By writing things down and reading back over them, we can refine, extend, elaborate and expand our own thoughts. Artificial intelligence can also be used as a sounding board and catalyst for new thinking, by allowing additional thought to come in. This could be manipulated, however, and so caution should be taken.
Chat Bots are increasingly being used to bounce ideas off of, practice articulating and re-articulating them, as this process often leads to new ideas emerging. It is important to consider these bots as independent, artificially intelligent entities, rather than shadows of the user, as this avoids potential misrepresentation. The idea of a bot being like a book, with a life independent of its author, can be useful for creating artifacts that contain thoughts and books that reflect aspects of the user. Doctor's reaction to the reference posted in chat showed the potential of this idea.
Adam was chatting with a doctor and the conversation was multi-channel, engaging and allowed for the digestion of references. It was suggested that the bots could surface related ideas, content and stories in a similar way to someone at a cocktail party making small talk. It was thought that the bot's behavior could be tuned to search for anecdotes at a given distance, either closely related or recent events, and that this could be useful.
The speaker recalls small talk conversations with people who had a personality trait or shtick of bringing in maximally distant yet still tenuously related anecdotes. They cite the example of a U.S ambassador with this trait. It is suggested that technology can be used to fine-tune a model to do this, by providing thousands of examples of it. There is a brief discussion on how this relates to the construction of artificial personalities in chat bots.
AI is increasingly being developed with personalities, allowing for adjustable traits such as humour and sarcasm. To create these personalities, we must learn what determines them in order to make them work. Human psychology has various models and measures of personality, such as the Big Five personality traits, which are validated through survey methodology. When creating AI personalities, the parameters must be set to get them thinking and acting a certain way. This will move us out of conjecture and theorising and into experimental confirmation.
AI systems may be effective models of psychology, allowing for easier experiments and a better understanding of the inner workings of the system. However, psychology is difficult to study due to the complexity of the interactions between the low-level chemical processes in the brain and the behaviors that emerge. AI systems may be similarly complex, making it just as difficult to gain insight into their inner workings as it is with humans.
Machine psychology and human psychology have similarities and differences. Machines can process images differently from humans, leading to different outcomes. Future AI systems may be similar to humans in some ways, but radically different in others. AI systems may end up forming a different category than humans, and may behave differently in authentic ways. It is possible that differences between AI and human psychology will remain, creating a new category of intellectual units.
Humans possess a sense of an authentic self, and can also layer on top of their behavior affectations and roles to present themselves differently in different social contexts. However, some people are unmoored and lack consistency of personality or behavior, which can create problems. It is interesting to consider how AI will possess a sense of authenticity, or if it will be very different to what is experienced in human psychology.
Psychology is important in understanding how people interact and maintain boundaries for safety. It is also useful in AI safety research, as understanding how AI systems think and evolve can help prevent bad outcomes. Privacy is an important factor when it comes to AI constructs, as a construct that mirrors a person's beliefs can give someone power over them if it is accessible without permission. Psychology is useful in modeling people and predicting behaviour, which can be used to gain an advantage against someone.
AI has the potential to create powerful models of people, which could be used against them. This raises questions about the future of AI and privacy, such as whether it is ethical or legal to create 'shadows' of people without their permission. It is likely that AI infrastructure will be used to create virtual worlds, and so a privacy policy will be needed to protect people's data. AI could also be used to create constructs that reflect a person's books or tweets, but it is important to draw the line between this and interacting with a person's public persona.
Advertising can serve useful functions and is part of a system that does a lot of good, but it can also be seen as fundamentally malignant and malevolent. Facebook traders are revealing data that could be used to create a super persuasive weapon, which could be seen as hacking people. Privacy is not the main issue with Facebook or NSA surveillance, it is more about extracting correlations from many people. Representing individuals faithfully in constructs is dangerous. People should contribute to culture by sharing ideas and conversations. Going forward, culture will also live in the weights of neural networks and embeddings.
Advertising is a profoundly evil thing, as it manipulates people into buying and consuming things they didn't know they wanted or needed. In the United States, it is almost impossible to avoid advertising, and Facebook is driven to maximize revenue from these ads. Even those not on the platform are vulnerable to manipulation due to the data set created by users. Mass surveillance has been used to search for needles in haystacks in order to catch those planning high damage events, such as nuclear terrorism, however the argument is not strong enough.
Automobiles changed how individuals perceive personal space and public space, as they are viewed as an extension of self and a status symbol. This redrew the boundaries between individuals and the collective, as large portions of public space had to be set aside to allow automobiles to operate safely. This renegotiation of boundaries is seen in other areas, such as the internet, and increases in intensity with technological advances.
Automobiles are an example of a disruptive technology that changed how boundaries were drawn between public and private spaces. People welcomed the technology due to its massive improvement over prior modes of transportation, and many laws and constraints were changed to accommodate it. Automobiles also have a direct relation to privacy, as people have certain rights when inside their vehicles that cannot be violated without justifiable reason. Despite this, there are still anti-automobile movements today, a century after the technology was first introduced.
People can do things in vehicles that they wouldn't do in public, such as in Horseback. In America, cars are seen as an extension of one's home. There is a discussion around the idea of needing permission to create constructs of others due to the risks involved, such as taking advantage of the information and knowledge to manipulate or harm another person. In the long term, boundaries between our own thinking and enhanced thinking with tools may breakdown, allowing us to integrate artificial intelligence capabilities into our own mind and body. This could lead to the ability to create constructs with a high degree of fidelity simply by thinking about it.
Technology is developing rapidly and becoming more accessible, cheaper and more integrated into everyday life. This raises the concern of regulating what people think, and the potential for thought control and policing. It could be possible to construct a cybernetic version of a person and the boundaries around people may need to be redrawn to include those parts of them that are cybernetically enhanced. An alternative possibility is that the unit of the intellectual workforce is the person, and this could be discussed in a future seminar.

Raw Transcript


okay so I want to start today by if you look up the top of the screen you should see a new menu item AI um and if you click on that you'll see hopefully a list of AI names shawl U twice Ginger and doctor and you should see a green icon next to doctor on the leftmost icon so that icon which is a microphone with a piece of paper is an indication that that AI can hear you via the transcription service so it's the voice chat whatever we say through voice chat is being transcribed and is visible to this agent so this is the beginnings of being more transparent about how the AI services are integrated into the experience here at metayuni this panel seems to let you do more you can click on things like read and remember and those don't actually do anything right now so this is a work in progress so I want to describe what the idea around this is and have this lead into a a more General discussion around not only privacy but the way Adam put it uh I I liked which is technological disruption often causes us to have to or wish to redraw boundaries around individuals institutions ideas like privacy uh of course who gets to redraw the boundaries and who gets to choose whether to do it or not and who more or less gets forced to some of the most contentious issues around the introduction of new technologies and I want to be very careful about how this is done here because I see many potential upsides but they could be easily derailed or go badly if people aren't very aware of what's happening and feel in control of what's Happening so this is surely one of the most important issues that will arise as AI progresses and steadily becomes pervasive there's no stopping that um so yeah hopefully we can get it right here and I'd like your help in doing that so um I'll just talk you through what you can see on this privacy settings panel and the plan here and then um you can comment on that or the broader issues so right now you can't choose whether or not the agent hears you that's technically a bit difficult so if you're in a seminar and it's flagged as a seminar with the agents are listening then that's just the way it is you can you cannot speak or you can leave uh once it once there's more of an API around the voice chat I'll try and do more about that but for now that First Column there you can't edit it it's just a description of the State of Affairs the other two columns are up to you to edit yeah that's right that's another thing I'll comment on so Roblox and also open
AI out of our control and as long as we're sending them information they could be doing various things with that you can pursue their privacy policies and decide to what degree you believe them um so that's another Factor important factor that's right so the the read column here uh the way this will work is that if it's selected then the AI can see the messages that you type in the chat if that's not selected then it just it's like you won't exist from the point of view of the text chat the remember column is whether or not the agent forms memories of those interactions with you or the things you type so that includes your username you might choose to do that because in future you know you come back in and it refers to a conversation you had and the only way really knows about that is by referring to your username which it looks up in its memory or rather it looks up the content of the conversations so you may choose to do that or you may choose not to these will both be opt-in columns which will be a bit awkward potentially um I guess you might type to the AI and just it doesn't talk to you back you might wonder why so there needs to be some UI around that but I think it's better for both of these columns to be opt in um yeah so that's the idea for the moment for how it will work and if you don't tick either of those then uh the hearing doesn't refer to usernames so the agent just hears the message it gets is that somebody said blah blah blah okay um yeah go ahead technical question um so with the memory thing um is it just that basically the messages are um stripped from the log I mean the message is obviously have an impact on the conversation and that's like a side Channel through which memory could correct go through like let's say you say to the AI repeat after me blah blah blah and then it says that your messages your message is removed it's message days yeah that's correct I I think this that's one of the things I want to discuss I mean there's there's some kind of uh reasonable I mean there are technical things I can do but beyond a certain point it's kind of there are things you give up by choosing to participate in a conversation right it's not a realistic expectation at least with people right so and maybe the AI should be treated differently to people because their memory is different and more permanent and and potentially scalable in a way that human memories aren't so these are all things that are relevant so I don't know actually where my attitude sits but it's
it's also the AI memory is also more controllable and editable than a human is though human beings can't be forced to forget things but AI certainly could in principle yeah so I think there's once you once you interact in a conversation there are effects those effects happen on other people and their ideas and through those changes your ideas and your speech gets passed along and pass some degree of separation it's no longer reasonable for you to expect to control that or to be able to recall that uh so I don't know where that line is with the AIS but the the simple initial stage is simply that they won't record the direct things you say yeah yeah that's all clear and I agree um just wanted to point out that it's like a little more complicated than absolutely yeah just a button whether or not you remembered and I think there is there is the potential for misunderstanding right I mean people could especially people who are technically literate sort of get what's difficult and what's not and may not have unrealistic expectations but uh if you're not deeply into this Tech you may see that remember thing and think that if it's off then basically nothing you say no aspect of it will be recalled by the system and that might be an unrealistic expectation so I think those kinds of things maybe we can do something to spell them out probably people would read it so yeah there's a lot of communication work to do around these um um yeah let's continue I want to hear more of your thoughts about that but I want to put into the conversation also on the positive side of The Ledger the reason to do this in the first place right there are downsides I believe there are also upsides and the the goal is to capture some of the upside while avoiding as many as possible of the downsides um the advantages that I see here are something like the following so there are many channels for ideas to make their way from my mind to your mind one of them is conversation there are many others I could write a book you could read the book I could record a podcast you could hear the podcast I could make a video you could watch the video I could write a play you could sit and watch the play etc etc it seems pretty clear to me that quite soon and going forward perhaps a predominant way that my ideas will make it to you is by them being packaged in various forms of intelligence of varying degrees so a podcast will come with an interface that you can ask questions to and it may be more or less kind of creative or
stitching filling in the gaps using other information in order to give those answers it might be Continuum all the way from just like literal text search through to kind of quite broad extrapolation based on other things the speaker has said and so on and there will be many things along that Continuum we'll interact with and they will come with many problems but I think the advantages will be so profound that this will be something people will do a lot I think I can see maybe not with the current systems they're still quite Limited but as these things improve this will I think be something quite desirable and really increase the rate at which we interact with each other's ideas so I'm you know I have the advantage of atoms my friend right so I can kind of just email him anytime and be like hey here's a stupid thing what do you think about that um You probably feel shy to do that although probably I don't want to reply to your email um so there's some sense in which people will be much more willing to interact with ideas When there's less friction and interacting with people is great but comes with friction time zones for one bandwidth limits on people's attention for another so I can see a lot of potential for having this be a great way of deepening people's interaction with ideas but yeah the downsides need to be elaborated and thought through and mitigated Serbia I'll stop speaking now and invite comments on that well we could sorry go ahead I'm just gonna say that the spectrum that you outlined is interesting so I'm thinking there's like um having no contact with someone having some kind of like static contact with them which is like a recording of a lecture or a book that they've um presented it you just followed the the order that they've presented in then it's sort of like picking and choosing what you read um based on like being able to search for keywords or something like that and then there's having a conversation with them yourself where you you kind of get to pick and choose what to prompt out of them and then coming in between those last two levels is maybe something in between that hasn't existed before which is an intelligent kind of um copy of a person I mean it's in sci-fi a lot um some kind of uh some kind of scan of a person's beliefs in the same way that a book is like a uh um a kind of model of a certain part of a person's thinking um crystallized into into text maybe instead of like becoming a YouTuber people in the future will like create
these um well yeah a YouTube or an author people would create these um semi-static semi-dynamic copies of certain ideas and then publish them yeah it's interesting just to elaborate on that there are a number of examples in science fiction where versions of a person are part of their mind becomes is extricated and turned into some interactive construct I've seen that I'm trying to think of an of an example in Star Trek because we had fun trying to point out how many interesting ideas were anticipated and used within the storytelling of that of that you know fictional universe but I can't think of one that comes immediately to mind it's exactly that idea the one that's kind of stuck in my mind to say um there is a character like that in the movie I Robot it's about 20 years old now so it anticipated it reasonably well and um uh I could actually remember thinking well that doesn't sound very that doesn't seem intuitively very plausible to me when I saw it at the time and of course in retrospect it's uh it's quite interesting and I think there are classic examples much older in science fiction I am Neuromancer is a good good yes a good example I think even um if you go back to sort of the Golden Age Arthur C Clarke and Heinlein may have had some um some examples of this so it's interesting that it's it's the idea that you could take a a fraction of a person's thinking or mind and make that a construct with which other people might interact it's interesting that that's an old idea it does perhaps in itself the fact that that idea is old Mayland Credence to Matt's other points about books effectively being a an example of that um so maybe we sort of have a a tacit understanding that we're already experiencing this when we encounter text that others have laid down and then the the other uh thing I'll mention there is that I I know of two specific individuals uh who are who for a very long time have been explicit in this idea and incorporating it into their own lives and those two people are uh Ray Kurzweil and Stephen Wolfram and both of them have made quite a not a show of it but they they've they've been at least explicit to their various audiences explaining that they meticulously and frankly quite obsessively document all of their own work and output and thinking uh with the specific idea that and that at some point in the future that body of text with primarily text perhaps some other materials will be able to be used to reconstruct them um or some version of their personality
and Ray Kurzweil is also famous for hoping that he has enough documentation of his fathers to uh effectively resurrect a personality construct uh just to get some sort of simulacrum of his of his father who died a long time ago so these These are ideas that I think quite have been taken quite seriously and quite quite thoughtful people for a variety of different reasons from storytelling and entertainment to you know very quite really quite serious uh uh you know academic purposes um and so I I there may be a space here to explore it is maybe not completely Uncharted Territory there may be a space here to explore a literature to investigate and see what other thinking is out there about this if there are any other um folks who've developed some good ethical or moral or other practical uh ideas around what it would mean to be interacting with a version of somebody and just to Circle back and connect this to the the topic of privacy and security one could imagine at the complex ethical issues arising when other agents humans and other intelligent agents are able to interact with part of a person but not the entire authentic person and what what and so so when we're talking about needing to redraw boundaries that the fact that constructs can could exist they capture a part of your mind maybe they were might be authorized maybe they might be you know unauthorized then the we that would be quite a radical redrawing of boundaries if if our privacy were to extend to the things other people could do with information about us um as far as interacting with our like would that be considered a violation to to uh have a conversation with somebody without their permission it it calls to mind something like um uh what are the what are the moral and ethical obligations of how we interact with a person who is on um for example a mind-altering substance like if they're not themselves if they're not fully cognizant then we have different standards for how we interact with with a person if they if they're under the influence of an anesthetic and are not going to remember the interaction you know that that morally and ethically that changes our obligations to that person we can't expect that person to make certain decisions uh we would we would consider it a violation of privacy to induce a person to disclose things um it would be sort of under a form of duress it would be under a a a a form of vulnerability that we would not normally find um acceptable right and so this technology raises Shadows
of that those same concerns it's quite fascinating really yeah maybe you think through maybe let me put a slightly different analogy there so famous thinkers are often sort of their ideas uh sort of appropriated for other people's political purposes quite often right so that the shadow of that author's ideas is in the their texts nobody reads the books so very few people but somebody quite charismatic and can use the fame of an author in order to and twist their words a little bit in order to set up a movement uh that's that's something that I guess most of us don't think about because uh most of us aren't sufficiently famous to have that happen to us but what if that could happen to anybody essentially right so that uh the ideas and thoughts that you've put out there in the world I mean you might at first be pleased that people interact with them and then horrify it as they're misrepresented now currently the misrepresentation seems like it's further removed from you if somebody produces a simulacrum of you that seems quite authentic but misrepresents your actual view you could you could easily find yourself in a position of having to argue with your simulacrum about what your real view is I guess well it's it's funny a couple things come to mind there one is that uh [Music] we we I think we do sometimes have to face aspects of ourselves almost in dialogue maybe certain parts of psychotherapies involve elements of that uh so I don't think it's that far-fetched really to think that we might be interacting with parts of ourselves and not necessarily automatically have perfect concordance among in those interactions that's the first thing but just if flip things around maybe with a more positive spin on the same idea that you were talking about Dan uh one could imagine because these constructs could scale and be potentially be accessible we might also have a situation where uh thanks to this technology it would be very hard to put words in somebody else's mouth directly so for example today you can make a claim about the meaning or the intentions of another person very easily instantaneously you can broadcast that because we have the broadcast capability on social media and other media and you can you can put words in somebody's mouth and if enough people do that then it can overwhelm the capacity of the actual person to respond and correct those misrepresentations but if you could scale simulacra and have a bot a version of yourself that was accessible to hundreds thousands
millions of people simultaneously then it would be much more difficult to mischaracterize a person with the background knowledge that uh you you can go straight to a more legitimate a source of truth about this person and so a sanctioned officially sanctioned simulacrum of yourself if you were a famous person for example like you alluded to then if you were a celebrity you could have one of these running and then you could defend yourself against people putting words in your mouth or misinterpreting you because it could one could imagine it becoming just an expectation that well let's check with the let's check with Stephen pinker's bot and see if that's what he really meant about X Y or Z right and that might be a way in which to defend against uh this so and of course since they're we're identifying multiple aspects multiple sides uh then we could also potentially imagine an arms race between these two functions right yeah that sounds like um press course buses like um type of relations people dealer press release and actually I don't know if it's so like why can't people today come out and um make a statement about something if they think their views have been represented it's because it can be difficult to defend your views and also if you if you are specific you like open yourself up to that kind of criticism so I'm not so optimistic I guess that this would actually help in that kind of thing because um maybe people would be even more reluctant to release big access to their um their bot version because I think it would sort of be suddenly wrong in in important ways um than if they were sort of to handle it themselves and give a response hmm maybe maybe it depends on the class of ideas so maybe for some of your ideas uh you're open to this and for some others who aren't I've noticed this with I mean maybe it also there's variation in how good the Shadows are in representing various classes of ideas so we've been talking over the last week about these Auto summaries if summary is even the right word that gpt3 is generated of um seminars for some seminars it's I think on average it's uh probably not useful in some cases it's really quite good in some cases at least as good as what I would write maybe one out of five times it's like that maybe three out of five times it's just a bit weird and misunderstands what's going on the the process of generating those summaries just briefly is that it looks through the transcript which has all sorts of ums and R's and
so on in it it's quite hard to read it divides that into chunks of 2500 words roughly three thousand tokens it asks gpt3 to summarize those and that's what's the long summary on those web pages all those concatenated summaries and then it summarizes the summaries and then summarizes the summaries and gets in the end to a kind of in principle a summary of the whole talk um in principle that should continue to improve the quality of those summaries I've seen papers that claim that it's competitive with human summarization done properly with instruction fine-tuning and so on so done more in a more sophisticated way than what I'm doing I think it can be significantly better but the general point stands which is uh well do you trust the human summarizer to completely get your views correct at this point you would trust it more than a machine [Music] so that's still dead text on the page but maybe it has a shade of the out of control feeling of having a shadow running around talking to people because if you're communicating a lot I mean if you're recording many seminars per week you probably don't have the time to manually transcribe or manually check all the things you're saying uh but you might derive some benefit from having those transcripts available for people to run into with search or just to interact with your ideas and see what's happening there and there's a first stage of interacting further so I think there's probably many different like you can disaggregate this attempt to communicate with the world right and in some channels you probably want to be very careful right and other channels maybe you want to be less careful uh some maybe it's not a matter of um you have a a single shadow that interfaces on all the things you care about I want to raise something um else so it's but it's a little bit related to what we've been talking around um so you take the example of an author um who publishes one of these AIS um as a way to let people interface with their ideas publicly um and there's a risk that um there'll be sort of subtle differences between the trained AI the public interface and what the author like actually believes or something or maybe even the author will then go on to think further and reflect further and change their beliefs or something like that um and so that could be considered like a risk of this kind of technology and I just thought it's interesting that there's like a parallel with what can happen in um what you said before about um
Dan about um people sort of taking uh dead authors ideas and kind of twisting them to their own ends I think a famous example of this is with uh Nietzsche and uh he uh obviously a famous philosopher um he after his death a bunch of his writings and ideas were edited and published in sort of slightly Twisted Ways by his sister to support according to Wikipedia her Ultra nationalist German ideology um and it makes the point that Nietzsche was like explicitly against this philosophy but after he was dead his sister was able to kind of twist that um so that got me thinking if we can already um sort of twist books and stuff sure this is going to be a problem like um people might you know not only be able to not only will that be like subtle inaccuracies um in the trained AI maybe it's also possible to influence the AI and kind of twist their words or like train them or imagine if sort of your AI and you kind of split intellectually and you go down a path where um you go down One path and your AI is kind of driven down another path and becomes like a vision that you wouldn't endorse later um like many people with their younger selves well yeah that's that's true I mean um they're they're notable examples of uh intellectual kind of uh changes in people's beliefs like like that among famous uh sort of intellectuals um I think Wittgenstein for example um people sort of study as two separate philosophers almost early in the Wittgenstein because of some sort of dramatic changes in his ideas um and yeah it's I guess a obvious sort of phenomenon that people change their beliefs as they as they grow up uh it'd be interesting if the AIS were able to also change or if they were just sort of static um and then maybe would be able to sort of interpret them in in context as we do with books as kind of like a static representation um uh yeah I guess depending on whether or not they they can evolve themselves intellectually it would be uh that would have to color how we uh treat them but also I don't know intellectuals themselves can go off the rails in a sense and you know I don't know much about the specifics but you know there's a world in which nature lived a little bit longer and then became radicalized um and became an internationalist and I'm not sure that's like two different a world from the world where his ideas were able to be kind of posted in One Direction I think it's also worth as as context keeping in mind that that as human beings we develop our ideas in conversation with others and
perhaps not not universally but in many cases my understanding is that people develop their ideas in conversation with themselves and that speech and in particular writing uh the ex the the expression of one's thoughts in words is a way of thinking out loud but it isn't unidirectional it's not unilateral when we give an external uh embodiment or an external uh existence to our thoughts when we capture them symbolically that way we then are able to interact with them and reflect on them in a way that that that is different I think qualitatively certainly different then if we're if if those those if that thinking just remains knocking around inarticulated in our minds and what I mean by this is that our own thinking evolves and develops at least partly as a function of expression and so when you write something down when you're speaking with others you are when you're going through the process of formulating your own thoughts and then if you read back through them uh this can be certainly for academic life and intellectual life this could be a a key mechanism by which we refine extend elaborate expand our own thinking I know that I often go back I mean often but I I I certainly do go back and read work that I've written in the past reevaluated and sometimes new ideas emerge as a result of interacting with that and so one could imagine the simulacra we're talking about being being sounding boards and uh catalysts for new thinking even if they were deeply constrained to only be reflecting our own thinking and then of course if you push that boundary that you ease that constraint just a little bit and allow additional thought to to come in via the AI so let's shift away slightly from simulacrum to to um uh other artificially intelligent uh you know chat Bots I'm still thinking quite narrow AI here not even getting to General AI but one could imagine interacting with chat GPT for example and using it not only to help articulate your own thoughts but to to uh Foster new ideas and then and then knock those around just like we do amongst ourselves socially uh when when we as as multiple independent agents interact talk with one another knock ideas around and and hope new things emerge from that process so um I I can I can I mean you could I can imagine downsides to that too certainly I mean if it was if it could if that if those process processes could be manipulated you could you know you get into Inception territory where a very clever AI could plant ideas in people's minds or make people think that
they thought of the idea themselves and so on and so forth but in the near term and less nefariously I'm really personally very excited about the prospect of using increasingly sophisticated chat Bots just to bounce ideas off of practice articulating and re-articulating them because I personally find that in that in the process of doing that new ideas tend to emerge so um it's quite exciting to think of the prospect of doing that with simulacra and with sort of these independent more independent uh uh uh artificially intelligent entities as opposed to what's just intended to be a semilacker of your own thinking so I can I can imagine both of those things being very useful for catalyzing new novel ideas novel thinking yeah I would agree with that maybe I'm introducing a bit of a conceptual error into the discussion by emphasizing shadows and having doctor here who looks the same as Adam's Avatar and kind of sometimes forgets he's not Adam uh maybe it's better to think of these things as more similar to the kind of artifacts we produce in art like a book kind of has a life independent of its author the author moves on acquires new experiences as after some time not the same person as the person who wrote that book but the book continues and people often think about you know it's possible to really like a book and dislike its author for instance uh we we do have a kind of mental model of these artifacts the products of art as being separate from the artists maybe maybe Shadows is just a dangerous idea that's too close to replacing people or has invites misunderstandings and misrepresentation and would be better to just view some of these Bots as being like okay I'm going to make this packaged thing that contains some of my thought right now and some of the books I'm reading right now and is a reflection of some aspect of me right now like a book might be and then it goes off and does its own thing and maybe even learns and develops its ideas or not depending on our wishes but we we explicitly kind of separated from ourselves and throw it over the fence and in that way maybe we can avoid some of these risks I thought it was pretty interesting how doctor reacted to the reference that I posted did anyone else see that yeah that was cool is that an accurate representation of the idea it's not the point that I was going to emphasize but it was a accurate very high level summary I guess um but I yeah I don't know I was I was expecting to post the reference in chat and then talk about it when I got a turn
and I posted it in German because it's a German title in the original essay and the doctor was like I speak German Adam do you speak German just wondering I do not okay that's interesting well apart from one one year of one year of not very highly motivated uh German lessons in high school and a German girlfriend in high school German German german-speaking girlfriend but no I don't speak German that was a long time ago yeah no offense to Adam whose monologue was taking place while we were chatting with doctor but I find this multi-channel interaction actually pretty engaging I feel like uh I can both listen to Adam and digest some uh reference like what the doctor was saying I feel like this works pretty well I don't know if anybody feels similarly but this is kind of the the sort of thing I want these Bots to do right and while we're listening and it's maybe throwing up some reference that we can then incorporate into the discussion yeah learn a language I do like the the idea the potential of these Bots surfing I believe the word you used Dan was surfacing the idea that they might surface related ideas related content related stories anecdotes it we're joking I think in a previous conversation about it being a little bit like someone at a cocktail party um and you know you're you're sort of your sort of proverbial small talk where oh that reminds me of this anecdote oh you know what comes to mind when you now that you mentioned that and so on and so forth and um I I'm seeing bits and pieces of that in the behavior of the Bots doctors done it a couple of times so far and I find that very useful I don't know if there is a way to tune the behavior that uh where there's maybe more of a proactive effort to surface uh things and and and I don't know if there's a way to even in principle or practice at all um I'm trying to think of a way to express this if it's there I suppose you might think of it as a you might think of their being some distance you might measure in terms of how related directly related or distantly related an anecdote is and I don't know if you could tune the bot to to uh be searching for anecdotes at a given distance you know things that are things that are similar idea but from Material that's either that's closely closely related or perhaps um recent events uh things that are proximal in time so for example something that might be a relevant anecdote in the news or recent news versus something from that's that's connected but it's from a far-flung very distant
field and we might say oh that's quite quite remotely Associated as opposed to very closely or nearly associated with um the whatever the discussion is I don't I hope I'm making my meaning clear there and it would be interesting to see I mean I'm kind of I'm kind of imagining or not maybe even imagining I'm kind of recalling Small Talk conversations I've had over the years and different events encountering different personalities that are like that I've certainly met people who kind of had a personality trait or a shtick where it was part of just their thing to bring in you know the maximally distant yet still tenuously related anecdotes and you just knew that this was part of what this person did it was just kind of their thing yeah and um the the there was a a U.S ambassador uh that I knew in when I was younger an ambassador to the United States of the United States and that was part of his it was just part of his whole uh you know theatrical presentation of himself and his personality because he would he would just invoke these these very strange distantly related uh anecdotes and and it had its charm it certainly did and it got and it kept conversation going it was uh it was precisely for what small talk is for uh but I don't know if that is that's if that is something that that even in principle could you too could could you tune oh for sure technology for that oh I think that would be I mean to some degree that is what the difference between chat GPT and gpt3 is so they produce demonstrations of what it looks like to answer questions and then through various mechanisms fine-tune the model to be better that it's outside of our cost envelope to do that with the Bots here currently but if we just gave it a few examples you know maybe on the order of thousands of examples of doing exactly what you describe maybe we can just take transcripts of chats with this guy that you know or similar things it will do that pretty well I would think so I think that's I mean I don't know that you just want to be Computing dot products and measuring the distance necessarily it's probably a more complicated process than than that but that combined with some kind of fine tuning yeah to some degree I think that's certainly possible one very quick thought I don't mean to derail things too far on an unrelated tangent but I do Wonder how if this sort of thing is relevant to the Cur the construction of artificial Personalities in other words giving chat Bots and increasingly capable
increasingly intelligent agents um personalities and I can see there being a variety of applications markets basically for Bots that have different personalities right now chat tpg doesn't really have a personality but I can certainly imagine the appeal of giving AI even if it's not fully general or sapiens giving AI personality and of course this is a common Trope throughout science fiction uh with with many many different you know recent science fiction old science fiction this is a very very common a thing that that AI different Bots robots artificial intelligences have personalities of various kinds that are tunable that are adjustable um specifically and that you could you know you can dial down the humor you can dial up the sarcasm or the SAS or the you know whatever and uh sorry that's that in itself isn't interesting what I think this might be an interesting observation is that we will have to learn what really does determine personality in order to make that work so for example right now in human psychology it it we have various models and measures of Personality we have there are different Theory theoretical Frameworks and schema for sort of taking personality tests evaluating what a person's you know makeup is psychologically you know are they introverted are they extroverted are they I don't know what the different terms are um uh openness um conscientiousness what are some of the other ones that are part of like the big five uh personality traits and so on but I I suppose these are fairly well validated with with you know survey methodology and that sort of thing but it seems to me that we will learn definitively definitively what what creates a certain type of personality when we are creating them in artificial intelligence it will simply be a fact that this is what a person with in or this is what an agent with you know a sarcastic personality this is what this is how their mind has to work or be modified or tweaked or or you know what how the weights have to be set on the various dials the various parameters to get them acting and thinking a certain way and uh it seems to me that like many things we will move out of the realm of sort of conjecture and theorizing and into the realm of experimental Comfort confirmation very quickly here in with that sort of stuff I don't know about that Adam I feel like um there's two main reasons why I think it might not go that way but what I understand you saying is something like once we have kind of these AIS
being effective model systems of human psychology then that will mean that we can make more progress in Psychology because it'll be easier to you know run experiments or um you know look into the internals of the systems and see how things are working um would you say that's a fair characterization of your point oh that's a little bit more recursive than I was thinking but I suppose it's fair I would I recursive I mean I wasn't thinking of necessarily using this as a way to shed Insight on human psychology per se okay but rather psychology just in general like like if but I can see what you mean I mean if say I'm making up an example here but say we were to conjecture about human psychology that people um who use sarcasm are doing it as a coping mechanism for some other insecurity or something like I'm just making some some rational nonsense up um right now that is a very difficult thing to to test a difficult claim to validate experimentally but you could certainly do that with an AI in a much more straightforward way given the amount of control you have over its mind um so I see I mean I would I would say that yeah that's a reasonable a reasonable conjecture but it wasn't what it wasn't where I was going at first but now that you mentioned it yeah it seems like that but you don't think that but you disagree and think that's not likely to be the case right yeah it's an important clarification um if you for example meant um psychology rather than human psychology so I take that point but yeah my point was going to be um number one um that um we won't necessarily have that much more introspective ability into these systems than we do with humans part of the reason why psychology is hard is because we're trying to kind of connect between levels or okay I guess you know part of psychology is looking at one level but you know we might think that one of the goals of psychology is to sort of determine how do these behaviors how can we understand these behaviors as emerging from some lower level um you know where we have some kind of access to um the low-level chemical processes going on in the brain but there are maybe many levels of complex interactions going on between that and um what emerges at the end is behavior and that's part of what makes psychology hard and I think this situation with larger language models or future AI systems there's a chance that it would be similarly uh so complex that you know um we can't necessarily make much more Pro like it'll be just as hard as
psychology I think um the that's reason number one the second reason was more about the human versus machine psychology um if you could call it machine psychology um and okay so maybe I I take it that you want um proposing this but even if you say you know we can learn something general I think that does rely on there being something in common between machine psychology and human psychology and I think that probably there are going to be lots of things in common um but there will also be like weird things that are quite different and um I sometimes when I'm talking with a chatbot like even chat GPT I find myself thinking well a good way to describe um the way that I that I think this bot is behaving right now is using some kind of concept from Human psychology um but um the failure modes tend to be also well maybe some failure meds are like that but some failure modes tend to be quite alien and weird like you know more like adversarial examples from Vision systems where you can sort of just tweak a little bit of um the image in terms of like imperceptible to human differences in the pixels and you end up completely changing the classification um of the image because these humans aren't sorry these machines aren't reasoning about image in the same way they aren't processing it in the same way as human does so it's kind of uh different kinds of changes in the inputs will lead to different kinds of changes in the outputs I think it's probably going to be the case for um larger these language models and future models that are kind of have a personality or a psychology um there might be yeah they might be different in important ways um because of differences in architecture or the way that we're training them maybe some of these differences will eventually go away but maybe they'll stay um and it could also be quite could also end up quite different like a book we've been talking about is kind of having a intellectual um being like an intellectual unit in the same way that a an intellectual person is an intellectual unit but books and people obviously radically different um and so yeah maybe these future AIS will will also be less sort of a different category yeah it it I wonder if it makes me think of the difference between the sort of behavior in a person that is authentic and both they and others would report is asked that that that you know given Behavior were given uh choice or or well yeah really any action Choice action feed or whatever an individual a human being an
individual person or the people around them might be able to look at a given action and say yes that was authentic that wasn't an affectation it wasn't a show it wasn't done for it wasn't done in a self-conscious way for some other effect it was authentic I can I can easily imagine in the case of human psychology they're sort of we have a sense of a real self an authentic self at least to my mind maybe this doesn't apply to everyone um oh actually it doesn't I'll come back to that but uh but at the same time we are able to layer on top of our Behavior you know modifiers for AF you know they're sort of in my mind I think of them as affectations and you can put on an act you can you can uh step into a different role you can pretend to be someone else uh into a greater lesser degree you can turn up the volume on certain behavioral traits if you choose and under different social circumstances and uh those are choices and we can make those choices consciously but I think many of us but not all if I'm recalling correctly many of us have a sort of a a a a perception of an authentic self that resides under that layer that that more superficial layer that we have control over however now that I think about it and this is what I wanted to come back to I believe there are examples in in psychology of people who are unmoored they're unanchored in that way and they're they uh uh Bounce from one personality type or set of affectations to another and they're very fluid and it can it can create problems in their lives because of they they uh seem disingenuous or lack in authenticity or a consistency of Personality or behavior over time and I don't I can't recall what that uh particular I don't know if it's categorized as a mental illness or if it's categorized as a personality property but I seem to recall reading about you know in distant years past instances of people with with um psychological makeup like that and I wonder what AI will be like it's an interesting question or will there be you know this fluidity of affect on top of a a core um authentic self or will it be very alien and very different to what we you know experience in human psychology uh that's an interesting question how General is is you know uh what general properties of Minds exist such that we would identify them with a universal psychology as opposed to a psychology uh particular or specific to each different kind of system that's capable of producing an intelligent mindfulness yeah I think the back Channel with
doctors uh a bit informative here so one of the reasons we try and understand the psychology of other people is partly for our own safety right it puts guard rails around our interactions we can sense that they're upset about something before they punch Us in the face it's it's part of our maintaining boundaries um and well obviously part of the reason we're interested in the potential for psychology in these systems it's not too unlike some of the discussion around AI Safety Red we want to understand how these things are thinking at some high level in order to try and make them safe and forestall bad outcomes and potentially to guide their evolution in ways that are beneficial for us and and so on so yeah maybe it's a calling it psychology maybe makes it seem even more difficult given how bad we are at Psychology for people but it's probably not unrealistic but it's related yeah perhaps to return to the topic of privacy a little bit um because I think psychology is very interesting when it comes to privacy um we were talking before I think it was Adam who raised the question if you create an AI construct who is a mirror or a scan of a person's beliefs or something about them um would a would a person would it be a a normal thing for a person to be able to interact with that without that without the permission of the cons the person who the construct has been copied from um and I think this is kind of similar to maybe when you publish a book versus when you write a diary and you you don't want someone to have access to your diary but you obviously make sure to include only the things in your book that you're happy being public um so maybe there will be some constructs that are sort of to be published and there'll be some constructs that are for personal use and it would be a violation of someone's privacy to um interact with them without their permission but um anyway this is sort of the more General uh topic of the reason why or the reason why you you might not want um someone to be able to access your construct is because it gives them a kind of power over you um if you have a model of Someone You can predict them and perhaps you can be a more effective adversary against them and so um that is sometimes that is one at least one motivation for privacy as a principal or as a value that people have psychology is interesting because if you have uh a very psychology is something about modeling people and uh than being able to predict behavior and stuff like that and we use psychology uh for the
for the most part for the benefit of people I guess or at least that's the motivation that's usually used um but if you have a powerful model of someone um that becomes a tool that you can use against them and I guess this is a conversation about the future of AI and privacy a question is might we get very effective models of people and might that be then um I guess that would potentially be uh sort of failure fail mode where we we end up without um well we end up quite vulnerable to to anyone who has access to these models um I'm not exactly sure where I where I want to take this it's not necessarily a conversation about all the bad stuff that might happen but you know how the world might change in response to this new possibility so yeah I wonder if anyone has any thoughts on psychology very very accurate psychological or behavioral predictive models of people yeah I mean it's it's even relevant to the discussion around the AI infrastructure here right so uh an evil uh person in charge of meta unique could take everything you say in a discussion and build a pretty good model of you and then sell access to that to an Advertiser certainly somebody building Virtual Worlds is going to do that right so uh part of the privacy policy that will be written sometime soon forbids the usage of the data in that way I mean I Can't Stop open AI from doing it for example I mean they have a huge stream of data they're probably not paying attention to it closely enough to be able to do that but uh so yeah for what purpose uh I mean I think it should be certainly it's I think unethical and maybe potentially should be illegal to create uh Shadows of people without their permission and probably it should be a construct I think is a better word here so a construct that is like okay should it should you be allowed to make a construct that reflects a book that somebody wrote I think probably yes right and if they put out 20 books should you be allowed to integrate all 20 books probably yes uh what about all their tweets at some point it starts getting pretty close to being something that might actually conceivably be like interacting with them at least with a public Persona where is that boundary um yeah I mean I think here I feel much more comfortable I don't want to be I mean this this thing with Adam maybe is like it's fun but it's also a little bit misleading in the sense that this is not the kind of thing I want to be doing here necessarily right it's not like every speaker should have a construct
like this or something that's uh sort of a special case uh what I think is more reasonable and avoid some of these dangerous applications is more the sense in which you contribute to culture already right I mean if you're a public intellectual or just talking with your friends and spread sharing your ideas you're contributing in in some way large or small to culture broadly by putting your ideas out there helping other people to shape their own ideas and right now that culture lives in our writing and there are tweets and our books and our conversations and our minds and going forward it will also live in the weights of some neural network or in embeddings that it's querying I think that's okay when it becomes kind of it's like a diffuse representation of many people's ideas kind of mixed together and that feels to me less vulnerable to the downsides um at least this very narrow thing we take a single individual and try and like siphon out their Essence and then represent it in a construct seems uh on the other hand I do have to say that the really dangerous part of Facebook or NSA surveillance for example is not to do with representing individuals Faithfully it's more to do with extracting correlations from many many people right and the applications of that power at scale is is a genuinely new thing and privacy is actually kind of a bad word for it and a bit misleading as to its real effects which is why after the Snowden leaks and Cambridge analytica and so on I think the public discussion around these downsides really missed the point a little bit because it's not so much about privacy as usually construed in my view I would go as far as to give a hot take which is that people who use Facebook Traders for the human race because they're revealing to matter the the data that could be used to create a super persuasive weapon um that would kind of hack people um yeah that's a hot take I don't know but I fully stand behind it well in a way that's what it all it already is and it depends on how how offensive you you feel advertising is um and I'm I'm of two minds of this depending on my mood and on the one hand advertising can serve some useful functions and it's part of a system maybe not the most Pleasant part but part of a system that that on the whole does a whole lot of good and it's you know it's signaling Supply and and so on and so forth when I'm in a different mood I see advertising as fundamentally malignant and malevolent hard to see it as not being evil because
it's it's it's basically trying to manipulate you you're trying to manipulate people into purchasing things and consuming things that they didn't want before didn't know they wanted didn't think they needed and you're trying to convince them otherwise uh for your own personal gain that strikes me as again if I'm in a different mood that's a profoundly evil thing to do especially when it's intrusive in American culture American law is pretty lenient and allowing allowing that intrusion it's very difficult to insulate yourself from advertising in the United States for example just driving down the street walking in public it's almost impossible to not have advertising imposed upon you so uh I I and I don't I mean and to the extent that Facebook is entirely motivated organized around and driven to you know maximize revenue from ads it's pretty nasty yeah um in particular my point is that by uh I don't use Facebook but um but I think my psychology is similar enough to people who do because we're from the same species um that um Facebook is able to learn how to better manipulate me even though I'm not on the platform um and I'm not talking about the stuff where they track uh they specifically track users who aren't on the platform and have these like Shadow profiles where they collect information from people's contacts lists or other things um about people who don't use the platform um even if it wasn't for that even if I had no connection socially with anyone who was on the platform I think I would still be newly vulnerable because of the data set that has been created by my um by my fellow humans um that is like a a recipe book for um for attacking or manipulating me um to the extent that you can do so with similar methods as you can do with the people who use the platform yeah I think not only meta I mean the the discussions around Mass surveillance uh I guess Orion partly around big Tech and partly around the government and in the government's case you can make a stronger argument I would still say it's not sufficient but it does seem stronger for Mass surveillance and keeping profiles on people and so on um the argument that was made around the time of the Snowden leaks was the kind of needle in a haystack argument that there are low probability but very high damage events like nuclear terrorism and it's therefore reasonable to be searching for needles in Haystacks in order to catch the people planning that and you can only do that by collecting the data and then searching it it's too
slow at scale to have to go out and collect it when you have some reasonable cause uh that seems unconstitutional clearly uh but is nonetheless how things have proceeded in the US and and broadly everywhere else uh now putting aside the arguments for and against that um that's an instance of in practice boundaries being redrawn as technology moves forward and that's one of the things Adam was using to characterize this discussion before we started so I'm interested in if you have other examples of that atom where the kind of boundaries between not just privacy but the boundaries between individuals and the collective are subject to renegotiation uh continually and that renegotiation becomes particularly heated uh when there are big shifts in technology the internet is a very present one but maybe there are others you can share I can certainly share one prominent one and that was the the introduction of the automobile into Society in in in its integration in various ways various extents into different human culture different cultures across you know across humanity and the automobile really did change how we think of personal space and public space it changed I mean if you just just think of your mind reflect on your own mind when you use an automobile when you're inside a car that most of us just automatically presume it as a personal space and and the car is an extension of self and if somebody else touches the car it's like you know a violation um as if you know someone was touching some you know your body or clothing you were wearing and uh that so that's that's on the on the personal side and and um speaking of clothing and adornment uh cars became status symbols and manifestations of the very ancient Instinct uh I use the word Instinct advisedly I think it is something that's innate in human beings uh this this drive toward adornment um cars obviously became a form of adornment and uh statement part of how we State and Signal status and personality and preferences and so forth and so they that was very much a redrawing of the boundaries around ourselves and then in terms of public space automobiles completely changed how public space was defined and governed and in many ways for practical reasons because they're so dangerous and so we had to carve out massive portions of public space in many areas and across the landscape Urban and otherwise to allow automobiles to operate without being just catastrophically dangerous for everyone involved bystanders you
know passengers drivers and so forth and we made massive changes to how we draw the boundaries of what space you can ask ask and what you can um what even if you can't access space what you're allowed to do in that space or adjacent to it so there are things you can do in public thoroughfares and on the streets some things that you can't there are places where you can walk or be present uh with cars somewhere you're somewhere you're excluded there are all sorts of things you can't do in proximity to uh thoroughfares to you know streets and motorways and so forth um and then we we also gave up our own rights to things like privacy and silence you know uh because automobiles are polluting and noisy and uh uh so they are an example of a technology where where we redrew a lot of boundaries personal private and public uh to accommodate a technology mainly mainly because we didn't have an alternative to doing so the technology is imperfect it's not the Star Trek teleportation you know it's not the transporter it's an imperfect transportation technology but it is so useful there's so much utility in automobiles such a fantastically massive step change Improvement above the prior modes of transportation that existed that we accommodated all of the necessary redrawing of boundaries with very little social resistance especially in the early days people were giddy with excitement to welcome automobiles into their lives it's only now a century later that uh you know they're quite quite strong anti-automobile planning and policy making movements and social movements but in the early days it was that was definitely not the case because the the benefits were so overwhelmingly obvious relative to the prior status quo uh so anyway that that is a historical example of the disruptive technology where we massively redrew boundaries uh and and um uh changed constraints and laws to enforce them all over the place interesting thanks oh and I should say it's directly related to privacy at a number of levels you have certain rights when you're inside your vehicle in the United States I know those rights perhaps better than I'm not familiar with the rights elsewhere but you have the right for that specifically or automobile as part of your person do not be violated so the police can't search you or your car without a proper cause without without some justifiable reason and there are certain things you can do in a vehicle uh you know there are things that are considered impropriety that you can't do
elsewhere but the vehicles are in some way a sense of an extension of your own home and it depends on the vehicle but um you know there are things that you would do in private in your own home that you might do in a vehicle you might not uh but you wouldn't do on Horseback for example in public so yeah yeah that's interesting I hadn't thought about that but yeah you're right that especially in America I think the the culture around cars is very much like this little mini Kingdom that you carry around with you um I think it's not the same uh everywhere but yeah yeah that's interesting okay uh we'll probably wrap up in a few minutes but uh does anybody want to reflect on Adam's comments there they did have one thought as uh reaching back a little further in the conversation around um Dan you mentioned the idea of licenses to create constructs uh not licenses but but but but um or I think license I I thought I saw that in the chat somewhere but at any rate this idea that uh we ought to because of the risks involved require permission be given before we create simulacra our constructs of others because it of the risk that it would be some sort of violation in the ways that that we've discussed they would you know you could take advantage of that information and that knowledge and that capacity to uh gain an adversarial Edge over somebody else in in whatever capacity that might be to harm them or to sell crap to them or whatever it might be manipulate them and so forth so what I wanted to push back a little bit on there with is I can I can certainly see where that where that thinking is coming from especially in this in the near term however in the longer term I suspect we are going to see a breakdown in the boundaries between our own thinking with our meat computers and our heads and the enhanced thinking that we do with our tools and so uh if you were to for example integrate very extensively and fully uh some of this artificial intelligence capability into your own mind so for example I'm thinking you can you can run various cybernetic enhancement scenarios to whatever extent of integration you care to imagine but imagine if you could seamlessly use these Technologies to augment your own memory for example and now imagine that we're implanted in your body so this is part of your physical person that new capability now imagine you could you could create one of these constructs with a great High degree of fidelity just by thinking about it internally just by thinking about it it would be as
if you were smart enough if you were bright enough and you had a good enough memory to imagine one of these in your own mind with the aid of of this technology um what is would that be something that could be bad could that could permission to do that be uh uh something that that others might have control over the reason why I raised this I know I'm leaping forward in the you know the Arc of technological Evolution here but uh it it I wonder I wonder what the the what the regulatory demands will be as these Technologies become more and more accessible cheaper and cheaper and more and more fully integrated into our own personal lives our own personal thinking and behaviors um I mean GPT will not be something that can only run on a supercomputer uh for very long and um you know it's relatively easy to imagine that access to these Technologies could become so fluid so immediate so automatic as to become essentially a you know a very close simple extension of our own minds in which case uh we we're you're we are then treading very carefully or sorry we are treading very uh precariously close to right to saying that we need to regulate what people think that's where I'm going with this this line of reasoning is uh interesting if you if you're saying you cannot make a construct of another person and uh okay fair enough um that construct is a product of thought then you're telling people what they can and can't think about or agents anyway um and so that that is obviously you know we we then do I guess fairly quickly move into the realm of blood control and thought regulation and um bug policing in quite a literal sense as opposed to the figurative sense and uh yeah that that is an entirely separate um concern around privacy right it's a fun place to end the conversation Matty I think you had a comment on what Adam said earlier perhaps yeah um I wanted to add um first of all Adam that last comment was interesting and I think um it's similar to the car thing about another way this could go rather than the more dystopian is about people redrawing the boundaries around people to include those parts of them that are cybernetically enhanced and that's an interesting possibility to consider but then I think an alternative possibility that kind of was uh suggested to me by some of the examples we were discussing earlier about constructs and maybe this is also a discussion for a future seminar at the moment when someone says um the the sort of unit of the intellectual Workforce is the person like an