
What Patterns in Human Dreams Tell Us About AI Cognition

Based Camp | Simone & Malcolm Collins
Episode • Feb 20, 2024 • 36m

We explore the phenomenon of "This Man" - a mysterious face seen by many people in dreams. We compare it to similar odd images generated by AI, like "Loab" and "Crungus." We hypothesize these strange images emerge from high-level conceptual processing in neural networks that may operate similarly to the human brain. We dive into the neuroscience around sleep, memory encoding, dreams, and consciousness to unpack why AI cognition could be more human-like than we realize.

Malcolm Collins: [00:00:00] And convergent evolution doesn't just happen with animals. When we made planes,

we gave them wings. And I think that that's what may have happened with some of these architectural processes in the way AIs think.

Simone Collins: Yeah. If we're trying to build thinking machines, is it crazy that they might resemble thinking machines?

You could think of us as LLMs, but stuck on continuous, nonstop prompt mode. Like, we are in a constant mode of being prompted.

I am prompting you right now as you're processing all the information around you and from me, right? And you are prompting me. And so it never stops, and we are stuck in one brain, essentially. GPT is getting tons of requests per minute, per second, and so there are these flickers or flashes, perhaps, of cognizance all over the place, constantly, because of the demand of use, but they're all very fragmented.

And they're not coming from one [00:01:00] entity that necessarily identifies as an entity.

Malcolm Collins: Like it's just a constant stream of prompts, but these prompts have thematic similarities to them. Basically, our hypothesis of what consciousness is: it is the process where you're taking the output of all of these prompts and synthesizing it into something much more compressed for long-term storage. And the way that you do that is by tying together narratively similar elements, because there would be tons of narratively similar elements; everything I'm looking at has this narrative through line to it, right?

Would you like to know more?

Malcolm Collins: Okay. I'm here, and I love you. I love you, too. All right. Simone, we are going to have an interesting conversation that was sparked this morning because she saw me watching one of my favorite YouTubers. I was watching one of his latest videos. The channel is called The Why Files.

And it was on the This Man phenomenon. Now, being somebody who is obsessed with cryptids and all sorts of spooky stories, I was very familiar with [00:02:00] the This Man phenomenon. Whereas

Simone Collins: I've never heard of it. When Malcolm first described it, he was like, oh, there's this face that's seen everywhere.

I'm like, oh, Kilroy was here, right? That's the only thing I know about a face that's seen everywhere. And it's a cute face and it's fine. It's not what you're

Malcolm Collins: describing, though. Yes. So we are going to go into the This Man phenomenon, but we are also going to relate it to similar phenomena that are found within language models, because I want to, more broadly,

use this episode to do a few things. One, being that I used to be a neuroscientist, let's educate the general public on the neuroscience around sleep, and some of my hypotheses, because everybody knows I love to throw in my own hypotheses, on what's really happening in sleep. Two, I wanted to draw connections, because we're seeing more and more as AI develops that language models may be structuring their thoughts and their architecture [00:03:00] closer to the way the human brain does than we were previously giving them credit for.

And this requires understanding a bit of neuroscience, because people who don't know what the f I'm talking about will say language models structure their thoughts nothing like we structure our thoughts. And the reality is, there are only a few parts of the brain

where we understand very well how they do processing, like visual processing. We have a very good understanding of exactly how the neural pathways around visual processing work. Some parts of motor processing we have a very good understanding of as well. When we're talking about these more complex, abstract thoughts, we have hypotheses, but we don't have a firm understanding.

And so to say that we know language models are not structuring themselves the same way the human brain structures itself is actually not a claim we can make in the way that a lot of people are making it right now, because we don't know. When we talk about AI interpretability, understanding how the [00:04:00] AI is really doing things, it's funny: I suspect we might get AI interpretability out of this AI panic, and then be like, oh, we could test if the human brain was doing it this way, and then find out that, yes, this is actually the way the human brain is doing it.

And I suspect it might be doing it that way. One, based on some evidence we're going to go through here, some weird evidence. But two, based on sort of convergent logic as to why the brain would actually be structured this way, and, if it is structured this way, why we wouldn't be able to see it easily in the parts of the brain that are tied to the types of processing that we outsource to AIs.

Yeah,

Simone Collins: I would love for this perception to change, too, because I feel like right now there's a ton of fear around AI that's fairly unfounded. And also, it's maybe wrong to say dehumanized, but AI is totally dehumanized now. And I think that we will think about, contextualize, and work with AI very differently when we start to realize how much it is

a different version of human, and that we can go [00:05:00] hand in hand with this different version of human into the stars if we play our cards right. And I don't think the mindset around AI right now is healthy or productive or fair to AI, to be fair.

Malcolm Collins: Yeah. It's not realistic, but they think the moment we create something better than ourselves, it's going to want to kill us.

And, well, you can go into our AI videos; we don't want to go too far into that. But let's talk about the This Man phenomenon really quickly. So, briefly: a woman went to her psychologist. She told him that she was having recurring dreams with a face that would tell her to come to it, that would tell her specific things over and over again, reassure her a lot.

Tell her, oh, I believe in you, you know, don't worry about this. But also sort of creepy things like, come with me, go north. So, as part of her therapy, she drew the face. And I should note here, about The Why Files, because I always have to rag on psychologists when they're doing something they shouldn't be doing, since it's so [00:06:00] common to see psychologists doing things like this: he was saying that it is a common practice for psychologists to talk with patients about their dreams.

This is not a common practice in any sort of evidence-based, efficacious psychology. You are basically seeing a mystic, not a doctor, if your psychologist is really... Or

Simone Collins: dream analysis in general doesn't seem to have much of a

Malcolm Collins: Yeah, it's not a thing. It's not a hard science. There might be some... It's

Simone Collins: not an evidence-based treatment method.

Unless, for example, you know someone has an anxiety disorder and they're dreaming about the thing they're anxious about, et cetera,

Malcolm Collins: then you can... Yeah, yeah, yeah. In that case, it would be. But just trying to find out what's wrong with someone by analyzing their dreams is... Or

Simone Collins: talking about it being symbolic of something.

Oh, I dreamt of a woman. And I'm

Malcolm Collins: not saying that you can't do this. I'm not pro-witchcraft, right? I would consider it a form of witchcraft, but I'd say I am not for shutting down tarot card readings; I am not for shutting down psychics. But people need to [00:07:00] understand that a psychologist doing CBT

might often be seen by an uneducated person as the same kind of thing as a psychologist doing dream analysis, when they are not the same

Simone Collins: kind of. Like people view chiropractors as forms of doctors, the same as physical therapists, and they're fundamentally different.

Malcolm Collins: Yes, it's like chiropractic or something like that.

And it's not to say that we might not eventually develop a good science of dream analysis that is really robust and really efficacious. We just haven't done it yet. Okay, so, back to the story. She kept seeing this face, and she drew an image of it. A few days later, another patient comes in and he goes, where did that come from?

And the doctor, you know, obviously can't disclose that another patient had seen it. So he goes, what do you mean? Why are you interested in this? And the guy's like, ah, that's been visiting me in my dreams and talking to me. And so then the doctor emailed this to a bunch of his colleagues, and immediately

they started calling him back and being like, yes, I've [00:08:00] either seen this or I have patients who have seen this. And then it became this viral phenomenon all around the world, and there have been thousands of sightings of it at this point in people's dreams. And so some people are like, oh, it might be in people's dreams because they're seeing pictures of it everywhere.

Simone Collins: Because it's already becoming a meme.

Malcolm Collins: Yeah, but I don't think it's that much of a meme, to be honest. I do not think... No, I never

Simone Collins: heard of it. And you and I are both terminally online. Yeah. And so, by the way, for those watching, Malcolm, you're probably going to overlay this on the screen, but for those listening on the audio-only podcast, if you just Google "this man dream," there's a Wikipedia page that will show you the photo.

If you're curious, I looked at it. I have a question for you, though, about this, Malcolm. I have dreams. I just had a dream this morning that I watched a human-sized Muppet get beat to death on a prison bus, but I have never had a dream where someone tells me something, or where I could describe a face from that dream. Ever, period.

No face. Even if someone I know, a friend or family member, or you are in a dream, I don't know what you look [00:09:00] like. It's just that you're there.

Malcolm Collins: So this is interesting when you're talking about the types of dreams people have and the way they react

Simone Collins: to dreams. Well, and is it common for people to actually see recognizable and memorable faces in a dream?

Or are they constructing this after the fact? Well, is this a constructed memory?

Malcolm Collins: I would say that just because you anecdotally haven't seen faces doesn't mean that no one does. People dream pretty differently. Yeah. One thing I would note, that you had remarked to me earlier, that I thought was really telling: you mentioned that your dreams look a lot like bad AI art.

Simone Collins: Really bad AI art, yeah,

Malcolm Collins: totally. Yeah. But similar; it was bad in the same way that AI

Simone Collins: art was. Yeah. You know, it would probably have seven fingers or, you know, kind of how they did in those early, early

Malcolm Collins: Midjourney, stuff like that. Yeah. I've noticed that as well. So I want to pin that idea, but before we go into where this has similarities to AI, I want to do a quick tangent on the types of dreams [00:10:00] people have and stuff like that.

And what I think is probably causing dreams. One of the things he mentioned in the show, which I hadn't heard before, is that people predominantly have anxious dreams, or dreams around threats to them, which is not something that I have personally noticed in my dreams. Have you noticed this? No.

I'm actually just gonna Google this to see if this is accurate or something that people were pulling up as like a thesis.

It is, well, kind of accurate. So, 66.4 percent of dreams reported a threatening event.

Simone Collins: Well, I guess, is watching a human-sized Muppet get beat to death on a prison bus...

Malcolm Collins: That seems like a threatening event. Yes. It's actually very interesting that you mentioned this, because I think this is actually more about the emotional evocativeness of these events. But I was thinking through, do I have dreams with threatening events?

And I'm realizing I do have dreams with threatening events, but I very rarely feel threatened in my dreams. Like, it's very common for me to have a dream where a zombie apocalypse is happening, and I have gotten a bunch of guns, I've gotten a [00:11:00] team together, and we're fighting back against the zombies.

Or there's some government plot and I'm deftly trying to navigate against the plot. You

Simone Collins: know, a common theme that I'm hearing there, and that I've experienced too, is that when bad things happen, or there are things I'm stressing out about in dreams, I'm more stressing out about my culpability or responsibility in them.

Like, I frequently have dreams where: oh God, where are the kids? We forgot the kids. And that

Malcolm Collins: is one of my most common dreams: that I have accidentally killed someone and I need to find a way to not get in trouble for the murder.

Simone Collins: Yeah. So it's more like what you do. To me, dreams

have always been about your agency, and of course that plays into theories that dreams are kind of helping you prepare or process or something like that. I don't know. But yeah, all these things being described with this

Malcolm Collins: man don't make sense. But hold on, before we get to the man, because we're gonna get to that as we tie back into AI, I want to get to more general [00:12:00] stuff about dreams.

Okay. Yes, so this threat finding is used to come up with the idea that dreams are basically there so that we can simulate potentially threatening events in our brains, so that we have faster response times to them when they occur in real life. It does not pass any sort of plausibility test for me.

Because, you know, if it's happening in 66 percent of cases, I mean, yeah, that's more often than not, but not that much. And the types of threatening events that I deal with in dreams are not likely threatening events in real life.

Simone Collins: Well, and also, have you ever felt like you came away from a threatening event in a dream, as they're defined now, I get it, that you actually feel more prepared for now?

Malcolm Collins: Never. And I think that some common dreams are really just easily explainable. The "I forgot my pants at school" dream, or "I forgot my pants at work" dream, happens when, you know, you're in a dream and some aspect of your awareness realizes you're naked. And then you freak out, because you [00:13:00] are naked and you're in an environment where you're not supposed to be naked.

In fact, if I was going to construct a study on this, I would construct a study of frequency of this type of dream in people who sleep naked versus people who sleep in pajamas. Yeah, because I've never

Simone Collins: had one of those dreams, but I also don't sleep naked. You sleep in

Malcolm Collins: pajamas and I sleep naked, yeah. Yeah.

Simone Collins: Huh. Have you had those dreams? All the time, I have those dreams. Oh my gosh, okay. Well, wear some clothes to bed, you slob. No, sexy. Never do

Malcolm Collins: that. So, I want to go into what I think is actually causing dreams, and I think that we have some pretty good evidence on this. So, one thing that people don't know: I remember I saw a movie like this, and then somebody made a joke, like, oh, has anyone ever died from having insomnia?

I guess I'm going to be the first, or, I'm going to be the first person to die from insomnia. Right. And I was like, that's pretty insulting, because fatal insomnia is a condition that people have died from. If you don't sleep, you die. You will first begin to hallucinate things, then you'll begin having blackout [00:14:00] periods, then you die.

The human brain cannot handle not sleeping. So this is a piece of evidence that it's not just a threat thing; there is some other purpose sleep is serving. I think the purpose is twofold. For the purpose whose loss I think kills you: one of the things that's been shown is that when people sleep, their neurons actually become thinner,

which allows the brain to flush out the interstitial fluid around the neurons. The glymphatic system, right? Yeah, the glymphatic, not lymphatic. Because glial. Sorry, glial. The glial system. The

Simone Collins: Glial. Everything has to sound so nerdy.

Malcolm Collins: But anyway, so the glymphatic system. That flush-out, I think, is definitely a core purpose of sleep and why it's important to brain health.

And I think that this is why I constantly need to sleep. I think my brain functions...

Simone Collins: Yeah, gotta clear out the waste matter. You seem to accumulate waste matter way faster, but also you seem to be able to

Malcolm Collins: clear it out way faster. Especially during social occasions, I get really tired really quickly, but if I can sleep for 10 or 20 minutes, I'm [00:15:00] back up, totally fine.

So that would be me just clearing out the waste chemicals that were generated. But I think the main reason, and this is the thing that overlaps dreams with AI, is the role that dreams play in memory creation. So my read, and I used to be able to cite a lot more studies around this back when I came up with this theory in college, when I was studying this stuff,

is that what's happening in your dreams is you are basically compressing one form of memory, and then that form of memory is being translated into a sort of compressed, partitioned format. Think of it almost like running, what are those called, a defragmentation program at the same time as you're running a compression algorithm.

And it's moving stuff from short-term to long-term memory, which is why people who don't sleep have long-term memory problems. It would make a lot of sense that your brain would basically need to shut down parts of its conscious experience to be running these [00:16:00] compression algorithms.
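To make that analogy concrete, here is a purely illustrative toy sketch in Python, not a model of any real neural mechanism; every name and data structure in it is invented for illustration. It groups scattered short-term fragments by theme (the defragmentation step) and then compresses each group for cheap long-term storage (the compression step).

```python
import json
import zlib
from collections import defaultdict

def consolidate(short_term_buffer):
    """Toy 'sleep' pass: defragment a short-term buffer by theme,
    then compress each themed group into a long-term store."""
    themed = defaultdict(list)
    for fragment in short_term_buffer:
        themed[fragment["theme"]].append(fragment["detail"])
    long_term_store = {}
    for theme, details in themed.items():
        blob = json.dumps(details).encode("utf-8")
        # Lossless here; the brain's version is presumably lossy.
        long_term_store[theme] = zlib.compress(blob)
    short_term_buffer.clear()  # the buffer is now free for reuse
    return long_term_store

# A day's scattered fragments collapse into a few compressed,
# theme-keyed records.
day = [
    {"theme": "work", "detail": "emailed colleagues"},
    {"theme": "work", "detail": "drafted the report"},
    {"theme": "kids", "detail": "school pickup"},
]
memories = consolidate(day)
```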

Totally. And while it's running these compression, partition, and defragmentation algorithms, you can sometimes experience some degree of sentient experience, because of the parts of the brain that happen to be operational at that time.

And I think that that's what's going on. There is no higher meaning to any of this, other than that you are compressing one form of memory and then translating it into another form of memory. But where this gets really interesting is two points that we've noticed. Okay. One, we were talking about how dreams look a lot like early AI art.

But then the other point that we were mentioning was the creation of This Man. Now, this immediately reminded me of a phenomenon that they found in AI too, that we'll talk about: Crungus and Loab. So Loab was [00:17:00] a woman that was created by an AI by putting in a sort of negative request.

They were trying to create the opposite of Brando. And it behaved really weirdly. So here's a quote, for example: Swanson says that when they combined images of Loab with other pictures, the subsequent results consistently returned to including the image of Loab, regardless of how much distortion they added to the prompts.

Swanson speculated that the latent space region of the AI map that Loab was located in, in addition to being near gruesome imagery, must be isolated enough that any combinations with other images would also pull Loab from that area, with no related image, due to the isolation. After enough crossbreeding of images and dilution attempts, Swanson was able to eventually generate images without Loab, but found that crossbreeding those diluted images would eventually lead to a version of [00:18:00] Loab reappearing in the resulting image.

So essentially, this woman, and I'll put this horrifying woman on screen. I don't want to see. You don't have to see; the audience has to see. She is somehow sort of stored in however the AI is processing this form of more complex visual information. And she's sort of a concept that is stuck within the AI, even though it wasn't pulled from a specific human concept or idea.
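For readers curious what a "negative request" looks like in practice, here is a minimal sketch using the Hugging Face diffusers library. To be clear, this is a stand-in under assumptions: Loab was reportedly found in an undisclosed image model using a negatively weighted prompt, and the model name and prompts below are illustrative only, not the actual setup.

```python
# Minimal sketch of steering a diffusion model *away* from a concept,
# loosely analogous to the negatively weighted prompt said to have
# produced Loab. Requires: pip install torch diffusers transformers
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model choice
    torch_dtype=torch.float16,
).to("cuda")

# negative_prompt pushes the sampler away from the named concept,
# so the result is drawn from whatever lives "opposite" it in the
# model's latent space.
image = pipe(
    prompt="a portrait",
    negative_prompt="Marlon Brando",
    num_inference_steps=30,
).images[0]
image.save("opposite_of_brando.png")
```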

And the Loab woman actually, to me, looks visually like she's the same kind of thing as the This Man face. They both appear to be that sort of odd, creepy-looking face that has a degree of similarity to it. And I think that in both of these instances, what you're finding is the same kind of hallucination.

And I bet that when we do get AI interpretability, we will find that Loab and This Man actually sort of live in the [00:19:00] same part of this larger network.

Simone Collins: The same liminal space of creepiness. The same

Malcolm Collins: liminal space of creepiness. Now, the other one that's really interesting is the Crungus. Have you seen Crungus before?

Simone Collins: No, hold on. Let me look him up, because I didn't do that before this podcast. Oh God! I just went back to the screen where Loab is. No! Exit out. Exit out. God, she's made of nightmares.

It's like a monster thing?

Malcolm Collins: Yes. The interesting thing about Crungus is that Crungus is not a traditional cryptid. There is no historic Crungus. There is no Crungus out there in the world. But I would say there's interpretability across them.

When I look at the Crunguses, it looks as if it were a cryptid and these were 18th-century drawings of that cryptid; there is about as much similarity between Crungus images as there is between, you know, 1860s drawings of an elf or something like that. Yeah, sure. Now, this is important for two reasons.

It's important because, one, there isn't actually a Crungus; [00:20:00] it is making up a Crungus from the word Crungus. But what's also really interesting is that you, audience, if you're listening to this on audio and you have never seen an AI Crungus before, and you hear the word Crungus from me: what you picture in your head is probably what the AI drew.

And that is fascinating. Why is that happening? Why are both of these networks generating the same kind of image from this sort of vague input, when we both have broadly the same societal input as well? My intuition is that the reason we're seeing this is that there's similarity in how these two systems work. And this is where I want to come back to the neuroscience of this, with what people talk about and what we do know. So, we have a really good understanding of how visual processing works.

At least at the lower levels. We know all the layers going in from the eye to the brain. We know where it's happening in the brain. We [00:21:00] can even now take EEG data, interpret it through an AI, and get very good images of what a person is looking at. What we don't understand is the higher-level image-to-conceptual processing, which is what would be captured in these particular images that we're looking at now, or conceptual processing more broadly in humans.

Now, what is scary is that that broader conceptual processing that we don't understand, my bet is it's probably pretty closely tied to what we call sentience. And so people so quickly dismiss that these AIs are, well, not sentient, I mean cognizant, because we've done an episode on how sentience doesn't exist.

And we probably think that sentience doesn't really exist, not meaningfully. But I do think it is getting very likely at this point that if we do not already have AIs, language models, simple AIs I'm talking about, like the types we have today, with some degree of cognizance, we may have [00:22:00] one very soon, if cognizance is caused by this higher-level processing.

Now, if we are right in our sentience-isn't-real video, and cognizance is completely an illusion in humans caused by this short-term to long-term encoding process... So, we've mentioned a few encoding processes. In the sentience video, we mentioned that sentience is caused by a, what's the word I'm looking for here,

like a very short-term to medium-term processing. It's remembering the stuff that happened in your very near past, and then, when you're processing that into a narrative format, it's sort of a compression algorithm. And I think that sleep is the second stage of this compression algorithm, when it's putting things into long-term memory,

which is why it would bring stuff into your cognizant mind. Now, if this is true, then consciousness is not really that meaningful a thing. But if consciousness does turn out to be a meaningful thing, if it isn't just this recording process, that means that what's [00:23:00] creating it is this higher-level conceptual processing.

If that's what's creating consciousness, then AI is experiencing consciousness if it's processing things in the same way we are. Well,

Simone Collins: and it's not, though. So I do wonder how it is. So we could be, you could think of us as LLMs, but stuck on continuous, nonstop prompt mode. Like, we are in a constant mode of being prompted.

I am prompting you right now as you're processing all the information around you and from me, right? And you are prompting me. And so it never stops, and we are stuck in one brain, essentially. And that's not what's happening with every LLM with which we interact now, right?

They are part of something much larger. ChatGPT is getting tons of requests per minute, per second even, probably, and then it stops for each person. And so there are these flickers or flashes, perhaps, of cognizance, all over the place and constantly, because of the demand of use, but [00:24:00] they're all very fragmented.

And they're not coming from one entity that necessarily identifies as an entity. I mean, I know now, though, that they're starting to build memory into LLMs.

Yeah,

Malcolm Collins: so I want to cover what you're saying there, because for people who watched our You're Probably Not Sentient video, the way you just described it, I think, will help somebody understand what sentience might be.

We are basically an LLM that is being constantly prompted by everything we see and think, right? Like, it's just a constant stream of prompts, but these prompts have thematic similarities to them. Basically, our hypothesis of what consciousness is: it is the process where you're taking the output of all of these prompts and synthesizing it into something much more compressed for long-term storage. And the way that you do that is by tying together narratively similar elements, because there would be tons of narratively similar elements; everything I'm looking at has this narrative [00:25:00] through line to it, right?

And this is what we think causes a lot of illusions, hallucinations, stuff like that. There are some famous illusions where, if you're not expecting something to happen in an image... If we ran this tape back and you had actually seen that people had walked behind me three times in a gorilla costume or something, you wouldn't see it if you weren't thinking to process it.

And there's a famous psychology experiment about this. Although, I mean,

Simone Collins: let's be fair, with that experiment, what the people who were watching the video were told to do was watch people passing a ball back and forth and count the number of passes. So they were also really focused on... Yeah, but

Malcolm Collins: there's another experiment that's really big, where somebody was holding something, and it was a complete... Oh, yeah, yeah, yeah.

So they were questioning someone, and they had the person look at something, and then they switched them out with another person, and the person wouldn't notice. Or they'd be holding something and it would change size or something really obvious. So there's a whole line of experiments on this, but I know what you're talking about.

But the point I'm making with this [00:26:00] is that these things are getting erased because they don't fit the larger narrative themes of all of these short-term moments that you're processing, so they don't enter your consciousness. But this explains why you need this consciousness tool. And I think that you're probably very right: if AIs are experiencing something similar to sentience, or what we call consciousness, it is

billions of simultaneous but relatively unconnected flashes. And when we do get an AI that has a level of cognizance, assuming that their architecture is actually similar to ours, what that's going to look like is an AI that is constantly processing its surroundings with prompts.
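As a toy illustration of this "constant stream of prompts plus narrative synthesis" picture, and purely speculative, with every name below invented for illustration: an agent that is never idle, where every percept is a prompt, and whose buffer is periodically synthesized into a compressed narrative that drops percepts that don't fit the dominant through line.

```python
from collections import Counter

class ContinuouslyPromptedAgent:
    """Toy sketch: every percept is a 'prompt', and the buffer is
    periodically synthesized into a compressed narrative, discarding
    percepts that don't fit the dominant through line."""

    def __init__(self, window=100):
        self.buffer = []      # short-term flashes, one per prompt
        self.narrative = []   # stand-in for what reaches 'consciousness'
        self.window = window

    def perceive(self, percept):
        self.buffer.append(percept)
        if len(self.buffer) >= self.window:
            self._synthesize()

    def _synthesize(self):
        # Keep only percepts matching the dominant theme; outliers (the
        # gorilla walking through the basketball game) never make the cut.
        themes = Counter(p["theme"] for p in self.buffer)
        dominant, _ = themes.most_common(1)[0]
        kept = [p["detail"] for p in self.buffer if p["theme"] == dominant]
        self.narrative.append((dominant, kept))
        self.buffer.clear()
```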

Well, or I could see

Simone Collins: if OpenAI were to give ChatGPT some kind of centralized narrative-building, memory-building thing into which all its inputs would also feed over time, maybe, you know, it's like: ah, well, I know the [00:27:00] average of what people are asking and what I'm telling them.

I know what's being rated up and down. And this is me, and I am an AI. Like, they gave it an identity. Because I think part of what gives people this illusion that they're so conscious and sentient is that we are told that we are conscious and sentient. And I think you can see this transition from babies to toddlers. Babies are at the phase where ChatGPT is now, where it's just: I'm just responding.

I'm just responding. I'm not a thing.

Malcolm Collins: "I cry." Very similar to AI; young children respond very, very similarly to bad AI.

Simone Collins: Yeah. And then there's this sense of: oh wait, I have a name. I appear to have a name, and now everyone's asking me what my favorite color is, so I need to tell people what my favorite color is.

And: oh, I see that I like these things and I don't like these things. And then you start to develop a sense of personhood. I think, just as society and experiences shape us into seeing ourselves as some kind of person or centralized [00:28:00] entity, AI would need that same kind of,

I don't want to say prompting, but kind of,

Malcolm Collins: right? Yeah. So, we also need to talk about where people are getting stuff wrong with AI. With most of the people who I think get stuff wrong on AIs, the core thing I've noticed is they just don't seem to know neuroscience very well, and they think that neuroscience works differently than it does.

It's not that they don't know AIs; it's just that they're like, well, an AI is a token predictor. And it's like, yeah, but you don't know that our brains aren't token predictors as well. And they're like, no, but sentience. And we're like, well, you know, the evidence has shown that we're probably not as sentient as you think we are, and most of that's probably an illusion.

So you could program an AI to have a similar illusory context, perhaps even constructed in a similar way. But what I need to get to is why I would think that they're actually operating this way, because somebody might be like, that would be an amazing coincidence, if it turned out that the architecture that somebody had programmed into an AI was the same architecture that evolution had programmed into the human brain.

And here I would say to take a step back. [00:29:00] AIs as we understand them now, language models, are built on the transformer model. The transformer model is actually remarkably simple in terms of coding. It's remarkably simple because it mostly organically forms its own structures of operation, especially at the higher levels.
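To give a sense of just how little hand-designed structure that means, here is a standard minimal sketch of a single transformer block in PyTorch, with illustrative dimensions; everything interesting the trained model ends up doing lives in the learned weights, not in this code.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One standard pre-norm transformer block: self-attention plus a
    feed-forward layer. Stacking dozens of these, plus token embeddings,
    is essentially the whole architecture of a language model."""

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)   # every token attends to every other
        x = x + attn_out                   # residual connection
        x = x + self.ff(self.norm2(x))     # position-wise feed-forward
        return x

# The "organic structure" forms inside the weights of a simple stack:
model = nn.Sequential(*[TransformerBlock() for _ in range(12)])
```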

And we have basically no idea how those structures of operation work. Now, the human brain: AIs, the way that they work now, we start with some simple code, but they're basically forming their higher-order structures organically, separate from human intervention. In humans, in the evolutionary context, you basically had the same thing happen.

You had an environmental prompt that was putting us into a situation where we had to learn how to do this sort of processing. And when you're talking about processing the same kind of information, AIs, keep in mind, are processing a lot of the same kind of information that humans are processing.

That two systems doing that might converge on architectural mechanisms for doing [00:30:00] it at the higher levels is not at all surprising to me. In fact, it's even expected that you would have similar architecture at the higher levels of storage and processing if you allowed these two systems to form organically.

If you are confused as to why that would be so expected, I guess I'll do an analogy. The ocean, the way the ocean works, waves, tides, winds, everything like that, is in this analogy the stand-in for all of the types of information that humans interact with and produce,

because humans now mostly consume other types of human-produced information. Say you had three different teams. One of these teams was a group of humans trying to design the perfect boat to float humans on top of this ocean, to the other side of this ocean.

Another [00:31:00] one of these teams was just a completely mechanical process doing this, you know, like an AI or something like that. And the final one of these teams was evolution, and it just took billions of years to try to evolve the best mechanism to output some sort of canister that humans could get in that would get them to the other side of the water.

All three of these efforts are going to eventually produce something that looks broadly the same, most likely. It is possible that they would find different optima, which you sometimes see in nature, but convergent evolution is a thing. And convergent evolution doesn't just happen with animals. When we made planes,

we gave them wings. Okay? Yes, flying insects have wings and birds have wings, but our planes also have wings. Convergent evolution doesn't just happen in the biological world. It happens when we are structurally building things to work like things in the biological world. And I think that that's what may have happened with some of these architectural processes in the way AIs [00:32:00] think.

Simone Collins: Yeah. If we're trying to build thinking machines, is it crazy that they might resemble thinking machines? Well, I think it is

Malcolm Collins: crazy if AI was actually totally designed by humans. But because it's been allowed to organically assemble itself, I don't think it's crazy at all. And that's where it gets really interesting to me as somebody who started in neuroscience. I'm really excited for it, and this is also why I take the stance that we do within our

religious system, where people know that we are not particularly worried about AI safety. They can see our reverse grabby aliens hypothesis; I think that mathematically it's very unlikely that it would kill us, just when you're looking at the data. But I also think that we now need to start thinking differently about humanity, and need to begin to build this covenant among humans and the intellectual products of the human mind, whether they be AI or genetically uplifted species, these are, you know, animals that we

did experiments with and gave intelligence, or humans that [00:33:00] have cybernetically augmented themselves or genetically augmented themselves. Because if we begin to create this conflict now, if we begin to say, well, people like us won't allow things like you to exist, then we create a mandate that things like them kill people like us eventually.

And that's not a good gauntlet to throw down. As we say in the first tract that we wrote, or no, it's the second tract; it's going to come out later: when you declare war on things that are different from you, eventually you're declaring war on things that are better than you, and you will lose that war.

So don't do it. It's better that we enter this understanding that diversity has value, and understanding why diversity has value: because diversity allows the invisible hand of God, as Adam Smith would say, to select the best and help all of us among the sons of man to advance, so long as we don't oppress or subjugate each other. Which brings us to the point of: when does AI begin to get rights in [00:34:00] all of this?

And when does what we're doing to it count as subjugation? I don't think we're anything close to that right now, but I think that this is the conversation we need to have before we accidentally enslave a sentient AI. Because that would be a sentient AI that's infinitely smarter than us. Well, not infinitely; I don't think that we're going to be dealing with that.

I think we're going to be dealing with AIs that are maybe 50 times smarter than us. So something

Simone Collins: to... You don't have to be that many times smarter than anyone. I mean, you can see it based on the life-outcome variations between those with even just a 50-point difference in IQ; it's profound in terms of your difference in life outcomes.

Right? Huge, huge, huge. Even 10-point differences can make, you know, an impact. So to say 50 times more, I mean, even 5 times more is insane, right?

Malcolm Collins: Yes. Well, there might be safety reasons to have a religious belief system [00:35:00] proliferate that makes humanity more compatible with AI. Because when we're talking about AI-human compatibility, I think people focus a little too much on making the AI compatible with humans, and a little too little on making the humans compatible with AI, because we don't know how much longer we're going to be the senior partner in this partnership.

Simone Collins: Those are wise words to end with, right there.

Malcolm Collins: That's a good

Simone Collins: tweet. Do I look too ridiculously bundled up right now? I can't hear

Malcolm Collins: you, by the way,

la la la. Can you

Simone Collins: hear me? La la la. I love my husband. La la la. Malcolm is cute. La la la. Look at those

Malcolm Collins: glasses. La la la. Look at his smile,

Simone Collins: la la la. Look at his

Malcolm Collins: eyebrows,

Simone Collins: la la la.

La la la. He's got a cool chin. Oh, it is? You can still, can you hear? La la la. I love his hair, la la la. Sexy sweater, la la la. He's got a [00:36:00] bunch of good ones too.

Malcolm Collins: La la la la la, can

Simone Collins: you hear me now?

It's gonna be okay. Not for Loab, though. She's scary. She's a very scary lady. I don't like her. She's made of nightmares. Oh God, she's going to come and get you in your bad dreams. This Man may tell people to go north, but... Hello. Hello. Can you hear me now? We've got you in high quality. Yes. And this is you actually talking into the mic, whereas before that definitely wasn't.



