Copypasta from an ongoing conversation about my brain model.

12 Posts
5 Users
5 Reactions
235 Views
(@steve)
Member Admin
Joined: 1 year ago
Posts: 26
Topic starter  

@Mabus suggested we move this from the blog comments (where things easily get lost) to the forum. So here’s the main part of the conversation. Feel free to weigh in below!


@FoggyGoofball:

Speaking of brain code, I think that I’m going to diagram all of the lobes and layers onto graph paper. Being able to see them laid out physically will be a huge help to me.

I’m going to try not to bother you endlessly about this stuff, but I have two questions in that regard.

First, do you think that with adequate application of the “sucker” function on each of the feet that we could create a creature capable of climbing walls? Is it actually providing downforce, or is it an attraction to the surface underfoot?

As I understand it from looking at gloop genomes, we might be able to introduce a new IO brain organ that could translate a separate set of gait genes for climbing when we sense verticality in the spine and/or feet. Obviously it’s not simple, but would it be possible?

And my last question for today: what do the visual properties KIND, EARTH, AIR, FIRE, and WATER represent specifically?

I can also confirm that the fallback to previous D3D versions did work successfully in the pre-alpha build that Mabus shared with me.


@Steve:

Wow! You’re really getting into it!!!

I do have a diagram of all the (main) brain maps and how they’re connected to each other. Do you want it or would you rather figure it out for yourself?

The suckers exert a down force. I guess they could be made to exert that force in any direction, although to climb walls the creature’s weight would exert a large force at right angles to it and I’m not sure what the effects would be (on their leg joints especially). Making a complete physics-based creature was a huge challenge (all the other attempts I’ve seen cheat far more than they admit to). As Unity and PhysX have improved over the years, it’s become less and less of a nightmare, but it’s still very weird and unpredictable, so the creatures needed a bit of help sticking to the floor! Anyway, I think climbing would be possible in principle. Swimming and flying would come further up my own list, but either way, adding an extra dimension makes the whole thing a lot more complex and resource-hungry. Just keeping them from falling over is a big enough task – every time Unity changes their physics code it all goes wrong again!

KIND is the basic type of object: { “outdoors”, “indoors”, “vehicle”, “furniture”, “device”, “utensil”, “plant”, “animal”, “person” } (where “person” means another phantasian or the player).

After that it’s a hierarchy. So, if KIND==outdoors, then EARTH == { “wild”, “natural”, “cultivated”, “builtup”, “industrial” }, and so on.

But these labels are purely for my own use. All the creatures see is a list of numbers from -1 to +1. They don’t know that -1 means “outdoors”. Mostly I just need to ensure that I give objects a unique pattern of these numbers, and choosing them sequentially would solve that. But obviously if all the plants have the same number for KIND and meaningful numbers for shape, color and habit, then it’s easier for the creatures to learn to recognize a few plants and then develop beliefs about whether an unfamiliar plant-like thing might be edible or poisonous.
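Since the creatures only ever see numbers, the scheme described above can be sketched in a few lines. To be clear, this is a toy illustration and not Steve’s actual code: the KIND labels come from the thread, but the value assignments and the `encode` helper are invented.

```python
# A toy sketch of the five-number object encoding described above.
# The property names (KIND, EARTH, AIR, FIRE, WATER) come from the thread;
# the specific values and helper functions here are invented for illustration.

KINDS = ["outdoors", "indoors", "vehicle", "furniture", "device",
         "utensil", "plant", "animal", "person"]

def kind_value(label):
    """Map a KIND label onto an evenly spaced value in [-1, +1]."""
    i = KINDS.index(label)
    return -1.0 + 2.0 * i / (len(KINDS) - 1)

def encode(kind, earth, air, fire, water):
    """An object, as the creatures 'see' it: just five numbers."""
    return (kind_value(kind), earth, air, fire, water)

# Two plants share the same KIND value, so a creature that has learned
# about one can generalize its beliefs to the other. A spiky plant gets
# a lot of FIRE; a watery one gets more WATER.
thistle = encode("plant", earth=0.2, air=0.1, fire=0.9, water=-0.3)
lettuce = encode("plant", earth=0.3, air=0.0, fire=-0.8, water=0.6)
```

The point of the shared KIND value is exactly the generalization described above: an unfamiliar plant-like pattern lands near the known plants in this five-number space.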

I chose EARTH, AIR, FIRE, WATER because that gives me some vague logic I could use, based on the ancient ideas of the Elements. So the spikiness of a plant might equate to it having a lot of FIRE in it, whereas for “indoor” kinds, that might represent how well insulated or heated they are. It helps to make the world slightly more predictable for the creatures (but not so predictable that they have nothing to think about).

But really it’s just the IO maps that still retain this earth, air, fire, water naming convention. Elsewhere I took to thinking of the KIND as a noun and the other descriptive lists as different levels of adjective. There are verbs and adverbs, too.

One day I hope to have the luxury of some time to think this through much better. My son did his neuroscience PhD partly on the distinction between functional categorization and perceptual categorization. Right now I just have a kind of compromise, where objects are perceptually categorized, yet there’s also a vague functional logic built into it, so that learning can be generalized from experience of similar objects. But it’s yet another big challenge, so for now, every object in the world can be recognized as a pattern of five numbers, and creatures can attempt to divine some kind of structure in these patterns as they learn to recognize things. The whole subject of combining self-organizing maps, with functional v perceptual categorization, with the need for one-shot learning (IRL, nobody needs to see a million cats before they can recognize another cat, unlike today’s so-called AI) is something that has occupied a lot of my thinking over the years, but I don’t have good answers yet. Or at least, I haven’t had time to try them out…


@FoggyGoofball:

I’d love to look at your brain maps! The grandroids genomes are really well annotated so I feel like I’m beginning to understand some of the broad strokes, but more data is always welcome!


@Steve:

Here you go:
https://phantasia.life/docs/ep/biology/neuroscience/brain-maps/
It’s a bit out of date now, but good enough to be going on with. It only shows the yin (afferent) and yang (efferent) connections. Some maps also have affect layer connections that interface with chemistry to give the maps ‘beliefs’ about how it feels to be in different yin states under different circumstances. Everything is bidirectional – information flows up yin and at some point comes back down yang and flows out. This is how the real brain is wired, rather than the pipeline structure that I think most people assume, and it’s a crucial part of my theory.


@FoggyGoofball:

So cool! That’s a great document! I still think I want to map it out in actual physical space. I want to see it in more detail, but my screen is too small to display everything at once. It’ll be a project for sure, like a crime board from the cop dramas with strings all over the room.

Not sure how realistic it is judging from the scale, but I’m going to try to make a 1:1 neuron for neuron copy that I can walk around and literally look at from different angles.


@Steve:

Haha! Well, you’ll have to make it a cortical column by cortical column copy, but sure. I’d love to see that!

Yes, seeing things from the side isn’t really all that helpful, since much of the action happens in the other directions – each map is a stack of plates – bottom layer, affect layer, top layer, plus occasional prograde, retrograde and other layers. It’s the pattern of activity on each plate that explains what’s going on, and you can’t easily visualize that from the side. Each map is in some kind of 2D “space” – eye space, head space, local space, object space, word space, conceptual space, and so on, and these spaces tend to form hierarchies.

If I were making this for real, each of the wires you see in the diagram would actually be a massive bundle of connections from every patch in one map to every patch in the connected map. But because that would require a supercomputer, I crossed my fingers and hoped that just sending the XY position of the PEAK activity from map to map would be good enough. And mostly it is. Where the creature’s eyes are looking is represented by a broad dome of activity in head space, but all the next map cares about is where the peak of this dome is, not how much every neuron is contributing to it or how oddly shaped it is. So all the inter-map signals are reduced down to single X,Y,A,Q values (the State struct – you’ll come across hundreds of them in the code!), where A represents the peak’s amplitude, and Q mostly represents its “tightness” (like the Q of an audio filter, if that means anything to you).
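The peak-summarizing idea might look roughly like this in code. The X/Y/A/Q field names come from the State struct mentioned above; the way this sketch estimates “tightness” is purely my own stand-in (how concentrated the total activity is at the peak), not the real formula.

```python
# A hedged sketch of the State struct idea: collapse a whole 2D activity
# map down to the position, amplitude, and tightness of its peak.
# Field names X, Y, A, Q come from the thread; the q estimate is invented.

import numpy as np
from dataclasses import dataclass

@dataclass
class State:
    x: float  # peak position (column)
    y: float  # peak position (row)
    a: float  # peak amplitude
    q: float  # "tightness" of the peak (sharper dome -> larger q)

def summarize(activity):
    """Collapse a 2D activity map into a single State."""
    y, x = np.unravel_index(np.argmax(activity), activity.shape)
    a = activity[y, x]
    # Crude tightness: fraction of total activity sitting at the peak cell.
    total = activity.sum()
    q = a / total if total > 0 else 0.0
    return State(float(x), float(y), float(a), float(q))

# A broad dome of activity centred at row 3, column 5 (like the "dome in
# head space" described above):
yy, xx = np.mgrid[0:8, 0:8]
dome = np.exp(-((xx - 5) ** 2 + (yy - 3) ** 2) / 4.0)
s = summarize(dome)
```

The downstream map sees only `s`, never the full 8×8 array, which is exactly the compression the post describes.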

It’s strangely both very complicated and pretty simple! The peak of activity in one layer says what state the map thinks some aspect of the world is in (its yin state), while the peak in another layer says which state the same or a different map WANTS that aspect of the world to be in (yang state). The map’s task is just to try to bring the yin state into line with the yang state, and hence satisfy a desire by bringing reality into line with an intention or hope. But what one map wants, dictates what other maps want, and what one map believes is going on, constrains what other maps think they can achieve.
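The yin/yang alignment idea can be caricatured in one dimension. This is a deliberately tiny toy, not the real architecture: the scalar state and the gain constant are my own assumptions.

```python
# A toy of the yin/yang loop described above: a map holds a believed
# state (yin) and a desired state (yang), and acts to close the gap.
# The one-dimensional state and the gain value are illustrative only.

def step(yin, yang, gain=0.25):
    """One action on the world, pulling belief toward desire."""
    return yin + gain * (yang - yin)

yin, yang = 0.0, 1.0   # e.g. where the eyes point vs. where we want them
for _ in range(20):
    yin = step(yin, yang)
# yin has converged close to yang: the desire is (nearly) satisfied
```

In the full system the interesting part is the coupling the post mentions: one map’s yang becomes another map’s target, so desires and beliefs constrain each other across the hierarchy.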

I think this is probably close to what happens in real brains, and to my knowledge nobody has ever built a system like this, but it’s just a hunch, really, based on a lifetime of “Hmm… I wonder if the brain is a bit like a model aircraft” -type thinking! I have a LOT to say about all this, but while I’m trying so hard to make enough 3D game-related progress that I can continue to pay my rent, I just don’t have time to say any of it! Anything you figure out and feel you can share with people will help me a lot.



   
FoggyGoofball and Mabus reacted
(@foggygoofball)
Enthusiast
Joined: 4 months ago
Posts: 62
 

Cortical columns, not neurons, got it!

I’m still getting my mind around how large and all-encompassing this simulation is. I hadn’t even considered the physics interacting with the bodies yet. I realized that they can perceive when they bump into something and which part of their body it contacted, but frankly I’m still just dipping my toes in. I can’t imagine how difficult true flight physics would be, given the difficulty of just getting them to stand.

The mention of flight and swimming in the same breath made me think that it might be computationally cheaper and all-around easier for both swimming and flying creatures to incorporate some type of swim bladder organ, even if you had to fudge the buoyancy by fiddling with the mass somehow.

For a dragon-esque creature, for example, some chemical pathway leading to electrolysis of stored water could build up hydrogen and make it neutrally buoyant. Wouldn’t it be great if it also had to breathe fire to descend, akin to a submarine flooding its ballast tanks (aquatics could just burp bubbles)?

Anyway, infinite possibilities and all that.  I’ve got laundry to do, hopefully I’ll get a nice chunk of time to study some more later on tonight.



   
(@robowaifu-technician)
Member
Joined: 11 months ago
Posts: 6
 

Posted by: @foggygoofball
Wouldn’t it be great if it also had to breathe fire to descend, akin to a submarine releasing ballast (aquatics could just burp bubbles).

If it were buoyant due to having a bunch of hydrogen inside of it like a balloon, couldn’t it just burp bubbles of hydrogen to descend without igniting it?

 



   
(@genesis)
Member
Joined: 1 year ago
Posts: 44
 

I understood enough of this conversation to act as if I understand how the bolly brain works.

However, a question: in a previous comment that’s now lost in the blog comments, there was a mention of a balance between forgetting and remembering. Is this a constant value, or does it change over time? For example, more learning and less forgetting as a child, because there is more to learn; and when they become older, forgetting gets stronger, because they have more information and can sort out old, unnecessary things?

If it is controlled by a chemical (similar to neuronal growth factors), it could be used to grow or shrink the memory ability, similar to some squirrels ( https://www.youtube.com/shorts/yN-CSqe9QoE ) or birds who alter their brain size/complexity depending on the season. It could deteriorate over time to simulate mental aging and later even decline. And if the chemical is produced when a bolly experiences new things (aka learning), it could produce more to increase the learning ability when needed.

In my experiments with neuronal growth factors (kids, don’t do what I did), I regained a lot of mental abilities linked with young brains, including the odd headaches I had all the time in my youth. If this could be similarly emulated in the bollys’ brains, we might even get some brain disease patterns when bollys forget too much. Unlocking a new window into medical fields?
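The chemical-modulated plasticity idea in the post above could be toy-modelled like this. Every name and rate constant here is invented for illustration; nothing reflects the actual bolly biochemistry.

```python
# A toy sketch of the idea above: a single memory trace whose learning
# rate and forgetting rate are both driven by a circulating "growth
# factor" level. The rate constants are invented for illustration.

def update_memory(trace, signal, growth_factor):
    """One tick: learn toward the signal, decay toward zero.

    High growth_factor -> fast learning, slow forgetting (young brain).
    Low growth_factor  -> slow learning, fast forgetting (aged brain).
    """
    learn_rate = 0.5 * growth_factor
    decay_rate = 0.1 * (1.0 - growth_factor)
    trace += learn_rate * (signal - trace)   # learning pulls toward input
    trace -= decay_rate * trace              # passive forgetting
    return trace

# A "young" brain locks onto a constant signal quickly and holds it...
young = 0.0
for _ in range(50):
    young = update_memory(young, 1.0, growth_factor=0.9)

# ...while an "aged" one learns the same signal more weakly.
old = 0.0
for _ in range(50):
    old = update_memory(old, 1.0, growth_factor=0.2)
```

Driving `growth_factor` up whenever novel input arrives, or down with age, would give the seasonal and lifespan effects the post speculates about.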



   
(@genesis)
Member
Joined: 1 year ago
Posts: 44
 

wait… does this mean that if a bolly has a map where it decides to run up a hill, it could redirect that information to a “visual” cortex, where it imagines itself running up the hill, and that in itself would be a reward it grants itself, allowing for daydreaming? Later reinforcing that imagination when it actually goes up the hill, to fully reward itself and not just partially?



   
(@foggygoofball)
Enthusiast
Joined: 4 months ago
Posts: 62
 

@genesis Yes! You’re getting it now! The bollys have a very complex visual system which is wired directly into their memory circuits and decision-making processes. It’s not exactly all one thing; it’s a network made up of more than 20 individual parts, like landmark detection and obstacle avoidance. These all come together to tell “exec” (the executive functioning network, kind of like a prefrontal cortex) what they see.

This is where it gets really interesting: bollys have three distinct levels of consciousness. First is the subconscious, made up of low-priority things that cross their mind (mind-wandering-type stuff like “I’m not really hungry, but I’m remembering where the food was last” or “I can hear something in the distance, but I’m busy trying to find a mate”).

Next is imagination. Incredibly, bollys have a “mind’s eye” and can imagine different scenarios, like “should I go through the woods or over the bridge to reach my goal?” or “I’m hungry and I could eat the food in front of me, but maybe I’d prefer to eat something different, and how would I get there? Is it worth the effort to make a journey for tastier snacks?”

Last is intention. These are the decisions that ultimately drive our critters: they’ve thought about being hungry, then they’ve imagined the possibilities (eat the bland kibble in front of me and satisfy my hunger, go for a walk and find something more appealing, or maybe I wasn’t really that hungry to begin with and should go play with a toy). After weighing their options, they decide on whatever they think will make them most content.

They actually rehearse these scenarios in their minds while coming to a final decision. These “rehearsals” also take place when the bollys fall asleep: they have a REM sleep cycle in which they can actually dream!



   
Mabus reacted
(@robowaifu-technician)
Member
Joined: 11 months ago
Posts: 6
 

@foggygoofball That reminds me of an idea I had not too long ago for the robot I’d like to make. DeepMind made an AI benchmark test that takes a few seconds of video footage and uses an AI model to generate what the next few seconds of footage look like, with varying degrees of success. This made me think of combining predictive and generative AI with a need/drive to predict the future accurately, adjusting both its behavior and its model of the world to make its predictions come true. It’s probably not a great idea, but I know people find making correct guesses about the future rewarding, so my AI should too.



   
(@genesis)
Member
Joined: 1 year ago
Posts: 44
 

@robowaifu-technician That reminds me: Steve said once that he made the brain in a modular way, so that he could plug it into a robot down the line.

No idea how difficult that might be, but if we got someone with a robot walking outside the PC and behaving like an animal, this could go viral and help the project gain a lot of traction.

And you would have a nice robot pet. But no idea how high that ability stands on Steve’s to-do list?



   
(@foggygoofball)
Enthusiast
Joined: 4 months ago
Posts: 62
 

@robowaifu-technician @genesis So my understanding is that the way the Phantasians’ brains function is similar to our own. Most modern AI, such as LLMs, is based on layered, high-dimensional perceptrons, which is to say they’ve been fed many, many inputs, all with “labels” (“supervised learning”). In other words, humans told them what they are seeing, and after being told thousands, possibly millions, of times, they learn to recognize patterns in the data, like how to differentiate between cats and dogs or which word or phrase is most likely to occur next in a sequence.

ChatGPT is barely more sophisticated than ELIZA was; it just has more processing power and training data.

Now, our own brains are a bit of a mystery, and I’ve been doing a lot of deep research lately with the goal of understanding @steve’s own theory of cognition. Theoretical neuroscience of the last fifty years has made huge strides in trying to break down the intricacies of brain dynamics. Many researchers have written their thesis (thesi, thesisisis?) on ways to approximate mathematical models of learning, cognition and intrinsic structure within various brain systems.

This comprises far too much to be relevant here, but what I feel is relevant is the extensive research done into the self-organising structure of the visual cortex (“unsupervised learning”). If you’re into doing your own research, investigate Kohonen maps, competitive learning and Hebbian learning rules; this is where I started and it seems as good a path as any, but don’t stress too much about the math. Steve’s model of the mind is based upon his own intuition, and any derivative mathematical models are relatively unimportant except perhaps to future researchers.
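For anyone who wants to see the competitive-learning idea in action before diving into the literature, here is a minimal 1D Kohonen-style map. All parameters are illustrative and have nothing to do with Steve’s implementation.

```python
# A tiny 1D self-organizing map trained on scalar inputs, illustrating
# the Kohonen / competitive-learning idea mentioned above. Every number
# here (unit count, learning rate, radius, epochs) is an arbitrary choice.

import random

random.seed(0)
weights = [random.random() for _ in range(10)]  # 10 map units

def train(weights, samples, lr=0.2, radius=1, epochs=50):
    for _ in range(epochs):
        for x in samples:
            # Competitive step: find the best-matching unit (BMU).
            bmu = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
            # Cooperative step: pull the BMU and its neighbours toward x,
            # which is what makes nearby units respond to similar inputs.
            lo, hi = max(0, bmu - radius), min(len(weights), bmu + radius + 1)
            for i in range(lo, hi):
                weights[i] += lr * (x - weights[i])
    return weights

samples = [0.1, 0.2, 0.5, 0.8, 0.9]
weights = train(weights, samples)
# After training, the map has self-organized to cover the input range.
```

A 2D version of the same loop, with a decaying neighbourhood radius, is essentially the classic Kohonen algorithm the post points you toward.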

Anyway, coming back to the topic: Phantasians’ brains are like real brains in the sense that without an organic body full of chemistry and desires needing fulfilment, they might not behave in many interesting ways. I’ve been fascinated by the idea of building the brain in real life, but there are myriad problems with it. Nothing insurmountable, but it’d be similar to how, with each new revision of the Unity engine or PhysX, Steve has had to rework the kinematics from scratch. A “translation layer” or “simulation simulator” would need to be implemented to address real-world inputs vs. digital stimulus.

I’m only an electrical engineering dropout, and granted, sensors and microcontrollers have become cheaper than gasoline in recent years, but I’d guess we would need a large and expensive suite of very precise gyroscopes and accelerometers (presumably one at the terminus of each “bone” or limb, maybe Hall effect sensors in the joints) to give our android a sense of proprioception (where its limbs are within 3D space). Also, Phantasians don’t seem to have true binocular vision (i.e. depth perception) coded into their brains, so some adaptation for the real world would need to be accounted for (maybe a single camera with laser range finding).

Finally, when it comes to object recognition, the world/reality of Frampton Gurney has a few “cheats” to save on CPU cycles/processing power. The largest of which I’m currently aware (for any old cobblers out there) is that each object in the world has its own distinct “KIND”, which, insofar as I can tell, telepathically transmits a string of five numbers directly into a bolly’s brain when they look at it (similar to how we all used to reserve agent IDs on the CDN in C3 so there were no conflicts). Each of the five numbers has a vague contextual meaning in human terms (Steve explained it to me once, and I can repost if anyone is interested), but the bollys only “see” the string of numbers, not what they mean (they can also see shape and colour). The upshot is that with appropriate forethought, each item in the world can be given certain numerical relations which the bollys can use to distinguish its characteristics and decide how they feel about said item, given prior experience with similar types of objects.

Again, this is primarily for computational reasons. I’m sure that if Steve had a bank of supercomputers at his disposal, he would gladly increase the number of cortical columns in the visual cortex by an order of magnitude and let the creatures figure out the nitty-gritty details of how objects in the real world interrelate on their own.

But that’s all trivial; we would need to look at what actually drives and motivates an android. Phantasians are driven by the same needs as us, like the need for socialization, a comfortable climate, or food. Clearly things which an android doesn’t “need.”

What an android should want seems the largest question in my mind.

Jeeze, this was supposed to be a brief response.  Sorry, I tend to write a lot.



   
(@genesis)
Member
Joined: 1 year ago
Posts: 44
 

@foggygoofball Gotcha, the same problem I had every time with “perfect” immortal norns. Removing their drives, because they produce everything on their own, always turned them into statues.

Bolly robots would become the same, because interactions with the world would mean nothing to them, since they can’t really interact biochemically.

 

That’s partly what happens in current builds, because of the lack of objects to interact with, especially in the starting areas. Without stuff that can stimulate the bolly, why move at all…

…wait, that solves the question “what is the meaning of life?” It is interactions with the world that influence your own body!



   
(@foggygoofball)
Enthusiast
Joined: 4 months ago
Posts: 62
 

@genesis Many years ago I tried to create my own A-life sim based on creatures, but I never could get them to want things badly enough to be interesting.



   
Mabus reacted
 Fern
(@kurgan)
Member
Joined: 12 months ago
Posts: 11
 

@genesis Making it play Pokémon could do pretty well for exposure, too…



   