<?xml version="1.0" encoding="UTF-8"?>        <rss version="2.0"
             xmlns:atom="http://www.w3.org/2005/Atom"
             xmlns:dc="http://purl.org/dc/elements/1.1/"
             xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
             xmlns:admin="http://webns.net/mvcb/"
             xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <channel>
            <title>Psychology Lab - Phantasia Forum</title>
            <link>https://phantasia.life/community/psychology-lab/</link>
            <description>Phantasia Discussion Board</description>
            <language>en-US</language>
            <lastBuildDate>Wed, 22 Apr 2026 13:47:14 +0000</lastBuildDate>
            <generator>wpForo</generator>
            <ttl>60</ttl>
							                    <item>
                        <title>Why every brain metaphor in history has been wrong</title>
                        <link>https://phantasia.life/community/psychology-lab/why-every-brain-metaphor-in-history-has-been-wrong/</link>
                        <pubDate>Sat, 24 Jan 2026 04:02:49 +0000</pubDate>
                        <description><![CDATA[I really liked this special episode of Machine Learning Street Talk, so figured I&#039;d share it here:
It&#039;s a more philosophical topic, but I found it to be pretty thought-provoking.]]></description>
                        <content:encoded><![CDATA[<p>I really liked this special episode of Machine Learning Street Talk, so figured I'd share it here: https://youtu.be/pO0WZsN8Oiw?si=E5d4We3fuIIn_I0r</p>
<p>It's a more philosophical topic, but I found it to be pretty thought-provoking.</p>]]></content:encoded>
						                            <category domain="https://phantasia.life/community/psychology-lab/">Psychology Lab</category>                        <dc:creator>danielmewes</dc:creator>
                        <guid isPermaLink="true">https://phantasia.life/community/psychology-lab/why-every-brain-metaphor-in-history-has-been-wrong/</guid>
                    </item>
				                    <item>
                        <title>Likes and Dislikes</title>
                        <link>https://phantasia.life/community/psychology-lab/likes-and-dislikes/</link>
                        <pubDate>Fri, 12 Sep 2025 23:57:03 +0000</pubDate>
                        <description><![CDATA[@steve
As the resident &quot;god in the machine-code&quot; (terrible analogy of the actual definition of &#039;deus ex machina&#039; but fun to read nonetheless) ...febrile minds inquire as to how you associat...]]></description>
                        <content:encoded><![CDATA[<p>@steve</p>
<p>As the resident "god in the machine-code" (a terrible analogy to the actual definition of 'deus ex machina', but fun to read nonetheless) ...febrile minds inquire as to how you associate 'likes' and 'dislikes' into your creations.</p>
<p>Do you use common 'human' associations? (e.g. chocolate tends to be favorable in humans due to the chemical theobromine)</p>
<p>Do you use your own personal associations? Or other real-world animals as a base?</p>
<p>Just curious how you code food/liquids as 'agreeable' / 'non-agreeable' /stimulating / non-stimulating...etc.   AND if subsequent effects (vomiting/ill-feeling/ or euphoria/post-orgasm chemical flood) affect initial responses.</p>]]></content:encoded>
						                            <category domain="https://phantasia.life/community/psychology-lab/">Psychology Lab</category>                        <dc:creator>Finnius</dc:creator>
                        <guid isPermaLink="true">https://phantasia.life/community/psychology-lab/likes-and-dislikes/</guid>
                    </item>
				                    <item>
                        <title>Johnjoe (Joejohn?) McFadden and CEMI email exchange circa 2014 on machineslikeus.com</title>
                        <link>https://phantasia.life/community/psychology-lab/johnjoe-joejohn-mcfadden-and-cemi-email-exchange-circa-2014-on-machineslikeus-com/</link>
                        <pubDate>Fri, 29 Aug 2025 07:23:18 +0000</pubDate>
                        <description><![CDATA[I just discovered machineslikeus.com.  @steve used to write for the website though it&#039;s now only available through the wayback machine at archive.org.  So much great stuff he wrote back in t...]]></description>
                        <content:encoded><![CDATA[<p>I just discovered machineslikeus.com.  @steve used to write for the website, though it's now only available through the Wayback Machine at archive.org.  So much great stuff he wrote back in the day, but unfortunately the archive only lists the first ten pages of Steve's email debate regarding the seat of consciousness with a particularly smug academic type.</p>
<p>I feel like Steve had him on the ropes, as it were.  Perhaps it's a debate lost to the mists of time but I'm dying to know how it ended.</p>
<p>Call me what you will: sycophantic, a zealot, a groupie.  These things are all true, but no one in academia seems to care for the holistic nature of biology.  As Steve has said many times (and as I've honestly felt to be the real intrinsic truth of the matter for decades now), "scientists" or "academics" are often more concerned with looking clever and releasing papers than with actually solving or understanding the problems they are studying.  So much effort goes into creating (again, Steve's words) "toy problems" which are so reductionist and simplified (or in Joejohn's case, simultaneously overcomplicated and oversimplified) that they have little bearing on reality. </p>
<p>Too few researchers are actually in it for the passion of discovery. It might just be the state of the modern world, but research always seems bound inextricably to finances. Thinking is free, and consciousness is just an emergent feature of life; unfortunately <em>modern life</em> is expensive.</p>
<p>There's a very legitimate reason that AI hasn't progressed beyond chatbots, and that reason is this: researchers are often more concerned with reproducible results and flashy mathematics than with truly <strong>understanding</strong> the dynamics of inherently messy and difficult-to-reproduce systems such as the brain.</p>
<p>The version of the Turing test that I subscribe to is the one from xkcd ( https://xkcd.com/329/ ), and we just aren't there yet; in any chatbot conversation, by the time I get three prompts deep it's all gone to hell.</p>
						                            <category domain="https://phantasia.life/community/psychology-lab/">Psychology Lab</category>                        <dc:creator>FoggyGoofball</dc:creator>
                        <guid isPermaLink="true">https://phantasia.life/community/psychology-lab/johnjoe-joejohn-mcfadden-and-cemi-email-exchange-circa-2014-on-machineslikeus-com/</guid>
                    </item>
				                    <item>
                        <title>Self organizing maps</title>
                        <link>https://phantasia.life/community/psychology-lab/self-organizing-maps/</link>
                        <pubDate>Thu, 14 Aug 2025 05:57:00 +0000</pubDate>
                        <description><![CDATA[Over the last week or so I&#039;ve been trying to get to grips with the brains of Steve&#039;s cyberspawn.  I fear that I may never truly &quot;get it&quot; but that&#039;s not going to stop me from trying. 48 hours...]]></description>
                        <content:encoded><![CDATA[<p>Over the last week or so I've been trying to get to grips with the brains of Steve's cyberspawn.  I fear that I may never truly "get it" but that's not going to stop me from trying. 48 hours ago I was happy with the term "self organizing maps" as a comfortable, neat, little black box.  Since then I've been rabbit-holing.  I've read all manner of peer-reviewed papers and @steve's entire programming journal (at least as much of it as has been saved for posterity.)</p>
<p>After reading Steve's programming journals of the last 15 years, I've come to understand that all of our creatures' learning and behavior comes down to self organizing or "Kohonen" maps, named for the Finn who pioneered the technique back in the early '80s.</p>
<p>Forgive me for any inaccuracies and feel free to provide corrections, but as I understand it, Kohonen maps are a way of teasing structure out of inherently noisy data. Usually, these neural networks are trained on an "input layer" or "data set".  In the traditional Kohonen model, a 2D object space (a grid, to us laymen) is initialized with random "weights". The initial layout is then compared to the input layer, and for each data point, whichever of the random weights is the closest match is found using the Euclidean distance (it's been too long since maths class, but it's essentially the Pythagorean theorem generalized to more dimensions).</p>
<p>Upon finding the best-matching point in the grid, "learning" happens when the winning node and all of its neighbors have their weights nudged toward the input, bringing the neighborhood into closer alignment with the data.</p>
<p>This process is repeated over and over to "train" the network, ultimately resulting in clusters of nodes that can group complex data into smaller, more easily interpretable categories. Traditionally, this technique would be used to analyze "higher-dimensional data" by making inferences about the ways the data points are connected, based on their similarities and differences, without requiring external classifications.</p>
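<p>To keep myself honest, here is a toy Python sketch of the classic Kohonen update as I understand it (my own illustration only, definitely not Steve's actual code):</p>

```python
import numpy as np

def train_som(data, grid_w=10, grid_h=10, epochs=100, lr0=0.5, sigma0=3.0, seed=0):
    """Classic Kohonen self-organizing map: a toy illustration only."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))         # random initial weights
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([ys, xs], axis=-1).astype(float)  # grid position of each node
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood radius
        for x in data:
            # Best-matching unit: node whose weights have the smallest
            # Euclidean distance to the input vector
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood centered on the BMU's grid position
            g = np.exp(-np.sum((coords - np.array(bmu))**2, axis=-1) / (2 * sigma**2))
            # Winner and neighbors all move a little toward the input
            weights += lr * g[..., None] * (x - weights)
    return weights
```

<p>Feed it rows of data and it gradually arranges the grid so that similar inputs land on nearby nodes.</p>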
<p>Now in Phantasia, our creatures are expected to learn and grow throughout their lifetimes.  This means that there will never be a "fully trained" map, as our goals, and therefore our data sets, will be in constant flux. At the moment, rather than starting from totally random weights, many of our mental maps are calibrated by basic "instinct genes", which yield a more human-readable and "functionally organized" map.</p>
<p>Our brain maps have two distinct states: yin (bottom-up impulses, like sensory inputs) and yang (top-down impulses from executive-functioning maps).</p>
<p>Between these two layers we find the affect layer, which I think is like our Kohonen map.  This layer is constantly trying to find the best compromise between yin and yang, ideally bringing the two into alignment to signify that what I've been wanting and what I now have are one and the same ("I want to play with a ball" being our yang and "I'm currently playing with the ball" our yin).  The upside is that we get dynamic and emergent behavior from the feedback between the various brain systems.</p>
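<p>To make my mental model concrete, here is the feedback loop as I picture it, as a toy Python sketch (purely my own guess at the mechanism, not anything from the actual code):</p>

```python
def affect_step(yin, yang, gain=0.3):
    """Toy model: the affect layer measures the mismatch between what the
    map wants (yang) and what it senses (yin), and emits a corrective
    drive aimed at closing the gap. Entirely my own illustration."""
    return gain * (yang - yin)

# "I want to be playing with the ball" (yang = 1.0) versus
# "I'm not playing with the ball yet" (yin = 0.0)
yin, yang = 0.0, 1.0
for _ in range(20):
    yin += affect_step(yin, yang)  # acting on the drive pulls reality toward the intention
# yin has now converged close to yang: desire satisfied
```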
<p>It all does my head in a little bit, but I have a strong passion to understand what makes these creatures tick.  I may have a leg up on understanding all of this, since I can look back on the many years Steve has been working and problem-solving in order to find insights; however, I feel like I'm working from first principles, and I'm not quite sure that I have them right yet either.</p>
<p>To anyone with experience in machine learning or self organising maps, I would be forever in your debt if you could explain it all to me in the slowest, most condescending way possible, like you were trying to explain the history of the East India Company to a tea leaf.</p>
						                            <category domain="https://phantasia.life/community/psychology-lab/">Psychology Lab</category>                        <dc:creator>FoggyGoofball</dc:creator>
                        <guid isPermaLink="true">https://phantasia.life/community/psychology-lab/self-organizing-maps/</guid>
                    </item>
				                    <item>
                        <title>What does a cell know of itself?</title>
                        <link>https://phantasia.life/community/psychology-lab/what-does-a-cell-know-of-itself/</link>
                        <pubDate>Thu, 31 Jul 2025 18:20:25 +0000</pubDate>
                        <description><![CDATA[What Can a Cell Remember? | Quanta Magazine
It&#039;s a long read and a bit technical.
I read this article today and while it&#039;s not at all about this project, I&#039;m drawing some real parallels wit...]]></description>
                        <content:encoded><![CDATA[<p>What Can a Cell Remember? | Quanta Magazine https://share.google/WIxSi87PmPPdPHvze</p>
<p>It's a long read and a bit technical.</p>
<p>I read this article today and while it's not at all about this project, I'm drawing some real parallels with the Yin and Yang maps.  They explain that each individual cell may in fact remember past stimuli and predict future ones.</p>
<p>It's almost as if many, possibly all, types of cells have their own feelings about their present condition, and this cellular computation is a major factor in the neural function of multicellular organisms.</p>
]]></content:encoded>
						                            <category domain="https://phantasia.life/community/psychology-lab/">Psychology Lab</category>                        <dc:creator>FoggyGoofball</dc:creator>
                        <guid isPermaLink="true">https://phantasia.life/community/psychology-lab/what-does-a-cell-know-of-itself/</guid>
                    </item>
				                    <item>
                        <title>Copypasta from an ongoing conversation about my brain model.</title>
                        <link>https://phantasia.life/community/psychology-lab/copypasta-from-an-ongoing-conversation-about-my-brain-model/</link>
                        <pubDate>Tue, 29 Jul 2025 21:27:13 +0000</pubDate>
                        <description><![CDATA[@Mabus suggested we move this from the blog comments (where things easily get lost) to the forum. So here&#039;s the main part of the conversation. Feel free to weigh in below!

@FoggyGoofball:...]]></description>
                        <content:encoded><![CDATA[<div class="wpd-comment-text">
<p><strong>@Mabus suggested we move this from the blog comments (where things easily get lost) to the forum. So here's the main part of the conversation. Feel free to weigh in below!</strong></p>
<hr />
<p>@FoggyGoofball:</p>
<p>Speaking of brain code, I think that I’m going to diagram all of the lobes and layers onto graph paper. Being able to see them laid out physically will be a huge help to me.</p>
<p>I’m going to try not to bother you endlessly about this stuff, but I have two questions in that regard.</p>
<p>First, do you think that with adequate application of the “sucker” function on each of the feet that we could create a creature capable of climbing walls? Is it actually providing downforce, or is it an attraction to the surface underfoot?</p>
<p>As I understand it from looking at gloop genomes, we might be able to introduce a new IO brain organ that could translate a separate set of gait genes for climbing when we sense verticality in the spine and/or feet. Obviously it’s not simple, but would it be possible?</p>
<p>And last question for today is what do the visual properties kind, earth, air, fire, and water represent specifically?</p>
<p>I can also confirm that the fallback to previous d3d versions did work successfully in the pre alpha build that mabus shared with me.</p>
</div>
<hr />
<p>@Steve:</p>
<div class="wpd-comment-text">
<p>Wow! You’re really getting into it!!!</p>
<p>I do have a diagram of all the (main) brain maps and how they’re connected to each other. Do you want it or would you rather figure it out for yourself?</p>
<p>The suckers exert a downforce. I guess they could be made to exert that force in any direction, although to climb walls the creature’s weight would exert a large force at right angles to it, and I’m not sure what the effects would be (on their leg joints especially). Making a complete physics-based creature was a huge challenge (all the other attempts I’ve seen cheat far more than they admit to). As Unity and PhysX have improved over the years, it’s become less and less of a nightmare, but it’s still very weird and unpredictable, so the creatures needed a bit of help sticking to the floor! Anyway, I think climbing would be possible in principle. Swimming and flying would come further up my own list, but either way, adding an extra dimension makes the whole thing a lot more complex and resource-hungry. Just keeping them from falling over is a big enough task – every time Unity changes their physics code it all goes wrong again!</p>
<p>KIND is the basic type of object: { “outdoors”, “indoors”, “vehicle”, “furniture”, “device”, “utensil”, “plant”, “animal”, “person” } (where “person” means another phantasian or the player).</p>
<p>After that it’s a hierarchy. So, if KIND==outdoors, then EARTH == { “wild”, “natural”, “cultivated”, “builtup”, “industrial” }, and so on.</p>
<p>But these labels are purely for my own use. All the creatures see is a list of numbers from -1 to +1. They don’t know that -1 means “outdoors”. Mostly I just need to ensure that I give objects a unique pattern of these numbers, and choosing them sequentially would solve that. But obviously if all the plants have the same number for KIND and meaningful numbers for shape, color and habit, then it’s easier for the creatures to learn to recognize a few plants and then develop beliefs about whether an unfamiliar plant-like thing might be edible or poisonous.</p>
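<p>In case it helps, here is roughly what I mean, as a throwaway Python sketch (the category list is the real one, but the exact numeric assignments here are just for illustration):</p>

```python
# The KIND categories, spread evenly across [-1, +1]; the labels exist
# only for my benefit - all a creature ever sees is the numbers.
KINDS = ["outdoors", "indoors", "vehicle", "furniture", "device",
         "utensil", "plant", "animal", "person"]

def encode_kind(kind):
    i = KINDS.index(kind)
    return -1.0 + 2.0 * i / (len(KINDS) - 1)

def encode_object(kind, earth, air, fire, water):
    """Every object reduces to a pattern of five floats in [-1, +1]."""
    return [encode_kind(kind), earth, air, fire, water]

# e.g. a spiky wild plant: lots of FIRE (illustrative values only)
plant = encode_object("plant", earth=-0.5, air=0.0, fire=0.8, water=0.1)
```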
<p>I chose EARTH, AIR, FIRE, WATER because that gives me some vague logic I could use, based on the ancient idea of the Elements. So the spikiness of a plant might equate to it having a lot of FIRE in it, whereas for “indoor” kinds, that might represent how well insulated or heated they are. It helps to make the world slightly more predictable for the creatures (but not so predictable that they have nothing to think about).</p>
<p>But really it’s just the IO maps that still retain this earth, air, fire, water naming convention. Elsewhere I took to thinking of the KIND as a noun and the other descriptive lists as different levels of adjective. There are verbs and adverbs, too.</p>
<p>One day I hope to have the luxury of some time to think this through much better. My son did his neuroscience PhD partly on the distinction between functional categorization and perceptual categorization. Right now I just have a kind of compromise, where objects are perceptually categorized, yet there’s also a vague functional logic built into it, so that learning can be generalized from experience of similar objects. But it’s yet another big challenge, so for now, every object in the world can be recognized as a pattern of five numbers, and creatures can <em>attempt</em> to divine some kind of structure in these patterns as they learn to recognize things. The whole subject of combining self-organizing maps, with functional v perceptual categorization, with the need for one-shot learning (IRL, nobody needs to see a million cats before they can recognize another cat, unlike today’s so-called AI) is something that has occupied a lot of my thinking over the years, but I don’t have good answers yet. Or at least, I haven’t had time to try them out…</p>
</div>
<hr />
<p>@FoggyGoofball:</p>
<p>I’d love to look at your brain maps! The Grandroids genomes are really well annotated, so I feel like I’m beginning to understand some of the broad strokes, but more data is always welcome!</p>
<hr />
<p>@Steve:</p>
<p>Here you go:<br /><a href="https://phantasia.life/docs/ep/biology/neuroscience/brain-maps/" rel="ugc">https://phantasia.life/docs/ep/biology/neuroscience/brain-maps/</a><br />It’s a bit out of date now, but good enough to be going on with. It only shows the yin (afferent) and yang (efferent) connections. Some maps also have affect layer connections that interface with chemistry to give the maps ‘beliefs’ about how it feels to be in different yin states under different circumstances. Everything is bidirectional – information flows up yin and at some point comes back down yang and flows out. This is how the real brain is wired, rather than the pipeline structure that I think most people assume, and it’s a crucial part of my theory.</p>
<hr />
<p>@FoggyGoofball:</p>
<div class="wpd-comment-text">
<p>So cool! That’s a great document! I still think that I want to map it out in actual physical space; I want to see it in more detail, but my screen is too small to even display everything here. It’ll be a project for sure, like a crime board from the cop dramas with strings all over the room.</p>
<p>Not sure how realistic it is judging from the scale, but I’m going to try to make a 1:1 neuron for neuron copy that I can walk around and literally look at from different angles.</p>
</div>
<hr />
<p>@Steve:</p>
<div class="wpd-comment-text">
<p>Haha! Well, you’ll have to make it a cortical column by cortical column copy, but sure. I’d love to see that!</p>
<p>Yes, seeing things from the side isn’t really all that helpful, since much of the action happens in the other directions – each map is a stack of plates – bottom layer, affect layer, top layer, plus occasional prograde, retrograde and other layers. It’s the pattern of activity on each plate that explains what’s going on, and you can’t easily visualize that from the side. Each map is in some kind of 2D “space” – eye space, head space, local space, object space, word space, conceptual space, and so on, and these spaces tend to form hierarchies.</p>
<p>If I were making this for real, each of the wires you see in the diagram would actually be a massive bundle of connections from every patch in one map, to every patch in the connected map. But because that would require a supercomputer, I crossed my fingers and hoped that just sending the XY position of the PEAK activity from map to map would be good enough. And mostly it is. Where the creature’s eyes are looking is represented by a broad dome of activity in head space, but all the next map cares about is where the peak of this dome is, not how much every neuron is contributing to it or how oddly shaped it is. So all the inter-map signals are reduced down to single X,Y,A,Q values (the State struct – you’ll come across hundreds of them in the code!), where A represents the peak’s amplitude, and Q mostly represents its “tightness” (like the Q of an audio filter, if that means anything to you).</p>
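<p>If it helps, the reduction from a whole activity surface down to one State is easy to sketch in Python (illustrative only; the way I estimate Q here is just a stand-in, not how the engine does it):</p>

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class State:
    x: float  # grid position of the activity peak
    y: float
    a: float  # amplitude of the peak
    q: float  # "tightness" of the peak (stand-in estimate below)

def summarize(activity):
    """Reduce a 2D activity surface to a single State: only the peak
    survives the trip to the next map."""
    iy, ix = np.unravel_index(np.argmax(activity), activity.shape)
    a = float(activity[iy, ix])
    # Crude tightness measure: peak height relative to mean activity.
    # A sharp, narrow peak scores high; a broad dome scores low.
    q = a / (float(activity.mean()) + 1e-9)
    return State(float(ix), float(iy), a, q)
```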
<p>It’s strangely both very complicated and pretty simple! The peak of activity in one layer says what state the map thinks some aspect of the world is in (its yin state), while the peak in another layer says which state the same or a different map WANTS that aspect of the world to be in (yang state). The map’s task is just to try to bring the yin state into line with the yang state, and hence satisfy a desire by bringing reality into line with an intention or hope. But what one map wants, dictates what other maps want, and what one map believes is going on, constrains what other maps think they can achieve.</p>
<p>I think this is probably close to what happens in real brains, and to my knowledge nobody has ever built a system like this, but it’s just a hunch, really, based on a lifetime of “Hmm… I wonder if the brain is a bit like a model aircraft” -type thinking! I have a LOT to say about all this, but while I’m trying so hard to make enough 3D game-related progress that I can continue to pay my rent, I just don’t have time to say any of it! Anything you figure out and feel you can share with people will help me a lot.</p>
</div>]]></content:encoded>
						                            <category domain="https://phantasia.life/community/psychology-lab/">Psychology Lab</category>                        <dc:creator>Steve Grand</dc:creator>
                        <guid isPermaLink="true">https://phantasia.life/community/psychology-lab/copypasta-from-an-ongoing-conversation-about-my-brain-model/</guid>
                    </item>
							        </channel>
        </rss>
		