
Self organizing maps

19 Posts
3 Users
3 Reactions
441 Views
(@foggygoofball)
Enthusiast
Joined: 4 months ago
Posts: 62
Topic starter  

Over the last week or so I’ve been trying to get to grips with the brains of Steve’s cyberspawn.  I fear that I may never truly “get it,” but that’s not going to stop me from trying. 48 hours ago I was happy with the term “self organizing maps” as a comfortable, neat little black box.  Since then I’ve been rabbit-holing.  I’ve read all manner of peer-reviewed papers and @steve’s entire programming journal (at least as much of it as has been saved for posterity).

After reading Steve’s programming journals of the last 15 years, I’ve come to understand that all of our creatures’ learning and behavior comes down to self organizing or “Kohonen” maps, named for the Finn who pioneered the technique back in the early ’80s.

Forgive me for any inaccuracies and feel free to provide corrections, but as I understand it, Kohonen maps are a way of teasing structure out of inherently noisy data. Usually, these neural networks are trained on an “input layer” or “data set”.  In the traditional Kohonen model, a 2D object space (a grid, to us laymen) is initialized with random “weights”. The initial layout is then compared to the input layer, and for each data point, whichever of the random weights is the closest match to that data point is found through some distance function (it’s been too long since maths class to remember how to use sigma), which strikes me as a kind of approximation of the Pythagorean theorem.

Upon finding the best matching point in the grid, “learning” is instituted by our best fitting node communicating with all of its neighbors that it has found the solution, which in turn alters their weights to bring them closer to alignment with the “winning” node.

This process is repeated over and over to “train” the network, ultimately resulting in clusters of nodes that can group complex data into smaller, more easily interpretable categories. Traditionally, this technique would be used to analyze higher-dimensional data by making inferences about the ways the data points are connected, based on their similarities and differences, without requiring external classifications.
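To make sure I actually understand that loop, I sketched the textbook algorithm in Python. To be clear, this is just my own toy version of the classic Kohonen procedure, nothing to do with Steve’s actual code; the grid size, learning-rate schedule, and Gaussian neighbourhood are arbitrary choices on my part:

```python
import numpy as np

rng = np.random.default_rng(0)

data = rng.random((200, 3))        # toy input set: 200 random 3-d points
grid = (10, 10)                    # the 2-d "object space"
weights = rng.random(grid + (3,))  # random initial weights

# Grid coordinates of every node, for measuring neighborhood distances.
coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                              indexing="ij"), axis=-1).astype(float)

epochs = 50
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)            # learning rate decays
    radius = 1.0 + 4.0 * (1 - epoch / epochs)  # neighborhood shrinks
    for x in data:
        # Best matching unit = the closest weight by Euclidean distance
        # (the "Pythagorean" step).
        dist = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dist), grid)
        # Neighbors of the winner get pulled toward the input too,
        # more strongly the closer they sit to the winner on the grid.
        d_grid = np.linalg.norm(coords - np.array(bmu, dtype=float), axis=-1)
        influence = np.exp(-(d_grid ** 2) / (2 * radius ** 2))
        weights += lr * influence[..., None] * (x - weights)

# Quantization error: average distance from each input to its winner.
qe = np.mean([np.min(np.linalg.norm(weights - x, axis=-1)) for x in data])
```

The inputs here are 3-dimensional just to keep it readable; the same loop works unchanged for higher-dimensional data, which is where the “teasing structure out of noisy data” payoff comes from.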

Now in Phantasia, our creatures are expected to learn and grow throughout their lifetimes.  This means that there will never be a “fully trained” map, as our goals, and therefore data sets, will be in constant flux. At the moment, rather than totally random weights, many of our mental maps are calibrated by basic “instinct genes”, which make for a more human-readable and “functionally organized” map.

Our brain maps have two distinct states: yin (bottom-up impulses, like sensory inputs) and yang (top-down impulses from executive-functioning maps).

Between these two layers we find the affect layer, which I think is like our Kohonen map.  This layer is constantly trying to find the best compromise between yin and yang, ideally bringing the two into alignment to signify that what I’ve been wanting and what I now have are one and the same (“I wanted to play with a ball” being our yang and “I’m currently playing with the ball” our yin).  The upside of this is that we get dynamic and emergent behavior from the feedback between the various brain systems.

It all does my head in a little bit, but I have a strong passion to understand what makes these creatures tick.  I may have a leg up on understanding all of this, since I can look back on the many years Steve has been working and problem solving in order to find insights; however, I feel like I’m working from first principles, and I’m not quite sure that I have them right yet either.

To anyone with experience in machine learning or self organising maps, I would be forever in your debt if you could explain it all to me in the slowest most condescending way possible, like you were trying to explain the history of the East India Company to a tea leaf.



   
Perianthesis reacted
Quote
(@genesis)
Member
Joined: 1 year ago
Posts: 44
 

Posted by: @foggygoofball

Since then I’ve been rabbit-holing.  I’ve read all manner of peer reviewed papers

I get the feeling you might be able to submit a paper for peer review about dissecting Steve’s work if this goes on

 

I am impressed with you and Steve’s work. 

 

I always knew it was impressive, but it just gets more and more impressive! 



   
ReplyQuote
(@foggygoofball)
Enthusiast
Joined: 4 months ago
Posts: 62
Topic starter  

The more I reflect on all that I’ve read, the more I begin to realize that this project has gone on to become so much more than the sum of its parts.  This started out as a game, and I can tell you that it has evolved into so much more.  So far as I can tell, Steve has created a whole new kind of neural network, starting from biologically plausible principles.  People in the machine learning field seem to scoff at self organising maps and Kohonen’s theories as being archaic.  Realistically, they just needed someone inspired enough to understand that “learning”, “intelligence”, and “consciousness” are all emergent from the most basic of principles.

No single LLM or machine vision system can be expected to rise to consciousness; it’s only through the delicate interplay of many smaller networks that we can even hope to approximate natural brains and their limitless possibilities. “Move fast and break things” seems to be the mantra of machine learning, and no one really seems to be giving things the thought and attention that Steve does.  Everyone seems more concerned with the next big thing rather than unpacking the implications of the last theory.

As I’ve said before, I’m really interested in diving more into the GAZE system and trying to extrapolate some meaning, but the more I learn, the farther away that goal seems in a way.  After trying to make sense of it all myself, I’m not in the least bit surprised that Steve is still thinking about, fixing and tweaking things all these years later.

Maybe I’ll start with the brain dynamics involved in walking from point A to B; it only took Steve around a decade to get that right.



   
ReplyQuote
(@steve)
Member Admin
Joined: 1 year ago
Posts: 26
 

@foggygoofball Kohonen maps are a good example of a SOM, definitely. Although they suffer from an interesting problem, which is that like most kinds of neural networks, they only get good after many repeated exposures to examples. This is a form of what’s called the “plasticity/stability problem”. How do we make neural networks that can learn very quickly, without them just as quickly forgetting things? Or hastily learning the wrong conclusions and then being stuck with them.

A human child, for instance, may see a dog for the very first time, be told its name, and then when it sees its first cat it points and excitedly says “dog!” But children only tend to make that mistake once. By the time they’ve seen one cat and one dog and been alerted to the idea that they’re different categories, they’ve pretty much got it sorted out. Whereas most neural networks have to be shown many thousands or even millions of examples of dogs and cats, along with being told which ones belong to which category, before they learn very much at all. 

So I don’t exactly use Kohonen maps in these creatures, because we can’t wait for them to experience thousands of examples. Real life is based on the idea that it’s better to be wrong than dead, and so living things tend to show “one-shot” learning. We fall off one stool, and then we know not to do it again.

But the good thing about SOMs is not how they learn, so much as how they organize their learning. They spontaneously tend to categorize things so that similar things end up in similar locations. And this is fundamental to phantasian brains, so it’s the part that matters to me the most.

The more that things are organized by similarity, the more a system is able to generalize. Imagine a giant room full of all sorts of objects. You might notice that two things are very similar, so you put them next to each other. Then you see another thing that is similar to one of them but not so similar to the other, so obviously you’ll place the third object on one particular side of the other pair.

Do this to thousands of objects in this room, and you’ll end up with a MAP of how objects relate to each other. Then when you see a new object for the very first time, you can quickly figure out where to put it on your map, and then you can easily generalize from all the facts you’ve learned about nearby things.

Now, being a room, you can only really do this by arranging objects on the floor, so you end up with a two-dimensional map. If you can put objects above and below others as well, then you have a 3D map, and it gets a bit easier to put boats near to cars and cars near to bicycles, and bicycles near to roller skates, without having to awkwardly end up with the roller skates right next to the boats. In an ideal world, we’d build up an n-dimensional map, with one dimension for each kind of difference.

But brains are physical objects and thus 2D and 3D are basically all they have available. You can see this problem showing up in real brains – it’s a good explanation for how the primary visual cortex arranges its little pattern detectors, which recognize the angles and spacings and locations of visible edges: they end up arranged in little “whorls” or spirals. It’s not a perfect arrangement, but it’s an effective one. Where something is in space matters most, so that gets assigned the overall XY arrangement. After that comes the question of which angle it is at, so these have to be arranged in little blobs, like the spots on a leopard. Obviously angles are “sortable” quantities, so we end up with all the up-down pattern recognizers arranged as far as possible from the left-right recognizers of the same blob, like the points of a compass. Corners and curves then have to be slightly awkwardly jammed into the cracks.

Anyway, primary visual cortex is (I believe) a self-organizing map of visual features, arranged firstly by location, then by angle. But after that, the brain basically has to put all of the colors, directions of motion, types of curvature and so on, into their own separate (but hierarchically interconnected) maps, which to begin with are also arranged primarily by XY position. But by the time this hierarchy has got abstract enough that it can recognize categories of vegetables, it no longer matters so much where the vegetables are in space, or which way up they are, so by then the primary axes might represent something more conceptual – nasty vegetables to the left, nice ones to the right, and/or perhaps green leafy ones towards the top and root vegetables near the bottom.

What you’re discovering about phantasians is that they, too, arrange everything into maps, organized by two primary XY characteristics. Just occasionally, third and fourth characteristics have to be crammed in as blobs and swirls, but the goal is always to fit similar things closer together than different things.

I’m probably not making much sense – this is a huge collection of concerns and ideas. But sometimes the best map structure is actually blindingly obvious and doesn’t need to self-organize at all. Sometimes it’s just a “given”. For example, phantasians have touch sensors all over their skin, and the skin is basically a crumpled up 2D surface, so it’s not hard to imagine that this same left/right, front/back arrangement would persist in any brain maps concerned with touch.

The phantasian brain is all about COORDINATE TRANSFORMS between these various maps. It’s very much like the way a map of positions and edge angles in the human brain eventually gets transformed into a map organized by types of vegetable. The most clear-cut example of this in phantasia is how the positions of rapidly moving things on the creatures’ retinas get steadily transformed through various intermediate maps into a stable map of nearby space, and this map of space then combines with maps arranged by other factors such as appearance. But I’ve already written too much, so I won’t spoil your fun! I’ll let you keep on digging!


This post was modified 2 months ago 6 times by Steve Grand

   
ReplyQuote
(@steve)
Member Admin
Joined: 1 year ago
Posts: 26
 

Posted by: @foggygoofball

Between these two layers we find the affect layer which i think is like our kohonen map. 

Oh, I meant to say, the Kohonen-style stuff actually goes on in the bottom layer (where it self-organizes the yin patterns, if they actually need organizing). Once a map’s yin space has become organized in a specific way, this defines everything else in the map. The affect layer is more of a slave to this organization.

Imagine a conventional map: This uniquely defines an arrangement of landmarks, and once we have such a map, we can stick pins into it. One pin might say where we are now, while another pin might represent where we want to go. Our task is then to use the map to plan a route from one pin to the other.

In my system, the first pin is the “yin state” – a blob centered over where we think we are now. The second is the yang state – a blob centered over where we would like to end up. (Btw, sometimes “where” has quite a literal meaning, but sometimes it’s more abstract, such as “the kind of object we would like to have available”.)

Quite often it’s a higher map that tells us where we should end up. I call this an extrinsic goal. But many maps can also define their own intrinsic goals – they can decide for themselves where they would like to end up, and so they communicate this both upwards and downwards – creating extrinsic goals for lower maps, or adding their opinion to the developing intentions of higher maps.

The affect layer is where these states (and hence possible goals) are given a sense of value – it’s like the contour lines of the map. Each neuron in the affect layer is positioned over a particular yin / yang XY state – it thus means a specific thing in the ‘language’ of that particular map. So it is well-placed to learn how the creature tends to feel whenever the map is in that particular state.

Thus, it can say which states the map really doesn’t like being in, and which state the map would most like to be in right now, under the current circumstances. Hence the XY position of the point that has the strongest activity represents the state the map most wants to be in, and therefore its candidate for an intrinsic goal.
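In toy form (grossly simplified, and certainly not the real code), that last step is just finding the peak of the affect layer; everything below is made up purely for illustration:

```python
import numpy as np

# A hypothetical 8x8 map: each affect cell holds a learned sense of how
# good it feels for the map to be in that particular yin/yang XY state
# (the "contour lines" of the map).
affect = np.zeros((8, 8))
affect[2, 5] = 0.9   # suppose experience says state (2, 5) feels best
affect[6, 1] = 0.4   # a weaker candidate

# The candidate intrinsic goal is simply the XY position of the
# strongest activity in the affect layer.
intrinsic_goal = np.unravel_index(np.argmax(affect), affect.shape)
```

In the real thing, of course, these values are learned from how the creature tends to feel in each state, and are weighed against whatever the maps above are demanding.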

Sometimes it won’t win – it just has to do whatever one of the maps above it wants to do – but sometimes it gets to ignore what the boss says, and do what it feels is more important. And because of the way yin neurons self-organize (or are organized by instincts), it turns this one XY position into a pattern of other XY positions, which it has learned will tend to cause its child maps to get it into that desired state, and then it sends those XYs down to its children to be their extrinsic goals.

If you see what I mean…


This post was modified 2 months ago 6 times by Steve Grand

   
ReplyQuote
(@foggygoofball)
Enthusiast
Joined: 4 months ago
Posts: 62
Topic starter  

Thank you for the explanation, I’m going to file it in my notes under “Lessons from Steve”

I don’t quite picture it yet, but little by little I think I’m filling in the cracks.  At the very least I understand what a project it would be if I should choose to build a physical model of the brain (how many cortical fibers are there, actually? Half a million? More than a hundred thousand at least, right?).  I really admire your ability to simultaneously see the whole of the construction while also being able to break it down into its component parts.  I’m not certain that I need to understand the math in order to get the big picture, but I feel that the more I learn the less I know, in a way.  

Knowledge is power but ignorance is bliss or so they say.  I’ve never been a fan of ignorance but you just don’t know what you don’t know.  I feel like you’ve laid out this incredible roadmap before me but it’s written in some strange dialect, like a Chinese flatpack instruction manual.  Surely in the end I’ll be able to build a bookshelf with it, but I’m bound to get a few bits backwards and have a couple of screws left over.

Also, you’ve alluded to one-shot learning, and it seems like you’ve made strides in the area. How have you gotten around the need for many trials or rounds of reinforcement?  I can’t wait to read your own personal reflections on the project in the future; I imagine that you’ll have a great deal to say about it all eventually.



   
ReplyQuote
(@genesis)
Member
Joined: 1 year ago
Posts: 44
 

Posted by: @steve

But the good thing about SOMs is not how they learn, so much as how they organize their learning. They spontaneously tend to categorize things so that similar things end up in similar locations. And this is fundamental to phantasian brains, so it’s the part that matters to me the most.

Reminds me of some study I heard about years ago. It was using brain scans to locate the brain region for “Pokémon”, a different category than animals.

 

Somehow the place where the information was saved in the brain corresponded with our vision (the distance of the Game Boy to our eyes) 

 

Would make sense to link them by distance; after all, it ‘works well enough’ while giving a new category as a side effect (‘stuff far away’ vs ‘stuff close’) just by sorting those together



   
ReplyQuote
(@foggygoofball)
Enthusiast
Joined: 4 months ago
Posts: 62
Topic starter  

I’m headed out of town and offline for a few days and I’m hoping to do some reading and thinking.  I’ve downloaded a number of research papers from arxiv.org and ResearchGate to study while I’m away.  

I’ve mostly focused on papers related to one-shot, few-shot, and zero-shot learning in hierarchical self organizing maps.  It’s really frustrating how much of academia is locked behind paywalls. That being said, if anyone has any recommendations for good papers to study, I’d probably be willing to pay for access to a couple of journals or networks.

Currently I’m unsure which research is going to be most useful, though the more I read, the more sense everything makes.  I never knew that I’d be so interested in cognitive neuroscience and biologically inspired neural networks.

 



   
ReplyQuote
(@steve)
Member Admin
Joined: 1 year ago
Posts: 26
 

Posted by: @foggygoofball

I’m bound to get a few bits backwards and have a couple of screws left over

Hah! I know that feeling well!

The way I do one-shot learning is basically to write the incoming pattern into every cell, with a strength proportional to how well the new pattern matches the current wiring of that cell. All of the labile (free to move their inputs) cells adjust the positions of their input wires proportionally towards the new input. I keep on repeating the input for as long as it takes until one cell becomes tuned to a near-perfect match and that cell fires so strongly that it becomes locked off.

When another new input comes in, either there will already be a neuron tuned to that pattern, or there will be some labile neurons tuned to somewhat similar patterns. The more similar they are, the more rapidly they become even more similar – so it’s a race. The winner will be the one that ends up perfectly tuned, whereupon it becomes locked. Meanwhile, the losers will have been more or less pulled towards both patterns.

The upshot of all this is that the map self-organizes in a way that similar patterns most likely end up near to each other. What makes it one-shot is that we only ever get one winner for each pattern, and this only takes a few milliseconds, from a single trial. There are other twiddly bits, but that’s the gist of it.
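In very rough pseudo-Python, the race looks something like this – a caricature of the idea only, with made-up sizes and thresholds, and without all the twiddly bits; it is definitely not the actual creature code:

```python
import numpy as np

rng = np.random.default_rng(1)

n_cells, dim = 16, 4
cells = rng.random((n_cells, dim))       # each cell's current tuning
locked = np.zeros(n_cells, dtype=bool)   # locked cells stop moving

def present(pattern, lr=0.5, threshold=0.02, max_steps=200):
    """Repeat one input until some labile cell becomes near-perfectly
    tuned to it, then lock that cell. Returns the winner's index."""
    for _ in range(max_steps):
        dist = np.linalg.norm(cells - pattern, axis=1)
        dist[locked] = np.inf             # locked cells sit the race out
        winner = int(np.argmin(dist))
        if dist[winner] < threshold:
            locked[winner] = True         # near-perfect match: lock it off
            return winner
        # Every labile cell moves toward the pattern, proportionally to
        # how well it already matches - so the best match closes fastest.
        match = 1.0 / (1.0 + dist)
        cells[~locked] += lr * match[~locked, None] * (pattern - cells[~locked])
    return winner

a = present(np.array([0.9, 0.1, 0.9, 0.1]))
b = present(np.array([0.1, 0.9, 0.1, 0.9]))
```

Each distinct pattern recruits its own winner from a single presentation, while the losers get dragged partway toward everything they have seen – which is what pushes similar patterns toward neighboring cells.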

Btw, the vast majority of neural networks are based on the idea of synaptic weights – the wiring between cells is fixed but the weights change through experience. Don’t be misled/confused by them – my neurons don’t work that way at all. Nor do Kohonen’s, which are far more like mine, although you wouldn’t know it from the way they tend to get described! The way to imagine it is each cell sends out “feelers”, that move around, constantly seeking out sources of signal. The closer the end of a feeler is to an active signal, the stronger the amount of signal that it picks up. But unlike conventional NNs, each feeler can “feel” signals from anywhere across the map – the signal just gets weaker the further away it is. In the phantasian case, each neuron sends one feeler out to each of the map’s children. So the XY locations of the feelers belonging to a given cell represent the multi-map pattern it is most tuned to.

Have a nice trip! Thank you for the diligent interest in my work, and for reminding me about some of the million things I’ve been working on for the past decade or so, beyond silly video games. I’ll turn you into a cognitive scientist eventually!

 

This post was modified 2 months ago 7 times by Steve Grand

   
Mabus reacted
ReplyQuote
(@foggygoofball)
Enthusiast
Joined: 4 months ago
Posts: 62
Topic starter  

That is actually really helpful!  I’m imagining the feelers as sort of analogous to dendritic trees in a real brain (not sure if that’s totally accurate), but now I understand what you meant when you said that if you built the brains physically, every point in each map would connect to every other point. 

At first I thought you were simply thinking in terms of efficiency and being able to compute massive amounts of information in parallel, instead of having to streamline it all for Unity.  

I don’t think that I’ve come across anyone discussing a similar learning system, but I’m only about ten research papers in and still fiddling with concepts like hyperdimensional vectors and vector symbolic architectures.  It’s all very interesting, but am I pursuing the wrong line of research? 

I recall reading in your programming log something to the effect that thinking in 16 dimensions was getting too complicated, and so you made efforts to reduce the dimensional space to something easier to visualize.

I think I’m coming to terms with the concepts in various forms of SOM. I’m still quite eager to look into the GAZE system, but I’m trying to grasp how your maps handle spatio-temporal learning.  Does all of the temporal stuff get stored separately in the memory system, or are temporal patterns integrated into the individual maps?

 

 



   
ReplyQuote
(@steve)
Member Admin
Joined: 1 year ago
Posts: 26
 

@foggygoofball It’s hard to answer your question about spatiotemporal learning. There is some spatial learning, in the sense of maps that learn how one region of space is connected to another (NAVI) and how objects and obstacles are positioned in local space (OBST). And there’s temporal learning, in the sense that EXEC learns how one state tends to lead to another when a given action is taken, from which the system can rehearse potential multi-step plans. And I guess the affect layers are also temporal learning, in the sense that they record how our feelings tend to change when we are in a certain state. But there’s nothing like, say, convolutional NNs, which try to represent time and space together. No episodic memory either.

Nothing the phantasian brain does is “clever” – no fancy algorithms or abstruse neural representations. It’s all pretty trivial at the map level, but interesting at the system level. The system itself is very dynamic, so time plays a big part, but it really just embodies a very simple idea about nonlinear servos.

Most people who do NN research seem to live on a very different planet from me. I don’t really know why. Their fundamental paradigm is weird. They tend to think in terms of a “brain in a vat”, basically, and pay no attention to embodiment or situatedness. As if intelligence is some kind of magic substance that can exist and function entirely on its own. Pure thought. It’s mostly because they tend to be mathematicians, I think. They live in a very reductionistic, static, top-down, yet strangely Platonic world…

Anyway, I’m a natural-born cyberneticist, so I started at the other end, with the control problem: Given a physical body, situated in a complex dynamic environment, how is an organism supposed to control itself? Specifically, how can it predict the future, at least far enough ahead to compensate for the inevitable transmission delays from sensors to controllers and back to motors, so that it can act before it’s too late? My feeling is that consciousness is a product of such a need to predict the future, and the ‘place’ where we exist, as conscious beings, is inside a virtual world; a nervous-system-generated simulation of the physical world, where we flit between the possible futures and the remembered past.

Probably the best analogy for my approach is the autopilot of an aircraft (which is why I spent a few years trying to build an autonomous robot glider, until it fatally crashed on TV…). Didn’t you say you studied electrical engineering? If so, you’ll be familiar with the idea of a servomotor, yes? A servo constantly tries to bring its actuator into line with a desired state and keep it there. It’s symmetrical, in the sense that the actuator will move if the desire changes OR if the current state changes.

An autopilot is essentially a community of nested virtual servos, which ultimately control physical servos. To turn the plane onto a given heading, a high-level servo might note that the plane is X degrees east of where it should be, and so it wants to turn west. In order to turn west, it has to tell a lower-level servo to bank the plane at Y degrees. That servo looks at the difference between the present angle of bank and the one it has been ordered to achieve and tries to minimize the difference. In order to do that, an even lower-level servo has to be commanded to change the rate of roll, and in order to do that a real physical servo has to achieve and maintain a certain aileron angle.

Meanwhile, banking the aircraft causes its nose to drop, and so another nested bunch of servos connected to the elevator kick into gear to try to keep the plane level, despite all this. And banking also causes the plane to yaw, so a set of servos connected to the rudder has to keep the plane from skidding sideways. A simple “desire” to fly north thus causes a bunch of servos to have to achieve and maintain a fairly complicated sequence of states of their own, until the plane finally reverses the rate of roll for a moment to level out facing north.

Each servo (physical or virtual) needs one output, to drive its motor or send a command to the servo beneath, and one input from a sensor that says what state the system is in. In a real autopilot, the sensors are separate devices – a compass, an altimeter, an accelerometer, etc. But suppose the autopilot were to PERCEIVE the higher, more abstract information such as heading, based on lower-level memories of much simpler things, such as accumulating all the changes in direction over time, so as to predict what the heading must be now, instead of directly measuring it? 

Now we would have a flow of information UP through the chains of servos, as well as the information that flows down through them, as they command their children to do the things that are needed. It’s a bidirectional system, with information becoming more aggregated and abstract on the way up, and more primitive and literal on the way down. And it bears a very striking similarity to the wiring of the cerebral cortex of the brain.

The only thing that’s missing is some way for these virtual servos to know what they have to do, in order to minimize any difference between the top-down desires and the bottom-up percepts. In an autopilot this is pretty simple, since it’s fairly linear – if you’re not rolling fast enough, it’s easy to predict how much more you need to flex the ailerons. In electronics we do this sort of thing using PID controllers (proportional, integral, differential), so as to achieve the desired state without overshooting or whatever.
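For the non-engineers: a bare-bones PID controller is only a few lines. This is the generic textbook version, nothing Phantasia-specific, and the gains and the toy “plant” below are just illustrative numbers:

```python
class PID:
    """Textbook PID controller: the output is a weighted sum of the
    error (proportional), its running total (integral), and its rate
    of change (derivative)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, desired, actual, dt):
        error = desired - actual
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Toy "roll servo": drive a bank angle toward 20 degrees, where the
# angle simply integrates the controller's output over time.
pid = PID(kp=1.2, ki=0.3, kd=0.05)
roll, dt = 0.0, 0.1
for _ in range(300):
    roll += pid.update(desired=20.0, actual=roll, dt=dt) * dt
```

Run that and the bank angle settles onto the desired value without wild overshoot – which is exactly the “achieve and maintain a state” behavior each virtual servo in the stack needs.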

In the brain, things can often be this simple too – if our eyes are swiveled at angle X and we’re told they need to be at angle Y, it’s mostly a matter of subtracting the one value from the other to know how much effort to put into the eye muscles. But sometimes this servoing – this homeostatic response – can be extremely nonlinear. If we’re too hungry, say, we obviously want to reduce that hunger to neutral. But it might involve moving cooking utensils in very complex sequences or picking up the phone and ordering pizza.

So that’s where I bring the concept of maps into the equation. If the sensory state and the desired state of a ‘servo’ are represented by XY points, like two pins stuck into a map, then the ‘contours’ of the map can represent how to find a nonlinear route from the current state towards the desired state. Once the current state matches the desired state, the two pins will be right on top of each other, and all the servo has to do then is maintain this situation against any environmental disturbances. Maps (in a somewhat more mathematical sense) are a way of representing nonlinear transformations – each point on the map “maps to” a given set of points on the maps beneath (dendritic or axonal trees, as you say), such that they can be told what state THEY should desire, in order that WE can achieve our desires.

Rather to my surprise, it turned out that we can use exactly the same mapping in the other direction too – basically saying that “when we find our children are in this pattern of states, it means we must be in this specific state”, and “if we want to be in this other specific state, we need to tell our children which pattern of states they should aim for in turn”. The mappings upwards and downwards are basically identical, although we may sometimes have to travel across the map to get there, and thus send a whole sequence of desires down to our children, describing the intermediate states. There’s basically a servo at the very top, which permanently wants to feel warm, safe, replete, socially satisfied, etc.

So, what I get out of all this is a top-down and bottom-up bidirectional wiring that is highly reminiscent of the wiring of cortex, AND a spatial mapping of percepts, concepts and intentions that fits with all sorts of facts about how the brain seems to represent knowledge. But it’s a CONTROL system, not a brain in a vat.

It’s tremendously simple, in concept – no clever mathematics at all – but it can be pretty dynamic in practice, as all of these pieces constantly try to bring their sensory (yin) state into line with their, or someone else’s, intentional (yang) state.

I hope that makes some sense!


This post was modified 2 months ago 9 times by Steve Grand

   
ReplyQuote
(@foggygoofball)
Enthusiast
Joined: 4 months ago
Posts: 62
Topic starter  

I think I’m starting to get it.  The autopilot analogy along with some of your comments from back in 2012 are really helping to bridge the gap in my mind (I’m not terribly knowledgeable about the finer points of aviation, so this was new to me.)

Intuitively, I’ve suspected that I was on the wrong track with all the research papers, but there’s no such thing as bad learning. Seeing how others do it gives me more appreciation for the scope of possibilities. 

I’ve read many of your comments about how everything is very servo-like, but I didn’t really “get it” before now. I feel like every few days something new clicks into place in my mind.  The interconnectedness of everything has been requiring me to backtrack and reevaluate assumptions I’ve made previously, but that’s good!  I feel like my process of discovery is roughly mirroring the trajectory that the development has taken, so that in itself is exciting and tells me I’m probably on the right track, most days.

I’ve also got plans to write an article on the project once I understand it better and it’s in a state fit for public consumption.

Planning to submit it to hackaday.com. It’s a bunch of engineering nerds over there, from radio to metalworking to 3D printing, with plenty of stuff about rolling your own coils and designing circuitry from the ground up.  If any community beyond this one is going to be interested, I suspect it’s them. 

Roll your own coils?  Child’s play!  Try rolling your own brains!



   
ReplyQuote
(@steve)
Member Admin
Joined: 1 year ago
Posts: 26
 

@foggygoofball An article will be great! Engineers are my kind of people 🙂



   
ReplyQuote
(@genesis)
Member
Joined: 1 year ago
Posts: 44
 

@foggygoofball and @steve if that article could be somewhere here on the website, with a mention that supporting this artist/scientist/digital god just costs the equivalent of one coffee a month, it would be great to link it. Comparing the subscription to something cheap is important to lower the mental barrier of entry, best worked organically into the article. This would help me spread the word further. I am currently preparing an account for Reddit; if I know something big (the next update or this article) is just around the corner, I will wait with my operation.



   
ReplyQuote
(@foggygoofball)
Enthusiast
Joined: 4 months ago
Posts: 62
Topic starter  

@Mabus I’ve learned a lot so far, but I think I’ll need a couple of weeks yet before I can write an article that does justice to the complexity of the project and to Steve’s tireless efforts.

 

I also think it’ll be best to release it when there’s actually something that anyone can enjoy.  I know that most people who have tried the latest build seemed to take the current control system as indicative of what it might be when finished, and I’d hate to draw a bunch of eyeballs here only to disappoint them.



   
ReplyQuote
Page 1 / 2