Quick progress update


Sorry I’ve been so quiet. I’m very good at concentrating, but I’m really bad at stopping!

Plus the stuff going on in America and the rest of the world at the moment is quite wearing – I’m sure many of you feel the same. All of the things I was worried were going to happen are, in fact, now happening, except they’re even worse than I’d expected (and much worse than many people yet realize). My friend works for the federal government in Washington – at least, she still had a job as recently as half an hour ago – and hence a lot of this stuff – the random mass firings, the utter chaos, the ignorant, authoritarian bosses, the sheer cruelty of it all – is hitting a bit close to home.

Anyway, I’m in no mood to debate politics – I’m just saying that I’m feeling a bit overwhelmed, and that’s partly why I haven’t had the chance to post much lately.

Back to the topic. I’ve been working hard on two things. The first is a rewrite of the affect layer in the creatures’ brains. There’s not much point in me trying to explain what I’ve done, because you’d have to know a lot about how phantasian brains work to see the relevance, and I haven’t had time to tell you about that yet! But basically, the neurons in the affect layer are there so that creatures can learn how they feel when they are in a certain state under certain conditions, and I’d discovered that a lot of the time they were recording the right information but in the wrong location. It’s no wonder they didn’t seem to learn much!

This turned out to be because sometimes the reinforcement doesn’t arrive until they’ve changed to a different state – for instance, it takes a little while for the chemistry to catch up. So I needed to reinforce these recent states as well as, or instead of, the current one. But to make it extra awkward, sometimes it’s the other way round, and I need to reinforce states that haven’t even been entered yet. If a creature hears a scary noise, for instance, they need to shift their attention and look at the source of the sound before they can know which object probably scared them. But by that time the event is already over. I’m sure our own brains have to resolve all sorts of timing problems like these, and there are actually some neat examples of it going wrong, but just because I know that something can be done doesn’t mean I can figure out a way to do it.
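Roughly, the “reward arrives late” half of this is the classic eligibility-trace idea: every state a creature passes through leaves a decaying marker, and when reinforcement finally turns up, all recently visited states get credited in proportion to how fresh their markers are. A toy sketch in Python – the names and numbers here are mine for illustration, not the actual brain code:

```python
# Hypothetical sketch of delayed reinforcement via an eligibility trace.
# Each visited state keeps a decaying "eligibility"; when reward finally
# arrives, every recently visited state is credited in proportion to it.

class AffectTrace:
    def __init__(self, decay=0.5):
        self.decay = decay
        self.eligibility = {}   # state -> decaying credit marker
        self.value = {}         # state -> learned affect

    def visit(self, state):
        # Fade all existing traces, then mark the current state as fresh.
        for s in self.eligibility:
            self.eligibility[s] *= self.decay
        self.eligibility[state] = 1.0

    def reinforce(self, reward, rate=0.1):
        # Credit every recently visited state, newest most strongly.
        for s, e in self.eligibility.items():
            self.value[s] = self.value.get(s, 0.0) + rate * reward * e

brain = AffectTrace()
brain.visit("sniffing")
brain.visit("fleeing")
brain.reinforce(-1.0)   # punishment arrives after the state has changed
```

The “states that haven’t been entered yet” half is harder, because no trace exists to credit – which is presumably why it was the awkward case.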

Anyway, I think I’ve probably sorted this out well enough for now. And while I was there, I decided to change the way that learning happens in these neurons. I’ve shifted over to using my STM/LTM approach (a cheap way to get both short-term and long-term memory). Funnily enough, the last time I used this method was in Creatures, many years ago, and it still seems right now, even though these neurons grow in a very different way. I won’t go into the details, but imagine you’re a bear and you put your hand into a hole in a tree, hoping to find honey, except instead you get stung by a bee. If you learn from this experience too strongly, you’ll never put your hand into a hole again, and thus never get any honey. But if you learn it too weakly, you’ll just do it all over again while the bee is still in there. It’s important to be able to forget, but not forget entirely. STM/LTM solves this quite nicely.
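The shape of the bear-and-bee trade-off is easy to show in miniature. This is just my guess at the idea, not the real Creatures code: each connection keeps a fast-learning, fast-fading short-term weight alongside a slow, persistent long-term one, and the effective strength is their sum. The sting suppresses hole-poking sharply but briefly, while a faint long-term caution remains.

```python
# Toy sketch of the STM/LTM idea (names and rates invented).
class StmLtmWeight:
    def __init__(self):
        self.stm = 0.0   # short-term: learns fast, decays fast
        self.ltm = 0.0   # long-term: learns slowly, barely decays

    def reinforce(self, signal):
        self.stm += 0.8 * signal   # strong immediate lesson
        self.ltm += 0.05 * signal  # weak permanent lesson

    def tick(self):
        self.stm *= 0.9    # short-term memory fades quickly
        self.ltm *= 0.999  # long-term memory fades very slowly

    def strength(self):
        return self.stm + self.ltm

w = StmLtmWeight()
w.reinforce(-1.0)          # bee sting!
sting_now = w.strength()   # strongly negative: avoid the hole
for _ in range(50):
    w.tick()               # time passes
later = w.strength()       # mostly recovered, slight caution remains
```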

The other thing I’ve been working on is the user interface. There is nothing about Unity that drives me more insane than the user interface APIs! The ‘new’ GUI display stuff is very powerful but also rather like using a sledgehammer to crack a nut, given that most Unity games are simple platformers. Meanwhile, the ‘new’ control input API seemed to be abandoned several years ago, although in the past few weeks it looks like someone is back working on it again. Their code, however, is still stuffed full of “FIX THIS!” and “How on earth are we going to do that?” comments. Maybe they’ll finish it one day.

The reason I started work on the UI is that I hoped to make it possible for people to reconfigure the keys and other controls they use to move around the world, select things, etc. Some people are using desktop machines, while others have laptops, perhaps with awkward trackpads and certain keys only accessible by holding down Fn. Some have non-Windows devices, and one day I might even make it available on Xbox. Not to mention the nightmare of VR headsets and controllers (it looks fantastic in VR, but there are many problems to solve and there isn’t yet much demand). So a one-size-fits-all approach to the user interface was never going to be ideal.

Unfortunately, it’s not going to be easy, even if I make things reconfigurable. The WASD keys are digital, of course, while a mouse is analogue, but it’s a very different style of analogue from a joystick, which is different again from the thumbstick on a gamepad. Trying to patch multiple functions into multiple controllers of very different kinds is more involved than it looks and hard for a user. With great power comes great responsibility, and so the more flexible I make it, the less likely it is that anyone will ever go to the trouble of using it.
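The usual way to tame at least part of this is to translate every device into the same abstract axes before the game logic ever sees them. A minimal illustration (function names mine; the real Unity Input System does far more than this):

```python
# Hypothetical sketch of funnelling very different devices into one
# abstract movement axis in the range -1..+1.

def key_axis(neg_down, pos_down):
    # WASD-style digital input: two booleans collapse to -1, 0 or +1.
    return float(pos_down) - float(neg_down)

def stick_axis(raw, deadzone=0.15):
    # Analogue thumbstick: ignore drift inside the deadzone, then
    # rescale the rest so the output still spans the full -1..+1 range.
    if abs(raw) < deadzone:
        return 0.0
    sign = 1.0 if raw > 0 else -1.0
    return sign * (abs(raw) - deadzone) / (1.0 - deadzone)
```

Even then, a mouse delta (relative, unbounded) still needs its own treatment, which is part of why patching arbitrary functions onto arbitrary devices gets messy.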

But one consequence of adding more configurability is that the user interface is clearly going to get more complex over time, and it’s already getting quite cluttered, especially if you include all the tools I need myself. Beyond a certain point, all those buttons and sliders and check boxes start to feel overwhelming for little benefit.

At this point, many complex applications fall back on a command-line interface, because then they can add as many commands as they like without cluttering up the UI in the slightest. But unless you use that software all the time, it’s quite easy to forget the command syntax. Even with menus and dialog boxes it can be a nightmare. I work in JetBrains Rider most of the time, which has hundreds of different commands. Every now and then, I have to switch to Blender, or Substance Painter, or SpeedTree, or Photoshop… By the time I’ve got back up to speed on that equally complex application, I’ve completely forgotten how to do anything in Rider!

Anyway, what I’m currently aiming for is a hybrid of a command line interface and cascading menus, so that you can either click or type, or even use a mixture of both. I think it should end up fairly clean, compact, contextual, self-documenting and easy to learn, yet still allow power users to do more involved things and me to debug the system. Want a weather forecast? Type in WEATHER FORECAST, or just W F, or click on WEATHER and then FORECAST. Or just silently ignore this without it getting in your way, and go check the weather station in the world instead. Want to chat to a friend? Type CHAT JENNY and start saying something. Want to change the key that slides the player to the left? Type (or click) SET KEYBOARD SLIDELEFT A.
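The core of that hybrid is just resolving typed tokens against the same tree the cascading menus are built from, by unambiguous prefix. A toy version (command names and structure invented for illustration, not the actual design):

```python
# Hypothetical menu tree: submenus are dicts, leaves are action names.
MENU = {
    "WEATHER": {"FORECAST": "show_forecast", "NOW": "show_current"},
    "CHAT": "start_chat",   # leaf that takes free-form arguments
    "SET": {"KEYBOARD": {"SLIDELEFT": "bind_key"}},
}

def resolve(tokens, tree=MENU):
    # Walk the tree by unambiguous prefix ("W F" == "WEATHER FORECAST");
    # leftover tokens become the command's arguments.
    if not isinstance(tree, dict):
        return tree, tokens
    if not tokens:
        return None, []   # landed on a submenu, not a command
    word = tokens[0].upper()
    matches = [k for k in tree if k.startswith(word)]
    if len(matches) != 1:
        return None, []   # unknown or ambiguous prefix
    return resolve(tokens[1:], tree[matches[0]])
```

Clicking WEATHER then FORECAST would walk the same tree one level per click, which is what makes the two styles interchangeable.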

Of course, I can’t please everyone, and it certainly won’t look a bit like a cute platform game UI, but sometimes you just have to make a decision and put in the effort, then see how it turns out. I have no idea how to make it work in VR yet, because then there’s no screen and you can’t type, but the cascading menu approach should at least help with that.

I still need a couple more weeks to finish coding this, but when I get the bulk of it working, I’ll post a new build and you can try it out. After that I’ll try to get back to some actual world-building.

Lord knows, I’d certainly like to escape from this one!


Mabus
1 month ago

The fact that you can concentrate on one thing that well is a good thing. Otherwise you couldn’t do this project. And I am always a bit jealous of that trait, since my mind jumps around between different things all the time. That’s why I always have multiple projects running at the same time… Luckily most of them need a lot of waiting time in between, so they work well with my mind (real biology is slow… especially when your animals/plants have breeding seasons).

About the bear and the bee: a dog I had once always hunted water rats (I don’t know the exact species). Early on, one bit her in the snout. We thought she would never hunt them again.

The opposite happened: she was dead set on killing all of them from that moment forward. I think that’s the reason stubbornness, revenge and aggression are often a reaction to a failed prediction in our minds – we double down.

Mabus
1 month ago

Oh, and when you have another update where the stuff you made is too complex to be easily explained here (I assume that will happen multiple times),

You could tell us about the way you intend us to play the game.

Because this game will (hopefully) be highly modded with extra objects and phantasticals. Probably moving far away from your original idea.

An easy solution would be a challenge run – we players love challenge runs. Speedruns would be impossible here, and wolfling runs will be easy. But you could add a “grand run” where we “play as intended” – whatever that means.

danielmewes
1 month ago

> I won’t go into the details, but imagine you’re a bear and you put your hand into a hole in a tree, hoping to find honey, except instead you get stung by a bee. If you learn from this experience too strongly, you’ll never put your hand into a hole again, and thus never get any honey. But if you learn it too weakly, you’ll just do it all over again while the bee is still in there. It’s important to be able to forget, but not forget entirely. STM/LTM solves this quite nicely.

Ohh. That makes so much sense! I was recently-ish doing some experiments with the Creatures 1 brains, and was wondering why the STM/LTM complication existed. I couldn’t quite figure out why it was needed for learning. But this explains it! You want to have a stronger short-term reinforcement to either discourage or encourage immediate repeats of an action than long-term reinforcement, because you have a prior on the world that the overall state of the environment (including not directly observable or understandable aspects of it, such as the hidden bee) will not change much over short time spans, but will be more different over long ones. Right?

SpaceShipRat
1 month ago

Loved hearing some more about the brain. Good point about short- and long-term memories – I suppose “forgetting” sounds like a flaw but is actually a necessary function of our brains.
I never considered how the delay in reinforcement would be an issue in both temporal directions. Strange!

SpaceShipRat
1 month ago

Now re: the controls (and here we all collectively cringe, I know), I’m not really getting your logic. You’re worried about having so many different control methods between controllers, VR etc., so you’re making an extra control system that… will only work on a PC with a keyboard! Sounds entirely redundant to me, since the PC control system ought to be the simplest, given it’s easy to navigate both nested menus and buttons with a mouse.

For other controllers, your best bet is to make a joystick/radial menu system that should be able to be generalized between controllers, phones and vr, and you can totally put that aside for much later.

My prayer remains: when thinking about building the controls, go look at a bunch of existing games and the tried-and-true control systems everyone uses. [If you can’t get a friend to share their Steam account with you or something, you can look at the Yogscast’s “Trucking Tuesday” playlist, where they try a new driving game every video, and they usually read the controls out loud and judge how easy they are.]

Otherwise, it’s kind of like trying to sell a new bicycle, but you’ve got this really cool system where you steer with your feet and pedal with your hands, and then you wonder why everyone keeps falling off.

SoulSkrix
1 month ago

Sorry about your mood, I am also in a sour mood because I just visited the UK, came back, and lost £2500 worth of equipment on a train on my way home (I was so tired.. I think I simply forgot one of my bags on the chair next to me). It had my personal laptop, writing device (supernote) and some other bits and bobs in there..

I wrote a long message about a tool I like to use that lets me wrangle a whole bunch of other CLIs into my own way of interacting with them within a project, and then I realised I misunderstood your message. But if you find remembering syntax for a bunch of CLI-powered dev tools difficult and want to simply run them however you define them (e.g. task build:thing), then let me know and I will rewrite it.

Regarding your UI issues, I have looked at this when writing my own games, perhaps you can find some inspiration for some of your problems there? GameUIDatabase

I know there are so many options it can be wild..

Things I can think of off the top of my head would be:

  • Layered radial menus – like in Arma 3, Far Cry
  • Pie + Grid Hybrid Menus – they were in Star Citizen which I liked
  • In-Game UI (Virtual tablets, wrist interfaces.. very sci-fi) – Dead Space
  • Modifier Keys and contextual menus – MGS V, Rainbow Six Siege
  • Command palettes/menu search – like what we have in JetBrains when you double-tap shift. I think this is the best power-user option and is infinitely scalable
  • Multi-tiered D-Pad/Hotbar menus – like in The Witcher 3 and some RPGs like Baldur’s Gate. I find this is nice for controllers because you can tap vs hold for different layers of interaction, which expands the limited buttons you have on them. In combination with modifiers on triggers it’s great.
  • Not sure what the name is, but “Adaptive UI” – making the same UI elements context-sensitive, where a button to do something changes based on the player’s state. Pick up, crouch and drop can all be the same button depending on the player’s state in the world.

Hope this helps a little bit to get the brain juices flowing in your UI quest!

Mabus
1 month ago

About learning from a single exposure, I looked up some interesting articles. I think this article will interest @danielmewes, and the system it describes reads similar to the STM/LTM system Steve used in Creatures, just with some kind of extra activation during novel information. https://www.cell.com/neuron/fulltext/S0896-6273(06)00132-2

And here’s a paper that talks about how a single neuron can create a huge spike for fast learning, with the use of other neurons – however, that one reads more like a math paper, and math papers are not fun to read… https://www.jneurosci.org/content/35/39/13351

Robowaifu Technician
1 month ago

I finally decided to upgrade to the membership that would let me try the latest build. I was pretty surprised that they don’t seem to need air! One of the first bollys I saw was sitting calmly at the bottom of a river. I tried to drown another one later and the only change I saw was that I think he got a little bit less (or more) thirsty?

Squirrel
1 month ago

I kinda want to eat some honey now. How does taste work? I don’t mean in detail (I don’t want to distract you) but is it fixed to a combination of flavours or could anything that exists in the world potentially be tasted but not all of it is? Like, we can’t taste carbon dioxide but we could if we evolved receptors for it.

Also are the creatures diploid or haploid? (Sorry- I know this will have been answered already but my own memory is a bit… unreliable.)
