Could be a CPU bottleneck, or RAM that’s too small or too slow. The Steam Deck’s screen might also be running at a lower resolution than your desktop monitor.
I’m guessing the RAM is the issue. It’s 16 GB, but it’s definitely low-end, since it came second-hand in a used business computer. One of the corporations over here refurbishes and resells all of their old hardware, whether or not they upgrade anything.
16 GB is the same amount I have on my desktop, but I also haven’t tried editing anything with 6 million vertices in Blender, so that could very well be the case.
@midnight I’m not too sure how much experience you have in actually training models, but I have a new practical use case that would be far simpler than sign language generation.
I’m also not sure if you’ve been reading other threads on the forum, but I’ve got a wasp breeding experiment I want to carry out.
Do you know what it would take, or any good starting points, to train an image recognition model to distinguish between individual insects within a colony? I can gather the training data over a couple of years if need be, but it would be huge if I could use computer vision rather than manual tagging, or even in conjunction with it.
I have a little experience. I’ve trained a pre-existing image generation model on a friend’s art style with their permission, and a handwritten number recognition model as a learning project. Without looking into it more, I’m not sure about the specifics for something like that, aside from that you’d certainly need a lot of images of wasps! A lot of research on facial recognition (of humans) has been done, so that would probably be a good starting point. I do know that some species of wasps can recognize each other’s faces (and can even learn to recognize human faces!), so it should be possible in theory.
I did a quick search, and you might also want to train a face detection model, if you want to be able to run the facial recognition model on photos or webcam data or whatnot that you haven’t preprocessed. This seems like a pretty solid tutorial for both at a glance, though I haven’t read through the whole thing.
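To give you an idea of the two-step pipeline (detection first, then recognition), here’s a rough sketch using the face_recognition package, which a lot of the human-face tutorials are built around. The filenames are made up, and a wasp version would need its own detector and embeddings trained on your photos, so treat this as a shape-of-the-thing sketch rather than something you could point at the colony as-is:

```python
import face_recognition  # common Python library for the human-face case

# Hypothetical filenames: one labeled reference photo and one unprocessed photo.
known_image = face_recognition.load_image_file("individual_A_reference.jpg")
unknown_image = face_recognition.load_image_file("unlabeled_snapshot.jpg")

# Step 1: detection - find where the faces are in the raw photo.
locations = face_recognition.face_locations(unknown_image)

# Step 2: recognition - turn each detected face into an embedding vector
# and compare it against the known individual's embedding.
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encodings = face_recognition.face_encodings(
    unknown_image, known_face_locations=locations
)

for encoding in unknown_encodings:
    # Smaller distance = more similar; 0.6 is the library's usual rule of thumb.
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print("match" if distance < 0.6 else "no match", distance)
```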
That Python tutorial is awesome! I did some cursory reading on it and a few related pages, and it’s got my brain buzzing, if you can pardon the pun.
In the last few days my thoughts on computer vision and monitoring the colony have evolved. I think I’ve learned just about enough to build and train an AI for telling males from females and potential queens from workers.
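Roughly what I have in mind is transfer learning on a pretrained backbone rather than training anything from scratch. Here’s a sketch in PyTorch; the folder name and class layout are just placeholders for however I end up organizing the photos, so don’t take it as gospel:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: wasp_photos/{male,worker,potential_queen}/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("wasp_photos", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and retrain only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```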
Any thoughts on training an AI to recognize spectrographs? I’m thinking about purchasing one of those inexpensive light diffraction gratings jewelers use to identify minerals and repurposing it with an old smartphone and an AI vision model, to learn to identify various pheromone emissions.
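The first hurdle, as I understand it, would be turning each smartphone photo of the diffraction pattern into a one-dimensional intensity curve that a model can actually learn from. Something like this is what I’m picturing (the filename is a placeholder, and I’m assuming the spectrum is spread horizontally across the frame):

```python
import cv2  # OpenCV, for loading the photo
import numpy as np

# Hypothetical smartphone photo of the diffraction pattern.
img = cv2.imread("grating_photo.jpg", cv2.IMREAD_GRAYSCALE)

# Collapse the 2-D image into a 1-D curve by averaging each pixel column.
# The result (intensity vs. pixel position) is the raw spectrum a model
# could be trained on, once the pixel axis is calibrated to wavelength.
spectrum = img.mean(axis=0)

# Normalize so photos taken at different exposures are comparable.
spectrum = (spectrum - spectrum.min()) / (spectrum.max() - spectrum.min() + 1e-9)
np.savetxt("spectrum.csv", spectrum, delimiter=",")
```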
I could buy a semi-professional device for something like 750 US dollars. That puts it outside my price range, but those nerds at Hackaday have a few (supposedly) serviceable DIY plans, and I feel like what the device lacks in precision might be made up for by an efficient algorithm.
Jewelers’ diffraction gratings rely on the user’s eye and visible light, whereas other spectroscopy methods are usually calibrated against a known, quantifiable light source like a laser or a mercury-vapor lamp.
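So a DIY version would need its own calibration step, something like photographing a reference lamp with known emission lines and fitting pixel position against wavelength. A quick sketch of what I mean, using a few strong mercury-vapor lines (the pixel positions here are made up for illustration):

```python
import numpy as np

# Known mercury-vapor emission lines (nm) and the pixel columns where their
# peaks showed up in a photo of the lamp. The pixel values are invented here.
reference_nm = np.array([404.7, 435.8, 546.1, 578.0])
measured_px = np.array([212.0, 287.0, 551.0, 628.0])

# Fit pixel -> wavelength. A first-order fit is often close enough for a
# cheap grating; a quadratic can absorb some of the distortion.
coeffs = np.polyfit(measured_px, reference_nm, deg=1)
px_to_nm = np.poly1d(coeffs)

# Any pixel column from the extracted spectrum can now be mapped to nanometers.
print(px_to_nm(400))
```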
One of my back-burner projects that’s been percolating for a couple of years is an X-ray fluorescence spectrometry machine. Perhaps I’ll get there someday, but for now I think a good-enough system might just be good enough.
Spectrographs and pheromones are solidly outside my wheelhouse, I’m afraid! Keep me updated on your wasp identification experiments?
For sure! And just so you know, I’ve registered at https://arachnoboards.com/
I’m hoping the entomology buffs over there will lead to some interesting conversation, so depending on how interested you are, there should be lots to read.
