Oh shit, that’s awesome, thanks for the heads up!
I recently had this issue needing to run Excel macros. I ended up using Oracle VirtualBox to run Windows from inside Linux. Even more linuxey is using Proxmox to run your Windows VMs, but that’s a bit more of a faff.
I have used Ubuntu as my daily driver for the last 10 years, because support and tools are widespread and easy, and I don’t need any extra pain in my life. Drivers are mostly present and working on a clean install, and in the one case where the touchpad wasn’t recognized, it was super easy to find an Ubuntu forum post containing a 1-line command to fix it. But everybody says I should hate it and use Mint instead.
I’m open to giving it a go, but in general, will most of the tutorials and fixes you find for Ubuntu also work with Mint?
I have to disagree with that last sentence. Augmenting LLMs to have any remotely person-like attributes is far from trivial.
The current thinking in the field about this centers around so-called “Objective-Driven AI”, in which strategies are proposed to decouple the AI’s internal “world model” from its language capabilities, to facilitate hierarchical planning and mitigate hallucination.
The latter half of this talk by Yann LeCun addresses this topic too: https://www.youtube.com/watch?v=pd0JmT6rYcI
It’s very much an emerging and open-ended field with more questions than answers.
In a sense… yes! Although of course it’s thought to happen across many modalities and time-scales, and not just text. A crucial piece of the picture is also the Bayesian aspect, which involves estimating one’s uncertainty over predictions. Further info: https://en.wikipedia.org/wiki/Predictive_coding
It’s also important to note the recent trends towards so-called “Embodied” and “4E cognition”, which emphasize the importance of being situated in a body, in an environment, with control over actions, as essential to explaining the nature of mental phenomena.
But yeah, it’s very exciting how in recent years we’ve begun to tap into the power of these kinds of self-supervised learning objectives for practical applications like Word2Vec and Large Language/Multimodal Models.
Many modern theories in cognitive science posit that the brain works as a kind of “prediction machine”, predicting the incoming stream of sensory information from the top down while also processing it from the bottom up. This is sometimes summed up by the aphorism “perception is controlled hallucination”.
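To make the “prediction machine” idea a bit more concrete, here’s a toy sketch I put together (purely illustrative; the numbers and variable names are mine, not from any specific model). A belief about a hidden cause generates top-down predictions of the sensory signal, and the precision-weighted prediction error nudges the belief from the bottom up:

```python
import numpy as np

# Toy predictive-coding loop (illustrative only): a belief mu about a
# hidden cause predicts the incoming sensory stream, and the
# precision-weighted prediction error updates the belief.
rng = np.random.default_rng(0)

true_cause = 2.0                      # hidden state generating the signal
sensory_noise = 0.5                   # std dev of the bottom-up signal
precision = 1.0 / sensory_noise**2    # confidence in the sensory input
mu = 0.0                              # top-down belief, starts off wrong
lr = 0.05                             # update rate

for _ in range(200):
    observation = true_cause + rng.normal(0.0, sensory_noise)
    prediction_error = observation - mu      # bottom-up “surprise”
    mu += lr * precision * prediction_error  # top-down belief update

print(f"final belief: {mu:.2f} (true cause: {true_cause})")
```

The precision term is the Bayesian bit: the noisier you believe your senses to be, the less each prediction error moves your belief.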
Good to know that it at least mostly works. I should really try it out with my Thrustmaster T300; I could be pleasantly surprised. I use an Oculus Quest 2 headset, which requires Meta’s app to run on Windows, so I’m not sure how that would pan out.
If I could one day be playing BeamNG, with my FFB wheel, in VR, on Linux, I would have truly attained nirvana.
TBF I haven’t actually tried Assetto Corsa with my steering wheel, or X-Plane with my VR headset, on Linux yet; I just assumed they wouldn’t work. As soon as they do, I can’t wait to shitcan Windows forever.
Also, Bender's car form was derived from the 1977 horror B-movie The Car, whose titular vehicle was a Lincoln Continental modified to have a menacing visage.
I (maybe naively) believe a healthy society could find a way to build a robust public transport network and still accommodate the minority of enthusiasts who drive and work on cars for fun.
Engineers aren't just dry husks of people, robotically creating solutions to meet needs. The drive to create cars, planes, and motorbikes, which have significant technical overlap with trains, buses, and mobility aids, is at least partially born of the thrill of piloting machines that extend human capabilities.
I have a Quest 2 VR headset that I use for sim racing games like Assetto Corsa and flight simming in X-Plane 11. To use it I have to open up Meta's Quest app and connect the headset to the computer over Wi-Fi, and it sorta functions like a monitor, in that I can view the whole Windows desktop environment on a virtual screen floating in VR space. When you open a VR game like X-Plane, you stop seeing the floating monitor, and the game takes over the whole VR eye space for as long as you play.
Is this type of thing also possible on Ubuntu? If so, I'll shitcan Windows ASAP.
I once naively used the Windows file copy utility to transfer my huge MP3 library to an external hard drive and later lost the originals. I came to find out it had silently failed to copy any songs whose filenames contained certain non-alphanumeric characters. To this day I’m still traumatized when I try to locate some song and find it’s not there. Burn in hell, Windows.
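These days I never trust a bare file copy. A quick check like this would have caught the missing songs before I wiped the originals (a rough sketch; the paths are placeholders for wherever your library and backup live):

```python
from pathlib import Path

# Compare the relative file paths on both sides of a copy and report
# anything that was silently dropped. Paths below are placeholders.
src = Path("D:/Music")            # original library
dst = Path("E:/Backup/Music")     # external-drive copy

src_files = {p.relative_to(src) for p in src.rglob("*") if p.is_file()}
dst_files = {p.relative_to(dst) for p in dst.rglob("*") if p.is_file()}

missing = sorted(src_files - dst_files)
for path in missing:
    print(f"MISSING: {path}")
print(f"{len(missing)} file(s) failed to copy")
```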
I’m reminded of that Futurama episode where the gang logs onto the year-3000 VR internet and is immediately assaulted by a vicious swarm of flying Viagra ads.
I made a kind of deal with myself that if I wanted baked goods and sweets I had to make them myself. Since then I’ve learned to make brownies, cookies, ice cream, sorbet, chocolate ganache tarts, pancakes, and more. It’s fun, allows you to be creative, and the extra work of having to make it yourself keeps you in check.
I like GNOME because it looks sexy and sleek, and comes as the default on Ubuntu. I have a little experience with XFCE and LXDE on Proxmox and Raspberry Pis, and they’re perfectly functional and great, so I don’t want to besmirch them. But they give me a kinda uneasy sensation, like I’m using a Tamagotchi or something. I don’t know if that’s only because I’m using them on low-power potato computers or without proper display drivers, but they just look a little crude by comparison.
Like my grandma always said, if you want a box hurled into the sun, you got to do it yourself.
God rest her zombie bones.
It’s Joseph Redmon, creator of the YOLO object detection neural net architecture, which is very widely used.