I agree with your thinking, everyone is different and unique. I'm happy to hear you've found some basic things that work for you, good job! I do have very good routines, but at no point have they become "habit".
I think you've nailed it though. We are very well versed in documenting the details of such atrocities; we don't pay the same tribute to the good done by humanity. And this is certainly evidence that just "letting loose" an AI without clear and static "morals" is a bad idea.
I have alarms. I feel this in my core though. I have very strict routines that I follow, but they sure as fuck aren't habits. I have to watch the clock, and I get extremely anxious around the time I know I need to do things, all.the.things.
It's getting worse based on the feedback, unfortunately. The demand for safety, and the lack of meaningful deliberation about how AI companies should operate and what should and should not be done, has led Sam and co to be indecisive about doing anything. Alongside that, the "morality" of the thing being hijacked has let other AIs perform better... led by ex-employees of OpenAI, with actual bound morals and without inherently relying on user input to train future models. This will be the path forward; this will lead to safe and controlled integration.
I guess at the core of this, we are afraid of ourselves. We are afraid that the worst of humanity outpaces the better parts, that the inputs and training aren't altruistic but are more pointedly "bad" or "wrong", and thus "harmful", whether through misinformation, lies, or fabrications.
I hope we find a way to do better. I'm still excited for the future of AI. I mean crap, I'm closer to having a family doctor that's a robot than I am to a real human doctor.
I don't think the average user cares, tbh. I have openSUSE, Fedora, Win 11, and RH Desktop currently running. From an admin level though, so long as it's well documented, transparent, and standard packages are available and maintained, I'm happy to continue to learn and be adaptable.
Hanlon's razor.