• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • And even with that base set, even if a computer could theoretically try all trillion possibilities quickly, it’ll make a ton of noise, get throttled, and likely lock the account out long before it has a chance to try even the tiniest fraction of them

One small correction - this just isn’t how the vast majority of password cracking happens. You’ll most likely get throttled before you try 5 passwords and banned before you get to 50, and what you’re attempting is extremely traceable. Most cracking happens after a data breach, where the cracker has unrestricted local access to (hopefully) salted password hashes.

People just often re-use their password, or forget to change it after a breach. That’s where these leaked passwords get their value, if you can crack the hashes. So really, this is a non-factor. But the rest stands.
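To illustrate the difference: once the hashes are local, there is no server to throttle or ban you. A minimal sketch (the wordlist, salt, and SHA-256 construction here are illustrative assumptions - real systems should use a slow KDF like bcrypt or Argon2, not a single fast hash):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> str:
    # Illustrative only: a real system should use a deliberately slow
    # KDF (bcrypt, scrypt, Argon2) instead of one fast SHA-256 round.
    return hashlib.sha256(salt + password.encode()).hexdigest()

# What a breached database row might look like: (salt, hash)
salt = os.urandom(16)
leaked_hash = hash_password("hunter2", salt)

# Offline cracking: no throttling, no lockouts - just hashing guesses
# locally, as fast as the hardware allows, until one matches.
wordlist = ["password", "letmein", "hunter2", "qwerty"]
cracked = next((w for w in wordlist if hash_password(w, salt) == leaked_hash), None)
print(cracked)  # hunter2
```

The salt stops precomputed rainbow tables, but it doesn’t stop guessing common passwords one by one - which is exactly why re-used passwords from old breaches are so valuable.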


  • ClamDrinker@lemmy.world to linuxmemes@lemmy.world, re: “-----BEGIN PRIVATE KEY-----” · edited · 2 months ago

While this comic is good for people who do the former or have very short passwords, it distracts from the fact that humans simply shouldn’t try to remember more than one really good password (for a password manager), supplemented by proper techniques like 2FA. One fully random password of sufficient length will do better than both of these approaches, and it’s not even close. It will take a week or so of typing to properly memorize it, but once you do, every password beyond that one can be fully random too, because the password manager remembers them.
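For a sense of the numbers: a sketch of generating that one fully random master password with Python’s `secrets` module (the length of 20 is an assumption, pick what you can memorize):

```python
import math
import secrets
import string

# One fully random password, e.g. the master password for a password manager.
alphabet = string.ascii_letters + string.digits + string.punctuation  # 94 symbols
password = "".join(secrets.choice(alphabet) for _ in range(20))

# Entropy in bits: length * log2(alphabet size). 20 chars over 94 symbols
# gives roughly 131 bits - far beyond any dictionary or pattern attack.
entropy_bits = 20 * math.log2(len(alphabet))
print(f"{password}  (~{entropy_bits:.0f} bits of entropy)")
```

Compare that to even a long passphrase of dictionary words, which an attacker can model; fully random characters leave brute force as the only option.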


PC is typically easier to develop for because it lacks the strict (and frequently silly) platform requirements consoles impose - requirements that make game development more expensive and slower than it needs to be compared to targeting PC alone. If the consoles’ barrier to entry were reduced to that of PC, you’d see a lot more games on them from smaller developers.

With current gen consoles, pretty much every game starts as a PC game already, because that’s where the development and testing happens.

Rockstar here is the exception in that they are intentionally skipping PC - a release that should be well within reach of a company their size, and one they are clearly capable of.

If another AAA game comes out with only PC support, I’ll be right there with you - but most game developers with the capability now release for all major platforms. Not, it seems, the small console indie studio called Rockstar Games.


Awesome, and a great explanation for a layperson. Because the industry has been faking lighting for so long, and lighting is quite important, it has become incredibly good at it. But faked lighting also takes a lot of development time that could be spent adding more content or features. There’s a reason opinion on ray tracing is extremely positive within the game development industry. Nobody expects it to become the norm overnight, though, and the period of hybrid support for both ray tracing and legacy lighting is only just starting.


First: they did actually end up removing this and making it configurable - check the bottom of the page. In a vacuum, the idea of stopping clear-cut racists and trolls from using Lemmy is not too controversial. Sure, they were hard-asses about changing their mind and letting instance owners configure it themselves (and I’m glad they changed their mind). But there’s a big overlap between passionate and opinionated people, and maintainers sometimes have to be opinionated to keep a project from devolving into something they can’t put their passion into anymore.

Second: I mean… what do you expect? In the issue above they actively encourage people to fork Lemmy and run that if they don’t like something in the base version, so I would assume they practice what they preach. Instance owners also have the option to block communities without defederating. Lemmy.ml is basically their home instance. If anything, this is a reason not to make an account on lemmy.ml, but as long as that doesn’t leak into the source code of Lemmy, who cares?





  • It’s a bit of a flawed comparison (AI vs a hammer) - but let me try.

    If you put a single nail into wood with a hammer, which anyone with a hammer can also do, and even a hammer swinging machine could do without human input, you can’t protect it.

If you put nails into wood with the hammer so that they show a face, you can protect it. But you would still not be protecting the process of driving a single nail (even though the nail face is made by repeating that process many times); you would specifically be protecting the identity of the face made of nails, as your human artistic expression.

To bring it back to AI: if the AI can do it without sufficient input from a human author (e.g. only a simple prompt, no post-processing, no compositing, etc.), it’s likely not going to be protectable, since anyone can take the same AI model, use the same prompt, and get the same or a very similar result as you did (the equivalent of putting a single nail into the wood).

    Take the output, modify it, refine it, composite it, and you’re creating the hammer equivalent of a nail face. The end result was only possible because of your human input, and that means it can be protected.


  • A mass exodus doesn’t really happen in the traditional sense unless shit really hits the fan. For that to happen a large majority or even everyone has to be displaced at once and there can be no way to salvage the situation. In this case, there were a lot of short term ways out here for users not directly affected.

But the whole situation is more akin to a war of attrition. The ones not convinced by the big things will be convinced by the smaller things that accumulate over time. Goodwill for reddit is at an all-time low, which hampers their ability to grow since word of mouth is effectively dead. People who provided effective labour for reddit in the form of moderation or content aggregation lost their morale to continue. Not all of them for sure, but it might very well be a critical mass (even if they didn’t move to lemmy).

It’s like a line of dominoes increasing in size: if the ones that fell now were big enough to topple the next, eventually there will be a ripple effect. Eventually the quality of content goes down, the discourse turns stale and antagonistic, and communities fall apart. Only once the users who took the easy way out realize that will they finally start the process of moving. And if reddit was doing so badly they had to make this move, I can only assume their future will be very grim indeed. The seed of destruction has been planted. (And if you want an example of that future, look at Twitter.)

    Whether or not that all actually happens, I’m not sure. I’d like to believe it will, but some people revel in their unreasonableness, and they’re often the easiest to exploit for financial gain. I think the best thing is to stop looking back, and focus on what we have here and now. I think what lemmy has achieved so far is already more valuable than reddit had.


That’s an eventual goal - a general artificial intelligence (AGI). Different kinds of AI models for (at least some of) the things you named already exist; it’s just that OpenAI had all their eggs in the GPT/LLM basket, and GPTs deal with extrapolating text. It just so happened that with enough training data their text prediction also started giving somewhat believable and sometimes factual answers (mixed in with plenty of believable bullshit). Other data requires different training data, different models, and different finetuning, hence why it takes time.

It’s highly likely that a company of OpenAI’s size (especially after all the positive marketing and potential funding they got from ChatGPT in its prime) already has multiple AI models for different kinds of data in research, training, or finetuning.

But even with all the individual pieces of an AGI existing, the technology to cross-reference the different models doesn’t exist yet. Because they are different models, they store and express their data in different ways, and it’s not like training data exists for bridging them either. And unlike physical beings like humans, such a system has no way to “interact” and “experiment” with the data it knows to form concrete connections backed by factual evidence.



I had YT Premium for a while, and then I just wanted to download some videos (you know, like they advertise you can) and they just didn’t allow it. I had to either watch them in the YT app or on youtube.com on my PC. That’s not downloading - that’s just streaming with less computation for YouTube, which helps YouTube but not me. What a great ‘premium benefit’!

Cancelled my Premium right then and there. If they can’t provide a feature as simple as downloading videos to mp4 or something, that’s just misleading. It literally takes seconds to find a third-party site or app (NewPipe) that does it.




You’re shifting the goalposts. You wanted an AI that can learn while it’s being used, and now you’re unhappy that one existed that did so in a primitive form. If you want a general artificial intelligence that also understands the words it says, we are still decades off. For now it can only work off patterns, for which the training data needs to be curated. And as explained previously, training on publicized works doesn’t infringe copyright. You are simply denying that fact because you don’t want it to be true, but it is. And that’s why your sentiment isn’t shared outside of whatever anti-AI circle you’re part of.

The biggest users of AI are techbros who think that spending half an hour crafting a prompt to get stable diffusion to spit out the right blend of artists’ labor is anywhere near equivalent to the literal collective millions of man hours spent by artists honing their skill in order to produce the content that AI companies took without consent or attribution and ran through a woodchipper. Oh, and corporations trying to use AI to replace artists, writers, call center employees, tech support agents…

So because you don’t know any creative people who use the technology ethically, they don’t exist? Good to hear you’re sticking up for the little guy who isn’t making headlines or being provocative. I don’t necessarily see the uses you listed as ethical either, but it would be incredibly disingenuous to insinuate they are the only and primary ways to use AI - they are not, and your ignorance is showing if you actually believe so.

    Frankly, I’m absolutely flabbergasted that the popular sentiment on Lemmy seems to be so heavily in favor of defending large corporations taking data produced en masse by individuals without even so much as the most cursory of attribution (to say nothing of consent or compensation) and using it for the companies’ personal profit. It’s no different morally or ethically than Meta hoovering all of our personal data and reselling it to advertisers.

I’m sorry, but you realize this doesn’t make any sense, right? Large corporations are the ones with enough information and/or money at their disposal to train their own AIs without relying on publicized works. Should any kind of blockade stop people from training AI models on public work, you would effectively be taking AI away from the masses in the form of open source models, not from those corporations. So if anything, it’s you who is arguing for large corporations to have a monopoly on AI technology as it currently stands.

Don’t think I actually like companies like OpenAI or Meta; that’s why I’ve been arguing about AI models in general, not their specific usage of the technology (as that is a whole different can of worms).


The AI models (not specifically OpenAI’s models) do not contain the original material they were trained on. Just like the creator of Undertale took in the games they were inspired by and learned from them, the AI learned from the material it was trained on and learned how to make similar yet distinctly different output. You do not need a permissive license to learn from something once it has been publicized.

You can’t put your artwork up on a wall, let everyone look at it, and then demand that nobody learn from it just because you attached a license saying learning is not allowed - that’s absurd, and hence why (as far as I know) no legal system acknowledges it as a legal defense.


You realize LLMs are intentionally designed not to self-improve, right? It’s totally possible and has been tried - it just usually doesn’t end well. And LLMs do learn new things; they’re just called new models, because it takes time and resources to retrain LLMs with new information in mind. It’s up to the human guiding the AI to steer it away from copyright infringement. AIs don’t generate things on their own without being prompted to by a human.

You’re asking for a general intelligence AI, which would most likely be composed of different specialized AIs working together - similar to our brains having specific regions dedicated to specific tasks. That just doesn’t exist yet, but one of its parts now does.

Also, you say “right” and “probable” are without difference, yet once again bring something into the conversation which can only be “right”: code. Incorrect code simply doesn’t work. Text and creative works cannot be wrong in that sense; they can only be judged by opinions, not by rule books that say “it works” or “it doesn’t”.

The last line is just a bit strange, honestly. The biggest users of AI are creative minds, which is why it’s important that AI models remain open source, so all creative minds can use them.


Also, it should be mentioned that pretty much all games are derivative works in some form. Let’s take Undertale, since I’m most familiar with it. It’s well known that Undertale takes a lot of elements from other games: RPG mechanics from Mother and EarthBound, bullet-hell mechanics from games like Touhou Project, and more from games like Yume Nikki, Moon: Remix RPG Adventure, and Cave Story. Funnily enough, the creator has even cited Mario & Luigi as a potential inspiration.

So why was it allowed to exist without being struck down? Because it fits the definition of a derivative work to the letter. You can find individual elements taken almost directly from other games, but it doesn’t try to be the same as what it was created after.