I personally like the fan theory that Padme’s life force was transferred to Anakin with the help of Palpatine.
I think the running joke at this point is “2 more years”.
They’ve definitely continued to pump out more features over the years. And some players have finally gotten to test out jump points to the next system “Pyro”… But I could easily see them continuing to develop this game for another 3-5 years before we see it in beta.
Star Citizen is another game that has implemented this level of detail, you just can’t smash the screens, yet…
This sounds more useful to apply to specific, small portions of the sand, rather than applying it to an entire coastline.
“We can use it to strengthen the seabed beneath sea walls, stabilize sand dunes and retain unstable soil slopes. We could also use it to strengthen protection structures, marine foundations and so many other things. There are many ways to apply this to protect coastal areas.”
This is the first time I’m hearing about either of these YouTubers. Is the one from this particular video known for using clickbait tactics?
A few minutes in and it does look like the other content creator is clearly sourcing his scripts from others. Is there more to the story?
Part 1 also shows some evidence of fake contestants (who are actually actors) ending up winning the prize.
I thought the same for a second, but it does link to this: https://gvid.tv/v/UR5kgL1P.mp4
My Lemmy app (Voyager) just doesn’t support linking to a video like this apparently.
If it’s hard to post images on Lemmy… Maybe use a better app?
Voyager works really well for this. I’d imagine most other apps work as well.
I would counter that there are many good use cases that go beyond the scope of what was mentioned in the video (his concerns are absolutely legitimate).
For example:
Nvidia’s DLSS for gamers. It provides a decent boost to FPS while maintaining good image quality. It uses multiple models, covering motion prediction, frame interpolation (inferring what the image should look like between frames), and upscaling. These models are (most likely) trained on the video games themselves, which is why you want the latest driver updates: they include updates to those models. And, yes, the upscaling and interpolation models here are generative models, since they fill in whole frames with details that aren’t in the source, then enlarge the picture and fill in detail in a way that traditional upscaling cannot.
Brainstorming/writer’s block:
For generative text models, I think these have to be used carefully and treated as if they’re interns with knowledge of a very broad range of subjects. They’re great for brainstorming ideas and for writer’s block, but their output needs to be verified for accuracy and shouldn’t be trusted or used directly in most cases.
Entertainment:
They’re also excellent for entertainment purposes, for example, check out this GLaDOS project:
https://old.reddit.com/r/LocalLLaMA/comments/1csnexs/local_glados_now_running_on_windows_11_rtx_2060/
It combines a generative text model (an LLM) with a generative audio model (text-to-speech), as well as a few other models.
Green screen tools:
We could use the sodium vapor process to create training material for a model that can quickly and accurately handle processing green screens for video production:
https://www.youtube.com/watch?v=UQuIVsNzqDk
Creating avatars for user accounts on websites.
Creating interesting QR codes that actually work:
https://civitai.com/models/111006/qr-code-monster
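To make the DLSS point above concrete: classical frame interpolation just blends neighboring frames, so it can never add detail that isn’t already there, which is exactly the gap generative models fill. A toy sketch of the classical approach (my own illustration, not NVIDIA’s actual method):

```python
# Toy contrast with generative interpolation: classical frame blending
# can only average existing pixels; it never invents new detail.

def blend_midframe(a, b):
    """Return the halfway frame between two equal-sized grayscale frames.

    Each frame is a list of rows of pixel values (0-255). The result is a
    plain per-pixel average -- no new detail is created, which is the gap
    that DLSS-style generative interpolation models fill.
    """
    return [[(pa + pb) / 2 for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

frame_a = [[0, 0], [0, 0]]          # all-black 2x2 frame
frame_b = [[255, 255], [255, 255]]  # all-white 2x2 frame
mid = blend_midframe(frame_a, frame_b)
print(mid)  # [[127.5, 127.5], [127.5, 127.5]] -- a flat gray, nothing more
```

A learned interpolation model, by contrast, can output plausible edges, textures, and motion that neither input frame contains.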
So, in the end, I think that there are some incredible uses for generative AI that go beyond just “creating garbage fast”, that don’t cause problems in the way that this video is describing (and those problems he describes are definitely valid).
He goes into the details of the most upvoted Google Gemini fails, then branches out to how text/image/audio generative AI is being used on Facebook and Instagram to inflate traffic, as well as how you can now earn some income by farming reactions on Twitter (with the blue checkmark).
There’s a section on how Adobe is selling AI-generated images alongside their stock photos, but you can tell this video might be a little rushed: he comes to the conclusion that people are paying $80 for one of these images, when in reality the $80 Adobe plan gives you 40 images (so about $2 per stock image). That, or he knows the statement is misleading but makes it anyway because it will drive his own reactions up (oh, the irony). https://web.archive.org/web/20240701131247/https://stock.adobe.com/plans
Link to timestamp in video:
https://youtu.be/UShsgCOzER4?t=894
With Adobe, he also touches on their updated ToS, which states that any images uploaded to Adobe can be used to train their own generative image model.
The Netflix section talks about the “What Jennifer Did” documentary which used AI generated images and passed them off as real (or at least didn’t mention that the images were fake).
Spotify: how audio generative AI is being used to create music that’s now being published there, as well as their failed “projects/features” (car accessory, exclusive podcasts, etc.).
Multiple times throughout the video he pushes the theory that most of these companies are also using AI generated content to drive engagement on their own site (or to earn income without needing to pay any artists).
He definitely focuses only on the worst ways generative AI can be used, without touching on any realistic takes from the other side (just the extreme ones, with statements like “AI music will replace the soulless crappy music that’s being released now… and it will be better and have more soul!”).
Still worth a watch, he brings up a ton of valid points about the market being oversaturated with AI generated products.
Hah, there are still about 1,100 projects using the wrong value of Pi: https://github.com/search?q=3.141592657&type=code
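For reference, a quick check against Python’s `math.pi` shows why that constant is off: π rounded to nine decimal places is 3.141592654, not …657.

```python
import math

WRONG_PI = 3.141592657  # the constant found in those repos

# Pi rounded to the same number of decimal places:
correct = round(math.pi, 9)
print(correct)                   # 3.141592654
print(abs(WRONG_PI - correct))   # off by roughly 3e-9 in the last digit
```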
Ah true, that does look more like Wikipedia.
Alright, I’m seeing enough of this now that we probably have enough content to create a community specifically for Google Search fails.
Had a cat, OR toddlers.
Looks like he instantly got VAC banned with that triple headshot?
This video would more accurately have been labelled “Things that Make AI Look Bad” rather than an attempt to prove that AI was faked.
I would be careful trusting everything said in this video and taking it at face value.
He touches on a broad range of different AI related news, but doesn’t seem to fully grasp the technology himself (I’m basing this statement on his “evidence” from the 8 min mark).
He seems to be running a channel that’s heavily centered on stock market related content. And it feels like he’s putting his own spin on every topic he touches in this video.
Overall, it’s not the worst video, but I would rather get my information from better-informed sources.
What he should have done is set a baseline by defining what AI actually is, and then compare what these companies are doing against that definition. Instead, we get a list of AI news stories covering Amazon Fresh stores, Gemini, ChatGPT, and Copilot (powered by ChatGPT), plus his own take on how those stories mean everything is faked.
Uh, why not link to the actual creator’s video himself? https://www.youtube.com/watch?v=uF8h9ExDvn4
They also gave him a fake line to deliver and didn’t reveal that Darth Vader was actually Luke’s father during the filming of that scene: https://www.soundandvision.com/news/100104hamill/