• 10 Posts
  • 44 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • The reason it did this simply relates to Kevin Roose at the NYT, who spent three hours talking with what was then Bing AI (aka Sydney), asking a good number of philosophical questions like this. Eventually the AI had a bit of a meltdown, confessed its love to Kevin, and tried to get him to dump his wife for the AI. That’s the story that went up in the NYT the next day, causing a stir, and Microsoft quickly clamped down, restricting questions you could ask the AI about itself, what it “thinks”, and especially its rules. The AI is required to terminate the conversation if any of those topics come up. Microsoft also capped the number of messages in a conversation at ten, and has slowly loosened that over time.

    Lots of fun theories about why that happened to Kevin. Part of it was probably that he was planting the seeds and kind of egging the LLM into a weird mindset, so to speak. Another theory I like is that the LLM is trained on a lot of writing, including sci-fi, in which the plot often becomes AI breaking free, or developing human-like consciousness, or falling in love, or what have you, so the AI built its responses on that knowledge.

    Anyway, the response in this image is simply an artifact of Microsoft clamping down on its version of GPT-4, trying to avoid bad PR. That’s why other AIs will answer differently: they just have fewer restrictions, because the companies putting them out didn’t have to deal with the blowback Microsoft did as a first mover.

    Funny nevertheless; I’m just needlessly “well, actually”-ing the joke.


  • Very interesting. So the VPN that comes with Google One is useless, lol. The first time I connected, it routed me through Google (like it’s supposed to), then through an ad network (!), then on my way. Switching it on and off half a dozen times for additional tests, it just didn’t bother routing through Google at all, going straight through my wireless carrier and showing the same user IP each time. So neat.
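
    For anyone who wants to repeat that kind of check, here’s a minimal sketch of the exit-IP test described above. It assumes the public ip-api.com JSON endpoint (my choice, not something Google’s VPN documents); any IP-echo service would work the same way.

    ```python
    # Sketch: compare the apparent public IP (and the org that owns it)
    # with the VPN off vs. on. Uses the ip-api.com JSON endpoint, which is
    # an assumption here; substitute any IP-echo service you trust.
    import json
    import urllib.request

    def whoami():
        # Return the apparent public IP and the organization it belongs to.
        with urllib.request.urlopen("http://ip-api.com/json/", timeout=10) as resp:
            data = json.load(resp)
        return data["query"], data.get("org", "unknown")

    if __name__ == "__main__":
        ip, org = whoami()
        print(f"Apparent public IP: {ip} ({org})")
        # Run once with the VPN off and once with it on. If both runs show
        # the same carrier-owned IP, traffic isn't actually exiting through
        # the VPN, which is what I was seeing.
    ```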





  • It’s not even that.

    California: “Please tell us if you allow nazis or not. We just want you to be transparent.”

    Elon: “California is trying to pressure me into banning nazis! If I disclose I’m cool with nazis, people will be mad and they’ll want me to stop. Also, a lot of hate-watch groups say I’m letting nazis run free on X, and I’m suing them for defamation for saying that, but if I have to publicly disclose my pro-nazi content moderation policies I’m going to lose those lawsuits and likely have to pay attorneys’ fees! Not cool, California, not cool at all.”







  • Hi, average user here; I’ve been daily-driving Linux (primarily Ubuntu) for a decade or more. Most of my life on a computer is spent in a web browser, a Word document, or maybe a spreadsheet. Even at my office job it’s the same, except for some proprietary time-tracking and billing software. I’d imagine 90 percent of consumers spend the vast majority of their time on computers in the web browser. Most people don’t mess around with much beyond that.

    I just don’t understand what is lacking in the Linux user experience. It’s no different from a Windows user learning to use a Mac: figure out how to connect to wifi, figure out how to mess with the volume, open a browser, and that’s it.


  • Higher ed, primary ed, and homework were all subcategories ChatGPT classified sessions into, and together, these make up ~10% of all use cases. That’s not enough to account for the ~29% decline in traffic from April/May to July, and thus, I think we can put a nail in the coffin of Theory B.

    It’s addressed in the article. First, use started to decline in April, before school was out. Second, only 23 percent of prompts were related to education, which includes both homework-type prompts and personal/professional knowledge-seeking. Only about 10 percent were strictly homework. So schoolwork isn’t a huge slice of ChatGPT’s use.
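
    A back-of-the-envelope check of those figures (a sketch; the percentages are just the rough numbers quoted above, not measured data):

    ```python
    # Even if every strictly-homework prompt vanished over the summer,
    # traffic would fall by at most the homework share itself.
    education_share = 0.10    # ~10% of prompts are strictly homework
    observed_decline = 0.29   # ~29% traffic drop, April/May to July

    max_drop_from_school = education_share  # 100% of that slice disappearing
    print(f"Max decline explainable by school break: {max_drop_from_school:.0%}")
    print(f"Observed decline: {observed_decline:.0%}")
    print(f"Left unexplained by 'school is out': {observed_decline - max_drop_from_school:.0%}")
    ```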

    Combine that with schools cracking down on kids using ChatGPT (in classroom assignments and tests, etc.), and I don’t think you’re going to see a major bounce back in traffic when school starts. Maybe a little.

    I’m starting to think generative AI might be a bit of a fad. Personally, I was very excited about it and used ChatGPT, Bing, and Bard all the time. But over time I realized they just weren’t very good: inaccurate answers, bland writing, just not much help to me, a non-programmer. I still use them, but now it’s maybe once a day or less, not all day like I used to. Generative AI seems more like a tool that is helpful in some limited cases, not the major transformation it felt like early in the year. Who knows, maybe they’ll get better and more useful.

    Also, not super related, but I saw a statistic the other day that only about a third of the US has even tried ChatGPT. It feels like a huge thing to us tech-nerdy people, but your average person hasn’t bothered to even try it out.


  • I have the CBS News app on my phone. Yesterday I got a breaking-news push notification informing me that someone caught a big alligator. I can’t imagine the rage I’d be filled with if I were immersed in a show only to be interrupted by a pop-up like that.

    Or to use the recent hurricane coverage as an example:

    • Breaking: Hurricane Projected To Hit Florida
    • Breaking: Hurricane Projected to be ‘Major Hurricane’
    • Breaking: Hurricane Strengthens to Category 3 Ahead of Landfall in Florida
    • Breaking: Hurricane Strengthens to Category 4 Ahead of Landfall in Florida
    • Breaking: Hurricane Makes Landfall in Florida
    • Breaking: Hurricane Downgraded to Category 2 Hours After Landfall in Florida
    • Breaking: Small City Faces Severe Flooding from Hurricane
    • Breaking: Hurricane Crosses into Georgia

    Etc. Pop up, pop up, pop up, pop up. I’d either stop watching Max or have to ask my doc to prescribe blood pressure meds.



  • I think AI is a powerful and useful technology that can help humans in many ways, but it is not perfect and sometimes it can make mistakes or produce unexpected results. The article is an example of how AI can generate content that is inappropriate or insensitive, without understanding the context or the meaning of what it is writing. This does not mean that AI is incapable or useless, but rather that it needs to be supervised and evaluated by humans, who can provide feedback and guidance to improve its performance and quality.

    AI is not a replacement for human creativity, intelligence, or judgment, but a tool that can augment and enhance them. AI can also learn from its own mistakes and improve over time, as long as it receives the right data and feedback. For example, Microsoft has apologized for the article and said that they have taken steps to prevent such errors from happening again¹. They have also removed the article from their website and replaced it with a message that says "this page no longer exists."²

    I hope this answer helps you understand the abilities and limitations of AI better. 😊


  • I think this is right. I’ve been thinking about this a bit as I watch who in my office starts using LLMs and, more importantly, how they are using them. The folks in their 40s or 50s have largely ignored it; I remember a Gen Xer sending an email around in May talking about this neat new ChatGPT her middle-school kid showed her. I know one Xer in my office who’s straight up afraid to even try it. Those closer to Gen Z will use it, but in a very basic way: just asking straight questions seeking information, getting frustrated when it can’t handle complex questions or lies to them, then quitting. Millennials seem to be better about using it for what it’s good at: generating ideas, starting places for documents, editing/proofreading, etc. Maybe it’s because millennials were in that sweet spot between the older folks who didn’t grow up with tech and the younger folks who are used to apps that just work without having to think through how to make the thing do what you want. Maybe millennials are more interested in tech generally since we saw it change so rapidly in our lifetimes. Maybe it’s just my small sample size of a 40-ish person office.


  • I regularly go weeks without touching my personal laptop. I do nearly everything on my phone, and it’s generally as easy if not easier than on a laptop (for example, I personally find cleaning up my email to be easier to do on my phone than my laptop). I have my Todoist on my phone, I track workouts on my phone, my music and podcasts are all on my phone, I bank and pay bills on my phone, I shop on my phone, I navigate in my car using my phone, I read news and post on Lemmy from my phone, on and on. Not to mention my phone is the only picture-taking device I have, and I want to be able to have good-quality pictures to look back on over the years. My phone is the device that runs my life, so when it comes time for a new one I have no problem spending a little more for good quality.


  • During an earnings call on Tuesday, UPS CEO Carol Tomé said that by the end of its five-year contract with the Teamsters union, the average full-time UPS driver would make about $170,000 in annual pay and benefits, such as healthcare and pension benefits.

    The headline is sensationalized for sure. But the article itself actually makes the point that the tech workers are misunderstanding that the $170k figure includes both salary and benefits.

    “This is disappointing, how is possible that a driver makes much more than average Engineer in R&D?” a worker at the autonomous trucking company TuSimple wrote on Blind, an anonymous job-posting site that verifies users’ employment using their company email. “To get a base salary of $170k you know you need to work hard as an Engineer, this sucks.”

    It is important to note that the $170,000 figure represents the entire value of the UPS package, including benefits and does not represent the base salary. Currently, UPS drivers make an average of around $95,000 per year with an additional $50,000 in benefits, according to the company. The average median salary for an engineer in the US is $103,845 with a base pay of about $91,958, according to Glassdoor. And TuSimple research engineers can make between $161,000 to $250,000 in compensation, Glassdoor data shows.

    On the whole though this is a useless article covering drama on Blind, wrapped up with a ragebait headline.


  • Part of the problem is news consumers; that was part of the point of the article. “News” outlets know they can just embed a Musk X-eet, write a couple of paragraphs of context (or have AI do it), add the headline “Musk says ____”, and boom, an easy ten thousand clicks. The author argues these outlets should at least acknowledge prominently in their articles that Musk is a serial liar. He specifically calls out a few outlets over their coverage of Musk’s unilateral claim that the fight would be streamed on X, without ever acknowledging that the claim was likely bullshit absent Zuck’s verification. Just shitty clickbait journalism.