• 1 Post
  • 55 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • I’m not disagreeing with you. I’m just saying that a broken, unplayable game objectively has no value. The publisher forces that value to 0 when they turn off the servers without support, regardless of whether there was any value there before.

    Edit: I realize we might be talking about different things when saying “stop supporting”. I meant the point when the servers are turned off, not when the publisher stops releasing updates or delists it from stores.










  • I’m not sure that’s even a valid comparison? I’d love to know where you got that data point.

    LLMs run until they decide to output an end-of-text token, so the amount of power used varies massively depending on the prompt (there’s a rough sketch of that loop at the end of this comment).

    Search results, on the other hand, return nearly instantaneously and can cache huge amounts of data between requests, unlike LLMs, which have to run every request individually.

    I’d estimate responding to a typical ChatGPT query uses at least 100x the power of a single Google search, based on my knowledge of databases and running LLMs at home.
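
    Here’s a rough sketch of why generation cost varies so much per request. It’s toy Python, not any real model or API — the toy_next_token function and its vocabulary are made up purely to illustrate the loop:

    ```python
    import random

    EOS = "<eos>"
    VOCAB = ["the", "answer", "is", "42", EOS]

    def toy_next_token(context):
        # Stand-in for one LLM forward pass: a probability for every
        # vocabulary token, given everything generated so far.
        weights = [1.0] * len(VOCAB)
        # Make end-of-text more likely as the output grows, roughly
        # mimicking how real generations eventually terminate.
        weights[VOCAB.index(EOS)] = 0.5 + 0.2 * len(context)
        total = sum(weights)
        return [w / total for w in weights]

    def generate(prompt, max_tokens=50):
        context = prompt.split()
        passes = 0
        while passes < max_tokens:
            probs = toy_next_token(context)   # one "forward pass" per token
            token = random.choices(VOCAB, weights=probs)[0]
            passes += 1
            if token == EOS:                  # only the model's own EOS token
                break                         # (or the hard cap) stops the loop
            context.append(token)
        return " ".join(context), passes

    text, passes = generate("What is the answer?")
    print(f"{passes} forward passes -> {text!r}")
    ```

    Every loop iteration is a full pass through the model, so two prompts can cost wildly different amounts of compute depending on how long the model keeps talking before it emits that end-of-text token.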







  • This graph actually shows a little more about what’s happening with the randomness, or “temperature”, of the LLM.
    The model predicts a probability for every word (token) it knows of coming next, all at once.
    The temperature then controls how randomly it picks from that list of probable next words. A temperature of 0 means it always picks the most likely next word, which in this case is 42.
    As the temperature increases, the picks get more random (but you can see the distribution still isn’t perfectly uniform even at higher temperature values).
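
    A minimal sketch of that sampling step in Python — the logits here are invented numbers for illustration, not taken from any real model:

    ```python
    import math
    import random

    # Hypothetical next-token logits for a prompt like
    # "The answer to life, the universe, and everything is"
    logits = {"42": 4.0, "simple": 2.5, "unknown": 2.0, "love": 1.0}

    def sample_next_token(logits, temperature):
        if temperature == 0:
            # Temperature 0: always take the single most likely token.
            return max(logits, key=logits.get)
        # Higher temperature flattens the distribution before sampling.
        scaled = {tok: val / temperature for tok, val in logits.items()}
        peak = max(scaled.values())
        exps = {tok: math.exp(val - peak) for tok, val in scaled.items()}
        total = sum(exps.values())
        probs = {tok: e / total for tok, e in exps.items()}
        return random.choices(list(probs), weights=list(probs.values()))[0]

    for t in (0, 0.7, 1.5):
        picks = [sample_next_token(logits, t) for _ in range(1000)]
        counts = {tok: picks.count(tok) for tok in logits}
        print(f"temperature={t}: {counts}")
    ```

    At temperature 0 it picks “42” all 1000 times; as the temperature goes up the other words show up more often, but “42” still dominates because the distribution never becomes uniform.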