• 0 Posts
  • 19 Comments
Joined 8 months ago
Cake day: February 10th, 2024

  • My best guess: whatever they’re filing now was so exhaustively researched that it took months to prepare the strongest case they’re able to make, possibly delayed by the lawyers working on several other cases. Plus waiting until sales have dried up can maximize damages.

    Another possibility is that Nintendo/TPC is planning to make some big Pokémon announcements soon and wants to target this shortly before their own new games to reduce competition. Palworld might seem like more of a threat to the execs now that Pokémon is nearing a major release than it was in the middle of a long drought for the series.



  • zarenki@lemmy.ml to 196@lemmy.blahaj.zone · Rulekemon · 1 month ago

    > The baby god event was never officially released, so this actually didn’t canonically happen.

    It was released. The Azure Flute and the event where you meet and battle Arceus in the Hall of Origin in DPPt were indeed never released, but this is different.

    Arceus had various distributions in 2009-2010; the US one was at Toys R Us, for example. Trading that legit Arceus to HGSS and then bringing it to the Ruins of Alph triggers this event, which takes you to a special location where you can choose one egg of Dialga, Palkia, or Giratina.


  • The conditions that processors run under in contexts like military equipment are drastically different from those of consumer devices. Consistency and stability matter more than performance there, so much so that real-time operating systems like VxWorks are popular in that space. Such systems would likely already have features like clock boost disabled (or use processors that lack it entirely) in favor of a lower fixed clock speed, probably avoiding these issues altogether.


  • zarenki@lemmy.ml to 196@lemmy.blahaj.zone · 📄 rule · 4 months ago

    > and it’s free

    This is very uncommon in the US. Most major banks (I’m not aware of any exceptions) charge a fee for each outgoing wire transfer, usually $25-$30: Bank of America, Wells Fargo, Chase, and PNC, to name just a few I’m aware of, plus every credit union with local branches in my area. Some of those banks even add a second fee on the recipient’s side for incoming wire transfers.

    They often encourage customers to rely on third-party services like Zelle instead for small transfers to friends and family. Many banks’ sites/apps can also handle free transfers between two accounts that both belong to the same bank.


  • A ground-up overhaul of the copyright system would make things so much worse, not better, given who currently holds the power. In the US, for example, the MPA, RIAA, Entertainment Software Association, Association of American Publishers, and others wouldn’t want public libraries or the used market to exist at all; they would push for making every single transfer of “ownership” of any media involve a payment to the rights holder. Lawmakers are far more likely to accommodate those groups’ desires than the public good.

    The worst parts of the current copyright system are the most recent. Both the DMCA and the extension of the US copyright term to 95 years took effect in 1998, and in the early 2000s many other countries passed laws bringing their copyright systems closer to the US’s in various ways, such as through the WIPO Copyright Treaty, which took effect in 2002, and the EU’s 2006 Copyright Directive. Just about the only positive news in US copyright law since then has come in the temporary exemptions to the DMCA’s anti-circumvention rules (Section 1201), which are revisited every three years. Copyright law was far less hostile to consumers and the public before the 90s than it is now, and up until 1976 it was expected that most media someone consumed would enter the public domain within their lifetime.

    The digital era makes market relevance more ephemeral than ever, and yet the laws written for the digital era moved copyright in the opposite direction. Movie studios simultaneously judge whether a film succeeded almost exclusively by its first week of ticket sales and claim that depriving the public domain for 95 years is necessary. Nothing should be able to justify more than 20 years of copyright. Media formats don’t even last as long as copyright: CDs and DVDs rot, game cartridges die, servers shut down, and even books printed on today’s low-quality paper will fall apart.

    > Some of it is absurd to me, like the way something can be online but geographically restricted.

    This is a consequence of contract terms more than of copyright. One related issue in copyright law, though, is that whether the rightsholder keeps a work reasonably available on the market has no bearing on whether the work retains copyright protection. If copyright law did hypothetically include that limitation, providers would become far more likely to make all content available in all countries, though even then which content is on which platform could still vary.


  • Yes.

    My home server has dropbear-initramfs installed so that after a reboot I can reach the LUKS decryption prompt over SSH. The one LUKS partition contains a btrfs filesystem with both rootfs and home as subvolumes. For all the other drives attached to that system, I use ZFS native encryption with a dataset that unlocks using a keyfile stored on that rootfs, and I keep backups of an encrypted copy of that keyfile.

    I don’t think there’s a substantial performance impact but I’ve never bothered benchmarking.


  • Something I’ve noticed that is somewhat related but tangential to your problem: in my experience with compose files, containers and volumes get assigned names that share a common prefix by default. I don’t use docker and instead prefer podman, but I would expect both to behave the same on this front. For example, when I have a file at nextcloud/compose.yml that looks like this:

    volumes:
      nextcloud:
      db:
    
    services:
      db:
        image: docker.io/mariadb:10.6
        ...
      app:
        image: docker.io/nextcloud
        ...
    

    I end up with volumes named nextcloud_nextcloud and nextcloud_db, and containers named nextcloud_db and nextcloud_app, as long as neither of those services overrides this behavior by specifying a container_name. I believe the prefix comes from the file-level name: key if there is one, and from the parent directory’s name otherwise.
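
    If I wanted a different prefix, my understanding is that setting that file-level name: (or a per-service container_name) overrides the defaults. A minimal sketch; the "cloud" project name and "cloud-mariadb" container name below are made-up examples, not anything the image maintainers suggest:

    # hypothetical compose.yml: the top-level name: replaces the directory-derived prefix
    name: cloud

    volumes:
      nextcloud:    # should end up as cloud_nextcloud
      db:           # should end up as cloud_db

    services:
      db:
        image: docker.io/mariadb:10.6
        container_name: cloud-mariadb   # fixed name, skips the generated prefix entirely
      app:
        image: docker.io/nextcloud      # container should end up as cloud_app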

    The reasons I adjust my own compose files away from the image maintainer’s recommendation include accommodating the differences between podman and docker, avoiding conflicts between published listen ports, setting any host filesystem paths I want to mount in the container, and my own preferences. The only conflict I’ve had with other containers there is the published port: zigbee2mqtt, nextcloud, and freshrss all suggest using port 8080, so I had to change at least two of them in order to run all three.
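
    For the port conflicts specifically, only the host side of each ports: mapping needs to change; the container side stays whatever the image expects. A rough sketch, assuming the image’s suggested mapping was 8080:80 (check the actual container port for each image):

    services:
      app:
        image: docker.io/nextcloud
        ports:
          - "8081:80"   # host port moved off 8080 to avoid clashing with another container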


  • I never had problems with Debian stable, especially on a headless server. But it’s not especially well suited to brand-new desktop hardware; even Ubuntu LTS and RHEL put more focus on hardware enablement backports than Debian does.

    I’ve had a worse experience with Debian testing breaking my system through updates than with Arch. Add in the freeze period (2012’s was the worst, lasting 11 months) and testing feels like the worst of both worlds between rolling and standard-release distros.


  • zarenki@lemmy.ml to linuxmemes@lemmy.world · Indeed · 6 months ago

    As someone who used Arch a decade ago: I still use pacman for devkitPro at least, and I do miss how fast its parallel downloads are, but the tool I use to manage packages is far from the most important difference between distros to me, even setting the AUR aside.


  • I’ve avoided RGB-lit stuff for everything else, except for my wireless headset, a Logitech G733. In every other respect I love it, but it has bright lights on the front that drain the battery and reflect in my glasses. They default to constantly cycling random colors until host software sends a command to control them. Thankfully tools exist to control the lights on Linux (HeadsetControl), but adjustments reset on every power cycle.

    The mouse in OP (M510, I’ve had a few of them myself) doesn’t have those problems. Specialized software does exist to manage device pairing for the included “unifying receiver”, but the mouse comes pre-paired by default, so that software is only really helpful for the niche use case of having other wireless Logitech devices and wanting to save USB ports by making them all share one receiver.



  • The first I tried was Ubuntu 7.04, but I didn’t stick with it and went back to XP, until I ended up with a hardware setup that wouldn’t work on Windows XP (a widescreen monitor plus an Intel graphics driver with no widescreen mode options) but worked perfectly on Ubuntu 9.10. I never truly went back to Windows after that.

    Tried a few other distros in 2011, then switched to Arch for a couple of years, Xubuntu for a couple of years, Ubuntu GNOME for 7-8 years, and finally Fedora last year.


  • zarenki@lemmy.ml to 196@lemmy.blahaj.zone · microtransactions rule · 6 months ago

    You joke, but it really exists: the company that acquired uTorrent 17 years ago now sells an ad-free version of their current torrent client as “BitTorrent Pro” for USD$20/year, or alternatively as part of a VPN service bundle for $70/year.

    Needless to say, stick with FOSS clients like qBittorrent/Deluge/etc instead.


  • Nonfree media codecs like HEVC/H.265 are affected by US software patents. Distributing them from US servers without paying license fees to MPEG LA can put the host at risk of a lawsuit. VLC, deb-multimedia (Debian), and RPM Fusion (Fedora) all avoid that by hosting in France, but even with those sources enabled, patent issues can break things like hardware acceleration. Free codecs like AV1/VP9/Opus avoid all of these problems.

    Microsoft is US-based and can’t avoid those per-install fees. They could absorb the cost out of the profit on every single Windows license but apparently chose not to.



  • Debian. I was in a similar boat to OP and just a couple of weeks ago migrated my almost 8-year-old home server setup from Ubuntu LTS to Debian Stable. Decided to finally move away from Ubuntu because I never cared for snap (had to keep removing it with every upgrade) and had gradually accumulated a few smaller issues with it. Seems good to me so far.

    I considered RHEL/Rocky but decided against them, largely because I wanted btrfs for my rootfs, which their stock kernel doesn’t include, though I do use a few Red Hat-developed tools like podman and cockpit. Fedora Server and the like have too fast a release lifecycle for my liking, though I use Fedora on my desktop. That left Debian as the one remaining obvious choice.

    I also briefly considered throwing a Debian VM into TrueNAS Scale, since I also use this system as a ZFS NAS, but setting that up felt like I was fighting against the “appliance” nature of what TrueNAS tries to be.


  • If you’re assuming “as long as the hardware will function” in the first place: even digital copies, DLC, and updates installed on the system before the servers shut down will keep working without hacks. There’s no check-in requirement except for subscription-locked things like SNES games.

    However, the result of an unrepairable hardware failure when you have neither hacks nor official servers is rather bad no matter how your games were obtained: OFW does not let you transfer save data from one system to another without going through Nintendo’s servers, and the vast majority of cartridge games are incomplete without their updates or DLC.