The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

  • 0 Posts
  • 143 Comments
Joined 6 months ago
Cake day: January 12th, 2024



  • Yeah, it’s actually good. People use it even for trivial stuff nowadays; and you don’t need a Pix key to send money, only to receive it. (And as long as your bank allows you to check the account through an actual computer, you don’t need a cell phone either.)

    Perhaps the only flaw is shared with the Asian QR codes: scams are a bit of a problem. For example, someone could tell you that the transaction will be for one value, then generate a code demanding a bigger one. But I feel like that’s less an issue with the system and more with the customer, given that the system shows you who you’re sending money to, and how much, before confirmation.

    I’m not informed on Tikkie and Klarna, besides one being Dutch and the other Swedish. How do they work?


  • Brazil ended up with a third system: Pix. It boils down to the following:

    • The money receiver sends the payer either a “key” or a QR code.
    • The payer opens their bank’s app and uses it to either paste the key or scan the QR code.
    • The payer enters the amount, if the code is not dynamic (more on that later).
    • The payer confirms the transaction, and an electronic receipt is issued.

    The “key” in question can be your cell phone number, your taxpayer registry number (the CPF for individuals, the CNPJ for companies), your e-mail, or even a random number. You can have up to five of them.

    As for dynamic codes: it’s also possible to generate a key or QR code that applies to a single transaction; in that case, the amount to be paid is already embedded in it.
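
    If it helps, here’s the flow above as a minimal Python sketch. Everything in it (PixCode, pay, the values) is invented for illustration - it models the logic described above, not the real Pix API:

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PixCode:
        """Illustrative stand-in for a Pix key or QR code (not the real format)."""
        receiver: str                    # who gets the money
        amount: Optional[float] = None   # embedded amount -> dynamic code; None -> static

    def pay(code: PixCode, amount_entered: Optional[float] = None) -> str:
        # Static code: the payer enters the amount. Dynamic code: it's embedded.
        amount = code.amount if code.amount is not None else amount_entered
        if amount is None or amount <= 0:
            raise ValueError("no valid amount for this transaction")
        # The app shows receiver and amount *before* confirmation, which is
        # what makes the "code demanding a bigger value" scam detectable.
        print(f"Confirm: send R${amount:.2f} to {code.receiver}? [y/n]")
        return f"receipt: R${amount:.2f} paid to {code.receiver}"

    # Static key (e.g. a phone number): the payer types the amount.
    pay(PixCode(receiver="Maria"), amount_entered=50.0)
    # Dynamic QR code for a single transaction: the amount comes embedded.
    pay(PixCode(receiver="Corner Store", amount=12.90))
    ```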

    Frankly the system surprised me. It’s actually good and practical; and that’s coming from someone who’s highly suspicious of anything coming from the federal government, and who hates cell phones. [insert old man screaming at clouds meme]



  • I’ll go a bit earlier than the video.

    In the Edo period, from 1603 to 1868, you have the consolidation of power in a rather isolated Japan, under military leaders called shoguns.

    Then, in the Meiji era (from 1868 to 1912), the emperor is restored, the country resumes contact with the outside world, and there’s a campaign of modernisation. But the centralisation from the Edo period remains in place, and gets further strengthened under the figure of the emperor.

    When combined, you have a country changing its mode of production from feudalism to capitalism. However, a change in the mode of production causes a change in the forces of production, and those create a need for raw materials. As the video says: oil, rubber, iron, etc. And the solution found was Japan scrambling for the Pacific, much like Europe had scrambled for Africa.


  • Do you mind if I address this comment alongside your other reply? Both are directly connected.

    > I was about to disagree, but that’s actually really interesting. Could you expand on that?

    If you want to lie without getting caught, your public submission should show neither the hallucinations nor the stylistic issues associated with “made by AI”. To do so, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.

    In other words, to lie without getting caught you end up removing what makes the output problematic in the first place. The problem was never people using AI for the “heavy lifting” to increase their productivity by 50%; it was people increasing their output by 900% and submitting ten really shitty pics or paragraphs, which look a lot like someone else’s, instead of one decent and original piece. Those are the ones who’d get caught, because they’re doing what you called “dumb” (and I agree): not proof-reading their output.

    Regarding code, from your other comment: note that some Linux and *BSD distributions, like Gentoo and NetBSD, have banned AI submissions. I believe it’s the same deal as with news or art.





  • Think of the available e-books as a common pool, from the point of view of the people buying them: that pool is in perfect condition if every book in it is DRM-free, and ruined if every book is infested with DRM.

    When someone buys a book with DRM, they’re degrading that pool, as they’re telling sellers “we buy books with DRM just fine”. And yet people keep doing it, because:

    • They had an easier time finding the copy with DRM than a DRM-free one.
    • The copy with DRM might be cheaper.
    • The copy with DRM is bought through services that they’re already used to, and registering to another service is a bother.
    • If the copy with DRM stops working, that might be fine, as long as the buyer only needed the book in the short term.
    • Sharing is not a concern if the person isn’t willing to share in the first place.
    • They might not even know what the deal is, so they don’t perceive the malus of DRM-infested books.

    So in a lot of situations buyers beeline towards the copy with DRM, as it’s individually more convenient, even if it ruins the pool for everyone in the process. That’s why I said it’s a tragedy of the commons.

    As you correctly highlighted, that model relies on the idea that the buyer is selfish; as in, they won’t care about the overall impact of their actions on others, only on themselves. That is a simplification and needs to be taken with a grain of salt; note, however, that people are more prone to act selfishly if being selfless demands too much effort from them. And the businesses selling you DRM-infested copies know it - that’s why they enclose you: leaving that enclosure to support DRM-free publishers takes effort.
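
    To make that mechanism concrete, here’s a toy simulation (all numbers invented, just to show the shape of it): each buyer goes out of their way for a DRM-free copy only if their concern for the pool beats the effort of finding one, and sellers drift toward whatever people actually buy:

    ```python
    import random

    random.seed(0)

    drm_free_share = 0.5   # fraction of the e-book pool without DRM (the commons)
    EDGE = 0.2             # DRM copies are always a bit more convenient (invented)
    N = 1000               # buyers per round

    for year in range(12):
        # Buying DRM-free takes effort: scarcity plus the sellers' convenience edge.
        effort = (1.0 - drm_free_share) + EDGE
        # A buyer pays that effort only if their concern (uniform 0..1) beats it.
        demand_share = sum(random.random() > effort for _ in range(N)) / N
        # Sellers follow the money: the pool drifts toward what people actually buy.
        drm_free_share = 0.7 * drm_free_share + 0.3 * demand_share
        print(f"year {year:2d}: DRM-free share = {drm_free_share:.2f}")
    ```

    No single buyer is being evil here; each choice is individually reasonable, and the DRM-free share still heads towards zero. That’s the tragedy part.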

    > I guess in the end we are talking about the same

    I also think so. I’m mostly trying to dig further into the subject.

    > So the problem is not really consumer choice, but rather that DRM is allowed in its current form. But I admit that this is a different discussion

    Even if it’s a different discussion, I think that one leads to the other.

    Legislating against DRM might be an option, but that’s easier said than done - governments are especially unruly, and they’d rather support corporations than populations.

    Another option, as weird as it might sound, is to promote that “if buying is not owning, pirating is not stealing” discourse. It tips the scale from the business’ PoV: if people would rather pirate than buy books with DRM, the business might as well offer them DRM-free to increase sales.


  • Does this mean that I need to wait until September to reply? /jk

    I believe that the problem with the neolibs in this case is not the descriptive model (the tragedy of the commons) that they’re using to predict a potential issue; it’s the “magical” solution that they prescribe for that issue, which “happens” to align with their economic ideology, while failing to address that:

    • in plenty of cases, privatisation worsens the erosion of the common resource, due to the introduction of competition;
    • the model applies especially well to businesses, which behave more like the mythical “rational agent” than individuals do;
    • all you need to solve the issue is “agreement”. Going from “agreement” to “privatise it!!!1one” is an insane leap of logic on their part.

    And while all models break if you look too hard at them, I don’t think that this one does here - it explains well why individuals keep buying DRM-stained e-books, even though this ultimately hurts them as a collective, by reducing the availability of DRM-free books.

    (And it isn’t like you can privatise it, as the neolibs would eagerly propose; it is a private market already.)

    I’m reading the book that you recommended (thanks for the rec, by the way!). At a quick glance, it seems to propose self-organisation as a way to solve issues concerning common pool resources; that might work in plenty of cases, but certainly not here, as there’s no way to self-organise the people who buy e-books.

    And frankly, I don’t know a solution either. Perhaps piracy might play an important and positive role? It increases the desirability of DRM-free books (you can’t share the DRM-stained ones), and puts a check on the amount of obnoxiousness and rug-pulling that corporations can subject you to.




  • When it comes to English, the problem can be split into two: the origin of the word, and its usage to refer to the planet.

    The origin of the word is actually well known - English “earth” comes from Proto-Germanic *erþō “ground, soil”, which in turn comes from Proto-Indo-European *h₁ér-teh₂. That *h₁ér- root pops up in plenty of words referring to soil and land in IE languages, while *-teh₂ forms nouns for states of being; so odds are that the word ultimately meant “the bare soil” or similar.

    Now, the usage of the word for the planet is trickier, since this metaphor - the whole (planet) by the part (soil) - pops up all the time, even in non-Indo-European languages:

    • Basque - “Lurra” (Earth) is simply “lur” (soil) with a determiner
    • Tatar - “Zemin” (Earth, the planet) vs. “zemin” (earth, soil)
    • Greenlandic - “nuna” for both

    The furthest from that I’ve seen was Nahuatl calling the planet “tlalticpactl” (“over the land”) - but even then, that “tlal[li]” at the start is “land, soil”.

    The metaphor is so popular, but so popular, that it becomes hard to track where it originated - because it likely originated multiple times. I wouldn’t be surprised, for example, if English simply inherited it “as is”, given that German “Erde” behaves the same. The same applies to the Romance languages with Latin “terra”: they simply inherited the word with the double meaning and called it a day.

    > And as to why Earth has become the accepted term rather than ‘terra’, ‘orbis’ or some variant on ‘mundus’, well, that’s a tougher question to answer.

    In English it’s simply because “Earth” is its native word. Other languages typically don’t use this word, but their own instead.


  • I don’t think that a different training scheme or integrating it with already existing algos would be enough. You’d need a structural change.

    I’ll use a silly illustration for that; it’s somewhat long so I’ll put it inside spoilers. (Feel free to ignore it though - it’s just an illustration, the main claim is outside the spoilers tag.)

    The Mad Librarian and the Good Boi

    Let’s say that you’re a librarian, and you have lots of books to sort out. So you want to teach a dog to sort books for you, starting with sci-fi and geography books.

    So you set up the training environment: a table with a sci-fi book and a geography book on it. And you give your dog a treat every time he puts the ball on the sci-fi book.

    At the start, the dog doesn’t do it. But then, as you train him, he becomes able to do it perfectly. Great! Does the dog now recognise sci-fi and geography books? You test it by switching the placement of the books and asking the dog to perform the same task; now he’s putting the ball on the geography book. Nope - he can’t tell sci-fi and geography books apart; you were “leaking” the answer through the placement of the books.

    Now you repeat the training with random positions for the books. Eventually, after a lot of training, the dog is able to put the ball on the sci-fi book regardless of position. Now the dog recognises sci-fi books, right? Nope - he’s identifying the books by smell.

    To fix that, you try again with new copies of the books. Now he’s keying on colour: the geography book has the same grey/purple hue as grass (from a dog’s PoV), while the sci-fi book is black like the neighbour’s cat. The dog would happily put the ball on the neighbour’s cat and ask “where’s my treat, human???”, if the cat allowed it.

    Needs more books. You assemble a plethora of geography and sci-fi books. Since sci-fi covers typically tend to be dark, and geography books tend to have nature on their covers, the dog is able to place the ball on the sci-fi books 70% of the time. Eventually you give up, and call the remaining 30% error the dog “hallucinating”.

    We might argue that, by now, the dog should be “just a step away” from recognising books by topic. But we’re just fooling ourselves: the dog is finding a bunch of orthogonal (unrelated to the topic, like the smell) and diagonal (only partially related, like the colour) patterns. What the dog is doing is still somewhat useful, but it won’t go much past that.

    And, even if you and the dog lived forever (denying St. Peter the chance to tell him “you weren’t a good boy. You were the best boy.”), and spent most of your time on that training routine, his little brain wouldn’t be able to create the associations necessary to actually identify a book by its topic - that is, by its content.

    I think that what happens with LLMs is a lot like that, with a key difference: dogs are considerably smarter than even state-of-the-art LLMs, even if they’re unable to speak.
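
    For what it’s worth, the dog story has a name in machine learning: shortcut learning. Here’s a toy sketch of it (all data invented, plain numpy): a classifier trained where an incidental feature - the “smell” - happens to track the label will ace training, and collapse the moment that correlation breaks:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n, smell_tracks_label):
        y = rng.integers(0, 2, n)                  # 0 = geography, 1 = sci-fi
        topic = y + rng.normal(0, 2.0, n)          # genuine but noisy "content" signal
        if smell_tracks_label:
            smell = y + rng.normal(0, 0.1, n)      # shortcut: smell ~ label in training
        else:
            smell = rng.integers(0, 2, n) + rng.normal(0, 0.1, n)  # correlation broken
        return np.column_stack([topic, smell]), y

    # Plain logistic regression, trained by gradient descent.
    X_train, y_train = make_data(2000, smell_tracks_label=True)
    w, b = np.zeros(2), 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X_train @ w + b)))
        w -= 0.1 * X_train.T @ (p - y_train) / len(y_train)
        b -= 0.1 * np.mean(p - y_train)

    def accuracy(X, y):
        return np.mean(((X @ w + b) > 0) == y)

    X_test, y_test = make_data(2000, smell_tracks_label=False)
    print(f"train accuracy: {accuracy(X_train, y_train):.2f}")  # near perfect
    print(f"test accuracy:  {accuracy(X_test, y_test):.2f}")    # barely above chance
    print(f"weights [topic, smell]: {w.round(2)}")              # weight piles on the shortcut
    ```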

    At the end of the day, LLMs are complex algorithms associating pieces of words based on statistical inference. That is useful, and you might even see some emergent behaviour - but they don’t “know” stuff, and this is trivial to show: they fail to perform simple logic even with pieces of info that they’re able to reliably output. A different training scheme and/or algorithm might change which info they output, but it won’t “magically” take them past that.
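
    Stripped of all the scale, “associating pieces of words based on statistical inference” looks like the toy bigram model below. A real LLM swaps the counting for a neural network over subword tokens and billions of parameters, but the “predict the next piece” framing is the same:

    ```python
    import random
    from collections import Counter, defaultdict

    corpus = ("the dog sorts the books . the dog likes the sci-fi books . "
              "the librarian trains the dog .").split()

    # Count which word follows which: pure association, no understanding anywhere.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(word, n=8):
        out = [word]
        for _ in range(n):
            options = follows[out[-1]]
            if not options:
                break
            words, counts = zip(*options.items())
            # Sample the next word in proportion to how often it followed this one.
            out.append(random.choices(words, weights=counts)[0])
        return " ".join(out)

    print(generate("the"))  # fluent-looking fragments, no "knowledge" behind them
    ```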


  • I’m reading your comment as “[AI is] Not yet [an existential threat], anyway”. If that’s inaccurate, please clarify, OK?

    With that reading in mind: I don’t think that the current developments in machine “learning” lead towards some hypothetical system that would be an existential threat. The closest to that would be the subset of generative models, and that looks like a tech dead end - sure, it might see some applications, but I don’t think that it’ll progress much past the current state.

    In other words I believe that the AI that would be an existential threat would be nothing like what’s being created and overhyped now.



  • Habsburg-AI? Do you have any idea how much you made me laugh in real life with this expression??? It’s just… perfect! Model degeneration is a lot like what happened with the Habsburg family’s genetic pool.

    When it comes to hallucinations in general, I’ve got another analogy: someone trying to drive nails with a screwdriver, failing, and calling it a hallucination. In other words, I don’t think that the models are misbehaving - they’re behaving exactly as expected, and any “improvement” in this regard is basically a band-aid added by humans to a procedure that doesn’t yield a lot of useful output to begin with.

    And that reinforces the point from your last paragraph - those people genuinely believe that, if you feed enough data into a L"L"M, it’ll “magically” become smart. It won’t, just like 70kg of bees won’t “magically” think as well as a human being would. The underlying process is “dumb”.
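
    Model degeneration is easy to demo in miniature, too. A toy sketch (a seven-word “language”, all numbers invented): train on data, sample from the model, train the next generation only on those samples, repeat. Any rare word that happens to get zero samples is gone for good, so the tail of the distribution erodes, generation after generation - the statistical version of the Habsburg jaw:

    ```python
    import random
    from collections import Counter

    random.seed(0)

    # Generation 0: "human" text, with a long tail of rarer words.
    vocab   = ["the", "dog", "book", "ball", "library", "shogun", "catarrhine"]
    weights = [40, 25, 15, 10, 6, 3, 1]
    data = random.choices(vocab, weights=weights, k=100)

    for gen in range(15):
        counts = Counter(data)
        print(f"gen {gen:2d}: {len(counts)} distinct words, "
              f"most common = {counts.most_common(2)}")
        # The next generation is trained only on the previous generation's output;
        # a word that gets zero samples can never come back (an absorbing state).
        words, freqs = zip(*counts.items())
        data = random.choices(words, weights=freqs, k=100)
    ```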


  • May I be blunt? I estimate that 70% of what OpenAI says, and 70% of all “insiders”, are full of crap.

    What people nowadays call “AI” is not a magic solution for everything. It is not an existential threat either. The main risks that I see associated with it are:

    1. Credulous people taking LLM output at face value, with disastrous outcomes. Think “yes, you can safely mix bleach and ammonia” tier (note: made-up example).
    2. Supply and demand. Generative models have awful output, but sometimes “awful” = “good enough”.
    3. A heavy increase in energy and resource consumption.

    None of those issues was created by machine “learning”; it’s just that it synergises with them.