• 9 Posts
  • 123 Comments
Joined 4 months ago
Cake day: March 12th, 2024


  • From Wikipedia:

    1. High self-esteem and a clear sense of uniqueness and superiority, with fantasies of success and power, and lofty ambitions
    2. Social potency, marked by exhibitionistic, authoritative, charismatic and self-promoting interpersonal behaviours
    3. Exploitative, self-serving relational dynamics; short-term relationship transactions defined by manipulation and privileging of personal gain over other benefits of socialisation

    I think we can put big checkmarks on all three of those for Trump. We don’t need a professional psychologist for that.

    This may be partly beside the point and argument the article is trying to make, but I still want to point it out.


  • The only way to meaningfully advocate for it, after your company has already announced its conditions and offerings, is to demonstrate a gain in value.

    What do you suggest concretely? What should be offered under what conditions? What would that mean as cost? What would the benefit be? How substantial is it?

    Reaching out to them privately certainly goes beyond what you are employed for. I don’t know about ill-advised - if you never disclose it, or are at least mindful of that. But it’s a personal assessment. You seem willing to invest a lot into a single customer who is trying to do something your company doesn’t offer or hasn’t considered. Whether it’s personal interest, or first building a broader, better understanding of the use case, I can see how it could be worthwhile. But I wouldn’t get my hopes up about changing your company’s mind [from their information alone].

    Your company offered API access, so there is an interface available. They won’t make it free unless they see that as worth doing.


  • A block on Twitter doesn’t say anything unless you know why they were blocked and know the person. Blocking can be more than warranted and justified. Be it spam, toxicity, harassment, or similar things. “I saw a screenshot of someone being blocked on Twitter” is not a good foundation for an argument.

    They talk about malware in npm packages. One example isn’t enough to make a general claim that all software with political opinions or voices becomes malware.

    When a platform follows sanctions, and the law, I don’t think you can call those political or activist decisions. If you want to make that argument, and want to make it in absolutist fashion (not assessing and reducing risks but evading them entirely), then you can only self-host - and I guess only on your own servers? No platforms, no services?

    Nowadays, there are many teams who buy popular apps and browser extensions to inject malware.

    … which has nothing to do with political views and especially not political views of the original authors and sellers.

    As you can see, the “opinion” or “political view” of a company is not only a way to hype on sanctions and curry favor with investors, the government, and consumers, but it is also a clear signal about potential threats. It signals that your sensitive data may be hijacked, sold, or wiped anytime if the political compass spins tomorrow and recognizes you as an enemy.

    No. None of what was written before showed me any of that.

    Some of the red flags I actively use to reject software:

    Direct political opinions in a product’s blog, like “we support X” or “we are against X”

    “We are free software and we support free software” -> REJECTED! (?)



  • Does it apply if you don’t say that you are posting under the license? It may be implied, the intent is reasonably clear, but an argument of ambiguity can be made. You’re merely linking to a license.

    Does it apply if the link label mismatches the license? CC BY-NC-SA does more than deny commercial AI training. It requires attribution, restricts use to non-commercial purposes in general, and requires share-alike.

    Personally, I prefer when it’s at least differently formatted to indicate it as a footer and not comment content. I’ve seen them smaller and IIRC italic on other commenters, which seems more appropriate and less distracting and noisy [for human consumption]. When the comment is no longer than the license footer… well…



  • I don’t think that’s too few samples for it to work.

    What they train for is rather specific: identifying anger and hostility characteristics, and adjusting pitch and inflection.

    Dunno if you meant it like that when you said “training people’s voices”, but they’re not replicating voices or interpreting meaning.

    learned to recognize and modify the vocal characteristics associated with anger and hostility. When a customer speaks to a call center operator, the model processes the incoming audio and adjusts the pitch and inflection of the customer’s voice to make it sound calmer and less threatening.
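    To illustrate just the pitch part of that quote: lowering pitch can be done by resampling the audio. This is a toy numpy sketch, not their model or method - `naive_pitch_shift` is a made-up helper, and real systems use far more sophisticated, duration-preserving techniques.

    ```python
    import numpy as np

    def naive_pitch_shift(signal, semitones):
        """Crude pitch shift by resampling with linear interpolation.
        Reading the signal 'faster' raises pitch, 'slower' lowers it.
        (Illustration only: real pitch shifters also preserve duration.)"""
        factor = 2 ** (semitones / 12)  # frequency ratio for the shift
        idx = np.arange(len(signal)) * factor
        idx = idx[idx < len(signal) - 1]        # stay inside the signal
        lo = idx.astype(int)
        frac = idx - lo
        return signal[lo] * (1 - frac) + signal[lo + 1] * frac

    # A 440 Hz tone shifted down 4 semitones should land near 349 Hz,
    # i.e. a lower, calmer-sounding version of the same sound.
    sr = 16000
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 440 * t)
    lowered = naive_pitch_shift(tone, -4)

    # Rough frequency estimate from zero crossings
    crossings = np.sum(np.diff(np.sign(lowered)) != 0) / 2
    freq = crossings / (len(lowered) / sr)
    ```

    The actual system would additionally need a model that detects the anger/hostility characteristics and drives adjustments like this in real time.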


  • I think it’s to be expected and excusable. When you read the summary keeping in mind that it’s a bot summary, not a human one, it’s acceptable and still useful. The text is not necessarily coherent. And when it isn’t, that can itself indicate there is other content.

    I read a different auto-summary earlier today with a similar issue: it referred to something or someone not previously mentioned in the summary. With auto-summarization in mind, it was obvious that there is more information on that in the full article. In a way, that was also useful in and of itself (as opposed to simple omission).

    Dunno why you’re asking whether to ban it. Are the others even better? None of them logically understand the text. If most summaries are coherent, this one may be an outlier. And if machine summarization isn’t good enough for someone, they don’t have to read it.