I've noted before that because AI detectors produce false positives, it's unethical to use them to detect cheating.
Now there's a new study showing it's even worse: not only do AI detectors falsely flag human-written text as AI-written, but the way they do it is biased.
This is
Yeah, this AI is good for writing unimportant stuff like “talking to” famous dead people, or D&D descriptions on the fly. It can be useful for basic coding if you know how to fix its mistakes. Oh, and keeping telemarketers busy a la Jolly Roger. And I guess spam blog posts.
It’s still best used as a toy. When I tried to use it to augment my work, it was usually worse than a good search engine at answering questions.
The free image generators are also pretty impressive for making flavor art for D&D on the fly, or just if you’re not an artist. Some of the tuned ones can make decent standalone art or fake pictures, but so far I don’t think you can create a character and get one to make a graphic novel with it.
So - watch out, people who make RPG modules, I guess.