Ah, interesting. I've built my own library for creating callable “prompt functions” that prompt the model and validate the JSON outputs, which ensures type safety and easy integration with normal code.
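To give a rough idea of what I mean, here's a minimal sketch of the pattern (not my actual library): `call_model` is a hypothetical stand-in for the ChatGPT API call, and the decorator checks the parsed JSON against a simple field-to-type schema before handing it back to normal code.

```python
import json
from typing import Callable

# Hypothetical stand-in for a real chat-model API call; it returns a
# canned JSON string so the example is self-contained and runnable.
def call_model(prompt: str) -> str:
    return '{"name": "Alice", "age": 30}'

def prompt_function(schema: dict):
    """Decorator: the wrapped function builds a prompt string; the wrapper
    sends it to the model, parses the JSON reply, and type-checks each
    field against `schema` before returning the result."""
    def decorate(build_prompt: Callable[..., str]):
        def wrapper(*args, **kwargs):
            raw = call_model(build_prompt(*args, **kwargs))
            data = json.loads(raw)  # raises ValueError on malformed JSON
            for field, expected in schema.items():
                if not isinstance(data.get(field), expected):
                    raise TypeError(f"field {field!r} is not {expected.__name__}")
            return data
        return wrapper
    return decorate

@prompt_function({"name": str, "age": int})
def describe_person(topic: str) -> str:
    return (f"Describe a person related to {topic} as JSON "
            "with keys name (string) and age (number).")

result = describe_person("chess")
print(result["name"])  # validated dict, safe to use downstream
```

In a real version you'd retry or re-prompt on a failed parse instead of just raising, but the core idea is the same: the JSON boundary gets checked once, and everything after it is ordinary typed code.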
Lately, I’ve shifted more towards transforming ChatGPT’s outputs. By orchestrating multiple prompts and adding human influence, I can obtain responses that ChatGPT alone likely wouldn’t have come up with. That said, this has to be balanced against giving it the freedom to pursue a different thought process.
If you don’t mind me asking, does your tool programmatically do the “whittling down” process by talking to ChatGPT behind the scenes, or does the user still talk to it directly? The former seems like a powerful technique, though tricky to pull off in practice, so I’m curious if anyone has managed it.
The work you do is much appreciated, friend :)
Retro Game Mechanics Explained is one of my favorite YouTube channels of all time. There’s an absolute treasure trove of interesting technical deep-dives about the inner workings of retro games, famous glitches, and how the hardware works. And it’s all presented with clear, silky smooth animations that make everything so much easier to understand.
I’m not even into retro games that much, yet the content is so good that it has me completely hooked anyway. I’d highly recommend it to anyone who wants to learn more about computer science or the clever techniques programmers used to get things to run on old hardware.
Reddit now:
What’s your all-time favorite video game?
u/totallynormaluser: “I’m sorry, but as an AI language model, I don’t have personal preferences or emotions, so I don’t have the ability to have a favorite video game.”