I recall that there is a USB GPIO dongle which gives you a bunch of pins to play with. You would have to hunt around to find it though.
Can we assume that all vacuums are frictionless? Makes the math easier.
Really? Do you have a source for that?
I had totally forgotten about this game. Tyvm!
And red. Don’t forget the copious amounts of red.
What’s the digital clock in your terminal?
There are others above who provide instructions and warnings against bypassing paywalls (⌐■_■)
Check protondb. It sometimes has workarounds for launcher issues.
Tell me more…
Thank you for the world’s best editor, Bram. :x!
Very good to know, thanks!
I can see most individuals and SMBs going with specialist “good enough” models which they can run on-prem or locally, leaving the truly huge systems to those with compute to spare. The security model for these MAAS systems is pretty much “trust me bro”. A lot of companies will not want to, or be able to, trust such a system. PII/CID cannot be left in the hands of the AI-as-a-service company. They will have to either go on-prem, or stand up their own models in their private cloud. Again, this limits model size for orgs, available compute, etc. This points to using available models, optimised, etc. OSS FTW (I hope)
Given the pace of OSS optimisation, I fully expect the requirements for a model with GPT-3.5-equivalent performance to be much lower in the coming year. The biggest issues right now are around training and fine-tuning; inference is cheaper, resource-wise. For truly large models, the moat is most definitely GPU compute and power constraints. Those who own their own GPU farms will be at an advantage until there is a significant increase in cloud GPU capacity - right now, cloud GPU is at a premium, and can also include wait time for access. I don’t expect this to change in the next year or two.
Tl;dr: the moat is real, but it’s GPU and power constraints.
Thanks for the link!
Thanks for the summary!
Of course they do!