Incorrect. Open source means using a license that conforms to the open source definition. You can find that here: https://opensource.org/osd
If a license forbids LLM training, it is by definition not open source.
Only if you can reasonably argue that the output reproduces the input (even with exact matches over a certain size being auto-rejected), and that the copied portion is substantial enough to qualify as a copyrightable work. I’d argue line completions can never be enough to be copyrightable, and even a short function barely meets the bar unless it’s considered creative in some way.
Every open source license grants permission for AI training, and GitHub Copilot by default rejects completions that exactly match code from its training data. You can’t claim to be pro-open source or pro-free software while at the same time being upset that people are using licensed software within its license terms.
This seems kinda reasonable for what it is? It’s not the usual kiosk device that other companies have tried; it’s just a touchscreen on the top of a cylindrical speaker. It’ll probably give you access to most of the controls already present, but now with a UI capable of giving better feedback.
You don’t need to wonder; Apple has said as much: their AI is built on LLMs, just like everybody else’s. While hallucinations are still a major unsolved problem, that doesn’t mean they can’t be reduced in frequency and severity. A ChatGPT-like chatbot is going to hallucinate because you’re asking it to give extremely open-ended responses to literally any query. The more data you feed it in the prompt, and the more you constrain its output, the less likely it is to hallucinate. For that reason, it’ll likely be extremely rare that the grammar check or rephrasing tools in Apple’s AI are affected by hallucinations.

Siri is more comparable to ChatGPT with regard to open-ended questions, but they will likely integrate LLMs primarily for transforming inputs and outputs rather than for the whole process. For example, the LLM could be prompted to call a function based on the user’s query. That function then finds a reliable result, either using existing APIs for real-time information like weather, or using another LLM paired with a search engine. The output of this truth-finding process is then fed back into an LLM to generate the final response. The role of the LLM is heavily constrained at every step of the way, which is known to minimize hallucinations.
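To make the shape of that pipeline concrete, here’s a minimal sketch. No real model is called: `call_llm_router`, `get_weather`, and `web_search` are all hypothetical stand-ins for the constrained routing, truth-finding, and rephrasing steps described above.

```python
# Hypothetical sketch of a constrained LLM pipeline.
# call_llm_router stands in for an LLM that is only allowed to emit
# one of a few known function names (step 1: constrained routing).

def call_llm_router(query: str) -> str:
    """Stand-in router: a real one would be an LLM with constrained output."""
    if "weather" in query.lower():
        return "get_weather"
    return "web_search"

def get_weather(query: str) -> str:
    # A real implementation would hit a weather API for ground-truth data.
    return "72F and sunny"

def web_search(query: str) -> str:
    # A real implementation would use a search engine, possibly plus
    # another LLM to summarize the results.
    return "no results found"

TOOLS = {"get_weather": get_weather, "web_search": web_search}

def answer(query: str) -> str:
    tool_name = call_llm_router(query)   # step 1: constrained routing
    facts = TOOLS[tool_name](query)      # step 2: truth-finding
    # Step 3: a final LLM call would rephrase `facts` for the user; its
    # output is anchored to the retrieved data, minimizing hallucination.
    return f"The answer to '{query}': {facts}"

print(answer("What's the weather today?"))
```

The key point is that the model never answers from its own weights; every step either picks from a fixed set of tools or rewrites data that was fetched from a reliable source.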
You arguing that this is an unsolvable problem is defeatist and not helpful to actually mitigating the real issue.
As a person who has been managing Linux servers for about a decade now, trust me that a few hours or days of learning Docker now will save you weeks if not months in the future. Docker makes managing servers and dealing with updates trivial and predictable. Setting everything up in docker compose makes it easy to recover if something fails, and it’s self-documenting because you can quickly see exactly how your applications are configured and running.
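To show what I mean by self-documenting, here’s a hypothetical minimal `docker-compose.yml` for a single web server; the image, ports, restart policy, and mounted config are all declared in one readable file:

```yaml
# Hypothetical example: one nginx service with its config and content
# bind-mounted, so the entire deployment is described in this file.
services:
  nginx:
    image: nginx:1.27
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./site:/usr/share/nginx/html:ro
```

Recovering on a fresh machine is then just restoring this file and the mounted directories from backup and running `docker compose up -d`.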
Looks basically the same as the previous gen. I guess it’s marginally thinner?
Probably not. Electron is popular not just for its cross-platform support, but also because the skills involved transfer directly from existing web dev.
Oh nice, didn’t know that. Sponsorblock is also available on Safari but it’s a few dollars I think.
It’s great but does it block ads?
I doubt they would replace a smartphone for people unless they were small and comfortable enough that you would want to wear them 24/7. Smartphones succeeded because of convenience: I can check my phone at my desk, in bed, while walking, while pooping. Unless it shrinks down to a pair of glasses I don’t see it happening, and even then input is a whole other problem; touchscreens are insanely intuitive.
All it would take is a simple law to be passed.
I’m all down for weight reduction, like switching to titanium.
Admittedly yeah, this would be super cool, especially if it was as thin as the new iPads. I’ve never considered buying a foldable phone, and part of that is that when folded up they do seem too thick. An iPhone that could fold out to basically be an iPad mini, and fold up completely flat, would be cool.
I have a 14” M3 Max MBP for work and I have to agree, the design is fantastic. The weight and thickness have never been an issue for me, and it’s nearly silent unless I have all cores maxed out for more than a couple of minutes. Battery life is phenomenal too; when traveling, I love that it can make it through an entire day no matter what I throw at it. If they ruin that for me I’ll be so disappointed.
I thought we did this already and came to the conclusion that thinness only matters up to a point, then it’s just removing battery life/functionality/durability chasing a benchmark that nobody actually cares about. Oh well, hopefully they learned their lessons last time and it’s better this go around.
What are you actually trying to do that your laptop webcam isn’t sufficient for? You might want to just consider a USB webcam; they’re cheap and reasonably reliable.
The URL to the IPSW can be found in this commit to VirtualBuddy. It’s likely usable with UTM today, and with VirtualBuddy whenever it gets its next release.
https://github.com/insidegui/VirtualBuddy/commit/120cff9f99aab4fdbb78f7071ce771d26feb2c23
Why this over a much more popular modern language like Rust?