The lies you are able to generate will likely never outweigh all of the accurate data other people create and definitely won’t remove it, just add some noise.
It doesn’t even have to be your friends. It could just be you walking by in the background of a photo someone else took.
They have forked it though? That’s why almost all the other Chromium-based ones don’t have this enabled by default or completely disable it (even if you tried to turn it on).
If you’re talking about forking the entire project and using it as a base that diverges from what Google does, I don’t think that’s going to happen. Not even Microsoft, with their billions, wanted to keep maintaining a totally separate engine, and I don’t see the other Chromium-based browsers redirecting effort from useful things like better UIs, privacy enhancements, etc. into just keeping feature/performance parity.
It’s in Chromium, the other browsers have just disabled/patched it out: https://chromium.googlesource.com/chromium/src/+/refs/heads/main/components/privacy_sandbox/privacy_sandbox_prefs.h
// Un-synced boolean pref indicating if Topics API is enabled.
inline constexpr char kPrivacySandboxM1TopicsEnabled[] = "privacy_sandbox.m1.topics_enabled";
e.g. Vivaldi:
https://vivaldi.com/blog/technology/heads-up-googles-going-off-topics-again/
For this feature though they’ve tried to select the topics to be ones that “[do] not include sensitive categories (i.e. race, sexual orientation, religion, etc.)”. The list is also public and gambling is not on it:
https://github.com/patcg-individual-drafts/topics/blob/main/taxonomy_v2.md
While this won’t satisfy those who want no individualized ads or no ads at all, it would be an improvement over what we have now and put control over what topics are used (or even if it’s enabled at all) in the local browser instead of some server online.
Isn’t this client-side solution for analyzing the history and coming up with ad topics for sites better in your scenario than the server-side solutions currently in use though? A government would have a much harder time trying to get access to the data when it’s on each individual’s device, rather than a profile created through an online ad service.
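For what it’s worth, the site-facing side of this is a single browser call. Here’s a rough TypeScript sketch of how a page would ask the browser for topics; the method name comes from the public explainer, and the exact return shape here is an assumption on my part:

```typescript
// Sketch of a site querying the Topics API. The browser picks the topics
// locally from recent browsing history; the page only ever sees the coarse
// topic IDs the browser decides to hand back.

type TopicsEntry = { topic: number; taxonomyVersion: string }; // assumed shape

// The API isn't in the standard DOM typings yet, so widen Document locally.
type TopicsDocument = Document & {
  browsingTopics?: () => Promise<TopicsEntry[]>;
};

async function getAdTopics(): Promise<number[]> {
  const doc = document as TopicsDocument;
  if (!doc.browsingTopics) {
    // Feature disabled, patched out (e.g. Vivaldi), or browser doesn't support it.
    return [];
  }
  const entries = await doc.browsingTopics();
  return entries.map((e) => e.topic); // IDs into the public taxonomy
}
```

The point being that nothing in that call sends the history anywhere; the only thing a server ever receives is whatever the page does with those coarse IDs afterwards.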
A most favored nation clause, but for security. Great… just what we needed.
That’s one of my main problems with Microsoft at this point. They can make improvements to the underlying technologies (WSL, better security sandboxing, FDE by default on supported hardware, etc.) and develop actually decent software (Edge), but then they keep doing things to piss off users: forced online account logins, the mess they made of default app selection going from 10 to 11, pre-installed junk, and now this. They just need to get out of their own way and focus on making decent products: ones people want to use, instead of ones they’re coerced to use.
I don’t really fault them for getting their filtering/blocking systems set up and tested ahead of time, before they are liable, considering the estimated cost of $329.2 million per year between Google and Meta:
There’s no set size but there needs to be an imbalance of power:
https://www.parl.ca/DocumentViewer/en/44-1/bill/C-18/royal-assent
Application 6 This Act applies in respect of a digital news intermediary if, having regard to the following factors, there is a significant bargaining power imbalance between its operator and news businesses:
(a) the size of the intermediary or the operator;
(b) whether the market for the intermediary gives the operator a strategic advantage over news businesses; and
(c) whether the intermediary occupies a prominent market position.
From the text of the bill: https://www.parl.ca/DocumentViewer/en/44-1/bill/C-18/royal-assent
Making available of news content (2) For the purposes of this Act, news content is made available if
(a) the news content, or any portion of it, is reproduced; or
(b) access to the news content, or any portion of it, is facilitated by any means, including an index, aggregation or ranking of news content.
(b) sounds like just linking or indexing it would count as making it available, and thus require payment.
That seems to be backed up by at least a couple of the news sites: https://www.theglobeandmail.com/politics/article-bill-c18-online-news-law-explained/
What is Bill C-18? Bill C-18 is legislation that would force tech companies such as Google and Meta to negotiate compensation deals with news organizations for posting or linking to their work.
https://ottawacitizen.com/news/politics/online-streaming-news-bills-whats-next
At the Heritage committee, Liberal MPs resisted efforts from the Conservatives to take links out of the bill […]
Like how nuclear power plants are just steam engines with a different heat source?
The least they could do is bring back the Office of Technology Assessment to help them understand things:
https://en.wikipedia.org/wiki/Office_of_Technology_Assessment
They have the code for their open-source implementation of security keys here:
https://github.com/google/OpenSK
Their actual announcement post is here:
https://security.googleblog.com/2023/08/toward-quantum-resilient-security-keys.html
TechDirt wrote an article titled “Do People Want A Better Facebook, Or A Dead Facebook?” back in 2019. I feel like that tells you that a fair number of people won’t be happy with Facebook existing at all, even if the position it’s taking is one they would normally agree with (i.e. not having to pay to link to something). Sadly, I think you may be right that some might take the Pyrrhic victory, even at the cost of linking on the web.
If it’s enforced server-side, then there’s still an initial connection that is unsecured and can potentially be intercepted/modified before it gets to the redirect from 80 to 443.
I’m not sure which thing you’re referring to.
If it’s between http and https, the s stands for secure and the connection to the server is authenticated and encrypted.
It does if you just type in something like wikipedia.org. The most recent change they’re working on is so that a link on a page to http://wikipedia.org gets redirected to https://wikipedia.org if the site supports it.
This will fix a bunch of old links that are still floating around on various sites, forums, etc. and keep people on https, instead of the https -> http -> https redirect bouncing that can happen now.
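To illustrate the idea (this is not Chromium’s actual implementation, just a sketch; the helper and the hard-coded host list are made up), the upgrade amounts to rewriting the scheme before the request is ever made:

```typescript
// Sketch of the upgrade logic: rewrite plain http:// links to https:// when
// the destination is known to support it, so the browser never makes the
// insecure request or bounces through an 80 -> 443 redirect.

// Hypothetical stand-in: a real browser would use something like the HSTS
// preload list or a record of prior successful https:// loads.
function knownToSupportHttps(hostname: string): boolean {
  return new Set(["wikipedia.org", "www.wikipedia.org"]).has(hostname);
}

function upgradeLink(rawUrl: string): string {
  const url = new URL(rawUrl);
  if (url.protocol === "http:" && knownToSupportHttps(url.hostname)) {
    url.protocol = "https:";
  }
  return url.toString();
}

console.log(upgradeLink("http://wikipedia.org/wiki/HTTPS"));
// -> https://wikipedia.org/wiki/HTTPS
```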
Additionally, it seems to me like you could bind the app that needs to use the VPN to the VPN adapter/interface specifically, preventing it from going out the wrong route.
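As a rough sketch of what I mean (Node-style TypeScript; the 10.8.0.2 address is just a placeholder for whatever the VPN adapter is actually assigned): bind the socket’s local address to the VPN interface, so the connection either goes out over the VPN or fails outright instead of leaking.

```typescript
// Bind the outgoing connection to the VPN adapter's local address. If the VPN
// is down, that address no longer exists and the connect fails (EADDRNOTAVAIL)
// rather than silently taking another route.
import * as net from "node:net";

const VPN_LOCAL_ADDRESS = "10.8.0.2"; // placeholder: the VPN interface's IP

const socket = net.connect(
  { host: "example.com", port: 443, localAddress: VPN_LOCAL_ADDRESS },
  () => {
    console.log("connected from", socket.localAddress);
    socket.end();
  }
);

socket.on("error", (err) => {
  console.error("refusing to connect outside the VPN:", err.message);
});
```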
Amassing a huge dataset to search through, with all the metadata (usernames, names, etc.), is the part that an individual would probably have trouble doing, not the actual “is this a photo of the person in this other photo” part.
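To put it another way: once you have face embeddings from any off-the-shelf model, the matching step is a toy nearest-neighbour lookup like the sketch below; the labelled, metadata-rich dataset to look up against is the part an individual can’t easily build.

```typescript
// Toy sketch: comparing face embeddings is just cosine similarity plus a scan.
// The hard part is having millions of labelled embeddings to scan in the first place.
type LabelledFace = { name: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function bestMatch(query: number[], dataset: LabelledFace[]): LabelledFace | undefined {
  let best: LabelledFace | undefined;
  let bestScore = -Infinity;
  for (const face of dataset) {
    const score = cosineSimilarity(query, face.embedding);
    if (score > bestScore) {
      bestScore = score;
      best = face;
    }
  }
  return best;
}
```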