- cross-posted to:
- linkedinlunatics@sh.itjust.works
Microsoft is running one of the largest corporate espionage operations in modern history. Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn’s servers and to third-party companies including an American-Israeli cybersecurity firm.


First comment from the link:
That is very different from “searches their computer for installed software”
Well, I guess it’s technically installed software… but the scope is significantly less than what’s implied from the headline. My immediate reaction was, “how?”
This is basically standard browser fingerprinting, which is why it’s sold for surveillance purposes. LinkedIn is Big Brother.
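One common way sites probe for extensions is to try loading “web-accessible resources” that some extensions expose and see which requests succeed. A minimal sketch of that idea, with the probe URLs and the loader abstracted away (both hypothetical, not taken from the article):

```typescript
// Probe a map of extension name -> resource URL. canLoad stands in for a
// browser-side check (e.g. trying to fetch the URL or load it as an image).
async function detectExtensions(
  probes: Record<string, string>,
  canLoad: (url: string) => Promise<boolean>,
): Promise<string[]> {
  const found: string[] = [];
  for (const [name, url] of Object.entries(probes)) {
    if (await canLoad(url)) found.push(name);
  }
  return found;
}
```

The browser never “tells” the site what is installed; the site infers it from which probes load, which is why this counts as fingerprinting rather than a direct API.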
Yeah, the description is misleading, because anyone reading it is thinking of desktop software. But… hear me out. I know that all the surveillance-capitalism companies do this, but in this case it is literally pairing what is mostly corporate IT policy data (browser, hardware, OS, and extensions) with employee name, title, and employer. That does technically fit the definition of corporate espionage, and I am always open to getting more people (especially people with some levers in government) onto “our side” of the Internet privacy conflict.
Still don’t really understand why browsers expose this data to sites.
Web browsers are just such a massive security hole.
On the contrary, websites are incredibly sandboxed. It’s damn near impossible to find out anything about the computer. Off the top of my head: Want to know where the file lives that the user just picked? Sure, it’s C:\fakepath\filename. Wanna check the color of a link to see if the user has visited the site before? No need to check. The answer will be ‘false’. Always.
Here’s the information a web server needs to deliver content to a browser:
Everything else is a fucking security hole. There’s no good reason for servers to know what extensions you have installed, what OS you’re running, the dimensions of your browser window, where your mouse cursor is positioned, or any one of a thousand other data points that browsers freely hand over.
There are absolutely reasons. Firefox does a reasonable job of anti-fingerprinting, and it’s a fine line to walk: disabling as many of those indicators as possible without breaking sites.
Browsers do give away too much, but at least Firefox is working on it. And it’s not exactly straightforward.
I use waterfox with all of the privacy and security settings enabled to the max, plus a few extensions like ublock origin, decentraleyes, consent-o-matic, and clearurls.
Not that many sites break. And the ones that do, I don’t visit. If you don’t need to offer an https option, or you don’t work without trackers, I don’t need to go to your site. Simple as that.
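For readers who want the same kind of hardened setup, a few of the relevant Firefox/Waterfox about:config prefs look like this. The pref names below are real Firefox prefs; whether they match this commenter’s exact configuration is an assumption:

```
privacy.resistFingerprinting = true
privacy.resist.letterboxing = true
privacy.trackingprotection.enabled = true
webgl.disabled = true
```

The first two are the big ones for fingerprinting specifically; the rest trade some site compatibility for less data exposure.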
The browser can never know what information is needed for a certain use case. So it needs to be permissive in order to not break valid uses.
For instance, your list does not include the things a user clicks on the website. But that’s exactly the info I needed to log recently. A user was complaining that dropdowns would close automatically. We quickly reached the assumption that something was sending two click events. In order to prove that, I started logging the users’ clicks. If there were two in the same millisecond, then it’s definitely not a bug but a hardware (or driver or OS or whatever) issue.
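The duplicate-click check described above reduces to a tiny piece of logic: record a timestamp per click and flag any two that land within the same millisecond. A sketch (the browser-side wiring in the comment is hypothetical, not from the original story):

```typescript
// Returns true if any two recorded click timestamps (in ms) fall within
// thresholdMs of each other -- i.e. a suspected doubled event.
function hasDuplicateClicks(timestampsMs: number[], thresholdMs = 1): boolean {
  const sorted = [...timestampsMs].sort((a, b) => a - b);
  for (let i = 1; i < sorted.length; i++) {
    if (sorted[i] - sorted[i - 1] < thresholdMs) return true;
  }
  return false;
}

// Browser-side you would feed it values collected in a click handler, e.g.:
// document.addEventListener("click", () => timestamps.push(performance.now()));
```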
Bug fixing is not a reason to enable massive privacy violations.
If the site doesn’t know the window width, it can’t react to mobile vs. desktop users automatically, or scale elements / change the layout to best fit your display.
You need mouse input for hover effects as well.
That can all be done 100% client side. The server does not need this information.
If you can do it client side, you can send it to a server…
The difference is intent.
Yes, because web browsers, under current web architecture, allow this.
This is entirely my point.
They will always allow it as long as you have javascript or any other code.
How would they prevent it? If they allow your app to read a value client side, it can do whatever it wants with it, including sending it.
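The point above is mechanical: once script can read a value, shipping it off is one serialization plus one request. A sketch, where the endpoint and field names are made up for illustration:

```typescript
// Anything readable client-side can be packaged into a request body.
function buildTelemetryPayload(signals: Record<string, unknown>): string {
  return JSON.stringify({ signals });
}

// Browser-side, exfiltration is then a single call (hypothetical endpoint):
// fetch("/collect", {
//   method: "POST",
//   body: buildTelemetryPayload({ width: window.innerWidth }),
// });
```

There is no API boundary between “read for layout” and “read for upload”; the only difference, as the thread says, is intent.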
If your app needs to present different behavior based on user settings, it needs to read it.
They allow this because they are being developed to allow this.
A browser for a Web-like system without such functionality (like Gemini) can be written in two days, or in a week if you don’t hurry.
Or it would at least take about as long as Mosaic or Arena took to become usable.
Enormous resources are being invested into continued development of a platform where users provide valuable feedback.
By the way, ML is long past the point where that data could even be interpreted ambiguously. Those who have the data know exactly who you are and probably some useful traits of what you are thinking the moment you are typing a comment at any big website.
Ah, I read it as “the browser doesn’t need that data”. I’d say it needs the width (maybe the height), but that’s it.
But the info talked about in the OP is collected via the client sending the data to a server, not the server just getting it automatically.
False. Browsers can announce themselves as desktop or mobile, or even advertise pre-determined fake window and screen sizes for this purpose (in Firefox it’s called “letterboxing” in the hidden settings). There is no need for a server to have any of this information anyway: either the design of the webpage should be responsive by default, or the server can send specifically whichever style files the browser asks for, perhaps falling back to an “all.css” or something.
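The letterboxing idea is simple to illustrate: report the viewport rounded down to coarse steps, so exact window dimensions stop being a usable identifier. The 200×100 step sizes below are an assumption for the sketch, not Firefox’s exact values:

```typescript
// Round the real viewport down to coarse steps before exposing it to pages,
// so many differently-sized windows report the same dimensions.
function letterboxedSize(
  width: number,
  height: number,
  stepW = 200,
  stepH = 100,
): [number, number] {
  return [Math.floor(width / stepW) * stepW, Math.floor(height / stepH) * stepH];
}
```

A responsive layout only needs this coarse size, which is why the feature can work without breaking most pages.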
WTF is this article? Browser extensions are standard browser fingerprinting data.
Gonna have to agree here. Article headline is rage bait
DuckDuckGo my friends
DuckDuckGo is still a Chromium browser. Firefox, buddies, Firefox.
That sounds… normal? And maybe even sensible, especially if LinkedIn does SSR, since that could let the servers know how to tailor the content to the specific browser requesting a page.
That might have been a sensible argument 20 years ago. Mozilla has spent the last 5 or so years slowly stripping most of that out for “anti-fingerprinting” without breaking website layout.
I have been doing web development pretty much since the web was created.
“Sniffing your browser extensions is normal to be able to render the page correctly” is not and was never a sensible argument. 20 years ago, neither Chrome nor the iPhone existed yet. Most people browsed the web on computers, and “works best in Internet Explorer” was widespread. Web developers were lazy and many of them literally only tested their sites in IE on Windows. Browser extensions themselves were much more of a niche thing since IE didn’t support them.
I will have to yield to your experience then. I mainly thought of it as a naive type of sensible argument, given people were not all that concerned about tracking and particularly browser fingerprinting. I guess back then, the main thing was web developers who used flash needed to check for it. But those people were anti-open web back then and deserved to be ignored by the browser makers.
I am guessing you were strongly in the open web camp back then. I am glad we sort of won that particular battle, even if we lost so many others.
Yeah, you’re right that you needed to check for Flash if your site used it. But at the risk of sounding overly pedantic: Flash wasn’t a browser extension either; it was a plugin, which, though named similarly, was a completely different mechanism implementation-wise. Browser plugins aren’t really supported anymore in 2026, due to them having essentially unrestricted access to the host machine.
In what fucking world is it “normal” or “sensible” to scan your browser extensions to decide how to render a page? Please explain.
I’ve been doing web development for 30 years (since the time when “SSR” was just called “building a web app”) and I have not once ever had the desire or need to do this.
I can only think of reasons that are meant to block you based on what you are using to augment your browsing experience.
The reason is fingerprinting. Verrrry old technique. Adtech stuff. You might collect browser extensions, WebGL information, CPU core count, screen resolution, IP address, reverse DNS, locale, headers, user agent, Akamai hash, etc. The reason is that these metrics can then be enriched to build a consumer profile and used in analytics.
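Mechanically, those signals become one stable identifier by canonicalizing the collected key/value pairs and hashing them. A sketch (FNV-1a chosen arbitrarily here; real adtech stacks use their own schemes):

```typescript
// 32-bit FNV-1a hash of a string, returned as hex.
function fnv1a(s: string): string {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

// Sort keys so the same signals always hash the same way, regardless of
// collection order, then hash the canonical form.
function fingerprintHash(components: Record<string, string>): string {
  const canonical = Object.keys(components)
    .sort()
    .map((k) => `${k}=${components[k]}`)
    .join(";");
  return fnv1a(canonical);
}

// Browser-side you would feed it e.g. navigator.userAgent, screen.width,
// navigator.hardwareConcurrency, the WebGL renderer string, and so on.
```

The hash itself is cheap; the value is in joining it against other records, which is the “enrichment” step mentioned above.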
Thanks, I worked in adtech for a number of years so I’m aware of this use case. I could tell some stories that would likely surprise you at how sophisticated that industry has been for a long time, even as long as 10-15 years ago.
But the parent post specifically said this was “sensible” and maybe “normal” to do this to decide how to render a page. My question was specifically how that claim makes sense at all.