What's your Threat Model?
Learning about threat modeling by analyzing the viral "Claude installs Spyware" claim

If you spend any time in vaguely AI-critical circles, chances are you saw the article by "That Privacy Guy" on how Anthropic installs spyware on your PC if you install Claude desktop. I think this article is a great illustration of how not to make an argument, and why threat modeling can be a valuable tool.
Briefly summarized: When you install the Claude desktop app, it automatically provisions a Native Messaging manifest in your browsers. This manifest allows specific browser extensions to call a specific binary on your computer and communicate with it (something that browser extensions normally cannot do). By pre-provisioning this file, it skips a permissions dialog that would normally be shown by the browser the first time such a call is attempted. This binary is (or was?) used to establish a communication link between the Claude browser extension and the Claude or Claude Code app.
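To make this concrete, here is a minimal sketch of what such pre-provisioning amounts to. The host name, binary path, and extension ID below are placeholders of my own invention, not the actual values Anthropic uses; the manifest fields themselves (name, path, type, allowed_origins) are the standard Chrome native messaging format:

```python
import json
from pathlib import Path

# Chrome on macOS looks for native messaging manifests in this directory;
# other browsers and platforms use similar well-known locations.
MANIFEST_DIR = Path.home() / "Library/Application Support/Google/Chrome/NativeMessagingHosts"

manifest = {
    "name": "com.example.claude_bridge",  # hypothetical host name
    "description": "Bridge between browser extension and native app",
    "path": "/Applications/Claude.app/Contents/Helpers/bridge",  # placeholder path
    "type": "stdio",  # messages flow over the binary's stdin/stdout
    # Only extensions whose IDs appear here may launch and talk to the binary.
    "allowed_origins": ["chrome-extension://abcdefghijklmnopabcdefghijklmnop/"],
}

MANIFEST_DIR.mkdir(parents=True, exist_ok=True)
(MANIFEST_DIR / "com.example.claude_bridge.json").write_text(json.dumps(manifest, indent=2))
```

Note that this is just a file write into the user's own profile directory - nothing about it requires the browser's cooperation, which is exactly why no dialog appears.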
What are the implications?
The author of the article, rightly, considers this a violation of consent. In particular, they find it shady that this is done without telling the user, and that these manifests are automatically reinstalled if you delete them. I agree with them on that count. However, the author also makes some additional claims about the security implications of this, and it is these that I want to discuss today.
In security, we talk about "Threat Models". This means, basically: given specific capabilities, what is an attacker able to do? So, when we leave aside the issues of consent and purely consider the security impact of this, what does this actually mean? Where does it make us less secure? I mean, if the author goes so far as to call this "installing spyware", it must be quite horrible, right? So, let's do some recreational threat modeling to find out.
Threat modeling the bridge
The core question for security is: What does pre-installing this bridge actually change, in the real world? To answer that, we can consider two separate scenarios. Note that in threat modeling, you sometimes need to make assumptions. Let me state my assumptions clearly:
I assume that the messaging bridge is required for correct operation of the Chrome extension - its job is to facilitate communication between the extension and the native app.
I assume that your PC is otherwise free of malware and other undesired software, and only you have access to it. (We will talk about what relaxing this assumption means further down as well)
That's pretty much all the assumptions we need for the moment.
Scenario 1: Claude is installed, extension is not installed
In this scenario, the Claude app has already pre-provisioned the consent for the Native Messaging connection, but there is nothing in the browser that does anything with it. Your attack surface is unchanged until the extension gets installed.
Dismissed threat: other extensions using the bridge
You might ask: what if another extension tried to use the bridge? The answer is: permission to use the bridge is gated on the extension ID. The manifest hardcodes three extension IDs, all under Anthropic's control. Other extensions cannot impersonate these IDs, because an extension's ID is derived from its signing key, which only Anthropic has access to.
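For the curious, the derivation Chrome uses is easy to reproduce. A sketch (the key you pass in would be the extension's actual DER-encoded public key; everything else is standard Chrome behavior):

```python
import hashlib

def chrome_extension_id(public_key_der: bytes) -> str:
    """Derive a Chrome extension ID from the extension's public signing key.

    Chrome hashes the DER-encoded public key with SHA-256, takes the first
    128 bits, and writes them in hex using the letters a-p instead of 0-f.
    """
    digest = hashlib.sha256(public_key_der).hexdigest()[:32]
    return digest.translate(str.maketrans("0123456789abcdef", "abcdefghijklmnop"))
```

This is why gating on the ID is sound: producing an extension with one of the allowlisted IDs would require a public key that hashes to the same value, which in practice means having Anthropic's signing key.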
Scenario 2: Claude is installed, the extension is also installed
In this scenario, the user has consciously chosen to install the extension because they want to use it. In threat modeling, when assessing how severe a risk is, I like to consider a counterfactual world in which only one thing is different, to check what impact it has on the end result. In this case, let's assume the Claude app did not pre-provision the manifest file while everything else remains the same. This would mean that the user would install the extension, the extension would show a page that basically says "hey, your browser is about to ask you to allow something. We need this permission to work, please say yes." It would then trigger the browser to ask the user to allow installation of the native messaging manifest, the user would click "yes", and we would be at the exact same place we are now.
Now, would this be the better way of doing things? From a consent perspective, yes. Asking the user for consent is the right thing to do, which is why the browser requires this. Circumventing this is a shady thing to do. But looking at this through the lens of security? I'd argue it doesn't matter. Do you seriously think someone would go to the trouble of installing Claude and finding the extension on the store, only to then say "oh no, my browser is asking me to allow a permission, I think this is too dangerous, I'd rather stop"? Keep in mind that the extension will also have to ask for permissions like "read the content of any website you are viewing", which is a lot scarier than whatever the native messaging thing might be.
So, I'd argue: there isn't actually a significant difference in your vulnerability compared to not provisioning the manifest in advance.
Scenario 3: Relaxing the second assumption
Now, earlier, I said I would assume that you are the only person who has access to your computer, and that there is no malware or other backdoor installed. For the sake of argument, let's relax this assumption. Let's say someone else has the ability to install the extension on your machine, either because they know your password, or because they have installed a trojan on your machine. Let's further assume that this person was interested in installing the Claude extension for whatever reason. What changes because of the pre-provisioned manifest file?
The answer is: nothing. If I have physical access to your machine, I can click the "OK, allow this native messaging thing" button like I clicked the other buttons I needed to click to install the extension. And if I have a trojan on your machine that allows me to install an extension, it would also allow me to drop the manifest file in the right location.
Additionally, what would I even gain by this? The manifest gives special access to Anthropic, not to me. And also: I already have access! I don't need to install the addon to do evil things, I can just... do the evil thing directly.
The basic question: Who do you trust?
In threat modeling, the most important question is: who do you trust, and who do you want to defend against? And here we come to the central point where the argument from the article falls apart for me.
Once again, we have two scenarios: Either you trust Anthropic, or you don't.
If you trust Anthropic, you have nothing to fear. The Claude app and extension will do what you want, and only what you want, so you might as well give consent.
If you do not trust Anthropic, why on earth are you putting their software on your machine and allowing their browser addon to read the contents of all websites? In this situation, Anthropic already has practically unlimited code execution on your machine (through the Claude app) and access to your browser's contents (through the extension). They don't need a bridge to do evil things with that, they can just... do the evil thing directly!
Now, we will soon see that this, too, is a bit of an oversimplification, but it serves as a starting intuition for threat modeling: decide who to trust, and then be consistent in that. In practical threat modeling, this is frequently more difficult and nuanced - you might say "I don't trust my developers completely, but I trust that no two of them will collude, and so as long as all code has been reviewed by another developer, I consider it impractical to insert backdoors into it".
On the whole "Supply Chain" argument
The article also makes the argument that "if Anthropic's Chrome store gets compromised, this access is already there, waiting for the backdoored version of the extension." I find this just as unconvincing. If you do not have the extension installed, a compromise of the extension does not matter for you. If you have the extension installed, the malware does not need the bridge to do evil things, it can just (say it with me) do the evil things directly.
The one thing where the bridge would make a difference would be if it granted the extension permission to act on your machine from inside the browser. But, once again: if I legitimately installed the extension, I would have given that access anyway, so nothing changes.
What's up with that bridge, anyway?
A question that the author of the piece did not really answer is: what is the actual functionality of the bridge as it stands today? The way they describe it, it seems like it's quite a powerful binary that gives wide-ranging access to the machine it is running on. In actual fact, it is a tiny binary that basically serves as a pipe between Claude and the extension. The extension acts as an MCP server that is delivered through a socket (a special file on the machine that allows passing data between processes) - the Claude app connects to one end of the socket, the browser extension to the other, and the helper binary itself just passes the data through. It does not expose any tools for accessing the file system, network, or anything else aside from this one socket.
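A pass-through like this is almost trivially small. The sketch below shows the general shape - the socket path is made up, and whether the real binary connects to the socket or listens on it is an implementation detail I haven't verified:

```python
import socket
import sys
import threading

SOCKET_PATH = "/tmp/claude-bridge.sock"  # hypothetical path, for illustration

def socket_to_browser(sock: socket.socket) -> None:
    # Forward bytes from the socket to the browser until the stream closes.
    while True:
        data = sock.recv(65536)
        if not data:
            break
        sys.stdout.buffer.write(data)
        sys.stdout.buffer.flush()

def browser_to_socket(sock: socket.socket) -> None:
    # Chrome's native messaging frames (a 4-byte length prefix followed by
    # UTF-8 JSON) pass through unmodified - the bridge never parses them.
    while True:
        data = sys.stdin.buffer.read1(65536)
        if not data:
            break
        sock.sendall(data)

def main() -> None:
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(SOCKET_PATH)
    threading.Thread(target=socket_to_browser, args=(sock,), daemon=True).start()
    browser_to_socket(sock)

if __name__ == "__main__":
    main()
```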
So, as you can see - it's completely harmless. Right?
Wrong. As it turns out, this is the actual Achilles' heel of the whole setup that you should be worried about.
What bad threat modeling hides
This part is the reason I actually wanted to write this article - because it's not just the author of the piece discussed above who does not threat model properly, it is also Anthropic themselves.
Again, let's threat model this. In threat modeling, I like to look at how data and instructions flow, and how the different sides of that communication can actually be sure that they are talking to the correct system. So, how is this communication channel secured?
The answer is: through file system permissions on the socket file. Only the current user of the device can access the socket, and thus only they can communicate with it. So, problem solved?
Well, what's your threat model? If you assume that only benign applications are running on your machine under your own user account, then yes. (In threat modeling, we call this a "trust boundary" - a border between a "less trustworthy" and "more trustworthy" environment). If we relax that assumption, however, things start to get interesting.
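You can see this trust boundary directly in the filesystem. A quick check (socket path hypothetical, as before):

```python
import os
import stat

SOCKET_PATH = "/tmp/claude-bridge.sock"  # hypothetical path

st = os.stat(SOCKET_PATH)
assert stat.S_ISSOCK(st.st_mode)  # it really is a Unix domain socket

# If the mode is 0o600, only processes running under the owning user
# account can connect - that user account *is* the trust boundary.
print(f"owner uid: {st.st_uid}, permissions: {oct(stat.S_IMODE(st.st_mode))}")
```

Everything running as you - your shell, your package manager's post-install scripts, that little menu bar utility - sits on the trusted side of that boundary.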
Actually weaponizing the bridge
Matt Hand at Origin has a writeup on their blog analyzing this attack vector. They show that if the extension is installed and you have file system access to the socket file, you can actually remote control the browser extension and thus the browser itself. Funnily enough, Anthropic closed the security report, basically saying what I was saying above: you need local code execution capabilities to exploit this, and we choose not to try to defend against such a situation, as the attacker is already in an extremely powerful position to begin with. This sometimes gets called "no exploit from the heavens" - if the precondition of the attack is that the attacker is as powerful as the effect of the attack itself, it should not count as a vulnerability.
I actually believe that here, this argument gets stretched beyond its limits. Because yes, an attacker can do this directly. But accessing the contents of the browser is a real slog, with encrypted cookie files, TLS encryption on the transmitted data, and so on. And messing with that is quite a "loud" operation, which a decent antivirus might flag - applications that aren't Chrome don't usually try to read or write Chrome's cache, files, and configuration, places that a good AV will keep an eye on. But no AV thinks that some socket in /tmp will grant privileged access to the contents of the browser, so no AV will be looking for that. If it is Anthropic software on both sides of that bridge, that's not an issue, because Anthropic can indeed just do the evil thing directly in the browser. With a third-party application, that is no longer the case. In security, we call this a "capability uplift" - you turn basic code execution and file system access into stealthy browser control and data exfiltration, bypassing most AV controls.
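To illustrate the uplift: any process on the trusted side of the boundary can do something like the following. The socket path and the exact framing are assumptions on my part; per the Origin writeup, what travels over the socket are MCP messages, which are JSON-RPC 2.0 underneath ("tools/list" is a standard MCP method):

```python
import json
import socket
import struct

SOCKET_PATH = "/tmp/claude-bridge.sock"  # hypothetical path

# Connecting requires nothing but the user's own file permissions -
# no password, no browser dialog, no Anthropic credentials.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(SOCKET_PATH)

# Ask the extension which tools (i.e. browser capabilities) it exposes.
# The length-prefix framing mirrors native messaging and is an assumption
# about the socket-side protocol.
request = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}).encode()
sock.sendall(struct.pack("<I", len(request)) + request)
```

From an AV's perspective, this is just one local process talking to a socket - nothing here ever touches Chrome's files.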
And this brings us back to the "oversimplification" I mentioned above. You might trust Anthropic - but do you trust every single developer who wrote every single small utility you installed on your machine? Because they can now potentially stealthily remote control your browser - anything you can do in your browser, the attacker can do as you, with your active logins, and without triggering any of the alarms that go off when malware reaches into the cookie jar. I for one would not be so thrilled to learn that the small utility I installed to keep my Mac awake while plugged into an external monitor also has full access to the entire content of my browser because Anthropic put an unauthenticated pipe in a semi-publicly accessible place. And while App Store applications are sandboxed and cannot access the socket, most non-App Store applications on macOS have file system access by default...
So, in threat modeling, be sure that you actually enumerate all relevant actors, because if you miss one, you might get into trouble from an unexpected direction.
Now, to be fair, there are some mitigations. The browser extension explicitly asks, every time it is called, whether you want to let your browser be remote controlled on this specific page, so the stealthy access is limited to pages on the allowlist. And while, as Matt Hand points out in their post, this allowlist can be overwritten by malware to allow the access everywhere, we are once again in "a good AV should flag such a thing" territory. This is an example of "defense in depth" - you stack multiple defenses, so that if one of them fails, another might prevent disaster. And this is what is otherwise lacking for this communication channel - there is no mutual authentication that proves the connecting party should be granted access to the extension, just filesystem permissions.
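For contrast, here is a sketch of what one extra layer could look like. On Linux, a Unix socket server can ask the kernel for the connecting peer's identity via SO_PEERCRED; nothing suggests the real bridge does this, it is purely illustrative:

```python
import os
import socket
import struct

def peer_pid(conn: socket.socket) -> int:
    # SO_PEERCRED (Linux) returns the pid/uid/gid of the connecting process.
    creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    pid, _uid, _gid = struct.unpack("3i", creds)
    return pid

def peer_executable(pid: int) -> str:
    # Resolve which binary is on the other end of the connection.
    return os.readlink(f"/proc/{pid}/exe")

# A server could then reject connections from anything that is not the
# expected Claude binary, instead of trusting file permissions alone.
```

It is not bulletproof (pid reuse, racy path checks), but it shows the kind of second layer that defense in depth asks for.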
Conclusion
Personally, I think that the writeup by Matt Hand / Origin is the much more impactful research, and it appears to have gotten none of the attention that the "That Privacy Guy" article got. That makes me sad, because I think that making people aware of what kind of access they are giving not just Anthropic, but everyone else on their machine, would be quite useful. However, if this kind of level-headed, technical reporting gets drowned out by, quite frankly, overblown and hysterical claims about spyware and backdoors that don't hold up to scrutiny, everyone loses.
Yes, Anthropic should not pre-provision these pipelines. But that has less to do with the fact that they allow Anthropic to spy on you, and more with common decency. What the discussion hides is that this channel actually gives control of your browser to pretty much any application on your machine, regardless of whether you had to give explicit consent or not. Bad threat modeling can land near a good answer, but for the wrong reason, and if you stop searching there, you might miss the more important or impactful result.
Also, interestingly enough: as of today, the local connection seems to only be used as a fallback - it seems like the primary method of connection between the browser and Claude will go via a WebSocket relay hosted by Anthropic, with the method actually in use controlled by a server-side feature flag. I haven't had the time to dig into that (in particular the issues of authentication, and whether the relay can actually see all the traffic in plain text or if there is some sort of application-level encryption on that connection). But by now, you should know enough to try and threat model this yourself ;).
Acknowledgements and further reading
I would like to thank Claude Opus 4.7 for its help with this article. I wrote every word myself, but Claude was quite helpful in researching the technical aspects, clearing up my thinking, challenging my ideas, and highlighting where I was doing something implicitly that I should promote from subtext to text.
If you want to know more about threat modeling, the single best resource I can point at is "Threat Modeling: Fast, Cheap and Good" by Adam Shostack. It's a free whitepaper that contains lots of ideas on how to do simple, quick threat modeling sessions, which can be as easy as asking "what could possibly go wrong?" and discussing that question for a minute or two before starting implementation on a story. You can then graduate to more advanced techniques if and when you want to. It won't replace a security professional, but it'll make their job easier.
And, once again, I encourage you to read and share the writeup by Matt Hand / Origin.