While attackers continue to develop new methods to exploit systems, traditional techniques remain just as effective and just as dangerous. A researcher has shown that Windows’ native AI capabilities can be used as a delivery channel for malware.
The researcher, hxr1, shared the proof of concept (PoC) exclusively with Dark Reading. It shows how the Living-off-the-Land (LOTL) technique, in which attackers use legitimate system tools and software to carry out malicious actions, can be applied to ONNX model files.
ONNX stands for Open Neural Network Exchange. It is a common, portable file format for machine learning models. It stores a model’s computation graph, parameters, and basic metadata so different frameworks and applications can load and run the same model locally.
Because ONNX is a legitimate, supported format used by Windows features and apps, security tools and the operating system treat those files as benign by default. That default trust is what makes this approach attractive to an attacker.
The demonstration showed how attackers can quietly hide malicious code inside these seemingly harmless model files. One approach is to stash a payload in a metadata field, though that is easy to detect because the data sits in plaintext. Another is to split a payload across model components such as nodes, inputs, and outputs, so that the malicious pieces look like ordinary model structure. Attackers could even embed data inside the model weights using steganographic techniques. All of these options rely on a nearby loader that calls standard Windows APIs to extract and reassemble the payload in memory and then execute it.
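To make the split-payload idea concrete, here is a minimal sketch in Python. It uses a plain dictionary to stand in for a model's metadata fields; the field names and chunking scheme are hypothetical illustrations, not details from the PoC, and the "payload" is a harmless placeholder string.

```python
import base64

# Hypothetical illustration: scatter a payload across several innocuous-looking
# metadata fields, then have a "loader" reassemble it. A real attack would hide
# the chunks inside ONNX node names, tensor initializers, or metadata entries.
payload = b"calc.exe --demo"  # benign stand-in for a real payload

def split_into_fields(data: bytes, n_chunks: int) -> dict:
    """Encode the payload and spread it across numbered metadata keys."""
    encoded = base64.b64encode(data).decode()
    size = -(-len(encoded) // n_chunks)  # ceiling division
    return {f"model_author_note_{i}": encoded[i * size:(i + 1) * size]
            for i in range(n_chunks)}

def reassemble(fields: dict) -> bytes:
    """Loader side: gather the chunks in key order and decode them."""
    ordered = [fields[k] for k in sorted(fields)]
    return base64.b64decode("".join(ordered))

fields = split_into_fields(payload, 4)
assert reassemble(fields) == payload  # round-trips back to the original bytes
```

Each individual chunk is a short base64 fragment that looks like ordinary metadata text, which is why defenders need to correlate the fields rather than inspect them one at a time.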
The major advantage of this technique is the stealthiness it offers. The PoC succeeds in part because the Dynamic Link Libraries (DLLs) that operate on ONNX files are signed by Microsoft and built into Windows. The malicious content never appears as a foreign executable on disk. Therefore, it can evade many endpoint detection systems that are tuned to spot suspicious binaries or unusual process behaviour.
The process to deliver the poisoned model to a target is straightforward. Attackers can send a phishing email carrying the ONNX file and its loader, or publish the poisoned model to an open-source model hub, counting on users or developers to run it. Because model hubs are treated as legitimate sources for AI assets, this distribution path exploits that implicit trust.
To address this issue, hxr1 recommends that organizations do the following:
- Adapt security tools to detect threats hidden within AI model files.
- Configure endpoint detection and response (EDR) tools to monitor which processes load AI models, track the data being extracted, and observe where that data is sent.
- Use static analysis tools, such as YARA rules, to identify suspicious strings within model data.
- Implement application controls, such as AppLocker, for additional mitigation.
- Combine these measures to create a comprehensive and more effective detection strategy.
As AI formats and local inference become normal parts of desktop and application workflows, attackers will look for ways to hide in plain sight inside those formats. Security teams need to treat model files and model loading behaviour as potential attack surfaces and add checks for unusual model metadata, unexpected loaders, and anomalous model usage.