Unleashing the Unseen: Harnessing the Power of Cobalt Strike Profiles for EDR Evasion
In this blog post, we will go through the importance of each profile option and explore the differences between the default and customized Malleable C2 profiles used in the Cobalt Strike framework. In doing so, we demonstrate how the Malleable C2 profile lends versatility to Cobalt Strike. We also take a step further by improving existing open-source profiles to make Red Team engagements more OPSEC-safe. All the scripts and the final profiles used for the bypasses are published in our GitHub repository. The article assumes that you are familiar with the fundamentals of Malleable C2 and is meant to serve as a guide for developing and improving Malleable C2 profiles. The profile found at https://github.com/xx0hcd/Malleable-C2-Profiles/blob/master/normal/amazon_events.profile is used as a reference, Cobalt Strike 4.8 was used during the test cases, and our project code is used for the shellcode injection. The existing profiles are good enough to bypass most antivirus products as well as EDR solutions; however, more improvements can be made to produce an OPSEC-safe profile and to bypass some of the most popular YARA rules.

Bypassing memory scanners

Recent versions of Cobalt Strike make it easy for operators to bypass memory scanners like BeaconEye and Hunt-Sleeping-Beacons. The following option makes this bypass possible:

set sleep_mask "true";

With this option enabled, Cobalt Strike XORs the heap and every image section of its beacon prior to sleeping, leaving no string or data unprotected in the beacon's memory. As a result, neither of the mentioned tools raises a detection, and BeaconEye fails to find the process hosting the sleeping beacon.

While this bypasses the memory scanners, cross-referencing the memory regions still leads straight to the beacon payload in memory. Since the beacon is where the API call originated, execution will return there once the WaitForSingleObjectEx call finishes. A return reference to a memory address rather than an exported function is a red flag, and both automated tooling and manual analysis can detect it.

It is highly recommended to enable stack spoofing using the Artifact Kit in order to remove this IOC. The option is worth enabling even though it is not part of the Malleable C2 profile: the spoofing mechanism is enabled by setting the fifth argument of the Artifact Kit build script to true. During compilation, a .CNA file is generated that has to be imported into Cobalt Strike; once imported, the changes are applied to newly generated payloads.

Analyzing the beacon again, the difference is very noticeable: the thread stacks are spoofed, leaving no trace of memory address references. It should also be mentioned that Cobalt Strike added stack spoofing to the Arsenal Kit in June 2021. However, that call stack spoofing only applies to EXE/DLL artifacts created with the Artifact Kit, not to beacons injected via shellcode in an injected thread, so on its own it is unlikely to be effective at obscuring the beacon in memory.

Bypassing static signatures

It is time to test how well the beacon performs against static signature scanners. Enabling the following feature removes most of the strings stored in the beacon's heap:

set obfuscate "true";

Once the profile is applied to Cobalt Strike, generate raw shellcode and place it in the shellcode loader's code.
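For reference, both options discussed so far are typically set in the profile's stage block. A minimal sketch (the values are our choices, and the rest of the reference profile's stage options are omitted):

stage {
    set sleep_mask "true";   # mask the heap and image sections while the beacon sleeps
    set obfuscate  "true";   # strip most static strings from the staged beacon
    # ... remaining stage options from the reference profile ...
}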
Once the EXE was compiled, we analyzed the differences in the stored strings. During many test cases we realized that the beacon still gets detected even when it uses a heavily customized profile (including obfuscate). Using ThreadCheck, we found that the msvcrt string is being identified as "bad bytes". This is indeed a string found in the beacon's heap: the obfuscate option does not remove every possible string. We slightly modified our profile to strip such suspicious strings, but that did not help much, as the strings were still found in the heap. We needed a different approach to solve this problem.

Clang++ to the rescue

Different compilers have their own sets of optimizations and flags that can be used to tailor the output for specific use cases. By experimenting with different compilers, users can achieve better performance and potentially bypass more AV/EDR systems. For example, Clang++ provides several optimization flags that help reduce the size of the compiled code, while GCC (g++) is known for its high-performance optimization capabilities. By switching compilers, we produced a unique executable that evades detection: the string msvcrt.dll no longer appears, and Windows Defender is bypassed. Testing the result against various antivirus products led to some promising results (bear in mind that an unencrypted shellcode was used).

Removing strings is never enough

Even with obfuscate enabled in our profile, we were still able to find plenty of strings inside the beacon's stack. We modified the profile slightly, adding options to remove all of the mentioned strings. Problem solved: the strings no longer exist in the stack.

Prepend OPCODES

This option prepends the opcodes you put in the profile to the beginning of the generated raw shellcode, so you must supply valid instructions in order not to crash the beacon when it executes. Essentially, we have to craft junk assembly code that does not affect the original shellcode. We can simply use a series of 0x90 (NOP) instructions or, even better, a dynamic combination drawn from a list of harmless instructions. Pick a unique combination (by shuffling the instructions or by adding/removing some) and convert it to \x notation to make it compatible with the profile. In this case, we took the instruction list as-is, so the final junk shellcode looks as shown once converted to the proper format. We took this a step further by automating the whole process with a simple Python script that generates random junk shellcode for the prepend option. When generating the raw shellcode again with the modified profile, the prepended junk instructions appear at the beginning of the payload.
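Putting the pieces from this section together, the relevant portion of the stage block might look like the sketch below. The junk bytes are illustrative (NOP, NOP, push eax/rax, pop eax/rax), and the strings to replace depend on what you actually find in your own beacon's memory; msvcrt.dll is shown only because it was flagged above:

stage {
    set obfuscate "true";

    transform-x64 {
        prepend "\x90\x90\x50\x58";   # illustrative junk prepended to the raw x64 shellcode
        strrep "msvcrt.dll" "";       # illustrative replacement of a flagged string
    }

    transform-x86 {
        prepend "\x90\x90\x50\x58";
        strrep "msvcrt.dll" "";
    }
}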
Making Sure Lost Data Stays Lost

Steven Lawton published an article on Dark Reading that explores the potential risks of retired hardware and forgotten cloud virtual machines. A recent report revealed that 56% of decommissioned routers sold on the secondary market contained sensitive corporate material, highlighting how regularly confidential data is exposed. Greg Hatcher, CEO of cybersecurity consultancy White Knight Labs, explains that cloud-based vulnerabilities usually result from misconfigurations or naïve actions. He emphasizes the importance of maintaining both hardware and software asset inventories to protect networks and cloud environments. Hatcher also notes that sensitive data might not stay hidden, even when applications are deleted from retired hard disks. Access policies can remediate vulnerabilities by allowing only specific individuals to access data within a platform, while guidelines and security frameworks can help define controls for decommissioning hardware and protecting data.
“Can’t Stop the Phish” – Tips for Warming Up Your Email Domain Right
Introduction

Phishing continues to be a lucrative vector for adversaries year after year. In 2022, phishing was the second most utilized vector for initial access among intrusions observed by Mandiant. When Red Teaming against mature organizations with up-to-date and well-configured external attack surfaces, phishing is often the method of initial access.

To give readers a recent example, I was on an engagement where one of the goals was to bypass MFA and compromise a user's Office365 account. The target had implemented geo-location-based restrictions, MFA for users, and user awareness training. To get past these controls, a customized version of Evilginx was hosted in the same country as the target. This server was protected behind an Apache reverse proxy and Azure CDN to get through the corporate web proxy. The phishing email contained a PDF, formatted to the target's branding (gathered through OSINT), with a ruse that required users to re-authenticate to their Office365 accounts as part of a cloud migration. The phish was sent using Office365 with an Azure tenant, which helped it get through the email security solution (more on this later). The campaign began, and eventually a user's session was successfully stolen, resulting in access to business-critical data. Interestingly, although a few users reported the phish, the investigating security analyst did not identify anything malicious about the email and marked the reported phish as a false positive. The engagement concluded with valuable lessons learned from identifying gaps in response processes and user awareness, and an 'aha moment' for the executives.

Reel It In: Landing The Phish

Landing an email in a recipient's inbox can be a challenge, for both cold emailing and spear phishing. To avoid getting caught in spam filters, it's important to follow a few best practices.

Never use typosquatted target domains. For instance, Mimecast has an 'Impersonation rule' to prevent typosquatted domains from landing in an inbox. Instead, purchase an expired domain that resembles a vendor that may be related to the target, for instance 'fortinetglobal.com' or 'contoso.crowdstrikeglobal.com'. I tend to rely on Microsoft's domains: Azure tenants provide a subdomain of 'onmicrosoft.com'. Alternatively, you could rely on Outlook ('@outlook.com') and impersonate a specific user from the target, such as IT support. Rely on your OSINT skills to identify the email naming syntax, signature, and profile picture of the user you want to impersonate.

Most Red Team shops use reputable providers such as Office365, Mailgun, Outlook, Amazon SES, or Google Workspace. Some folks still use their own Postfix SMTP server. To avoid blacklisted source IPs, you may want to use a VPN on the host you use to send emails.

Enumerate the target's DNS TXT records to identify allowed third-party marketing providers. A quick Shodan lookup of the allowed IPs should identify the associated vendor. Rely on the same service to send your phish, for example Sendgrid or Klaviyo.

Configure the standard email authentication records for your sending domain:

Sender Policy Framework (SPF): identifies the mail servers and domains that are allowed to send email on behalf of your domain.

DomainKeys Identified Mail (DKIM): digitally signs emails using a cryptographic signature, so recipients can verify that an email was sent by your domain and not spoofed by a third party.

Domain-based Message Authentication, Reporting and Conformance (DMARC): allows email senders to specify what should happen if an email fails the SPF or DKIM checks, for example whether it should be rejected or delivered to the recipient's inbox.
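As a rough illustration, these three controls are published as DNS TXT records on the sending domain. The domain and values below are hypothetical; the DKIM selector and public key come from your mail provider:

; hypothetical records for the sending domain phish-sender.example
phish-sender.example.                        TXT  "v=spf1 include:spf.protection.outlook.com -all"
selector1._domainkey.phish-sender.example.   TXT  "v=DKIM1; k=rsa; p=<public key from your provider>"
_dmarc.phish-sender.example.                 TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@phish-sender.example"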
Avoid spam trigger words. Words such as Free, Prize, Urgent, Giveaway, Click Here, and Discount are associated with spam or unsolicited email and can cause your email to be automatically sent to Junk or not delivered at all.

I prefer to send my phishes in three stages. For instance, on a Friday I'd send a 'Downtime Notification' informing users that they may need to re-authenticate to their accounts after a scheduled maintenance activity. This email would not contain anything suspicious: no links, no action required, purely informational. The following Monday, I'd reply to the same email thread with the phish requesting users authenticate via the provided link. On Tuesday, I'd send a 'Gentle Reminder' email, once again replying to the same mail thread. The user compromised in the example from the introduction was phished with the 'Gentle Reminder'. For a targeted attack, it's recommended to establish two-way communication with the user before sending the main phish.

Trusted collaboration platforms can also deliver the lure. With Google Workspace: create a document themed according to the target's branding → embed the payload URL within the document → add the victim as 'Commenter' → the victim receives an email from 'comments-noreply@docs.google.com'. With SharePoint: create a folder or document themed according to the target's branding → embed the payload URL within the document → provide only read access to anyone with the URL → share the folder/file with the victim's email ID → the embedded link is a subdomain of 'sharepoint.com'. Personally, I've had better success landing shared files instead of shared folders on SharePoint. SharePoint also provides notifications when users access the document.

Alternatively, provide a link in the email redirecting the user to a site you control that hosts the payload. Using your own website provides benefits such as user/bot tracking and quick payload swaps. Use JavaScript to hide the payload from the <a href=""> tag and trigger the download from an event such as onclick(). Azure Blob Storage, Azure App Services, AWS S3 buckets, or IPFS are also worth considering.

Test your setup with 'https://www.mail-tester.com' and Azure's message header analyzer. Note that a 10/10 score does not guarantee you will land the phish in your target's inbox. This brings us to the idea of email sender reputation.

Warm It Up

Email sender reputation refers to the perceived trustworthiness of the sender of an email, as determined by various email service providers and spam filters. This reputation is based on a number of factors, such as the sender's email address, the content of the email, and the frequency and consistency of sending emails. When new senders appear, they're treated more suspiciously than senders with a previously established history of sending email messages (think of it as a probation period). For instance, if you try to send more than 20 emails
Masking the Implant with Stack Encryption
This article is a demonstration of memory-based detection and evasion techniques. Whenever you build a Command & Control framework or perform threat hunting, there will be scenarios where you need to analyze the memory artifacts of a specific system. This is really useful during live forensics, or when performing incident response on a host after segregating it from the network. In such scenarios, you need to identify the payload that is currently running in memory. We will look at some examples of how that payload investigation can be performed, and how it can be bypassed as well.

Often during an engagement, an operator executes a payload from Cobalt Strike, Havoc, or any of the other open-source C2s currently available. In specific scenarios the red teamer might execute a command on the endpoint that gathers a lot of strings and sends them to the C2 host. These strings can be the username, the hostname, or even information related to the command and control server itself, and they might be encrypted in transit. However, when the payload sleeps on the endpoint, and the red teamer has added sleep and jitter to the beacon, these strings need to be stored in an encrypted form as well. This information can live either on the heap or on the stack. Heap memory is usually not a concern, because you can walk the heap, extract the information, and encrypt it while sleeping. Things change a bit when we talk about stack encryption.

The problem with loaders

In a traditional shellcode loader, the shellcode is stored in a variable; when that variable is local to a function, the bytes live in stack memory. When the shellcode is written with WriteProcessMemory to a local or remote process, it is not only stored in that newly allocated memory but also remains in the stack, where the variable lives.

Finding the stack

To quickly identify where the stack is located, we retrieve the RSP register: it contains the address of the top of the stack. While the top of the stack is easily identifiable, the bottom is much harder to pin down, as the stack dynamically grows and shrinks as variables are stored and freed during execution. Luckily, VirtualQuery makes it easy to retrieve information about a range of pages in the virtual address space of the calling process. Using the RSP address found previously, we can determine the top and the bottom of the stack.

Suspending the thread to avoid abnormal behavior

The thread must be suspended before encrypting or decrypting its stack, because modifying the stack while the thread is still running can cause unpredictable behavior and potential crashes. To suspend it, we can use the SuspendThread function from the Windows API, which suspends the execution of a thread until it is resumed with ResumeThread.

Encrypting what we need to hide

Encrypting from the beginning of the page to the bottom of the stack might look suspicious and is not the most OPSEC-safe approach. Instead, we encrypt from where the stack actually begins (the RSP address, where the plaintext strings and the shellcode are stored) down to the bottom of the stack (minus 8 bytes). The encryption routine itself is simple: XOR byte per byte until the end of that range is reached.
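The full PoC is referenced at the end of this article. As a rough, self-contained sketch of the idea (x64 only; the worker thread, helper names, and the XOR key are ours, not the PoC's), suspending a thread, locating its stack from the RSP in its context, and masking it could look like this:

#include <windows.h>
#include <cstdint>
#include <cstdio>

// Worker thread whose stack we will mask; the secret ends up in its stack frame.
static DWORD WINAPI Worker(LPVOID) {
    char secret[] = "plaintext that lives on the worker's stack";
    printf("[worker] secret at %p\n", (void*)secret);
    for (int i = 0; i < 100; i++) Sleep(100);     // stand-in for beacon activity
    return 0;
}

// XOR byte per byte across the given range.
static void XorRange(uint8_t* from, uint8_t* to, uint8_t key) {
    for (uint8_t* p = from; p < to; p++) *p ^= key;
}

int main() {
    HANDLE hThread = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    Sleep(500);                                   // let the worker build its frame

    SuspendThread(hThread);                       // never touch a running stack

    CONTEXT ctx = {};
    ctx.ContextFlags = CONTEXT_CONTROL;           // enough to read RSP on x64
    GetThreadContext(hThread, &ctx);

    MEMORY_BASIC_INFORMATION mbi = {};
    VirtualQuery((LPCVOID)ctx.Rsp, &mbi, sizeof(mbi));
    uint8_t* top    = (uint8_t*)ctx.Rsp;                               // current top of the stack
    uint8_t* bottom = (uint8_t*)mbi.BaseAddress + mbi.RegionSize - 8;  // end of the committed region, minus 8 bytes

    XorRange(top, bottom, 0x41);                  // stack content is now masked
    printf("[main] worker stack %p-%p encrypted, sleeping...\n", (void*)top, (void*)bottom);
    Sleep(5000);                                  // the window a memory scanner would inspect

    XorRange(top, bottom, 0x41);                  // restore the original bytes
    ResumeThread(hThread);

    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    return 0;
}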
Analyzing the loader's stack afterwards clearly shows what it looks like once the encryption has been applied.

There's no sleepmask without sleeping

Some modern detection solutions have countermeasures against a basic Sleep(): for example, hooking sleep functions such as Sleep in C/C++ or Thread.Sleep in C# to nullify the sleep, or fast-forwarding it. There is already a nice technique that leverages CPU cycles to perform a custom sleep. I am not going to describe how it works here, as it is already well explained in the article linked in the credits below.

Wrapping everything up

In conclusion, understanding memory-based detection and evasion techniques is crucial for effective threat hunting and incident response. Investigating the payloads running in memory can provide critical information about a system's state, but it can also be bypassed through stack encryption techniques. The code for this PoC can be found in our GitHub repo.

Credits

https://shubakki.github.io/posts/2022/12/detecting-and-evading-sandboxing-through-time-based-evasion/

White Knight Labs – Red Team Engagements

White Knight Labs is an expert in conducting red team engagements that are tailored to the specific needs of our clients. We believe that every organization has unique security requirements, which is why we work closely with our clients to develop customized testing plans that align with their objectives and security goals. As a tactical, objective-based company, we excel in intense scenarios and strive to provide our clients with a realistic assessment of their security posture. Our experienced team employs advanced tactics and techniques to simulate real-world attacks and identify vulnerabilities that could be exploited by malicious actors. Our red team engagements include a thorough analysis of your organization's defenses and culminate in a detailed report that outlines our findings and recommendations for strengthening your security posture. With White Knight Labs' red team engagements, you can be confident that your organization is better equipped to defend against targeted attacks and other sophisticated threats.