Cloud Memory Forensics: Acquiring and Analysing Volatile Evidence from AWS EC2 Instances
When responding to a security incident involving a compromised cloud instance, disk images and log data tell part of the story. Memory tells the rest. Running processes, active network connections, injected code, encryption keys, and credential material exist only in volatile memory and are lost the moment an instance is stopped or terminated. In cloud environments where instances are routinely scaled, terminated, and replaced, the window to capture this evidence is narrow.
This article covers practical approaches to acquiring and analysing memory from AWS EC2 instances during incident response. The techniques discussed apply broadly to Linux-based cloud workloads, with specific focus on tooling that works within AWS infrastructure constraints.
Why Memory Forensics Matters in Cloud Incidents
Traditional forensics relies heavily on disk artefacts: filesystem timestamps, log files, persistence mechanisms, and deleted file recovery. In cloud environments, these artefacts are often incomplete. Ephemeral storage is lost on termination, EBS volumes may be encrypted with keys you need to recover, and attackers operating primarily in memory can leave minimal disk footprint.
Memory forensics fills critical gaps that disk and log analysis cannot:
- Running processes and their full command lines, including processes that have deleted their on-disk binaries
- Active and recent network connections, including resolved DNS entries and established C2 channels
- Loaded kernel modules, including rootkits that hide from userspace tools like lsmod or ps
- Injected code and hooked functions within legitimate processes
- Credential material such as SSH keys, API tokens, and session cookies held in process memory
- Encryption keys for encrypted volumes or communications
- Environment variables containing secrets passed to running applications
The volatile nature of memory evidence means its acquisition must be prioritised early in the response timeline, ideally before any containment actions that involve stopping or isolating the instance at the hypervisor level.
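The first bullet above can be demonstrated on any live Linux system: the /proc filesystem keeps a handle to a process's executable even after the file is unlinked, which is how memory-side tooling recovers deleted binaries. A minimal sketch (the /tmp/ghost path is illustrative):

```shell
# Copy a benign binary, run it, then delete the on-disk file
cp "$(command -v sleep)" /tmp/ghost
/tmp/ghost 60 &
GHOST_PID=$!
rm /tmp/ghost

# The kernel still holds the executable: the symlink target is marked
# "(deleted)", and the binary itself can be recovered by copying
# /proc/<pid>/exe before the process exits
ls -l "/proc/${GHOST_PID}/exe"
kill "$GHOST_PID"
```

Disk-only analysis would find no trace of /tmp/ghost; the process and its binary are visible only through the live kernel or a memory capture.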
Memory Acquisition Techniques
Acquiring memory from a cloud instance requires getting an acquisition tool onto the target, executing it, and extracting the resulting dump, all while maintaining forensic integrity and minimising changes to the system under investigation. There are several approaches, each with trade-offs.
AWS Systems Manager (SSM)
AWS Systems Manager provides remote command execution on EC2 instances that have the SSM Agent installed (included by default on Amazon Linux, Ubuntu, and most AWS-provided AMIs). For incident response, SSM is valuable because it does not require SSH access, does not depend on security groups allowing inbound connections, and all command execution is logged in CloudTrail.
SSM can be used to deploy a memory acquisition tool, execute it, and upload the resulting dump to S3 without opening any additional network paths to the instance.
A basic workflow using SSM Run Command:
# Step 1: Deploy AVML to the target instance via SSM
aws ssm send-command \
--instance-ids "i-0abc123def456789" \
--document-name "AWS-RunShellScript" \
--parameters 'commands=[
"curl -L -o /tmp/avml https://github.com/microsoft/avml/releases/latest/download/avml",
"chmod +x /tmp/avml"
]' \
--comment "Deploy AVML for memory acquisition"
# Step 2: Execute memory capture and upload to S3
aws ssm send-command \
--instance-ids "i-0abc123def456789" \
--document-name "AWS-RunShellScript" \
--parameters 'commands=[
"/tmp/avml --compress /tmp/memory.lime",
"aws s3 cp /tmp/memory.lime s3://forensics-bucket/case-2026-001/memory.lime"
]' \
--comment "Acquire memory and upload to S3"
Note: rather than placing long-lived keys on the instance, use presigned URLs or temporary, scoped-down credentials for the S3 upload.
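One way to scope the upload credential is an IAM policy, attached to the instance role or to temporary credentials, that permits only object writes into the case prefix. A sketch; the bucket and prefix names are placeholders matching the example above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EvidenceUploadOnly",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::forensics-bucket/case-2026-001/*"
    }
  ]
}
```

Pairing this with a bucket policy that denies object deletion helps preserve the evidential integrity of anything uploaded.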
Considerations: SSM Agent must be running on the target. If an attacker has killed the agent or the instance has no outbound connectivity to the SSM endpoint, this approach will not work.
LiME (Linux Memory Extractor)
LiME is a loadable kernel module (LKM) that acquires physical memory directly from kernel space by walking the system's memory ranges. It has been the standard Linux memory acquisition tool for over a decade and produces output in its own LiME format or raw/padded formats compatible with most analysis tools.
# LiME must be compiled against the running kernel's headers
sudo apt-get install linux-headers-$(uname -r) build-essential
git clone https://github.com/504ensicsLabs/LiME.git
cd LiME/src
make
# Load the module to acquire memory
sudo insmod lime-$(uname -r).ko "path=/tmp/memory.lime format=lime"
LiME writes the dump directly to the specified path. The format=lime option produces a format that includes memory range metadata, which Volatility and other tools can parse natively. The format=raw option produces a flat binary that may be preferred for some workflows.
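A quick sanity check after acquisition: a LiME-format dump begins with the magic value 0x4C694D45, which appears on disk as the little-endian byte sequence 45 4d 69 4c (ASCII "EMiL"). A sketch, where check_lime_magic is a hypothetical helper:

```shell
# Hypothetical helper: succeed only if the file starts with the
# LiME magic bytes 45 4d 69 4c ("EMiL")
check_lime_magic() {
    [ "$(head -c 4 "$1" | od -An -tx1 | tr -d ' ')" = "454d694c" ]
}

# e.g. check_lime_magic /tmp/memory.lime && echo "LiME header OK"
```

This catches truncated transfers or a dump accidentally captured in raw format before analysis time is spent on it.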
Advantages: Well-understood and widely supported across analysis tools. Produces forensically sound acquisitions. Supports output over TCP for remote collection without writing to local disk.
Challenges in cloud: LiME requires kernel headers matching the running kernel version. On cloud instances that auto-update or use custom AMIs, the correct headers may not be available. Compiling a kernel module also modifies the target system (loading packages, writing to disk), which is not ideal from a forensic integrity standpoint. For pre-built response toolkits, you need to maintain compiled LiME modules for every kernel version in your fleet.
AVML (Acquire Volatile Memory for Linux)
AVML, developed by Microsoft, addresses many of LiME’s cloud-specific limitations. It is a statically compiled, standalone binary that does not require kernel headers or module compilation. It acquires memory by reading /proc/kcore (the kernel’s virtual memory exposed as an ELF core file) or falling back to /dev/crash and /dev/mem.
# Download the pre-built binary
curl -L -o avml https://github.com/microsoft/avml/releases/latest/download/avml
chmod +x avml
# Acquire memory with compression
sudo ./avml --compress output.lime
# Acquire without compression, writing locally (AVML can also
# upload directly to Azure Blob Storage via a SAS URL)
sudo ./avml output.lime
AVML produces LiME-format output by default and supports optional Snappy compression, which significantly reduces the size of the memory dump for transfer. A 16GB memory dump typically compresses to 2-4GB depending on memory utilisation.
Advantages: Single static binary with no dependencies. No kernel module loading. Works across kernel versions without recompilation. Drop it on the instance and run it.
This is generally the recommended tool for cloud incident response due to its simplicity and portability. It is well suited to automated runbooks where you cannot guarantee kernel header availability.
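Because the acquisition binary itself is part of the evidence chain, it is worth verifying AVML against a hash pinned in your toolkit rather than trusting whatever "latest" resolves to mid-incident. A sketch; verify_binary is a hypothetical helper, and the pinned hash would come from your own readiness records:

```shell
# Hypothetical helper: refuse to run a staged binary unless it matches
# a hash pinned ahead of time in the forensic readiness toolkit
verify_binary() {
    local file="$1" expected_sha256="$2"
    echo "${expected_sha256}  ${file}" | sha256sum -c --quiet -
}

# Usage during response (PINNED_AVML_SHA256 recorded in your toolkit):
# verify_binary /tmp/avml "$PINNED_AVML_SHA256" && sudo /tmp/avml --compress /tmp/mem.lime
```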
EDR-Based Acquisition
If the compromised instance is running an EDR agent, some platforms support remote memory acquisition or already collect memory-relevant telemetry:
- CrowdStrike Falcon supports full memory dump collection via Real Time Response (RTR). The memdump command initiates acquisition from the Falcon console without deploying additional tools.
- SentinelOne provides remote shell capabilities and memory collection through its Deep Visibility feature.
- Velociraptor (open-source DFIR tool) has built-in memory acquisition artefacts and can orchestrate AVML or LiME deployment across fleets.
Using existing EDR for acquisition is attractive because the agent is already deployed and authenticated. However, if the instance is compromised at a level where the attacker has kernel access or has tampered with the EDR agent, the acquisition may not be trustworthy. In high-severity incidents, an independent acquisition method is preferable.
Memory Analysis
Once you have a memory dump, analysis tools parse the raw binary data into structured information: process trees, network connections, loaded modules, and extracted artefacts. The three primary tools for this work are Volatility 3, MemProcFS, and Volexity Volcano.
Volatility 3
Volatility is the industry standard open-source memory forensics framework. Version 3 is a complete rewrite from Volatility 2, with a plugin-based architecture and native support for Linux, Windows, and macOS memory analysis. It uses symbol tables (ISF files) generated from kernel debug symbols to parse memory structures.
Setting up for Linux analysis:
# Install Volatility 3
git clone https://github.com/volatilityfoundation/volatility3.git
cd volatility3
pip3 install -r requirements.txt
# For Linux analysis, you need the correct symbol table (ISF)
# Generate from the target's System.map and kernel debug symbols
# or use dwarf2json to create one
python3 vol.py -f /path/to/memory.lime \
linux.bash.Bash
Generating the correct ISF (Intermediate Symbol Format) file for your target kernel is a common stumbling block. The dwarf2json tool converts kernel DWARF debug information into Volatility 3’s symbol format:
# On a system matching the target kernel
# (or using the kernel debug package)
dwarf2json linux \
--elf /usr/lib/debug/boot/vmlinux-$(uname -r) \
--system-map /boot/System.map-$(uname -r) \
> linux-symbol-table.json
# Place in volatility3/symbols/linux/
Key plugins for incident response:
# List running processes with parent relationships
python3 vol.py -f memory.lime linux.pslist.PsList
python3 vol.py -f memory.lime linux.pstree.PsTree
# List processes with full command-line arguments
python3 vol.py -f memory.lime linux.psaux.PsAux
# Scan memory for process structures missed by list walking (hidden processes)
python3 vol.py -f memory.lime linux.psscan.PsScan
# Active and recent network connections
python3 vol.py -f memory.lime linux.sockstat.Sockstat
# Loaded kernel modules (detect rootkits)
python3 vol.py -f memory.lime linux.lsmod.Lsmod
# Check for modifications to the syscall table
python3 vol.py -f memory.lime linux.check_syscall.Check_syscall
# Bash command history from memory
python3 vol.py -f memory.lime linux.bash.Bash
# Extract environment variables (may contain secrets)
python3 vol.py -f memory.lime linux.envars.Envars
# List per-process memory mappings, including mapped files
python3 vol.py -f memory.lime linux.proc.Maps
Strengths: Extremely mature, well-documented, and widely supported in the DFIR community. Plugin architecture means it is extensible. Accepted in legal and regulatory contexts. Large body of training material and community knowledge.
Limitations: Linux symbol table generation requires matching debug symbols, which can be difficult to obtain for some distributions or custom kernels. Analysis is offline and batch-oriented, meaning each plugin runs as a separate pass over the dump.
MemProcFS
MemProcFS, developed by Ulf Frisk, takes a fundamentally different approach to memory analysis. Rather than running individual plugins, it mounts a memory dump as a virtual filesystem. Processes appear as directories, network connections as files, and registry hives (on Windows) as browsable structures. This makes memory analysis accessible to investigators who may not be deeply familiar with Volatility’s command-line interface.
# Mount a memory dump as a virtual filesystem
./memprocfs -device /path/to/memory.lime -mount /mnt/memfs
# Browse processes like a filesystem
ls /mnt/memfs/pid/
# 1/ 2/ 357/ 1042/ 4521/ ...
# View process details
cat /mnt/memfs/pid/4521/cmdline
cat /mnt/memfs/pid/4521/environ
# View network connections
cat /mnt/memfs/sys/net/netstat.txt
# View loaded modules
cat /mnt/memfs/sys/modules/modules.txt
# Timeline of forensic artefacts
cat /mnt/memfs/forensic/timeline/timeline.txt
# When done
umount /mnt/memfs
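Because everything is exposed as a file, triage sweeps compose naturally with standard tools. For example, a quick pass over every process's environment for secret-shaped strings; sweep_environ is a hypothetical helper and the patterns are illustrative:

```shell
# Hypothetical helper: list environ files under a MemProcFS mount that
# contain likely secret material
sweep_environ() {
    local mount="$1"
    grep -rlE "AWS_SECRET|API_KEY|PASSWORD|TOKEN" "$mount"/pid/*/environ 2>/dev/null
}

# e.g. sweep_environ /mnt/memfs
```

Any hit points at a process worth pulling apart in more detail, either in the mounted filesystem or with Volatility.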
MemProcFS also includes a built-in web interface and supports both live analysis and dump file analysis. Its LeechCore library handles the underlying memory access and supports a wide range of input formats including LiME, raw, and crash dumps.
Strengths: Intuitive filesystem-based interface lowers the barrier to entry. Fast, supports parallel analysis. The virtual filesystem approach integrates well with standard Unix tools (grep, find, cat) and scripting. Excellent for rapid triage when you need answers quickly during active incident response.
Considerations: Linux support is more recent and less mature than Windows support. Some advanced analysis capabilities available in Volatility may not have direct equivalents.
Volexity Volcano
Volexity Volcano is a commercial memory analysis platform developed by Volexity, the team known for their threat intelligence and incident response work. Volcano focuses specifically on detecting malicious activity in memory rather than general-purpose memory forensics.
Volcano combines signature-based detection, heuristic analysis, and YARA rule scanning against memory dumps. It maintains a curated database of indicators drawn from Volexity’s own incident response casework, which includes advanced threat actor techniques that may not be covered by open-source rule sets.
Key capabilities:
- Automated malware detection across process memory, kernel space, and memory-mapped files
- YARA scanning against the full memory dump with context-aware results showing which process and memory region matched
- Rootkit and hooking detection identifying syscall table modifications, inline hooks, and hidden kernel modules
- Credential extraction from memory for common services and applications
- Reporting with structured output suitable for incident reports and legal proceedings
Volcano is particularly effective when dealing with sophisticated threats such as fileless malware, in-memory implants, and kernel-level rootkits where manual Volatility analysis would be time-consuming. Its value is in reducing the time from acquisition to actionable findings, especially when the analyst may not have deep memory forensics expertise.
Considerations: Volcano is a commercial product requiring licensing. For organisations that perform regular incident response, the time savings and detection coverage justify the cost. For occasional use, Volatility 3 combined with public YARA rule sets provides capable open-source coverage.
Putting It Together: A Cloud IR Memory Workflow
A practical workflow for memory forensics during a cloud incident:
1. Preserve first, ask questions later. As soon as a compromised EC2 instance is identified, acquire memory before any containment action. Isolating the instance via security group changes is acceptable as it preserves the running state, but stopping or terminating the instance destroys volatile evidence permanently.
2. Acquire using AVML via SSM. Deploy AVML to the target using SSM Run Command, execute the capture with compression enabled, and upload directly to a secured S3 bucket designated for forensic evidence. Document the acquisition time, instance ID, and the hash of the resulting dump.
# One-liner via SSM for rapid acquisition
aws ssm send-command \
--instance-ids "i-0abc123def456789" \
--document-name "AWS-RunShellScript" \
--parameters 'commands=[
"curl -sL -o /tmp/avml https://github.com/microsoft/avml/releases/latest/download/avml && chmod +x /tmp/avml && sudo /tmp/avml --compress /tmp/mem.lime && sha256sum /tmp/mem.lime && aws s3 cp /tmp/mem.lime s3://forensics-evidence/case-001/i-0abc123def456789-mem.lime"
]'
3. Triage with MemProcFS. Mount the dump and perform rapid triage: check running processes, network connections, and look for obvious anomalies. This gives you initial findings within minutes.
4. Deep analysis with Volatility 3. Run targeted plugins based on triage findings. If you identified a suspicious process in MemProcFS, use Volatility to dump its memory regions, inspect its loaded libraries, and trace its network activity.
5. Scan with YARA rules. Whether using Volcano or Volatility’s yarascan plugin, scan the dump against known malware signatures and custom rules relevant to the threat you are investigating.
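A minimal example of the custom-rule side of step 5. The rule below is an illustrative placeholder, not a production detection, and the yarascan flag name may vary between Volatility 3 versions:

```shell
# Write an illustrative YARA rule matching a common reverse-shell pattern
cat > /tmp/ir-rules.yar <<'EOF'
rule Suspicious_Reverse_Shell_String
{
    strings:
        $tcp = "/dev/tcp/" ascii
        $sh  = "bash -i" ascii
    condition:
        all of them
}
EOF

# Then scan the dump (assumes the Volatility 3 setup shown earlier):
# python3 vol.py -f memory.lime yarascan.YaraScan --yara-file /tmp/ir-rules.yar
```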
Preparation Is Everything
The most common failure in cloud memory forensics is not having the capability ready when it is needed. During an active incident is not the time to figure out how to acquire memory from your instances.
Organisations should prepare by maintaining a forensic readiness toolkit: pre-staged AVML binaries in a secured S3 bucket, SSM documents configured for memory acquisition, IAM roles with the necessary permissions ready to attach, and an S3 bucket with appropriate retention policies and access controls for evidence storage. Regularly test the acquisition process against representative instances in your fleet to confirm that AVML works with your current kernel versions and that the SSM workflow completes reliably.
For analysis, maintain a forensic workstation (an EC2 instance or local machine) with Volatility 3, MemProcFS, and pre-generated symbol tables for the kernel versions running in your environment. Generating symbol tables during an incident adds delays and complexity that can be avoided with preparation.