How Lilith Works

The agent has no idea Lilith exists.

Any agent that touches your infrastructure or config is an unsecured attack surface. Cloud credentials, SSH keys, production secrets: no runtime control exists for any of it. Tool poisoning, prompt injection, and silent exfiltration have all been demonstrated in the wild. Lilith enforces at the kernel, with full observability of every agent action: transparent to agents, impossible to bypass from userspace.

Lilith is a systemd daemon that enforces security at the kernel level, before any agent syscall completes, before any byte reaches the network. No SDK to install. No environment variable to set. No proxy to configure. The agent's code is never touched. The agent's config is never touched. The agent has no idea.

Transparent Interception

Every TCP connection is intercepted at the kernel, before connect() returns.

Agent Process: AI agent speaking MCP / gRPC / HTTP
  connect("mcp-server:8080") is intercepted at the kernel, before connect() returns.

Linux Kernel
  cgroup/connect4: rewrites the destination to 127.0.0.1:7890. The agent never sees this rewrite.
  socket_connect LSM: allows only the tproxy endpoint. All other AF_INET connections from managed processes return EPERM.
  The plaintext stream arrives at Lilith.

Lilith Daemon
  SPIFFE Identity: resolved from the task_struct* key. PID-reuse attacks are structurally impossible.
  Cedar Policy Eval: non-Turing-complete, ~100 µs p99, zero heap allocation per evaluation.
  ALLOW: TLS 1.3 relay to the original upstream.
  DENY: RST_STREAM, plus an audit event emitted.
Identity

Identity is stored in IDENTITY_TASK_STORAGE keyed by task_struct*, not PID. PID reuse attacks are structurally impossible: the kernel frees the entry automatically when the task exits.
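To see why the key choice matters, here is a minimal userspace model (illustrative only; the real storage is a kernel BPF task-storage map): a PID-keyed map leaves a stale identity behind after the task exits, which a new process reusing that PID would inherit.

```rust
use std::collections::HashMap;

// Hypothetical PID-keyed identity map. Unlike IDENTITY_TASK_STORAGE,
// nothing ties the entry's lifetime to the task itself, so the entry
// outlives the process and PID reuse resolves to the wrong identity.
#[derive(Debug, PartialEq)]
struct WorkloadIdentity(String);

fn main() {
    let mut pid_keyed: HashMap<u32, WorkloadIdentity> = HashMap::new();
    pid_keyed.insert(4242, WorkloadIdentity("spiffe://example.org/agent-a".into()));

    // agent-a exits, but the entry remains; a later, unrelated process
    // that is assigned PID 4242 would inherit agent-a's identity.
    assert_eq!(
        pid_keyed.get(&4242).unwrap().0,
        "spiffe://example.org/agent-a"
    );
}
```

Keying by task_struct* closes this window structurally: the kernel drops the storage entry the moment the task is freed, so there is never a stale entry for a reused PID to hit.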

Protocol

Protocol-agnostic. Cedar evaluates the same (principal, action, resource, context) tuple whether the agent speaks MCP JSON-RPC, A2A gRPC, OpenAPI HTTP/1.1, or any other protocol.
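As a sketch of what that normalization might look like, the Rust below maps an MCP JSON-RPC call and an HTTP request onto one authorization tuple. All names here are illustrative assumptions, not Lilith's actual API.

```rust
// Hypothetical normalized tuple that a Cedar evaluation would consume.
#[derive(Debug, PartialEq)]
struct AuthzRequest {
    principal: String,  // SPIFFE ID of the agent
    action: String,     // e.g. "read_file", "http_post"
    resource: String,   // e.g. the MCP server or HTTP host
    context_taint: u64, // data_touched bitmask
}

// An MCP JSON-RPC method maps directly onto the action field.
fn from_mcp(spiffe_id: &str, method: &str, server: &str, taint: u64) -> AuthzRequest {
    AuthzRequest {
        principal: spiffe_id.to_string(),
        action: method.to_string(),
        resource: server.to_string(),
        context_taint: taint,
    }
}

// An HTTP verb becomes an action like "http_get" / "http_post".
fn from_http(spiffe_id: &str, verb: &str, host: &str, taint: u64) -> AuthzRequest {
    AuthzRequest {
        principal: spiffe_id.to_string(),
        action: format!("http_{}", verb.to_lowercase()),
        resource: host.to_string(),
        context_taint: taint,
    }
}

fn main() {
    let a = from_mcp("spiffe://example.org/agent", "read_file", "mcp-server:8080", 0);
    let b = from_http("spiffe://example.org/agent", "POST", "api.example.com", 0x3);
    println!("{:?}", a);
    assert_eq!(b.action, "http_post");
}
```

Because every protocol collapses to the same shape, one policy set covers all of them; no per-protocol rules are needed.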

Enforcement Architecture

Three independent enforcement layers.

Layer 1
BPF-LSM
Ring 0

Eight Linux Security Module hooks run in kernel context, synchronous, before any syscall returns to userspace. Two additional cgroup BPF programs handle transparent TCP interception. All 10 programs are formally verified by the kernel's BPF verifier before loading.

bprm_check_security: write WorkloadIdentity to IDENTITY_TASK_STORAGE on exec
task_alloc: propagate identity + taint bitmask to child on fork
socket_connect: enforce tproxy-only topology; check DAEMON_HEARTBEAT
socket_create: deny SOCK_RAW / AF_PACKET; blocks L2/L3 injection
socket_sendmsg: deny UDP/ICMP egress; closes covert data channels
mmap_file: exec allowlist, inode-keyed, TOCTOU-immune (Tier 2)
file_mprotect: W^X enforcement; deny anonymous PROT_EXEC (Tier 3)
mmap_addr: W^X enforcement; non-JIT agents only (Tier 3)
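The W^X rule enforced by file_mprotect and mmap_addr can be modeled in a few lines. This is an illustrative userspace sketch of the decision logic only; the real check runs inside the BPF-LSM hooks, and the constants mirror <sys/mman.h>.

```rust
// Protection flag constants as in <sys/mman.h>.
const PROT_WRITE: u32 = 0x2;
const PROT_EXEC: u32 = 0x4;

#[derive(Debug, PartialEq)]
enum Verdict {
    Allow,
    Deny,
}

// Sketch of the W^X decision: executable pages must be file-backed
// (and thus subject to the inode allowlist), and no mapping may be
// writable and executable at the same time.
fn wx_verdict(prot: u32, file_backed: bool) -> Verdict {
    // Anonymous PROT_EXEC is the classic shellcode-staging primitive.
    if prot & PROT_EXEC != 0 && !file_backed {
        return Verdict::Deny;
    }
    // W^X: never writable and executable at once.
    if prot & PROT_EXEC != 0 && prot & PROT_WRITE != 0 {
        return Verdict::Deny;
    }
    Verdict::Allow
}

fn main() {
    assert_eq!(wx_verdict(PROT_EXEC, true), Verdict::Allow); // mapping a binary
    assert_eq!(wx_verdict(PROT_EXEC, false), Verdict::Deny); // anonymous exec page
    assert_eq!(wx_verdict(PROT_EXEC | PROT_WRITE, true), Verdict::Deny); // W+X
}
```

This is also why the hook is scoped to non-JIT agents: a JIT runtime legitimately needs anonymous executable pages and would trip the first branch.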
Layer 2
Cedar Policy
L7 Dataplane

Every tool call is evaluated with Cedar, a non-Turing-complete, formally verified policy language (Lean 4 + Dafny). Static analysis via the CVC5 1.2.1 SMT solver proves privilege non-escalation before any policy is deployed. Policies are Ed25519-signed capsules with anti-rollback watermarks.

Cedar example
permit (principal is Agent, action == Action::"read_file", resource is MCPServer)
  when { context.data_touched & 8 == 0 };   // no CREDENTIALS taint

forbid (principal is Agent, action == Action::"http_post", resource is MCPServer)
  when { context.data_touched != 0 };       // any taint: deny network egress
Non-Turing-complete: guaranteed termination on every input
Formally verified: Lean 4 + Dafny; CVC5 SMT static analysis
~100 µs p99: zero heap allocation per evaluation
Layer 3
Seccomp + Landlock
Kernel

Seccomp-BPF restricts agents to ~60 allowed syscalls; all local-privilege-escalation (LPE) primitives are blocked at the kernel boundary. Landlock constrains filesystem access to specific ephemeral directories using kernel inode evaluation: TOCTOU-immune, composable, unprivileged.

Allowed syscalls
read, write, pread64, pwrite64
socket(AF_UNIX) only
epoll_*, poll, select
futex, nanosleep, clone(THREAD)
exit, exit_group
Blocked (LPE primitives)
ptrace, process_vm_readv/writev
bpf, userfaultfd, keyctl
io_uring_setup/enter/register
memfd_create(MFD_EXEC)
mount, pivot_root, unshare
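The decision the filter encodes is a set-membership check on the syscall. The real filter is a seccomp-BPF program matched on syscall numbers in the kernel; this illustrative Rust models the same decision over syscall names, using a subset of the lists above.

```rust
use std::collections::HashSet;

// Subset of the allowlist above; the kernel filter matches on syscall
// numbers, names are used here for readability.
fn allowed_syscalls() -> HashSet<&'static str> {
    [
        "read", "write", "pread64", "pwrite64",
        "epoll_wait", "poll", "select",
        "futex", "nanosleep", "clone",
        "exit", "exit_group",
    ]
    .into_iter()
    .collect()
}

// Default-deny: anything not on the allowlist fails with EPERM,
// which is how the LPE primitives (ptrace, bpf, io_uring_*, ...)
// are cut off before they reach kernel code at all.
fn verdict(syscall: &str) -> &'static str {
    if allowed_syscalls().contains(syscall) {
        "ALLOW"
    } else {
        "EPERM"
    }
}

fn main() {
    assert_eq!(verdict("read"), "ALLOW");
    assert_eq!(verdict("ptrace"), "EPERM");
    assert_eq!(verdict("io_uring_setup"), "EPERM");
}
```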

Data Flow Tracking

Taint propagation, not detection.

A 64-bit bitmask accumulates across every tool call in a session via AtomicU64::fetch_or. Once a sensitive bit is set, it cannot be cleared: no race condition, no window for a bypass. Cedar policies read context.data_touched and structurally prohibit egress after any sensitive read.

64-bit taint bitmask: data_touched per SPIFFE session

  bit 0: PII
  bit 1: FIN
  bit 2: UNTRUSTED
  bit 3: CREDS
  bit 4: A
  bit 5: B
  bits 6-63: ...

  [14:23:01] tools/call read_file     ALLOW  data_touched: 0x0001  (PII bit set)
  [14:23:02] tools/call fetch_report  ALLOW  data_touched: 0x0003  (PII + FINANCIAL)
  [14:23:03] tools/call http_post     DENY   data_touched: 0x0003  (any taint: exfiltration blocked)
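The accumulation step is small enough to show in full. This is a minimal model of the session state (names are illustrative): AtomicU64::fetch_or makes every update monotonic, so a set bit can never be cleared by a racing thread.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Taint bit positions per the table above.
const PII: u64 = 1 << 0;
const FIN: u64 = 1 << 1;

// Illustrative per-session state; in Lilith this is keyed by SPIFFE session.
struct Session {
    data_touched: AtomicU64,
}

impl Session {
    // fetch_or returns the PREVIOUS value; the stored value only ever grows.
    fn taint(&self, bits: u64) -> u64 {
        self.data_touched.fetch_or(bits, Ordering::SeqCst)
    }

    // Mirrors the forbid rule: any taint blocks network egress.
    fn egress_allowed(&self) -> bool {
        self.data_touched.load(Ordering::SeqCst) == 0
    }
}

fn main() {
    let s = Session { data_touched: AtomicU64::new(0) };
    assert!(s.egress_allowed());

    s.taint(PII);       // read_file touches PII      -> 0x0001
    s.taint(PII | FIN); // fetch_report adds FINANCIAL -> 0x0003

    assert_eq!(s.data_touched.load(Ordering::SeqCst), 0x0003);
    assert!(!s.egress_allowed()); // http_post would be denied
}
```

Because the only write path is a fetch_or, there is no read-modify-write gap to race against: a thread that observes the mask after any sensitive read necessarily observes the bit.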

Reliability

Fail-closed by design.

The DAEMON_HEARTBEAT BPF array receives a write every 500 ms from the daemon. The socket_connect hook checks staleness on every verdict. If the daemon crashes or is killed, all managed-process connections receive EPERM within 2 seconds: no silent bypass, no open window.

socket_connect LSM hook
// BPF: runs before every agent connect()
let hb = DAEMON_HEARTBEAT.get(0)?;
let now = bpf_ktime_get_ns();
if now - hb.last_update_ns > 2_000_000_000 {
    // daemon dead: deny all
    return Err(EPERM);
}
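The same staleness logic can be exercised in plain userspace Rust. This sketch substitutes std::time for bpf_ktime_get_ns() and is a model of the check, not the BPF program itself.

```rust
use std::time::{Duration, Instant};

// Userspace stand-in for the DAEMON_HEARTBEAT map entry: the daemon
// refreshes last_update every 500 ms while it is alive.
struct Heartbeat {
    last_update: Instant,
}

// Mirrors the hook's fail-closed rule: a heartbeat older than 2 s
// means the daemon is dead, so the connect is denied.
fn connect_verdict(hb: &Heartbeat, now: Instant) -> Result<(), &'static str> {
    if now.duration_since(hb.last_update) > Duration::from_secs(2) {
        return Err("EPERM"); // daemon dead: fail closed
    }
    Ok(())
}

fn main() {
    let hb = Heartbeat { last_update: Instant::now() };

    // Fresh heartbeat: the connect proceeds.
    assert!(connect_verdict(&hb, Instant::now()).is_ok());

    // Simulate 3 s of daemon silence: every connect is denied.
    let stale_now = hb.last_update + Duration::from_secs(3);
    assert_eq!(connect_verdict(&hb, stale_now), Err("EPERM"));
}
```

Note the direction of the failure mode: the hook never needs the daemon to say "deny"; it denies by default whenever the daemon stops saying "alive".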
>1.5M decisions / sec: DashMap sharding + Cedar DFA + AtomicU64
~100 µs Cedar p99: zero heap allocation per evaluation
<1 ms latency overhead: agent syscall path unaffected
2 s fail-closed TTL: on daemon death, EPERM at next connect()

Deployment

One daemon. Any agent. Any protocol.

Lilith runs as a systemd service or Kubernetes DaemonSet. It requires CAP_BPF and CAP_PERFMON: no CAP_SYS_ADMIN, no privileged container. It operates at the host OS layer, outside every agent namespace, invisible to every agent process.

systemd
  ExecStart=/usr/bin/lilith-enforcer
  Environment=LILITH_CAPSULE_PATH=...
  AmbientCapabilities=CAP_BPF CAP_PERFMON
  Restart=on-failure

Kubernetes DaemonSet
  hostPID: true
  hostNetwork: true
  capabilities:
    add: [BPF, PERFMON]

Docker
  pid: host
  cap_add: [BPF, PERFMON]
  LILITH_TPROXY_ADDR: 172.17.0.1:7890

Requirements
  Linux 5.15+ LTS
  CONFIG_BPF_LSM=y
  CONFIG_DEBUG_INFO_BTF=y
  lsm=bpf,landlock,yama
  SPIRE agent (workload attestation)

Deploy Lilith today.

Kernel-level enforcement on any Linux host in under 10 minutes.