FAQ
How is Dam Secure different from an AI PR review?
Dam Secure offers determinism and knowledge of your codebases. Modern AI PR reviews surface issues using creative evaluation criteria that shift from one PR to the next. Issues get raised, but it's unclear what was actually checked, what was missed, or whether the PR before yours was held to the same standard. There's no defensible answer to "what does our review enforce?"
Dam Secure evaluates every pull request against a fixed, curated rule set. The rules don't change between runs, so you can point to the exact guardrails enforced on any given PR, and build engineering and application security processes, onboarding, and audits on top of that consistency.
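The determinism claim can be illustrated with a minimal sketch. The rule structure and names below are hypothetical, not Dam Secure's actual API: the point is that every PR is checked against the same fixed rule list, so the report always states exactly which guardrails were enforced.

```python
# Minimal sketch of deterministic PR review (hypothetical structure,
# not Dam Secure's actual API). The rule set is fixed, so the set of
# checks performed is identical and auditable across every run.

RULES = [
    ("SEC-001", "No hardcoded secrets", lambda diff: "API_KEY=" not in diff),
    ("SEC-002", "No raw SQL concatenation", lambda diff: "+ sql" not in diff),
]

def review(diff: str) -> dict:
    """Return a pass/fail verdict for every rule -- the checked set never varies."""
    return {rule_id: check(diff) for rule_id, _desc, check in RULES}

report_a = review('print("hello")')
report_b = review('query = "SELECT * FROM users WHERE id=" + sql')

# Both reports cover the identical rule IDs, regardless of PR content.
assert report_a.keys() == report_b.keys()
```

Because the rule list is data rather than ad-hoc model judgment, "what does our review enforce?" has a concrete answer: the contents of `RULES`.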
How is Dam Secure different from a traditional SAST tool?
Static Application Security Testing (SAST) tools detect vulnerabilities by matching code against signatures and patterns for known classes of issues. That works for surface-level bugs like a hardcoded secret or an obvious SQL string concatenation, but it can't reason about the system around the code.
Dam Secure understands developer intent, system architecture, and how data moves between components. That makes it possible to enforce rules pattern matching can't express, like "no PII may flow from the import endpoint to the analytics pipeline without redaction," or "every endpoint that mutates billing data must call the audit logger." The check is grounded in how your code actually behaves.
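As an illustration only (the syntax below is invented for this FAQ, not Dam Secure's actual rule format), a data-flow guardrail like the PII example might be declared along these lines:

```yaml
# Hypothetical rule declaration -- illustrative syntax only.
rule: pii-redaction-before-analytics
severity: high
description: >
  No PII may flow from the import endpoint to the analytics
  pipeline without passing through a redaction step.
source: handlers/import_endpoint
sink: pipelines/analytics
sanitizer: lib/redact.scrub_pii
```

The key difference from a signature match is that source, sink, and sanitizer refer to components of your system, so the check follows how data actually moves between them.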
Do you store a copy of my code?
We don't permanently store your code.
During onboarding and scans, our cloud agents create an ephemeral working clone of your repository. We may reuse that clone briefly across nearby jobs to reduce latency, but only while it is actively needed. In production, cleanup runs every 15 minutes. Once a cached clone is no longer active, it is reclaimed automatically, with a maximum retention of 1 hour and often less.

To support fast, relevant scans, we generate and store a derived semantic index of your code using embeddings (numerical representations). This helps our tools find related code patterns without storing another human-readable copy of your repository. These numerical embeddings are stored in our database and deleted when you remove the repository.
Use .damsecure-ignore to block agent access to specific files that could expose sensitive data (see repo-specific excludes).
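For example, a `.damsecure-ignore` file might look like the following (glob-style patterns are an assumption here; see repo-specific excludes for the supported syntax):

```
# Hypothetical .damsecure-ignore -- pattern syntax is an assumption;
# see repo-specific excludes for the supported format.
secrets/**
*.pem
config/production.env
```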
NOTE: Dam Secure does not have write access to your repository.
Dam Secure is SOC 2 Type II compliant; see our trust center at trust.damsecure.ai.
What happens when you onboard my repository?
Your codebase is analysed to build a security knowledge graph. This knowledge graph gives our engine the context it needs to find security issues efficiently and to work with larger repositories and monorepos.
Below are the high-level steps involved in building the security knowledge graph:
- Structural Analysis — We walk your repository to discover the projects inside it (services, apps, libraries, infrastructure packages) and profile each one by language and framework. We also select ignore patterns to decide which files won't be scanned for security issues (see file exclusions).
- Rule Curation — We look at your codebase the way an experienced AppSec engineer would by figuring out risky hotspots and sharp edges. From there, we pull guardrails from our catalog where they fit, and write tailored ones where they don't, so every risky surface ends up covered.
- Security Indexing — We build the embeddings and analysis artifacts our engine uses to reason about your code. This is what makes scans fast and lets us work efficiently across larger repositories and monorepos.
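A toy sketch of how a derived embedding index supports semantic lookup (the vectors below are hand-made stand-ins; real embeddings come from a model): code is stored only as numeric vectors, and related patterns are found by similarity rather than by keeping another readable copy of the source.

```python
import math

# Toy semantic index (illustrative only): snippets are stored as numeric
# vectors, not as readable source, and lookups rank by cosine similarity.
# Real embeddings would come from a model; these vectors are hand-made.

INDEX = {
    "auth/login_handler": [0.9, 0.1, 0.0],
    "billing/charge_card": [0.1, 0.9, 0.2],
    "auth/token_refresh": [0.8, 0.2, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def related(query_vec, k=2):
    """Return the k index entries most similar to the query vector."""
    ranked = sorted(INDEX, key=lambda name: cosine(query_vec, INDEX[name]),
                    reverse=True)
    return ranked[:k]

# A query vector near the "auth" cluster retrieves the auth-related entries.
print(related([0.85, 0.15, 0.05]))
```

The design point is that the index is derived data: it can always be rebuilt from the repository and can be discarded independently of it.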
Additional steps are outlined in repository onboarding.
How do you curate rules for my repository?
Our repository analysis workflow includes a comprehensive catalog of risks that we evaluate against every repository onboarded. A dynamic risk profile is created for the repository by combining two signals: the technology stack we detect (languages, frameworks, infrastructure) and direct evidence we find in your code through our security knowledge graph.
For each risk in the catalog, we run targeted queries to determine whether the relevant controls are already in place or whether security anti-patterns are present. Every match is then verified against the actual source code by an AI AppSec engineer, so rules are surfaced only from confirmed evidence, never from superficial keyword hits.
From there, we activate rules from our catalog that apply to your repository and generate tailored rules with rationale and severity for risks unique to your codebase. Each rule is mapped to the specific projects where the evidence was found.