How Atlas Review Works
Atlas Review is not just a prompt wrapped around a code diff. It is a review system with stable scope, trusted inputs, persisted outcomes, and review-specific controls.
If you ask a general-purpose agent to “review this code,” you can still get useful feedback. But that workflow usually depends on whatever files, diff context, and instructions you happen to provide in that moment. Atlas Review is designed to make code review more reliable than that.
Why Atlas Review Is Different
1. Atlas reviews a defined branch or PR scope
Atlas Review works against a real review target:
- a branch review for iterative local work
- a PR review for an explicit pull request
That gives the review a stable identity instead of a one-off conversation. You are not relying on someone to paste the right files or remember which commits were already reviewed.
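As a rough sketch of what a stable review identity could look like, the snippet below models a review target that keys reruns to the same branch or PR rather than to a one-off conversation. All names here (`ReviewTarget`, `identity_key`) are illustrative assumptions, not Atlas's actual API.

```python
# Hypothetical sketch: a review target with a stable identity key.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ReviewTarget:
    kind: str                        # "branch" or "pr"
    branch: str                      # branch under review
    pr_number: Optional[int] = None  # set only for PR reviews

    def identity_key(self) -> str:
        """Stable key so reruns attach to the same review, not a new chat."""
        if self.kind == "pr":
            return f"pr/{self.pr_number}"
        return f"branch/{self.branch}"

target = ReviewTarget(kind="branch", branch="feature/login")
print(target.identity_key())  # branch/feature/login
```

Because the key is derived from the target rather than the conversation, a rerun after fixes resolves to the same review record.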
2. Atlas keeps a trusted review baseline
For branch and PR review, Atlas anchors review against a trusted base branch or PR base. That matters because review quality drops quickly when the baseline is ambiguous.
In practice, this means Atlas can distinguish between:
- the trusted repo state your team should rely on
- local working-tree previews you may want to test before committing
That trust model is especially important when you are experimenting with review policy changes or rerunning a review after fixes.
3. Atlas shapes review effort intentionally
Atlas does not need to spend the same amount of effort on every file.
With the review policy file, Atlas can treat:
- source code as `deep`
- migrations and config as `structural`
- tests as `shallow`
- lockfiles and snapshots as `collapsed`
- generated output as `skip`
That is much better than a generic review prompt that forces the model to guess what matters most in a diff.
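One way to picture path-shaped effort is a policy that maps glob patterns to depth levels, first match winning. The patterns, level names, and matching scheme below are assumptions for illustration, not Atlas's actual policy format.

```python
# Illustrative sketch: map file paths to review depth via glob patterns.
from fnmatch import fnmatch

POLICY = [                           # first match wins, top to bottom
    ("**/generated/**", "skip"),
    ("*.lock", "collapsed"),
    ("**/__snapshots__/**", "collapsed"),
    ("tests/**", "shallow"),
    ("migrations/**", "structural"),
    ("**", "deep"),                  # default: full-depth review
]

def depth_for(path: str) -> str:
    for pattern, depth in POLICY:
        if fnmatch(path, pattern):
            return depth
    return "deep"

print(depth_for("src/auth/session.py"))  # deep
print(depth_for("tests/test_auth.py"))   # shallow
print(depth_for("poetry.lock"))          # collapsed
```

Ordering the policy from most to least specific keeps the default (`deep`) as the fallback rather than the first answer.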
4. Atlas persists findings and outcomes
Atlas keeps review findings as tracked units, not just text in a chat thread.
That means you can:
- inspect the finding again later
- mark it `addressed` or `dismissed`
- rerun review and compare against the new snapshot
- publish the current PR review state when it is ready
Generic agent review usually stops at “here is a list of issues.” Atlas Review keeps the result attached to the branch or PR workflow.
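A minimal sketch of what "findings as tracked units" could mean is below: a finding with an anchor, a status that moves from open to addressed or dismissed, and a rerun helper that filters to what still needs attention. Field and status names are hypothetical, not Atlas's schema.

```python
# Hypothetical sketch of a persisted finding with tracked outcomes.
from dataclasses import dataclass

@dataclass
class Finding:
    anchor: str           # file:line the finding is attached to
    summary: str
    status: str = "open"  # open -> addressed | dismissed

    def mark(self, outcome: str) -> None:
        if outcome not in {"addressed", "dismissed"}:
            raise ValueError(f"unknown outcome: {outcome}")
        self.status = outcome

def still_open(findings):
    """After a rerun, only open findings need attention;
    addressed/dismissed outcomes carry over."""
    return [f for f in findings if f.status == "open"]

f1 = Finding("src/db.py:42", "missing transaction rollback")
f2 = Finding("src/api.py:10", "unvalidated input")
f1.mark("addressed")
print(len(still_open([f1, f2])))  # 1
```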
5. Atlas expects evidence-oriented review
Atlas Review is built around evidence, anchors, and recommendation discipline. It is designed to avoid the common failure mode where a generic agent produces speculative or style-heavy feedback with weak grounding.
In practice, Atlas can enforce:
- evidence-oriented findings
- recommendation budgets
- project-specific review guidance
- explicit reruns after fixes
That makes the output more usable in an engineering workflow.
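To make "recommendation budgets" concrete, here is one way such a cap could work: keep at most a fixed number of recommendations, highest severity first. The budget value, severity scale, and function names are assumptions for illustration only.

```python
# Illustrative sketch of a recommendation budget: cap how many
# recommendations a review emits, keeping the highest-severity ones.
SEVERITY_RANK = {"high": 0, "medium": 1, "low": 2}

def apply_budget(recommendations, budget=5):
    """Keep at most `budget` recommendations, highest severity first."""
    ranked = sorted(recommendations, key=lambda r: SEVERITY_RANK[r["severity"]])
    return ranked[:budget]

recs = [
    {"id": 1, "severity": "low"},
    {"id": 2, "severity": "high"},
    {"id": 3, "severity": "medium"},
]
print([r["id"] for r in apply_budget(recs, budget=2)])  # [2, 3]
```

A cap like this forces the review to prioritize rather than flood the author with every possible nit.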
6. Atlas fits into the rest of the review lifecycle
Atlas is not only a review generator. It also sits inside the workflow around the review:
- branch review loops while a change is still evolving
- PR review when the review needs to map to GitHub publication
- findings and outcomes that persist across reruns
- agent integrations that read the same review state instead of starting from scratch each time
The result is a system of record for review, not just a one-time answer.
What A Generic Agent Review Usually Misses
A strong general-purpose agent can still be useful, but it typically lacks some or all of the following:
- stable branch or PR identity
- a trusted baseline for the diff
- persisted findings and outcome tracking
- review policy files that shape effort by path
- explicit rerun discipline tied to a fresh snapshot
- publication flow for PR review
That is why Atlas Review is better treated as the primary review surface, with connected agents acting as interfaces into that system.
A Simple Mental Model
Think of Atlas Review as doing six things in order:
1. Identify the review target: branch review or PR review.
2. Resolve the trusted base and current reviewable change set.
3. Apply review policy and scope-shaping rules.
4. Generate findings with evidence and recommendation discipline.
5. Persist findings, outcomes, and review status.
6. Support reruns or publication from that same review scope.
That is the difference between “ask an agent to review code” and “run a review system.”
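The six steps above can be sketched as a pipeline. Every function name here is an illustrative stand-in; Atlas's real internals are not shown in this document.

```python
# Hypothetical sketch of the six steps as one pipeline.
def run_review(target):
    base, changes = resolve_baseline(target)   # 2. trusted base + change set
    scoped = apply_policy(changes)             # 3. policy / scope shaping
    findings = generate_findings(base, scoped) # 4. evidence + discipline
    persist(target, findings)                  # 5. findings, outcomes, status
    return findings                            # 6. ready for rerun/publication

# Stub implementations so the sketch runs end to end (step 1, identifying
# the target, happens before run_review is called).
def resolve_baseline(target): return ("main", ["src/app.py"])
def apply_policy(changes): return [(p, "deep") for p in changes]
def generate_findings(base, scoped): return [f"{p}: reviewed at {d}" for p, d in scoped]
def persist(target, findings): pass

print(run_review("branch/feature-x"))  # ['src/app.py: reviewed at deep']
```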
When To Use Which
Use Atlas Review when:
- you want a reliable branch or PR review workflow
- you need findings that persist across reruns
- you want policy-driven review depth
- you want review output anchored to a real repo state
Use an ad hoc agent review when:
- you want a quick second opinion on a small code fragment
- you are brainstorming before a formal review exists
- you need exploratory discussion, not a tracked review result
The two can work together, but Atlas Review should be the source of truth when the review matters.