
Every AI code reviewer has the same problem: it reviews everything, whether you asked for it or not. Security findings next to style nitpicks. Accessibility warnings on an internal admin tool. Performance suggestions on a throwaway script.

The signal-to-noise ratio kills adoption. Developers start ignoring the bot. The findings that matter get buried under the ones that don't.

Today we're shipping AI Review Preferences — interactive calibration that lets you tell QualityMax exactly what to check, what to skip, and what to focus on. Per-user globally, per-repo when you need overrides.

The Problem: One-Size-Fits-All Reviews

A fintech startup cares about security and secrets_scanning on every PR. An early-stage consumer app cares about performance and accessibility. A solo founder building an internal tool doesn't want style or type_safety noise at all.

Until now, you got all eight categories whether you wanted them or not. The reviewer had no way to know what actually mattered to you.

The Fix: Ask Once, Remember Forever

Connect to QualityMax from Claude Code, Cursor, qmax-code, or any MCP client. If it's your first time, the AI asks one calibration question:

"Which review categories should I focus on for your repos?"

You get eight toggles:

- Security: ON
- Secrets scanning: ON
- Performance: ON
- Test coverage: ON
- Type safety: ON
- Accessibility: ON
- Style: OFF
- AI agent safety: ON

Plus a custom_focus field for free-text instructions: "Only flag PCI-DSS compliance issues in the auth module" or "Skip anything in the /legacy/ directory".

Set it globally once, override per-repo when you need to. The settings persist across sessions — the reviewer remembers what you told it.
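As a sketch, the layered settings could look something like this. The field names and the repo name `acme/admin-tool` are illustrative assumptions; only the eight category toggles and `custom_focus` are confirmed above:

```json
{
  "global": {
    "security": true,
    "secrets_scanning": true,
    "performance": true,
    "test_coverage": true,
    "type_safety": true,
    "accessibility": true,
    "style": false,
    "ai_agent_safety": true,
    "custom_focus": "Skip anything in the /legacy/ directory"
  },
  "repo_overrides": {
    "acme/admin-tool": {
      "accessibility": false,
      "custom_focus": "Only flag PCI-DSS compliance issues in the auth module"
    }
  }
}
```

Anything set under a repo override wins over the global value for that repo; everything else falls through to your global defaults.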

Structured Findings That Tell You What to Do

Before today, QualityMax's PR comments looked like a table of severities and suggestions. Useful, but not actionable. Now every finding is a structured card:

Finding 1: security

File: demo_vuln_test.go:18 | Severity: critical

What: SQL injection vulnerability: user input concatenated directly into SQL query via fmt.Sprintf

Fix: Use parameterized queries with placeholders (e.g., db.QueryRow("SELECT * FROM users WHERE id = ?", userID))

Expand "Fix with your LLM agent" for a copy-paste command

Finding 2: security

File: demo_vuln_test.go:23 | Severity: critical

What: Command injection: unsanitized user input passed to shell via exec.Command with 'sh -c'

Fix: Use exec.Command with explicit argument list (no 'sh -c'). Validate and sanitize input.

Expand "Fix with your LLM agent" for a copy-paste command

One Command to Fix Everything

Every finding includes a collapsible "Fix with your LLM agent" block with a generic <your-llm-agent> command you can paste into your terminal. But the real power is at the bottom of the PR comment:

<your-llm-agent> "Fix all QualityMax review findings in this PR:
- security in demo_vuln_test.go:15: Hardcoded API secret
- security in demo_vuln_test.go:18: SQL injection via string concat
- security in demo_vuln_test.go:23: Command injection via sh -c
- security in demo_vuln_test.go:28: Open redirect, unvalidated URL
- security in demo_vuln_test.go:34: Password leaked in error response"

One paste. Your LLM agent reads the full context, fixes all five findings, and you push. The review cycle that used to take 30 minutes of back-and-forth now takes 30 seconds.

How It Works Under the Hood

  1. Preferences are stored per-user, per-repo in a dedicated Postgres table. Global defaults + repo-level overrides. The merge_preferences() function layers: system defaults → user global → repo override.
  2. Two new MCP tools, get_review_preferences and set_review_preferences, let any MCP client (Claude Code, Cursor, qmax-code) read and write them conversationally.
  3. The reviewer prompt is dynamically constructed: enabled categories become "Focus on: security, performance, test_coverage"; disabled categories become "SKIP entirely: style, accessibility". The LLM follows these instructions and filters its findings accordingly.
  4. Both gates respect preferences: the Alpha gate (AI diff analysis) and the SAST gate (security scanning) both load your preferences before generating findings.

Try It Now

If you're already connected via Claude Code or qmax-code, just say:

set my review preferences — turn off style and accessibility, add custom focus "only flag auth-related security issues"

The AI walks you through it. If you're new, connect in 3 steps and the calibration happens on your first message.

Start reviewing smarter

Connect Claude Code to QualityMax and tell it what you actually care about. The reviewer adapts to you — not the other way around.

Get Started