How it works

A cited corpus, deterministic checks, and a human with the final call.

Safe to Publish is not a chatbot. Under the hood is a retrieval-and-verification pipeline designed around one principle: an examiner should be able to trace every flag back to a specific chunk of authority.

Step 1 · Corpus

The rule text, chunked and cited.

We maintain a versioned corpus: Rule 206(4)-1, the adopting release, 41 Division of Investment Management staff FAQs, and every Division of Examinations risk alert. Each chunk carries an immutable ID that every flag refers back to.

When a new risk alert drops, we ingest it within seven days, tag it, and stamp the new corpus version on every review going forward.
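In code, a corpus chunk might look like the sketch below. The field names are hypothetical, not the actual Safe to Publish schema; the point is that the ID is immutable and the corpus version travels with every chunk.

```python
from dataclasses import dataclass

# Hypothetical sketch of a corpus chunk record. Field names are
# illustrative; the real schema isn't published.
@dataclass(frozen=True)  # frozen: a chunk's ID never changes once ingested
class CorpusChunk:
    chunk_id: str        # immutable ID that flags cite, e.g. "rule-206-4-1-a-2"
    source: str          # rule text, staff FAQ, or risk alert
    text: str            # the chunked authority text
    corpus_version: str  # e.g. "v2026-04-22", stamped on every review

chunk = CorpusChunk(
    chunk_id="rule-206-4-1-a-2",
    source="Rule 206(4)-1",
    text="Reasonable basis for material statements of fact",
    corpus_version="v2026-04-22",
)
```

Because the dataclass is frozen, any attempt to rewrite a chunk ID raises an error, which is one way to make "immutable ID" a property of the code rather than a convention.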

Live corpus · v2026-04-22
rule-206-4-1-a-2 · Reasonable basis for material statements of fact
rule-206-4-1-b-1-i-A · Testimonial client-status disclosure
rule-206-4-1-d-1-gross-net · Gross vs. net performance — equal prominence
rule-206-4-1-d-6-hypothetical · Hypothetical performance preconditions
risk-alert-2025-12-16-hyperlink-fail · Disclosures behind a hyperlink
faq-2023-12-comp-specifics · Specific vs. vague compensation disclosure
Deterministic checks · always run
DET-01 · Regex: "guarantee", "risk-free", "never lost"
DET-04 · Performance % without neighboring "net of fees"
DET-07 · "Important disclosures available at" → hyperlink fail
DET-09 · Testimonial pattern: quoted block + byline initials
DET-12 · "SEC approved", "SEC-reviewed", "endorsed by the SEC"
DET-14 · "Backtested", "hypothetical", "would have returned"
Step 2 · Checks

Deterministic patterns catch the textbook gotchas.

Some violations are deterministic — they're exact phrases the staff has already called out. Those run as regexes on every review, independent of the model. If the model misses one, the deterministic layer still flags it.
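A minimal sketch of that layer, using a few of the checks named on this page (the regex patterns here are illustrative restatements, not the production patterns):

```python
import re

# Illustrative restatement of the deterministic layer: each check is a
# compiled regex that runs on every review, independent of the model.
DETERMINISTIC_CHECKS = {
    "DET-01": re.compile(r"\b(guarantee\w*|risk-free|never lost)\b", re.I),
    "DET-12": re.compile(r"\bSEC[ -](approved|reviewed)\b|\bendorsed by the SEC\b", re.I),
    "DET-14": re.compile(r"\b(backtested|hypothetical|would have returned)\b", re.I),
}

def run_deterministic_checks(draft: str) -> list[str]:
    """Return the IDs of every check that fires on the draft."""
    return [check_id for check_id, pattern in DETERMINISTIC_CHECKS.items()
            if pattern.search(draft)]

flags = run_deterministic_checks("Our strategy is risk-free and SEC approved.")
# flags → ["DET-01", "DET-12"]
```

Because these checks are plain pattern matches, they produce the same result on every run — which is exactly why they sit outside the model.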

The LLM handles the nuance: context-sensitive claims, implied inferences, whether a testimonial's disclosures are complete. Every LLM flag must resolve to a real corpus ID — flags whose citation doesn't resolve are silently dropped before you see them. Zero hallucinated citations, by construction.
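The citation gate reduces to a set-membership check. A sketch, with illustrative names — a flag either resolves against the live corpus and is surfaced, or it doesn't and is dropped:

```python
# Sketch of the citation gate: an LLM flag survives only if the chunk ID
# it cites resolves in the live corpus. Names here are illustrative.
def resolve_flags(llm_flags: list[dict],
                  corpus_ids: set[str]) -> tuple[list[dict], list[dict]]:
    """Split flags into (surfaced, dropped) by citation resolution."""
    surfaced = [f for f in llm_flags if f["citation"] in corpus_ids]
    dropped = [f for f in llm_flags if f["citation"] not in corpus_ids]
    return surfaced, dropped

corpus_ids = {"rule-206-4-1-a-2", "faq-2023-12-comp-specifics"}
llm_flags = [
    {"citation": "rule-206-4-1-a-2", "note": "unsupported performance claim"},
    {"citation": "rule-999-invented", "note": "cites a chunk that doesn't exist"},
]
surfaced, dropped = resolve_flags(llm_flags, corpus_ids)
# surfaced cites only real chunks; dropped is what the review's audit
# metadata records as droppedHallucinations
```

The guarantee is structural: a hallucinated citation can't reach the reviewer because it fails the membership test before display.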

Step 3 · Decision

The tool flags. The firm decides. The log remembers.

Nothing is ever "approved" by Safe to Publish. A flag is a recommendation for the CCO or reviewer to act on — apply the rewrite, dismiss with a note explaining why it doesn't apply, or escalate to counsel. Every decision appends to your audit log.
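An append-only decision log can be as simple as one JSON line per decision. This is a sketch under assumed field names — the real log format isn't published:

```python
import datetime
import json

# Hypothetical sketch of an append-only audit entry; field names are
# illustrative, not the actual Safe to Publish log schema.
def append_decision(log_path: str, flag_id: str,
                    decision: str, note: str = "") -> None:
    """Append one reviewer decision (apply / dismiss / escalate) as a JSON line."""
    entry = {
        "flag_id": flag_id,
        "decision": decision,  # "apply", "dismiss", or "escalate"
        "note": note,          # the reviewer's rationale, e.g. when dismissing
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:  # append-only: entries are never rewritten
        log.write(json.dumps(entry) + "\n")
```

Appending rather than updating is the design choice that matters: the log shows what the reviewer decided at the time, not a later revision of it.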

We flag what an SEC examiner might cite. You decide whether the flag applies to your firm's situation, your ADV brochure, and your fee schedule — none of which we know, and none of which we presume.

What Safe to Publish will — and won't — ever say

Our voice rules, in writing.

We will never say "approved"

No draft is "SEC-approved," "approved by compliance," or "cleared to publish." The tool makes recommendations; people make approvals.

We will never say "guaranteed compliant"

No tool can guarantee compliance. What we guarantee is that every flag we surface cites a real rule chunk you can defend in an exam.

We will never invent a citation

Flags that can't resolve to a corpus chunk are dropped. If you want to know what was dropped, open the review and look at droppedHallucinations in the audit metadata.

We will never train on your drafts

Your content is processed by isolated inference in us-east-1. It is not used to train models, ours or anyone else's.

Trust

Audit posture.

ZDR · Anthropic Zero Data Retention
AES-256 · At rest · TLS 1.3 in transit
7 yr · Retention · Rule 204-2
DPA · Available on every plan

Ready to publish with confidence?

Open the app →