After completing this topic, you will understand how Rules, Reports, Annotations, and Campaigns work together and know where to start when reviewing a Run.
How Review works
Review in Sift is organized around a four-object pipeline: Rule → Report → Annotation → Campaign. Each object has a distinct role. Understanding what each one does and why they are separate is the fastest way to orient yourself before starting a review.

The review pipeline
Rules
A Rule defines what to look for in your telemetry. You write a logical condition using the Common Expression Language (CEL), for example flagging when a channel exceeds a threshold or when a log message contains a specific string. Rules are reusable across Runs, versioned over time, and shared across your team. Rules are the starting point for anyone setting up automated detection.
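As a minimal sketch of what a Rule condition looks like: the snippet below evaluates a CEL expression locally using the open-source cel-python (`celpy`) package, purely for illustration. Sift evaluates Rules server-side, and the channel name and threshold here are hypothetical, not real Sift identifiers.

```python
# Illustrative only: evaluating a CEL rule condition locally with the
# open-source cel-python package (pip install cel-python). Sift evaluates
# Rules server-side; the channel name and threshold here are hypothetical.
import celpy

env = celpy.Environment()

# Flag any sample where chamber pressure exceeds 150.0 (units assumed).
ast = env.compile("chamber_pressure > 150.0")
rule = env.program(ast)

for sample in [148.2, 151.7, 149.9]:
    triggered = rule.evaluate({"chamber_pressure": celpy.json_to_cel(sample)})
    print(sample, "-> issue" if triggered else "-> ok")
```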
Reports
A Report evaluates one or more Rules against a specific Run and collects the results in one place. When you generate a Report, Sift runs each Rule’s expression against the Run’s telemetry and shows you which Rules passed and which generated issues. Reports are the primary workspace for reviewing a single Run. Most reviewers spend the majority of their time here.
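To make the Rule → Report relationship concrete, here is a conceptual sketch of what a Report does: run every Rule against each telemetry sample and record which Rules fired. This is not the Sift API; the rule names, channel names, toy Run structure, and report layout are all invented for illustration, again using `celpy`.

```python
# Conceptual sketch of a Report: evaluate several Rules against one Run's
# telemetry and record which Rules passed and which generated issues.
# Not the Sift API; all names and structures here are hypothetical.
import celpy

env = celpy.Environment()
rules = {
    "over_pressure": env.program(env.compile("chamber_pressure > 150.0")),
    "fault_in_log": env.program(env.compile('log_message.contains("FAULT")')),
}

# A toy "Run": timestamped samples of two channels.
run = [
    {"t": 0.0, "chamber_pressure": 148.2, "log_message": "nominal"},
    {"t": 0.1, "chamber_pressure": 151.7, "log_message": "nominal"},
    {"t": 0.2, "chamber_pressure": 149.9, "log_message": "FAULT: sensor 3"},
]

report = {name: [] for name in rules}
for sample in run:
    activation = {k: celpy.json_to_cel(v) for k, v in sample.items() if k != "t"}
    for name, program in rules.items():
        if program.evaluate(activation):
            report[name].append(sample["t"])  # timestamps where the Rule fired

for name, hits in report.items():
    print(f"{name}: {'issues at ' + str(hits) if hits else 'passed'}")
```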
Annotations
When a Rule’s condition evaluates to true during a Run, Sift creates an Annotation: a timestamped marker linked to the specific channel values that triggered the Rule. Annotations can also be created manually in Explore. There are two types (a data-model sketch follows this list):
- Phase Annotations: informational markers for milestones such as “Engine Ignition” or “Max-Q”. They have no status and cannot be assigned.
- Data Review Annotations: issue-tracking entries with a status workflow (Open, Failed, Accepted), an assignee field, and a comment thread. Use these when data needs investigation.
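As a rough data-model sketch of the difference between the two types: Phase Annotations carry only a timestamp and a name, while Data Review Annotations add the status workflow, assignee, and comment thread described above. The class and field names below are illustrative, not Sift’s actual schema.

```python
# Rough data-model sketch of the two Annotation types described above.
# Class and field names are illustrative, not Sift's actual schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


@dataclass
class PhaseAnnotation:
    """Informational milestone marker: no status, no assignee."""
    timestamp: float
    name: str  # e.g. "Engine Ignition" or "Max-Q"


class ReviewStatus(Enum):
    OPEN = "Open"
    FAILED = "Failed"
    ACCEPTED = "Accepted"


@dataclass
class DataReviewAnnotation:
    """Issue-tracking entry: carries a status workflow and an owner."""
    timestamp: float
    channel: str                          # channel whose values triggered the Rule
    status: ReviewStatus = ReviewStatus.OPEN
    assignee: Optional[str] = None        # who is responsible for the investigation
    comments: list[str] = field(default_factory=list)
```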