Annotations

Overview

Annotations in Sift represent key events or findings identified during the evaluation of a Run. They highlight moments of interest such as anomalies, threshold crossings, or behavioral changes, and are created either automatically, from a Rule's evaluations, or manually, from user observations.

Annotations are tightly integrated with Sift's analysis workflows. When a Rule's logical condition evaluates to true during a Run, it can automatically generate an Annotation, providing a timestamped marker linked to specific Channels and values. Users can also manually create Annotations during data review to capture observations, flag issues, or leave comments. Each Annotation captures detailed context, including the triggered Rule if applicable, Asset metadata, and relevant Channel values, enabling faster diagnostics and clearer storytelling.
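To make this flow concrete, the sketch below models a Rule evaluation that emits an Annotation when its condition evaluates to true. It is a minimal illustration in Python; the record fields and function names are assumptions for this example, not Sift's actual API or schema.

```python
# Hypothetical sketch of rule-triggered Annotation creation.
# Field and function names are illustrative, not Sift's actual schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Dict, Optional

@dataclass
class Annotation:
    timestamp: datetime                      # when the event occurred in the Run
    message: str                             # summary or user comment
    rule_name: Optional[str] = None          # triggering Rule, if auto-generated
    asset: Optional[str] = None              # Asset metadata for the Run
    channel_values: Dict[str, float] = field(default_factory=dict)

def evaluate_rule(
    rule_name: str,
    condition: Callable[[Dict[str, float]], bool],
    sample: Dict[str, float],
    asset: str,
) -> Optional[Annotation]:
    """Emit an Annotation when the Rule's condition evaluates to true."""
    if not condition(sample):
        return None
    return Annotation(
        timestamp=datetime.now(timezone.utc),
        message=f"Rule '{rule_name}' triggered",
        rule_name=rule_name,
        asset=asset,
        channel_values=dict(sample),
    )

# Example: flag a threshold crossing on a temperature Channel.
sample = {"motor.temperature": 104.2}
annotation = evaluate_rule(
    "overtemp", lambda s: s["motor.temperature"] > 100.0, sample, asset="motor-01"
)
```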

Annotations can also be assigned to team members for follow-up and action, and are associated with states such as Open, Failed, or Accepted to track their progress through the review and resolution process.
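The review workflow can be pictured the same way. The hypothetical sketch below models the states named above, one possible interpretation of each, and assignment to a team member; the names and semantics are assumptions for illustration rather than Sift's actual API.

```python
# Hypothetical triage model for an Annotation; names and state
# meanings are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class AnnotationState(Enum):
    OPEN = "open"          # awaiting review
    FAILED = "failed"      # review confirmed a real issue
    ACCEPTED = "accepted"  # review deemed the behavior acceptable

@dataclass
class Triage:
    state: AnnotationState = AnnotationState.OPEN
    assignee: Optional[str] = None

    def assign(self, member: str) -> None:
        """Hand the Annotation to a team member for follow-up."""
        self.assignee = member

    def resolve(self, accepted: bool) -> None:
        """Close out the review by accepting or failing the Annotation."""
        self.state = AnnotationState.ACCEPTED if accepted else AnnotationState.FAILED

triage = Triage()
triage.assign("avionics-team")
triage.resolve(accepted=False)  # state is now FAILED
```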


Reports consolidate Annotations generated during a Run, offering a comprehensive view of system performance. These Reports link each Annotation to its originating Rule and current status, providing engineers with actionable insights to improve operational reliability. By analyzing Reports, teams can quickly identify patterns, correlate system behaviors, assign ownership of issues, and refine maintenance strategies.
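The rollup a Report provides can be approximated as a grouping over Annotations. The short sketch below tallies them by originating Rule and current state; it is a simplified stand-in, since a real Report links each entry back to far richer context.

```python
# Simplified stand-in for a Report rollup: count Annotations per (Rule, state).
from collections import Counter
from typing import Iterable, Tuple

def summarize_report(annotations: Iterable[Tuple[str, str]]) -> Counter:
    """Tally (rule_name, state) pairs to surface patterns across a Run."""
    return Counter(annotations)

report = summarize_report([
    ("overtemp", "open"),
    ("overtemp", "accepted"),
    ("pressure-drop", "failed"),
])
for (rule, state), count in report.most_common():
    print(f"{rule:>14}  {state:<9} {count}")
```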

