Rule Evaluation
Create reports by evaluating report templates and rules against runs.
Service Endpoints
The Rule Evaluation Service provides two main endpoints:
- EvaluateRules: Evaluates rules against a run or asset and creates a report that contains the generated annotations.
- EvaluateRulesPreview: Performs a dry run evaluation, showing what annotations would be generated without actually creating them.
See the full API definitions for gRPC and REST.
Learn more about rules in the Sift support portal.
Evaluating Rules
To evaluate rules and generate annotations, use the RuleEvaluationService.EvaluateRules endpoint. This operation creates a report when rules are evaluated against a run.
Request Structure
The EvaluateRulesRequest message (see the full API definitions for gRPC and REST) consists of several components:
- time: Specify either a run or assets with a time range to evaluate
  - run: The resource identifier for a specific run
  - assets: A time range for assets to evaluate against
- mode: Specify which rules to evaluate
  - rules: Evaluate from current rule versions
  - rule_versions: Evaluate from specific rule versions
  - report_template: Evaluate using rules from a report template
- annotation_options: Options for creating annotations
- organization_id: Only required if your user belongs to multiple organizations
- report_name: Optional name for the generated report
Time vs. Mode
You must specify exactly one option from the time oneof and exactly one option from the mode oneof.
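As a rough illustration, a request that evaluates the current versions of specific rules against a run might be built and sent as JSON along the lines of the sketch below. Only the top-level field names (run, rules, annotation_options, report_name) come from the components listed above; the base URL, REST path, authentication header, and nested field shapes are assumptions, so consult the full API definitions before relying on them.

```python
import requests

# The base URL, REST path, auth header, and nested field shapes below are
# assumptions for illustration only. The top-level field names come from the
# request components listed above.
SIFT_BASE_URL = "https://your-sift-instance.example.com"  # hypothetical
API_KEY = "your-api-key"                                   # hypothetical

evaluate_rules_request = {
    # "time" oneof: evaluate against a single run
    "run": {"id": "your-run-id"},                   # shape is an assumption
    # "mode" oneof: evaluate the current versions of specific rules
    "rules": {"ids": ["rule-id-1", "rule-id-2"]},   # shape is an assumption
    "annotation_options": {"tags": ["nightly-check"]},  # shape is an assumption
    "report_name": "Nightly rule evaluation",
}

response = requests.post(
    f"{SIFT_BASE_URL}/api/v1/rules:evaluate",       # path is an assumption
    json=evaluate_rules_request,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()
print(response.json())
```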
Using AssetsTimeRange
When evaluating rules against assets, you need to specify a time range:
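For example, the time portion of the request might look roughly like this; the nested field names and timestamp format inside assets are assumptions.

```python
# Hypothetical "time" portion of an EvaluateRulesRequest when evaluating
# against assets instead of a run. The nested field names (asset_names,
# start_time, end_time) and the ISO 8601 timestamp format are assumptions.
assets_time_range = {
    "assets": {
        "asset_names": ["my-asset"],
        "start_time": "2024-01-01T00:00:00Z",
        "end_time": "2024-01-02T00:00:00Z",
    }
}
```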
Rule Evaluation Modes
You can evaluate rules in several ways:
From Current Rule Versions
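A minimal sketch of the mode portion when evaluating the current versions of specific rules; the nested shape is an assumption.

```python
# "mode" oneof: evaluate the current versions of specific rules.
# The nested {"ids": [...]} shape is an assumption.
mode = {"rules": {"ids": ["rule-id-1", "rule-id-2"]}}
```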
From Report Template
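A minimal sketch of the mode portion when evaluating the rules attached to a report template; the nested shape is an assumption.

```python
# "mode" oneof: evaluate using the rules from a report template.
# The nested {"id": ...} shape is an assumption.
mode = {"report_template": {"id": "report-template-id"}}
```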
From Rule Versions
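A minimal sketch of the mode portion when evaluating specific rule versions; the nested field name and shape are assumptions.

```python
# "mode" oneof: evaluate specific (pinned) rule versions.
# The nested field name and shape are assumptions.
mode = {"rule_versions": {"rule_version_ids": ["rule-version-id-1"]}}
```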
Annotation Options
Specify tags for the annotations that will be created:
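For example, the annotation options might carry a list of tag names, roughly as sketched below; the exact field name and shape are assumptions.

```python
# Hypothetical annotation_options portion of the request. That it carries
# tags comes from the sentence above; the field name and shape are assumptions.
annotation_options = {"tags": ["nightly-check", "regression"]}
```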
Response Structure
The EvaluateRulesResponse message (see the full API definitions for gRPC and REST) includes:
- created_annotation_count: Total number of annotations created by the rule evaluation
- report_id: ID of the generated report (if rules were evaluated against a run)
- job_id: ID of the asynchronous job (if the rule evaluation is being processed asynchronously)
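A sketch of handling the response, assuming it has been decoded from JSON into a dictionary; the field names match the list above, and the sample values are made up.

```python
# Handle an EvaluateRulesResponse decoded from JSON. Field names mirror the
# list above; the example values passed in at the bottom are made up.
def handle_evaluate_rules_response(body: dict) -> None:
    print(f"Annotations created: {body.get('created_annotation_count', 0)}")
    if body.get("report_id"):
        print(f"Report generated: {body['report_id']}")
    if body.get("job_id"):
        print(f"Evaluation running asynchronously as job {body['job_id']}")

handle_evaluate_rules_response(
    {"created_annotation_count": 3, "report_id": "example-report-id"}
)
```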
Asynchronous Processing
For evaluations that may take longer to process, the service returns a job_id indicating that the operation is being processed asynchronously. You can use this ID to check the status of the job in the job service.
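If you receive a job_id, you might poll the job service until the job reaches a terminal state, along the lines of the purely hypothetical sketch below; the job endpoint path, response fields, and status values are all assumptions, so check the job service API reference for the real shape.

```python
import time
import requests

# Purely hypothetical polling sketch: the job endpoint path, the "status"
# field, and the terminal state names are assumptions.
SIFT_BASE_URL = "https://your-sift-instance.example.com"  # hypothetical
API_KEY = "your-api-key"                                   # hypothetical

def wait_for_job(job_id: str, poll_seconds: float = 5.0) -> dict:
    while True:
        resp = requests.get(
            f"{SIFT_BASE_URL}/api/v1/jobs/{job_id}",  # path is an assumption
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        resp.raise_for_status()
        job = resp.json()
        if job.get("status") in ("COMPLETED", "FAILED"):  # states are assumptions
            return job
        time.sleep(poll_seconds)
```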
Previewing Rule Evaluations
To see what annotations would be generated without actually creating them, use the RuleEvaluationService.EvaluateRulesPreview endpoint.
Preview Request Structure
The EvaluateRulesPreviewRequest message (see the full API definitions for gRPC and REST) is similar to the standard evaluation request, with an additional mode option:
- rule_configs: Preview using rule configurations that haven't been saved yet
Preview Limitations
Currently, rule preview is only supported for runs, not for assets.
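Putting this together, a preview of unsaved rule configurations against a run might look roughly like the following; the REST path, authentication header, nested field shapes, and expression syntax are assumptions, and only the top-level fields (run, rule_configs) come from this section.

```python
import requests

# Hypothetical preview request. Preview currently only supports runs, so the
# "time" oneof uses "run" here. Everything marked as an assumption below is
# illustrative only.
SIFT_BASE_URL = "https://your-sift-instance.example.com"  # hypothetical
API_KEY = "your-api-key"                                   # hypothetical

preview_request = {
    "run": {"id": "your-run-id"},  # shape is an assumption
    # "mode" oneof: preview rule configurations that have not been saved yet
    "rule_configs": {              # shape is an assumption
        "configs": [
            {"name": "over-temperature", "expression": "$1 > 100"},  # syntax is an assumption
        ]
    },
}

response = requests.post(
    f"{SIFT_BASE_URL}/api/v1/rules:evaluatePreview",  # path is an assumption
    json=preview_request,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()
preview = response.json()
```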
Preview Response Structure
The EvaluateRulesPreviewResponse message (see the full API definitions for gRPC and REST) provides information about what would be created. The response includes:
- created_annotation_count: How many annotations would be created
- dry_run_annotations: Preview of the annotations that would be created
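A sketch of summarizing the preview response once it has been decoded from JSON; the field names match the list above, and the shape of each dry-run annotation is an assumption.

```python
# Summarize an EvaluateRulesPreviewResponse decoded from JSON. Field names
# mirror the list above; the dry-run annotation shape is an assumption.
def summarize_preview(body: dict) -> None:
    count = body.get("created_annotation_count", 0)
    print(f"Annotations that would be created: {count}")
    for annotation in body.get("dry_run_annotations", []):
        print(annotation)

summarize_preview(
    {
        "created_annotation_count": 1,
        "dry_run_annotations": [{"name": "over-temperature"}],  # made-up sample
    }
)
```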