Service Endpoints
The Rule Evaluation Service provides two main endpoints:
- EvaluateRules: Evaluates rules against a run or asset and creates a report that contains the generated annotations.
- EvaluateRulesPreview: Performs a dry run evaluation, showing what annotations would be generated without actually creating them.
Evaluating Rules
To evaluate rules and generate annotations, you'll use the RuleEvaluationService.EvaluateRules endpoint. When rules are evaluated against a run, this operation also creates a report.
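The examples in this guide are Python sketches that use gRPC stubs generated from the service protos. The import path, endpoint address, and API-key metadata shown below are illustrative assumptions; substitute the values for your own generated code and deployment. A minimal connection sketch:

```python
import grpc

# Assumed import path for the generated stubs -- adjust to your build.
from sift.rule_evaluation.v1 import rule_evaluation_pb2_grpc as pb_grpc

# Hypothetical endpoint; replace with your deployment's address.
channel = grpc.secure_channel("api.example.com:443", grpc.ssl_channel_credentials())
stub = pb_grpc.RuleEvaluationServiceStub(channel)

# Hypothetical API-key auth metadata, passed with each call below.
auth_metadata = [("authorization", "Bearer YOUR_API_KEY")]
```

The sections below build up the EvaluateRulesRequest one field at a time.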
Request Structure
The EvaluateRulesRequest has the following definition:
- time: Specify either a run or assets with a time range to evaluate
  - run: The resource identifier for a specific run
  - assets: A time range for assets to evaluate against
- mode: Specify which rules to evaluate
  - rules: Evaluate from current rule versions
  - rule_versions: Evaluate from specific rule versions
  - report_template: Evaluate using rules from a report template
- annotation_options: Options for creating annotations
- organization_id: Only required if your user belongs to multiple organizations
- report_name: Optional name for the generated report
Time vs. Mode
You must specify exactly one option from the time oneof and exactly one option from the mode oneof.
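As a concrete illustration of the oneof rule, the sketch below sets one time option (run) and one mode option (rules), plus the optional report name. Only the top-level field names come from the list above; the shapes of the nested run and mode messages (an `id` field and a `rule_ids` list) are assumptions to check against your generated types.

```python
from sift.rule_evaluation.v1 import rule_evaluation_pb2 as pb  # assumed module path

request = pb.EvaluateRulesRequest()

# Time oneof: pick exactly one of `run` or `assets`.
request.run.id = "your-run-id"  # assumed: run is a resource-identifier message with an `id` field

# Mode oneof: pick exactly one of `rules`, `rule_versions`, or `report_template`.
request.rules.rule_ids.extend(["overheat-rule-id"])  # assumed: `rules` lists rule IDs

# Optional fields.
request.report_name = "Overheat sweep"
# request.organization_id = "..."  # only if your user belongs to multiple organizations
```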
Using AssetsTimeRange
When evaluating rules against assets, you need to specify a time range:
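For example, guessing that AssetsTimeRange carries a list of asset IDs plus start and end timestamps (the field names here are illustrative, not confirmed):

```python
from sift.rule_evaluation.v1 import rule_evaluation_pb2 as pb  # assumed module path

request = pb.EvaluateRulesRequest()
request.assets.asset_ids.extend(["asset-id-1"])                   # assumed field name
request.assets.start_time.FromJsonString("2024-01-01T00:00:00Z")  # assumed Timestamp field
request.assets.end_time.FromJsonString("2024-01-02T00:00:00Z")    # assumed Timestamp field
```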
Rule Evaluation Modes
You can evaluate rules in several ways:
From Current Rule Versions
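A sketch of the `rules` option; the `rule_ids` field on the mode message is an assumed name:

```python
from sift.rule_evaluation.v1 import rule_evaluation_pb2 as pb  # assumed module path

request = pb.EvaluateRulesRequest()
request.rules.rule_ids.extend(["rule-id-1", "rule-id-2"])  # evaluates the current version of each rule
```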
From Report Template
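A sketch of the `report_template` option; the `report_template_id` field name is an assumption:

```python
from sift.rule_evaluation.v1 import rule_evaluation_pb2 as pb  # assumed module path

request = pb.EvaluateRulesRequest()
request.report_template.report_template_id = "report-template-id"  # assumed field name
```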
From Rule Versions
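A sketch of the `rule_versions` option, which pins the evaluation to specific rule versions; the `rule_version_ids` field name is an assumption:

```python
from sift.rule_evaluation.v1 import rule_evaluation_pb2 as pb  # assumed module path

request = pb.EvaluateRulesRequest()
request.rule_versions.rule_version_ids.extend(["rule-version-id-1"])  # assumed field name
```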
Annotation Options
Specify tags for the annotations that will be created:
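A sketch, assuming the annotation options message exposes a repeated `tags` field (the field name is a guess):

```python
from sift.rule_evaluation.v1 import rule_evaluation_pb2 as pb  # assumed module path

request = pb.EvaluateRulesRequest()
request.annotation_options.tags.extend(["nightly-check", "thermal"])  # assumed field name
```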
Response Structure
The EvaluateRulesResponse has the following definition:
- created_annotation_count: Total number of annotations created by the rule evaluation
- report_id: ID of the generated report (if rules were evaluated against a run)
- job_id: ID of the asynchronous job (if the rule evaluation is being processed asynchronously)
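Whether report_id or job_id is set tells you whether the evaluation completed synchronously. A handling sketch, reusing the stub, request, and auth metadata from the earlier sketches (only the fields listed above are accessed):

```python
# `stub`, `request`, and `auth_metadata` come from the earlier sketches.
response = stub.EvaluateRules(request, metadata=auth_metadata)

print(f"Created {response.created_annotation_count} annotations")
if response.report_id:
    print(f"Report generated: {response.report_id}")  # rules were evaluated against a run
if response.job_id:
    print(f"Processing asynchronously; poll job {response.job_id}")
```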
Asynchronous Processing
For evaluations that may take longer to process, the service will return a job_id indicating that the operation is being processed asynchronously. You can use this ID to check the status of the job in the job service.
Previewing Rule Evaluations
To see what annotations would be generated without actually creating them, use the RuleEvaluationService.EvaluateRulesPreview endpoint:
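A dry-run sketch, reusing the stub and auth metadata from the connection example. It assumes the preview request accepts the same run and mode fields as EvaluateRulesRequest; the unsaved-rule-configuration option (rule_configs) is described below.

```python
from sift.rule_evaluation.v1 import rule_evaluation_pb2 as pb  # assumed module path

preview_request = pb.EvaluateRulesPreviewRequest()
preview_request.run.id = "your-run-id"                # assumed: mirrors EvaluateRulesRequest's time oneof
preview_request.rules.rule_ids.extend(["rule-id-1"])  # assumed: mirrors the mode oneof

preview = stub.EvaluateRulesPreview(preview_request, metadata=auth_metadata)
```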
Preview Request Structure
The EvaluateRulesPreviewRequest has the following definition:
- rule_configs: Preview using rule configurations that haven’t been saved yet
Preview Limitations
Currently, rule preview is only supported for runs, not for assets.
Preview Response Structure
The EvaluateRulesPreviewResponse provides information about what would be created:
- created_annotation_count: How many annotations would be created
- dry_run_annotations: Preview of the annotations that would be created
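A sketch of inspecting the preview result; only the two fields listed above are accessed, and each dry-run annotation is printed as-is since its exact shape isn't covered here.

```python
# `preview` comes from the EvaluateRulesPreview sketch above.
print(f"{preview.created_annotation_count} annotations would be created")
for annotation in preview.dry_run_annotations:
    print(annotation)  # inspect each dry-run annotation message
```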