Rule Evaluation

Create reports by evaluating report templates and rules against runs.

Service Endpoints

The Rule Evaluation Service provides two main endpoints:

  1. EvaluateRules: Evaluates rules against a run or a set of assets and creates a report containing the generated annotations.
  2. EvaluateRulesPreview: Performs a dry run evaluation, showing what annotations would be generated without actually creating them.

See the full API definitions for gRPC and REST.

Learn more about rules in the Sift support portal.

Evaluating Rules

To evaluate rules and generate annotations, use the RuleEvaluationService.EvaluateRules endpoint. When rules are evaluated against a run, this operation also creates a report containing the generated annotations.

rpc EvaluateRules(EvaluateRulesRequest) returns (EvaluateRulesResponse)

Request Structure

The EvaluateRulesRequest has the following definition:

message EvaluateRulesRequest {
  oneof time {
    sift.common.type.v1.ResourceIdentifier run = 1;
    AssetsTimeRange assets = 2;
  }
  oneof mode {
    EvaluateRulesFromCurrentRuleVersions rules = 3;
    EvaluateRulesFromRuleVersions rule_versions = 4;
    EvaluateRulesFromReportTemplate report_template = 5;
  }
  EvaluateRulesAnnotationOptions annotation_options = 6;
  string organization_id = 7 [(google.api.field_behavior) = OPTIONAL];
  optional string report_name = 8 [(google.api.field_behavior) = OPTIONAL];
}

The request consists of several components:

  • time: Specify either a run or a set of assets with a time range to evaluate
    • run: The resource identifier for a specific run
    • assets: A set of assets and the time range to evaluate them against
  • mode: Specify which rules to evaluate
    • rules: Evaluate the current versions of specific rules
    • rule_versions: Evaluate specific rule versions
    • report_template: Evaluate using the rules from a report template
  • annotation_options: Options for the annotations that will be created
  • organization_id: Only required if your user belongs to multiple organizations
  • report_name: Optional name for the generated report

Time vs. Mode

You must specify exactly one option from the time oneof and exactly one option from the mode oneof.

Using AssetsTimeRange

When evaluating rules against assets, you need to specify a time range:

message AssetsTimeRange {
  sift.common.type.v1.NamedResources assets = 1 [(google.api.field_behavior) = REQUIRED];
  google.protobuf.Timestamp start_time = 2 [(google.api.field_behavior) = REQUIRED];
  google.protobuf.Timestamp end_time = 3 [(google.api.field_behavior) = REQUIRED];
}
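
For example, here is a sketch of an assets-based evaluation over a fixed window. It reuses the evaluate-rules route from the run example at the end of this page; the asset names and rule ID are illustrative placeholders, and the Timestamp values are encoded as RFC 3339 strings per the standard protobuf JSON mapping:

# Sketch: asset names and rule ID below are illustrative placeholders.
curl -X POST \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  $SIFT_REST_URL/api/v1/rules/evaluate-rules \
  -d '{
  "assets": {
    "assets": { "names": { "names": ["asset_a", "asset_b"] } },
    "start_time": "2024-01-01T00:00:00Z",
    "end_time": "2024-01-02T00:00:00Z"
  },
  "rules": {
    "rules": { "ids": { "ids": ["<rule-id>"] } }
  },
  "annotation_options": {
    "tags": { "names": { "names": ["automated_review"] } }
  }
}'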

Rule Evaluation Modes

You can evaluate rules in one of three ways:

From Current Rule Versions

message EvaluateRulesFromCurrentRuleVersions {
  sift.common.type.v1.ResourceIdentifiers rules = 1 [(google.api.field_behavior) = REQUIRED];
}

From Report Template

message EvaluateRulesFromReportTemplate {
  sift.common.type.v1.ResourceIdentifier report_template = 1 [(google.api.field_behavior) = REQUIRED];
}

From Rule Versions

message EvaluateRulesFromRuleVersions {
  repeated string rule_version_ids = 1 [(google.api.field_behavior) = REQUIRED];
}
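
Expressed as JSON request fragments (the shapes follow the proto definitions above and the run example at the end of this page; the IDs are placeholders), the three modes look like this:

"rules": { "rules": { "ids": { "ids": ["<rule-id>"] } } }

"rule_versions": { "rule_version_ids": ["<rule-version-id>"] }

"report_template": { "report_template": { "id": "<report-template-id>" } }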

Annotation Options

Specify tags for the annotations that will be created:

message EvaluateRulesAnnotationOptions {
  sift.common.type.v1.NamedResources tags = 1 [(google.api.field_behavior) = REQUIRED];
}
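
In JSON this mirrors the tags block in the run example at the end of this page, for instance:

"annotation_options": {
  "tags": {
    "names": { "names": ["quality_check"] }
  }
}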

Response Structure

The EvaluateRulesResponse has the following definition:

message EvaluateRulesResponse {
  int32 created_annotation_count = 1 [(google.api.field_behavior) = REQUIRED];
  optional string report_id = 2 [(google.api.field_behavior) = REQUIRED];
  optional string job_id = 3 [(google.api.field_behavior) = OPTIONAL];
}

The response includes:

  • created_annotation_count: Total number of annotations created by the rule evaluation
  • report_id: ID of the generated report (if rules were evaluated against a run)
  • job_id: ID of the asynchronous job (if the rule evaluation is being processed asynchronously)

Asynchronous Processing

For evaluations that take longer to process, the service returns a job_id, indicating that the operation is being handled asynchronously. Use this ID to check the status of the job in the job service.
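
Here is a minimal shell sketch for handling both outcomes. It assumes the response uses camelCase field names (the default protobuf JSON mapping; adjust if your gateway emits snake_case), that jq is available, and that the request body has been saved to a hypothetical request.json file:

# Capture the evaluation response, then branch on whether a job ID was returned.
RESPONSE=$(curl -s -X POST \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  $SIFT_REST_URL/api/v1/rules/evaluate-rules \
  -d @request.json)

JOB_ID=$(echo "$RESPONSE" | jq -r '.jobId // empty')
if [ -n "$JOB_ID" ]; then
  # Processed asynchronously: check the status of this job in the job service.
  echo "Evaluation running as job $JOB_ID"
else
  echo "Created $(echo "$RESPONSE" | jq -r '.createdAnnotationCount') annotations"
fi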

Previewing Rule Evaluations

To see what annotations would be generated without actually creating them, use the RuleEvaluationService.EvaluateRulesPreview endpoint:

rpc EvaluateRulesPreview(EvaluateRulesPreviewRequest) returns (EvaluateRulesPreviewResponse)

Preview Request Structure

The EvaluateRulesPreviewRequest has the following definition:

message EvaluateRulesPreviewRequest {
  oneof time {
    sift.common.type.v1.ResourceIdentifier run = 1;
  }
  oneof mode {
    EvaluateRulesFromCurrentRuleVersions rules = 3;
    EvaluateRulesFromRuleVersions rule_versions = 4;
    EvaluateRulesFromReportTemplate report_template = 5;
    EvaluateRulesFromRuleConfigs rule_configs = 6;
  }
  string organization_id = 7 [(google.api.field_behavior) = OPTIONAL];
}

The preview request is similar to the standard evaluation request, with one additional mode option:

  • rule_configs: Preview using rule configurations that haven't been saved yet

The EvaluateRulesFromRuleConfigs message has the following definition:

message EvaluateRulesFromRuleConfigs {
  repeated sift.rules.v1.UpdateRuleRequest configs = 1 [(google.api.field_behavior) = REQUIRED];
}

Preview Limitations

Currently, rule preview is only supported for runs, not for assets.

Preview Response Structure

The EvaluateRulesPreviewResponse provides information about what would be created:

message EvaluateRulesPreviewResponse {
  int32 created_annotation_count = 1 [(google.api.field_behavior) = REQUIRED];
  repeated sift.rules.v1.DryRunAnnotation dry_run_annotations = 2;
}

The response includes:

  • created_annotation_count: How many annotations would be created
  • dry_run_annotations: Preview of the annotations that would be created

Examples

Evaluating Rules Against a Run

curl -X POST \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  $SIFT_REST_URL/api/v1/rules/evaluate-rules \
  -d '{
  "run": {
    "id": "bf7cfa92-4d76-44ed-ab60-341c731edf78"
  },
  "rules": {
    "rules": { "ids": { "ids": ["d6ac32fc-9be2-4193-a593-2ab9f473f82d"] } }
  },
  "annotation_options": {
    "tags": {
      "names": { "names": ["quality_check", "automated_review"] }
    }
  },
  "report_name": "Quality Analysis Report"
}'
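
Previewing a Rule Evaluation Against a Run

A dry run of the same evaluation using the preview endpoint. This is a sketch: the preview route shown here is an assumption modeled on the evaluate-rules path above, so consult the full API definitions for the exact path. Note that the preview request takes no annotation_options.

# Sketch: the preview route below is assumed; verify it against the API definitions.
curl -X POST \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  $SIFT_REST_URL/api/v1/rules/evaluate-rules-preview \
  -d '{
  "run": {
    "id": "bf7cfa92-4d76-44ed-ab60-341c731edf78"
  },
  "rules": {
    "rules": { "ids": { "ids": ["d6ac32fc-9be2-4193-a593-2ab9f473f82d"] } }
  }
}'

The response reports created_annotation_count and the dry-run annotations without persisting anything.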