Assets in Sift represent physical or virtual entities that generate time-series data through their Channels. They can represent a wide range of systems, from tangible objects such as vehicles and hardware testbeds to intangible systems like simulators or continuous integration (CI) pipelines. Assets are the sources of telemetry data, meaning any entities that produce structured measurements over time.

Documentation Index
Fetch the complete documentation index at: https://docs.siftstack.com/llms.txt
Use this file to discover all available pages before exploring further.
Asset modeling
Asset modeling in Sift is about deciding which physical or virtual systems should be represented as Assets and how to structure them to reflect your test setup, telemetry flow, or workflow. Good modeling helps ensure that telemetry is organized clearly, supports analysis features like Rules and Runs, and enables long-term reuse and discoverability. The following table summarizes the two most common Asset modeling strategies in Sift.

Naming conventions
When naming an Asset, aim to make the name clear enough that anyone can understand three key things just by looking at it: its type, a unique identifier, and a modifier that describes its environment or context. A common and recommended convention is to separate these parts with underscores for readability.

- Do not include special characters in Asset names.
- Use hyphens to separate words or fill spaces within token names.
- Use lowercase letters in Asset names. This is not required, since Asset names are case-insensitive, but it improves consistency and readability.
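The convention above can be sketched as a small helper. This is illustrative only, not part of any Sift library: the function name, the regular expression, and the example names are all assumptions made for the sake of the sketch.

```python
import re

# Hypothetical helper illustrating the convention described above:
# underscore-separated parts (type, identifier, modifier), hyphens within
# each part, all lowercase, no special characters.
NAME_PATTERN = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*(?:_[a-z0-9]+(?:-[a-z0-9]+)*)*$")

def make_asset_name(asset_type: str, identifier: str, modifier: str) -> str:
    """Join the three parts with underscores, lowercasing and hyphenating each token."""
    tokens = [t.strip().lower().replace(" ", "-") for t in (asset_type, identifier, modifier)]
    name = "_".join(tokens)
    if not NAME_PATTERN.fullmatch(name):
        raise ValueError(f"invalid asset name: {name!r}")
    return name

print(make_asset_name("rover", "unit-07", "flight-sim"))  # rover_unit-07_flight-sim
```

Composing names through a single helper like this keeps the type/identifier/modifier ordering consistent across every Asset a team creates.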
Rule evaluation scope
Rules in Sift are evaluated within the scope of a single Asset and its Channels. Any telemetry you want to analyze together using Rules must be grouped under the same Asset. It is acceptable for multiple clients or data sources to stream to the same Asset simultaneously, as long as they represent parts of the same system or workflow.

Ingestion monitoring
sift_app is a built-in system Asset that provides real-time telemetry about your data pipeline’s performance and stability. It surfaces internal metrics through the same infrastructure you use for your own data, allowing you to build dashboards and set up alerts for specific Channels using familiar tools. The sift_app Asset is organized into four subsections. Each subsection represents a different layer of the ingestion journey, moving from your local client to Sift’s internal processing, error handling, and file-based data imports.

data_import
The data_import subsection provides telemetry for data imported into Sift via file upload, such as CSV, Parquet, Ch10, or TDMS files. Metrics are emitted when a data import Job completes (successfully or with an error).

Applicability: These metrics apply only to data imported via file upload (for example, through the UI, upload APIs, or Jobs that process uploaded files), and not to data ingested through the gRPC stream APIs.
dlq_ingestion
The dlq_ingestion subsection is dedicated to the Dead Letter Queue (DLQ) and is your primary resource for troubleshooting records that failed to ingest.

ingest_grpc
The ingest_grpc subsection monitors the internal Sift components responsible for managing and observing your data streams.

Applicability: These metrics apply only to data ingested through the gRPC stream APIs (IngestWithConfigDataStream and IngestArbitraryProtobufDataStream), and not to data uploaded via the UI or upload APIs.

Monitor data: Use total_message_size_bytes to build ingestion volume dashboards, and messages_with_high_time_drift_count to monitor for ingestion anomalies.

stream
The stream subsection provides visibility into the behavior of the Rust sift-stream client implementation.
Getting these metrics: These metrics are emitted by the stream client. To receive them, ensure you are using the latest version of either the Rust crate sift_stream or the Python library sift_client.
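To make the ingest_grpc guidance concrete, here is a minimal sketch of the kind of aggregation a dashboard might perform over the total_message_size_bytes and messages_with_high_time_drift_count channels. It deliberately avoids any Sift API: it assumes you have already exported channel samples as (timestamp, value) pairs, and the function names and threshold are illustrative assumptions.

```python
from datetime import datetime

# Illustrative sketch only: operates on samples already exported as
# (timestamp, value) pairs; no real Sift API is called here.
def ingestion_volume_bytes(samples):
    """Total bytes ingested, summed from total_message_size_bytes samples."""
    return sum(value for _, value in samples)

def drift_alerts(samples, threshold=0):
    """Timestamps where a messages_with_high_time_drift_count sample exceeds threshold."""
    return [ts for ts, count in samples if count > threshold]

# Five one-minute samples of 1024 bytes each.
size_samples = [(datetime(2024, 1, 1, 0, minute), 1024) for minute in range(5)]
print(ingestion_volume_bytes(size_samples))  # 5120
```

In practice you would point the same two computations at whatever export or query mechanism your dashboarding tool already uses for sift_app Channels.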