Overview

Sift provides official client libraries for Python, Rust, and Go to simplify integration with its APIs. In this tutorial, you will use the Sift Python client library to stream telemetry from a Python script into Sift. You will run a simple example that simulates a robotic vehicle reporting both velocity and temperature in real time, and see how that telemetry appears in Sift. By the end of this tutorial, you will understand how to install the client library, configure authentication, define telemetry signals, and stream time series data into Sift using Python.

Prerequisites

Scenario

In this tutorial, you will simulate a simple system that produces telemetry. Imagine a robotic vehicle reporting both its current speed and its internal temperature every half second (0.5 seconds). Instead of connecting to real hardware, the script generates mock velocity and temperature values and streams them to Sift in real time. Each time you run the script, it creates a new Asset and starts a new session for collecting telemetry. You will view this telemetry in Sift, see how multiple Channels are grouped within a single Run, and, most importantly, learn how to structure a Python script that uses the Sift Python client library to stream telemetry into Sift.
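The mock sensor readings themselves need nothing from Sift. As a warm-up, here is a standard-library-only sketch of the data source the script will sample from every half second (the function name and dictionary layout are illustrative, not part of the Sift API):

```python
import random
from datetime import datetime, timezone

def read_mock_sensors() -> dict:
    """Simulate one sample from the robotic vehicle's sensors."""
    return {
        "timestamp": datetime.now(timezone.utc),
        "velocity": random.uniform(0, 10),      # m/s
        "temperature": random.uniform(20, 40),  # degrees Celsius
    }

print(read_mock_sensors())
```

In the full script below, these same random ranges feed the streaming loop; in a real system this function would read from hardware instead.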

Step 1: Install the Sift client library

Before we install anything, let’s create a clean Python environment for this tutorial. This helps avoid dependency conflicts and keeps your setup isolated. In your terminal, create and activate a virtual environment, then install the Sift Python client library with streaming support along with python-dotenv, which we will use to securely load authentication values from a .env file.
python3 -m venv venv
source venv/bin/activate
pip install "sift-stack-py[sift-stream]"
pip install python-dotenv
We install sift-stack-py[sift-stream] instead of just sift-stack-py because streaming ingestion requires additional optional dependencies.
  • The sift-stream extra installs the gRPC streaming components needed to open and maintain a real-time ingestion stream.
  • If you install only sift-stack-py, the core client still works for non-ingestion APIs, but streaming ingestion will not be available.
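If you want a quick sanity check that the install succeeded before writing any code, the standard library can report whether a module is importable. The module name sift_client matches the import used later in this tutorial; the helper itself is just an illustration:

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if the named top-level module can be imported here."""
    return importlib.util.find_spec(name) is not None

# After installing sift-stack-py[sift-stream], this should print True.
print("sift_client importable:", module_available("sift_client"))
```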

Step 2: Configure authentication

Now that your environment is ready, we need to tell the script how to connect to your Sift environment. The Sift Python client library requires three values: a Sift API key, the Sift gRPC URL, and the Sift REST URL. Instead of placing these directly in the script, we will store them in a .env file. This keeps credentials secure and makes the script easier to reuse. Create a new file in the same directory as your script and name it:
.env
In the .env file, add the following values, replacing them with the ones from your Sift environment.
SIFT_API_KEY=your_api_key_here 
SIFT_GRPC_URL=https://your-grpc-url
SIFT_REST_URL=https://your-rest-url
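All three values are required, so it is worth failing fast if any are missing; the script later performs exactly this check after loading the file with python-dotenv. A minimal stdlib sketch of that validation (the helper name is illustrative):

```python
REQUIRED_KEYS = ("SIFT_API_KEY", "SIFT_GRPC_URL", "SIFT_REST_URL")

def missing_sift_settings(env: dict) -> list:
    """Return the names of required Sift settings that are absent or empty."""
    return [key for key in REQUIRED_KEYS if not env.get(key)]

# Example: a .env parsed into a dict that lacks the REST URL.
print(missing_sift_settings({"SIFT_API_KEY": "abc", "SIFT_GRPC_URL": "https://grpc"}))
# → ['SIFT_REST_URL']
```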

Step 3: Create a Python script to stream telemetry using streaming ingestion

Now that your environment is ready and authentication is configured, let’s create a script that sends telemetry to Sift. This script will simulate a system that reports velocity and temperature measurements every 0.5 seconds and streams them to Sift indefinitely (until you stop the script with Ctrl + C). Create the following script in your project directory and paste the following content into it:
stream.py
#!/usr/bin/env python3
"""
Streams simulated vehicle velocity and temperature telemetry to Sift indefinitely.
Random values stand in for onboard vehicle sensors.

This example demonstrates the complete streaming ingestion lifecycle:
- Authenticate with Sift
- Define a telemetry schema (Flow + Channels)
- Create an Asset and Run
- Open a streaming ingestion session
- Send timestamped flows in real time

The program runs continuously until the user terminates it.
"""

# Import dependencies
# ---------------------------------------------------------------------
# Standard library modules for async execution, randomness, timing,
# and generating unique identifiers.
import asyncio
import random
import time
import uuid
from datetime import datetime, timezone

# Used to securely load environment variables from a .env file.
from dotenv import dotenv_values

# Core Sift client and connection configuration.
from sift_client import SiftClient, SiftConnectionConfig

# Sift ingestion types used to define telemetry structure and runs.
from sift_client.sift_types import (
    ChannelConfig,          # Defines an individual telemetry signal
    ChannelDataType,        # Specifies the signal's data type
    FlowConfig,             # Defines a group of related channels (schema)
    IngestionConfigCreate,  # Associates flows with an Asset
    RunCreate,              # Represents a telemetry collection session
)


# Define configuration constants
# ---------------------------------------------------------------------
# FLOW_NAME identifies the telemetry schema inside Sift.
# SEND_INTERVAL_SECONDS controls sampling frequency.
FLOW_NAME = "vehicle_metrics"
SEND_INTERVAL_SECONDS = 0.5


# Helper function to generate unique names
# ---------------------------------------------------------------------
# Sift Assets and Runs should have unique names.
# This helper creates a timestamp + short UUID suffix to prevent collisions.
def make_unique_suffix() -> str:
    ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    short_id = uuid.uuid4().hex[:8]
    return f"{ts}_{short_id}"


# Main entry point
# ---------------------------------------------------------------------
# All ingestion logic lives inside this async function.
# Streaming ingestion uses async gRPC under the hood.
async def main() -> None:

    # Create unique Asset and Run names
    # -----------------------------------------------------------------
    # An Asset represents the telemetry-producing system.
    # A Run represents a single data collection session for that Asset.
    suffix = make_unique_suffix()
    asset_name = f"robot_vehicle_{suffix}"
    run_name = f"{asset_name}_run"

    # Load authentication from .env
    # -----------------------------------------------------------------
    # We load credentials from a .env file instead of hardcoding them.
    # These values are required to establish authenticated communication
    # with both the REST and gRPC endpoints of your Sift environment.
    env_vars = dotenv_values(".env")
    api_key = env_vars.get("SIFT_API_KEY")
    grpc_url = env_vars.get("SIFT_GRPC_URL")
    rest_url = env_vars.get("SIFT_REST_URL")

    if not api_key or not grpc_url or not rest_url:
        raise RuntimeError("Missing Sift credentials in .env")

    # Create a client connection to Sift
    # -----------------------------------------------------------------
    # SiftConnectionConfig holds authentication and endpoint details.
    # SiftClient is your primary entry point for interacting with Sift.
    # Streaming ingestion uses the gRPC endpoint defined here.
    connection_config = SiftConnectionConfig(
        api_key=api_key,
        grpc_url=grpc_url,
        rest_url=rest_url,
    )

    client = SiftClient(connection_config=connection_config)

    # Define telemetry signals (Channels) within a Flow
    # -----------------------------------------------------------------
    # A FlowConfig defines the telemetry schema.
    # Each ChannelConfig defines:
    #   - name (signal identifier)
    #   - unit (measurement unit)
    #   - data_type (numeric, string, etc.)
    #   - description (a human-readable explanation of what the Channel (signal) represents and how it should be interpreted)

    # All telemetry sent to Sift must conform to this schema.
    flow_config = FlowConfig(
        name=FLOW_NAME,
        channels=[
            ChannelConfig(
                name="velocity",
                unit="m/s",
                data_type=ChannelDataType.DOUBLE,
                description="The velocity Channel streams real-time speed measurements of the vehicle in meters per second (m/s) as double-precision numeric values.",
            ),
            ChannelConfig(
                name="temperature",
                unit="C",
                data_type=ChannelDataType.DOUBLE,
                description="The temperature Channel streams real-time temperature readings of the vehicle in degrees Celsius (°C) as double-precision numeric values.",
            ),
        ],
    )

    # Create ingestion configuration
    # -----------------------------------------------------------------
    # IngestionConfigCreate associates:
    #   - An Asset
    #   - One or more Flow definitions
    ingestion_config = IngestionConfigCreate(
        asset_name=asset_name,
        flows=[flow_config],
    )

    # Create Run
    # -----------------------------------------------------------------
    # RunCreate defines the session that will group all incoming flows.
    # While not strictly necessary for ingestion, Runs are useful for organizing
    # data from one or more Assets for a given period of time (such as a specific test,
    # or daily ops)
    run = RunCreate(name=run_name)

    # Open a streaming ingestion client
    # -----------------------------------------------------------------
    # This creates a gRPC streaming session tied to:
    #   - The ingestion configuration (Asset + Flows)
    #   - The Run
    #
    # All telemetry sent within this context will appear inside
    # this Run in Sift.
    async with await client.async_.ingestion.create_ingestion_config_streaming_client(
        ingestion_config=ingestion_config,
        run=run,
    ) as ingest_client:

        # Continue streaming until the user terminates the program
        while True:
            now = datetime.now(timezone.utc)

            # Generate mock telemetry values
            # ---------------------------------------------------------
            # In a real system, these would come from sensors,
            # hardware interfaces, or production metrics.
            velocity = random.uniform(0, 10)
            temperature = random.uniform(20, 40)

            # Create a Flow object that matches the FlowConfig schema
            # ---------------------------------------------------------
            # flow_config.as_flow():
            #   - Attaches a timestamp
            #   - Maps channel names to values
            #   - Ensures schema conformity
            flow = flow_config.as_flow(
                timestamp=now,
                values={
                    "velocity": velocity,
                    "temperature": temperature,
                },
            )

            # Send telemetry to Sift over the open gRPC stream
            # ---------------------------------------------------------
            # Each call transmits a structured, timestamped flow.
            await ingest_client.send(flow=flow)

            print(
                f"[SENT {now.isoformat()}] "
                f"velocity={velocity:.2f} m/s | "
                f"temperature={temperature:.2f} C"
            )

            # Control sampling rate
            await asyncio.sleep(SEND_INTERVAL_SECONDS)


# Standard Python entry point
# ---------------------------------------------------------------------
# asyncio.run() starts the async ingestion workflow.
if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        print("\nStreaming stopped by user.")
Now that the file has been created, we will execute it in the next step and then view the data in Sift. Afterward, we will analyze the script in more detail to better understand how it works.

Step 4: Run the script and view streamed data in Sift

In this step, you will execute the Python script from your terminal and then open Sift to observe the newly created Asset, Run, and Channels. As the script executes, you should see velocity and temperature measurements updating live in the Sift interface (Explore).
  1. In your virtual environment, execute the script with:
    python stream.py
    
    As the script executes, you should see output similar to (a new line will appear every 0.5 seconds):
    [SENT 2026-02-13T21:22:28.637390+00:00] velocity=5.30 m/s | temperature=30.27 C
    
  2. In Sift, locate the Run name or description search field and enter the exact Run name (for example, robot_vehicle_…_…_run).
  3. In the Runs table, click the Run name (for example, robot_vehicle_…_…_run).
  4. Click Explore.
  5. Click Live.
  6. In the Channels tab, click the following Channels:
    • temperature
    • velocity

Step 5: Understand the ingestion workflow

The ingestion process follows a clear sequence. The steps below summarize how telemetry is structured, configured, and streamed to Sift.
1. Configure authentication

At a high level, streaming telemetry to Sift involves defining structure first, then sending timestamped data that conforms to that structure. In this script, authentication is configured using SiftConnectionConfig, and a SiftClient is created to communicate with your Sift environment.
2. Define the telemetry schema

A FlowConfig defines the telemetry schema, and each ChannelConfig declares an individual signal with its name, unit, and data type.
3. Create the ingestion context

An IngestionConfigCreate associates that schema with an Asset, and a RunCreate defines the session that will group all incoming telemetry.
4. Open a streaming ingestion session and send timestamped flows

Once this ingestion context is established, the script opens a streaming ingestion session over gRPC and begins sending timestamped flows in real time. Each flow includes a timestamp and structured Channel values created using flow_config.as_flow(), ensuring the data matches the defined schema. These flows are transmitted over the open stream and immediately appear in Sift under the specified Run.
Understanding this pattern of defining a schema, creating the ingestion context, opening a stream, and sending timestamped flows is the foundation for integrating real telemetry.

Conclusion

In this tutorial, you learned how to stream live telemetry to Sift using the Sift Python client library. You configured authentication, defined a telemetry schema using FlowConfig and ChannelConfig, created an Asset and Run, and opened a streaming ingestion session to send timestamped flows in real time. You also saw how streamed data immediately appears in Sift and how Channels are organized within a Run. With this foundation, you can adapt the same ingestion pattern to real systems instead of simulated data. By defining clear flow schemas and using the streaming ingestion API correctly, you can integrate Sift into production pipelines and continuously stream structured time series telemetry into your environment.