# Fetch the latest forecast
The most common workflow with the pd4castr SDK is fetching the latest forecast output from a model. This guide walks you through the full happy path, from finding your model to iterating over the forecast data.
## Set up the client

Import the `Client` class and create an instance with your credentials. The SDK handles authentication automatically.

```python
from pd4castr_api_sdk import Client

client = Client(
    client_id="your-client-id",
    client_secret="your-client-secret",
)
```

If you haven’t set up credentials yet, see Authentication.
## Fetch the latest forecast

You can go from zero to forecast data in just a few lines. Find your model group, get the latest model revision, and fetch its most recent completed run output.

```python
from pd4castr_api_sdk import Client

with Client(
    client_id="your-client-id",
    client_secret="your-client-secret",
) as client:
    # Find your model group
    groups = client.get_model_groups()
    group = next(g for g in groups if g.name == "My DA Price Model")

    # Get the latest revision of the model
    model = client.get_latest_model(model_group_id=group.id)

    # Fetch the latest completed run's output
    result = client.get_latest_model_run_output(model_id=model.id)

    # Iterate over the forecast data
    for row in result.run.data:
        print(row.forecast_datetime, row.model_dump())
```

## Understand the output structure
The `get_latest_model_run_output()` method returns a `ModelRunOutputResult` with two top-level fields:

- `result.run` — a `ModelRunWithOutput` containing the forecast run and its data.
- `result.comparison_runs` — a dictionary mapping model ID to `ModelRunWithOutput`. This is empty unless you’ve requested a comparison (see Compare models).
Each `ModelRunWithOutput` contains the following fields:

| Field | Type | Description |
|---|---|---|
| `id` | `str` | The model run ID |
| `run_datetime` | `str` | ISO 8601 timestamp of the run |
| `sensitivity` | `Sensitivity` or `None` | The sensitivity used for this run, if any |
| `model` | `ModelSummary` | Summary of the model (ID, display name, timezone) |
| `colours` | `dict[str, str \| None]` | Map of column name to hex colour for charting |
| `data` | `list[ModelRunOutputData]` | The forecast rows |
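Since `data` arrives row-oriented, a common first step is pivoting it into per-column series. A minimal stdlib sketch, using plain dicts in place of the `row.model_dump()` output (the `da_price` column name and the values are made up):

```python
from collections import defaultdict

# Simulated row dumps; in practice: [row.model_dump() for row in result.run.data]
rows = [
    {"forecast_datetime": "2024-01-01T00:00:00Z", "da_price": 52.1},
    {"forecast_datetime": "2024-01-01T01:00:00Z", "da_price": 49.8},
]

# Pivot row-oriented data into column-oriented series
series = defaultdict(list)
for row in rows:
    for column, value in row.items():
        series[column].append(value)

print(series["da_price"])  # → [52.1, 49.8]
```

From here, each series can be handed straight to a plotting library or a DataFrame constructor.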
## Work with forecast data

Each entry in `result.run.data` is a `ModelRunOutputData` object. Every row has a `forecast_datetime` field, plus additional columns that are specific to the model.

```python
result = client.get_latest_model_run_output(model_id=model.id)

for row in result.run.data:
    # forecast_datetime is always present
    print(row.forecast_datetime)

    # Access dynamic columns using model_dump()
    values = row.model_dump()
    for column, value in values.items():
        if column != "forecast_datetime":
            print(f" {column}: {value}")
```

The dynamic columns vary by model. You can check what columns a model produces by inspecting `model.output_specification`:
```python
model = client.get_latest_model(model_group_id=group.id)

for col in model.output_specification:
    print(f"{col.name} ({col.type})")
```

The `colours` dictionary on the run tells you the hex colour assigned to each output column, which is useful if you’re building charts.
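Because each colour value can be `None`, it’s worth resolving a fallback before handing the map to a plotting library. A minimal sketch with made-up column names and hex codes (in practice the map comes from `result.run.colours`):

```python
# Hypothetical colours mapping as returned on a run; None means no colour assigned
colours = {"da_price": "#1f77b4", "temperature": None}

DEFAULT_COLOUR = "#888888"

# Resolve a plotting colour per column, falling back when unassigned
resolved = {col: (hex_code or DEFAULT_COLOUR) for col, hex_code in colours.items()}
print(resolved)  # → {'da_price': '#1f77b4', 'temperature': '#888888'}
```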
## Fetch a sensitivity forecast

By default, `get_latest_model_run_output()` and `get_latest_model_run()` return the latest base run, excluding sensitivity runs. To fetch the latest forecast for a specific sensitivity, pass its ID:

```python
# List available sensitivities for the model
sensitivities = client.get_model_sensitivities(model_id=model.id)
for s in sensitivities:
    print(f"{s.id} {s.name}")

# Fetch the latest output for a specific sensitivity
result = client.get_latest_model_run_output(
    model_id=model.id,
    sensitivity=sensitivities[0].id,
)
for row in result.run.data:
    print(row.forecast_datetime, row.model_dump())
```

## Step by step
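Indexing `sensitivities[0]` works, but selecting by name is usually more robust when a model has several sensitivities. A sketch using a namedtuple as a stand-in for the SDK’s sensitivity objects (the IDs and names here are invented):

```python
from collections import namedtuple

# Stand-in for the SDK's sensitivity objects (hypothetical IDs and names)
Sensitivity = namedtuple("Sensitivity", ["id", "name"])
sensitivities = [
    Sensitivity("sen-1", "Base weather"),
    Sensitivity("sen-2", "High demand"),
]

# Pick a sensitivity by name; fall back to None if it doesn't exist
target = next((s for s in sensitivities if s.name == "High demand"), None)
print(target.id if target else "not found")  # → sen-2
```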
If you need more control over the process, for example to inspect the run metadata before fetching output, you can split the operation into two calls.

First, fetch the latest completed base run to see its metadata:

```python
run = client.get_latest_model_run(model_id=model.id)

print(f"Run ID: {run.id}")
print(f"Run datetime: {run.run_datetime}")
print(f"Status: {run.status}")
print(f"Completed at: {run.completed_at}")
```

Or fetch the latest run for a specific sensitivity:
```python
run = client.get_latest_model_run(
    model_id=model.id,
    sensitivity="your-sensitivity-id",
)
print(f"Sensitivity: {run.sensitivity.name}")
```

Then, fetch the output for that specific run:

```python
result = client.get_model_run_output(
    model_id=model.id,
    model_run_id=run.id,
)

for row in result.run.data:
    print(row.forecast_datetime)
```

This two-step approach is useful when you want to check run details or log run metadata before pulling the full output.
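The "log metadata first" pattern can be sketched with the stdlib `logging` module. The run object here is a `SimpleNamespace` stand-in with invented values; in practice it comes from `client.get_latest_model_run()`:

```python
import logging
from types import SimpleNamespace

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("forecast")

# Stand-in for the run returned by get_latest_model_run (illustrative values)
run = SimpleNamespace(
    id="run-123",
    run_datetime="2024-01-01T06:00:00Z",
    status="COMPLETED",
)

# Log run metadata before pulling the (potentially large) output
log.info(
    "Fetching output for run %s (%s, status=%s)",
    run.id, run.run_datetime, run.status,
)
```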
## Next steps
- Browse historical runs — fetch and compare past model runs
- Compare models — use comparison runs to evaluate models side by side