# Configuration File
The `.pd4castrrc.json` file is the central configuration for your model project.
It lives at the root of your model directory and defines everything the platform
needs to know about your model: its identity, inputs, outputs, sensitivities,
and scheduling behaviour. The CLI validates this file against a schema on every
command.
## File Location and Format
The configuration file must be named `.pd4castrrc.json` and placed at the root
of your model project. It uses standard JSON format. You can generate a starter
config by running `pd4castr init`, which scaffolds a new project from a
template.
You can point the CLI to a different config file using the `-c` or `--config`
flag on any command:

```shell
pd4castr test -c .pd4castrrc.variant.json
```

## Complete Field Reference
### Top-Level Fields
These fields define your model’s identity and global behaviour.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `name` | string | Yes | — | Display name for the model. |
| `forecastVariable` | string | Yes | — | The variable being forecast. Currently only `"price"` is supported. |
| `timeHorizon` | string | Yes | — | Forecast time horizon: `"day_ahead"`, `"week_ahead"`, `"quarterly"`, or `"historical"`. |
| `displayTimezone` | string | No | `"Australia/Brisbane"` | IANA timezone string used for display in the pd4castr UI. |
| `public` | boolean | No | `false` | Whether the model is visible to other organisations. |
| `runMode` | string | No | `"AUTOMATIC"` | How runs are triggered: `"AUTOMATIC"` or `"ON_DEMAND"`. See Run modes. |
| `outputFileFormat` | string | No | `"json"` | Format of the model’s outputs: `"json"`, `"csv"`, or `"parquet"`. See Model outputs. |
| `runDatetimeQuery` | string or null | No | `null` | Path to a SQL file that computes a custom run datetime. See Custom run datetime. |
| `metadata` | object | No | `{}` | Freeform key-value metadata (for example, description, resolution, feature lists). |
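For example, a minimal sketch of a configuration supplying only the three required top-level fields (every optional field falls back to its default; the model name is a placeholder, and a working model will also need `inputs` and `outputs` defined):

```json
{
  "name": "My Price Forecast",
  "forecastVariable": "price",
  "timeHorizon": "day_ahead"
}
```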
### Inputs

The `inputs` array defines the data your model consumes. Each entry has the
following fields:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `key` | string | Yes | — | Identifier for this input. Becomes `INPUT_<KEY>_URL` in the container. |
| `trigger` | string | Yes | — | `"WAIT_FOR_LATEST_FILE"` or `"USE_MOST_RECENT_FILE"`. |
| `inputSource` | string | No | — | ID of the input source to use. By default, the platform’s shared source is used. |
| `uploadFileFormat` | string | No | `"json"` | Format of the uploaded file: `"json"`, `"csv"`, or `"parquet"`. |
| `targetFileFormat` | string | No | `"json"` | Format served to the container. Converted automatically if different from `uploadFileFormat`. |
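Inside the container, each input’s download location is exposed through the `INPUT_<KEY>_URL` environment variable. Below is a minimal Python sketch of resolving that variable and downloading the file, assuming the key is upper-cased in the variable name; the helper names and the use of `urllib` are illustrative, not part of the platform:

```python
import os
import urllib.request

def input_url_env_var(key: str) -> str:
    """Map an input key from the config to its container environment variable."""
    return f"INPUT_{key.upper()}_URL"

def fetch_input(key: str, dest: str) -> None:
    """Download the file behind an input's URL to a local path."""
    url = os.environ[input_url_env_var(key)]
    urllib.request.urlretrieve(url, dest)

# e.g. fetch_input("dispatch_price", "dispatch_price.json")
```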
For inputs with automatic data fetching, add a `fetcher` block:
| Field | Type | Required | Description |
|---|---|---|---|
| `fetcher.type` | string | Yes | The data source type, for example, `"AEMO_MMS"`. |
| `fetcher.checkInterval` | number | Yes | Polling interval in seconds. Minimum 60. |
| `fetcher.config.checkQuery` | string | Yes | Path to a SQL file whose query returns a check value used to determine whether new data is available. |
| `fetcher.config.fetchQuery` | string | Yes | Path to a SQL file whose query retrieves the data once new data is detected. |
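As an illustration, a check/fetch query pair for a dispatch price input might look like the following. The table and column names here are hypothetical; consult your data source’s actual schema:

```sql
-- dispatch-price-check.sql (hypothetical schema)
-- Returns a single value; a change in this value signals that new data is available.
SELECT MAX(lastchanged) FROM dispatchprice;

-- dispatch-price-fetch.sql (hypothetical schema)
-- Retrieves the new rows once the check value has changed.
SELECT settlementdate, regionid, rrp
FROM dispatchprice
WHERE settlementdate >= NOW() - INTERVAL '24 hours';
```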
See Model inputs for detailed explanations.
### Outputs

The `outputs` array defines the schema of your model’s forecast data.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `name` | string | Yes | — | Display name for this output variable, shown in the pd4castr UI. |
| `key` | string | Yes | — | Stable identifier for this output variable. Must be lowercase alphanumeric (hyphens and underscores allowed) and unique within the model. |
| `type` | string | Yes | — | Data type: `"float"`, `"integer"`, `"string"`, `"date"`, or `"boolean"`. |
| `seriesKey` | boolean | Yes | — | If `true`, this column is a categorical series key used for chart grouping. |
| `colour` | string | No | — | Hex colour code (`#RRGGBB`) for this series in the forecast chart. |
See Model outputs for detailed explanations.
### Sensitivities

The `sensitivities` array defines alternative scenario runs. Each entry has the
following fields:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `name` | string | Yes | — | Display name for the scenario, shown in the pd4castr UI. |
| `key` | string | Yes | — | Stable identifier for this sensitivity. Must be lowercase alphanumeric (hyphens and underscores allowed) and unique within the model. |
| `query` | string | Yes | — | Path to a SQL file that transforms the input data. |
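A sensitivity query reads the input data and returns a transformed version of it. As a sketch, a “High Demand (+10%)” scenario could scale a demand column up by ten percent; the table and column names here are illustrative, not prescribed by the platform:

```sql
-- queries/sensitivities/high-demand.sql (illustrative schema)
SELECT
  settlementdate,
  regionid,
  totaldemand * 1.10 AS totaldemand  -- scale demand up by 10%
FROM input_demand;
```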
See Sensitivities for detailed explanations.
### Input Aggregations

The `inputAggregations` array defines summary views of input data displayed
below the forecast chart.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `name` | string | Yes | — | Display name for the aggregation chart. |
| `query` | string | Yes | — | Path to a SQL file that aggregates input data. |
| `description` | string | No | `""` | Tooltip text shown in the pd4castr UI when hovering over the chart title. |
| `colours` | string[] | No | `[]` | Array of hex colour strings for chart series. |
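An aggregation query typically groups input rows into one value per timestamp and series. A sketch, again with illustrative table and column names:

```sql
-- queries/input-aggregations/native-demand.sql (illustrative schema)
SELECT
  settlementdate,
  regionid,
  SUM(totaldemand) AS native_demand
FROM input_demand
GROUP BY settlementdate, regionid;
```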
See Input aggregations for detailed explanations.
## SQL File Paths

All query paths in the configuration (fetcher queries, sensitivity queries,
input aggregation queries, and `runDatetimeQuery`) are resolved relative to the
project root.
## Full Example

Here’s a complete `.pd4castrrc.json` for a day-ahead price forecast model with
fetched inputs, output series, and input aggregations:
```json
{
  "name": "Day Ahead Price Forecast",
  "forecastVariable": "price",
  "timeHorizon": "day_ahead",
  "displayTimezone": "Australia/Brisbane",
  "public": false,
  "runMode": "AUTOMATIC",
  "outputFileFormat": "json",
  "runDatetimeQuery": "queries/run-datetime.sql",
  "metadata": {
    "resolution": "30min",
    "description": "30-minute day-ahead electricity price forecast"
  },
  "inputs": [
    {
      "key": "dispatch_price",
      "trigger": "WAIT_FOR_LATEST_FILE",
      "uploadFileFormat": "json",
      "targetFileFormat": "json",
      "fetcher": {
        "type": "AEMO_MMS",
        "checkInterval": 300,
        "config": {
          "checkQuery": "queries/data-fetchers/dispatch-price-check.sql",
          "fetchQuery": "queries/data-fetchers/dispatch-price-fetch.sql"
        }
      }
    },
    {
      "key": "regional_boundaries",
      "trigger": "USE_MOST_RECENT_FILE",
      "uploadFileFormat": "csv",
      "targetFileFormat": "csv"
    }
  ],
  "outputs": [
    {
      "name": "NSW1",
      "key": "nsw1",
      "type": "float",
      "seriesKey": true,
      "colour": "#84EDDC"
    },
    {
      "name": "QLD1",
      "key": "qld1",
      "type": "float",
      "seriesKey": true,
      "colour": "#FD4E4E"
    },
    {
      "name": "SA1",
      "key": "sa1",
      "type": "float",
      "seriesKey": true,
      "colour": "#FED600"
    },
    {
      "name": "TAS1",
      "key": "tas1",
      "type": "float",
      "seriesKey": true,
      "colour": "#40A967"
    },
    {
      "name": "VIC1",
      "key": "vic1",
      "type": "float",
      "seriesKey": true,
      "colour": "#1965C6"
    }
  ],
  "sensitivities": [
    {
      "name": "High Demand (+10%)",
      "key": "high-demand",
      "query": "queries/sensitivities/high-demand.sql"
    }
  ],
  "inputAggregations": [
    {
      "name": "Native Demand",
      "query": "queries/input-aggregations/native-demand.sql",
      "description": "Regional demand by demand_and_nonshedgen",
      "colours": ["#008000", "#009900", "#00B300", "#00CC00", "#00E600"]
    }
  ]
}
```

## Next Steps
- Model inputs — Detailed guide to input configuration and data fetchers.
- Model outputs — How to define your output schema.
- Sensitivities — Scenario analysis and input aggregations.
- Testing your model — Local validation workflow.
- Publishing — How to ship your model.
- Run modes and scheduling — Automatic and on-demand triggering.