Curtailment Events with ukpyn¶
This tutorial covers working with curtailment data from UK Power Networks using the curtailment orchestrator.
What you'll learn:
- Understanding curtailment - what it is and why it happens
- Using the curtailment orchestrator
- Listing available datasets
- Fetching curtailment events with filtering
- Using the generic `get()` function
- Exporting curtailment data
Prerequisites:
- Complete 01-getting-started.ipynb first
- Your UKPN API key set as the `UKPN_API_KEY` environment variable
- These tutorials require additional dependencies. Install them with `pip install "ukpyn[all]"` (see Tutorial 01 for full setup instructions)
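If you prefer to set the key inside a notebook session rather than in your shell profile, a minimal sketch (the variable name `UKPN_API_KEY` comes from the prerequisites above; replace the placeholder with your real key):

```python
import os

# Set the key for this process only; this does not persist across sessions.
os.environ.setdefault("UKPN_API_KEY", "your-api-key-here")

# Confirm it is visible to libraries that read it from the environment.
if os.environ.get("UKPN_API_KEY"):
    print("UKPN_API_KEY is set for this session.")
else:
    print("UKPN_API_KEY is missing; set it before importing ukpyn.")
```

Setting it in your shell profile is still the better long-term option, since keys hard-coded in notebooks are easy to leak.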
1. Introduction to Curtailment¶
What is Curtailment?¶
Curtailment refers to the deliberate reduction of electricity generation or consumption at specific sites on the distribution network. This is a critical tool for Distribution Network Operators (DNOs) like UK Power Networks to manage grid constraints and maintain network stability.
When Does Curtailment Occur?¶
Curtailment events typically happen when:
- Thermal constraints: Network equipment (cables, transformers) would exceed safe operating temperatures
- Voltage constraints: Voltage levels would go outside acceptable limits
- Fault level constraints: Short-circuit current would exceed equipment ratings
- Reverse power flow: Excessive generation causing power to flow "backwards" on the network
- Planned outages: During maintenance windows when network capacity is reduced
Who Gets Curtailed?¶
Sites with flexible connections are typically subject to curtailment. These are generation sites (solar farms, wind farms, battery storage) or demand sites that have agreed to reduce output/consumption when required, often in exchange for faster or cheaper connection agreements.
Why Track Curtailment?¶
Understanding curtailment data is valuable for:
- Project developers: Assessing connection quality at different locations
- Asset operators: Understanding historical curtailment patterns
- Researchers: Studying grid constraint patterns and the energy transition
- Flexibility providers: Identifying opportunities for network services
2. Setup¶
import ukpyn
ukpyn.check_api_key()
print("API key configured!")
from ukpyn import curtailment
print("Curtailment orchestrator imported successfully!")
# A quick sanity check: print the orchestrator's repr to confirm it loaded.
print(repr(curtailment))
The curtailment module provides module-level convenience functions that you can use directly without creating a client or orchestrator instance:
- `curtailment.get()` - Generic data retrieval
- `curtailment.get_events()` - Get curtailment events with filtering
- `curtailment.export()` - Export data to various formats
- `curtailment.available_datasets` - List available datasets
3. Listing Available Datasets¶
The curtailment orchestrator provides access to UK Power Networks curtailment datasets through friendly names.
# View available datasets
print("Available curtailment datasets:")
print("-" * 40)
for dataset_name in curtailment.available_datasets:
print(f" - {dataset_name}")
# Expected output:
# Available curtailment datasets:
# ----------------------------------------
# - events
# - site_specific
Dataset Descriptions¶
| Dataset Name | Description |
|---|---|
| `events` | Site-specific curtailment events with details on when and why curtailment occurred |
| `site_specific` | Alias for `events` - same dataset, alternative name |
The curtailment events dataset contains records of individual curtailment events including:
- Site identifier
- Date and time of curtailment
- Driver/reason for curtailment
- Duration and energy curtailed
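The exact schema comes from the ODP dataset itself, but as an illustration of the fields above, one event record might be modelled like this (the class and field names are hypothetical, not part of ukpyn):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CurtailmentEvent:
    """Illustrative shape of one curtailment event record (hypothetical names)."""
    site_id: str           # Site identifier
    start_time: datetime   # When the curtailment began
    driver: str            # Reason, e.g. "Constraint" or "Non-constraint"
    duration_hours: float  # How long the event lasted
    energy_mwh: float      # Energy curtailed over the event

event = CurtailmentEvent("SITE001", datetime(2024, 1, 15, 9, 30), "Constraint", 2.5, 12.5)
print(f"{event.site_id}: {event.energy_mwh} MWh curtailed ({event.driver})")
```

Check the actual field names against `record.fields` from a real response before relying on them.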
4. Fetching Curtailment Events¶
# Get the first 10 curtailment events
events = curtailment.get_events(limit=10)
print(f"Total events available: {events.total_count}")
print(f"Events returned: {len(events.records)}")
print("\n" + "=" * 60)
# Display the first few events
for i, record in enumerate(events.records[:5], 1):
print(f"\nEvent {i} (ID: {record.id})")
print("-" * 40)
if record.fields:
for key, value in record.fields.items():
print(f" {key}: {value}")
# Expected output:
# Total events available: 15234
# Events returned: 10
#
# ============================================================
#
# Event 1 (ID: abc123...)
# ----------------------------------------
# site_id: SITE001
# date: 2024-01-15
# driver: thermal
# energy_mwh: 12.5
# ...
Filtering by Site ID¶
Retrieve curtailment events for a specific site:
# Get events for a specific site
# Replace 'SITE001' with an actual site ID from your data
SITE_ID = "SITE001" # Example - adjust based on actual site IDs
try:
site_events = curtailment.get_events(site_id=SITE_ID, limit=20)
print(f"Curtailment events for site '{SITE_ID}':")
print(f"Total events: {site_events.total_count}")
print(f"Retrieved: {len(site_events.records)}")
if site_events.records:
print("\nFirst event:")
if site_events.records[0].fields:
for key, value in site_events.records[0].fields.items():
print(f" {key}: {value}")
else:
print("\nNo events found for this site.")
print("Tip: Try a different site_id from the available data.")
except Exception as e:
print(f"Error: {e}")
# Expected output:
# Curtailment events for site 'SITE001':
# Total events: 47
# Retrieved: 20
#
# First event:
# site_id: SITE001
# date: 2024-03-15
# driver: thermal
# ...
Filtering by Date Range¶
Retrieve curtailment events within a specific time period.
The date parameters accept:
- ISO format strings: `'2024-01-01'`
- Python `date` objects: `date(2024, 1, 1)`
- Python `datetime` objects: `datetime(2024, 1, 1)`
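All three forms above describe the same day and can be normalised to one ISO string. A small helper showing the equivalence (not part of ukpyn, just stdlib):

```python
from datetime import date, datetime

def to_iso_date(value):
    """Normalise a str / date / datetime into a 'YYYY-MM-DD' string."""
    if isinstance(value, str):
        return value
    if isinstance(value, datetime):  # check datetime first: datetime subclasses date
        return value.date().isoformat()
    if isinstance(value, date):
        return value.isoformat()
    raise TypeError(f"Unsupported date value: {value!r}")

print(to_iso_date("2024-01-01"))                 # 2024-01-01
print(to_iso_date(date(2024, 1, 1)))             # 2024-01-01
print(to_iso_date(datetime(2024, 1, 1, 12, 0)))  # 2024-01-01
```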
# Get curtailment events for Q1 2024
q1_events = curtailment.get_events(
start_date="2024-01-01", end_date="2024-03-31", limit=50
)
print("Curtailment events in Q1 2024:")
print(f"Total events: {q1_events.total_count}")
print(f"Retrieved: {len(q1_events.records)}")
# Expected output:
# Curtailment events in Q1 2024:
# Total events: 3421
# Retrieved: 50
# Using Python date objects
from datetime import date
# Get events from the last month using date objects
events_with_dates = curtailment.get_events(
start_date=date(2024, 6, 1), end_date=date(2024, 6, 30), limit=25
)
print("Curtailment events in June 2024:")
print(f"Total events: {events_with_dates.total_count}")
# Expected output:
# Curtailment events in June 2024:
# Total events: 892
Filtering by Driver/Reason¶
Filter curtailment events by the reason (driver) for the curtailment:
# Get constraint curtailment events
constraint_events = curtailment.get_events(driver="Constraint", limit=20)
print("Constraint curtailment events:")
print(f"Total events: {constraint_events.total_count}")
print(f"Retrieved: {len(constraint_events.records)}")
# Show a sample event
if constraint_events.records:
print("\nSample constraint event:")
first = constraint_events.records[0]
if first.fields:
for key, value in first.fields.items():
print(f" {key}: {value}")
# Expected output:
# Constraint curtailment events:
# Total events: 8523
# Retrieved: 20
#
# Sample constraint event:
# site_id: SITE042
# date: 2024-07-15
# driver: Constraint
# ...
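Once you have a page of events, a common next step is tallying them by driver. A sketch using plain dicts as stand-ins for the `record.fields` mappings seen above (real field values come from the ODP schema):

```python
from collections import Counter

# Stand-ins for fields from records fetched via curtailment.get_events().
fields_per_record = [
    {"site_id": "SITE001", "driver": "Constraint"},
    {"site_id": "SITE042", "driver": "Non-constraint"},
    {"site_id": "SITE001", "driver": "Constraint"},
]

# Count events per driver, tolerating records with no driver field.
driver_counts = Counter(f.get("driver", "unknown") for f in fields_per_record)
for driver, count in driver_counts.most_common():
    print(f"{driver}: {count}")
# Constraint: 2
# Non-constraint: 1
```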
Combining Multiple Filters¶
You can combine filters to create more specific queries:
# Combine site, date range, and driver filters
filtered_events = curtailment.get_events(
site_id="SITE001", # Specific site
start_date="2024-01-01", # From January 2024
end_date="2024-06-30", # To June 2024
driver="Non-constraint", # Only Non-constraint events
limit=100,
)
print("Combined filter query:")
print(" Site: SITE001")
print(" Date range: 2024-01-01 to 2024-06-30")
print(" Driver: Non-constraint")
print(f"\nTotal matching events: {filtered_events.total_count}")
print(f"Retrieved: {len(filtered_events.records)}")
# Expected output:
# Combined filter query:
# Site: SITE001
# Date range: 2024-01-01 to 2024-06-30
# Driver: Non-constraint
#
# Total matching events: 12
# Retrieved: 12
Pagination¶
For large result sets, use offset to paginate through records:
# Paginate through results
page_size = 50
page = 0 # 0-indexed
# Get first page
page1 = curtailment.get_events(limit=page_size, offset=page * page_size)
total_records = page1.total_count
total_pages = (total_records + page_size - 1) // page_size
print("Pagination info:")
print(f" Total records: {total_records}")
print(f" Page size: {page_size}")
print(f" Total pages: {total_pages}")
print(f" Current page: {page + 1}")
print(f" Records on this page: {len(page1.records)}")
# Get second page
page2 = curtailment.get_events(limit=page_size, offset=1 * page_size)
print(f"\nPage 2 records: {len(page2.records)}")
# Expected output:
# Pagination info:
# Total records: 15234
# Page size: 50
# Total pages: 305
# Current page: 1
# Records on this page: 50
#
# Page 2 records: 50
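The page-by-page pattern above generalises to a loop that stops when a short page comes back. A sketch with a stand-in `fetch` function in place of `curtailment.get_events` (same `limit`/`offset` semantics assumed):

```python
def fetch(limit, offset):
    """Stand-in for curtailment.get_events(limit=..., offset=...)."""
    data = list(range(123))  # pretend the dataset holds 123 records
    return data[offset:offset + limit]

def iter_all(page_size=50):
    """Yield every record by walking pages until a short or empty page appears."""
    offset = 0
    while True:
        page = fetch(limit=page_size, offset=offset)
        yield from page
        if len(page) < page_size:  # last page reached
            break
        offset += page_size

records = list(iter_all(page_size=50))
print(f"Fetched {len(records)} records across pages")  # Fetched 123 records across pages
```

Against the real API, keep `page_size` modest to stay within any server-side limits.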
5. Using the Generic get() Function¶
The curtailment.get() function provides a more flexible way to query any curtailment dataset with full control over query parameters.
# Basic usage of get() function
data = curtailment.get(
dataset="events", # or "site_specific" (they're the same)
limit=10,
)
print(f"Retrieved {len(data.records)} records")
print(f"Total available: {data.total_count}")
# Expected output:
# Retrieved 10 records
# Total available: 15234
# Using get() with ODSQL where clause
# This gives you full control over filtering
# Note: Use actual field names from the ODP schema
data_with_where = curtailment.get(
dataset="events",
limit=20,
where="start_time_local >= '2024-01-01' AND start_time_local <= '2024-03-31'",
)
print(f"Q1 2024 events (using where clause): {data_with_where.total_count}")
# Expected output:
# Q1 2024 events (using where clause): 3421
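The ODSQL date filter above is just a string, so if you build it programmatically a small helper keeps the quoting consistent (illustrative only; the field name `start_time_local` is taken from the example above):

```python
def date_range_where(field, start, end):
    """Build an ODSQL-style inclusive date-range predicate for one field."""
    return f"{field} >= '{start}' AND {field} <= '{end}'"

clause = date_range_where("start_time_local", "2024-01-01", "2024-03-31")
print(clause)
# start_time_local >= '2024-01-01' AND start_time_local <= '2024-03-31'
```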
# Using get() with field selection
# Note: Use actual field names from the ODP schema
selected_data = curtailment.get(
dataset="events",
limit=5,
select="der_name, start_time_local, driver", # Actual field names
)
print("Records with selected fields only:")
print("-" * 40)
for record in selected_data.records:
if record.fields:
print(record.fields)
# Expected output:
# Records with selected fields only:
# ----------------------------------------
# {'der_name': 'SITE001', 'start_time_local': '2024-07-15T...', 'driver': 'thermal'}
# ...
# Using get() with sorting
# Note: Use actual field names from the ODP schema
sorted_data = curtailment.get(
dataset="events",
limit=5,
order_by="-start_time_local", # Sort by date descending (most recent first)
)
print("Most recent curtailment events:")
print("-" * 40)
for record in sorted_data.records:
if record.fields:
date_val = record.fields.get("start_time_local", "N/A")
site_val = record.fields.get("der_name", "N/A")
print(f" {date_val} - {site_val}")
# Expected output:
# Most recent curtailment events:
# ----------------------------------------
# 2024-07-15T... - SITE001
# 2024-07-15T... - SITE042
# ...
Comparison: get_events() vs get()¶
| Feature | `get_events()` | `get()` |
|---|---|---|
| Purpose | Specialized for events | Generic access |
| Filtering | Built-in parameters (`site_id`, `start_date`, etc.) | Manual `where` clause |
| Ease of use | Higher | Lower |
| Flexibility | Lower | Higher |
| Best for | Quick, common queries | Complex or custom queries |
6. Exporting Curtailment Data¶
Use the export() function to download curtailment data in various formats.
# Export to CSV
csv_data = curtailment.export(
dataset="events",
format="csv",
limit=100, # Limit for this example
)
print(f"Exported {len(csv_data)} bytes of CSV data")
# Preview first few lines
print("\nCSV Preview (first 500 characters):")
print("-" * 40)
print(csv_data.decode("utf-8")[:500])
# Expected output:
# Exported 15234 bytes of CSV data
#
# CSV Preview (first 500 characters):
# ----------------------------------------
# site_id;date;driver;energy_mwh;...
# SITE001;2024-01-15;Constraint;12.5;...
# ...
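If you don't want the pandas dependency shown later, the exported bytes can be parsed with the standard library alone; note the semicolon separator. A sketch over a hand-written sample, since real field names vary by dataset:

```python
import csv
import io

# Stand-in for bytes returned by curtailment.export(...).
csv_bytes = b"site_id;date;driver;energy_mwh\nSITE001;2024-01-15;Constraint;12.5\n"

# DictReader maps each row to {header: value}; delimiter matches the export.
reader = csv.DictReader(io.StringIO(csv_bytes.decode("utf-8")), delimiter=";")
rows = list(reader)
print(f"Parsed {len(rows)} row(s)")           # Parsed 1 row(s)
print(rows[0]["site_id"], rows[0]["driver"])  # SITE001 Constraint
```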
# Save CSV to file (optional)
from pathlib import Path
csv_data = curtailment.export(
dataset="events",
format="csv",
limit=500,
)
save_dir = None # Set to a directory (e.g. "exports") to enable writing files.
if save_dir:
output_file = Path(save_dir) / "curtailment_events.csv"
output_file.parent.mkdir(parents=True, exist_ok=True)
with open(output_file, "wb") as f:
f.write(csv_data)
print(f"Saved {output_file}")
else:
print("File save skipped; set save_dir to enable writing.")
# Expected output with save_dir unset:
# File save skipped; set save_dir to enable writing.
#
# Expected output with save_dir="exports":
# Saved exports/curtailment_events.csv
# Export to JSON
import json
json_data = curtailment.export(dataset="events", format="json", limit=5)
# Parse and display
data = json.loads(json_data)
print(f"Exported {len(data)} records as JSON")
print("\nFirst record:")
print(json.dumps(data[0], indent=2))
# Expected output:
# Exported 5 records as JSON
#
# First record:
# {
# "site_id": "SITE001",
# "date": "2024-07-15",
# "driver": "Constraint",
# ...
# }
# Export to Excel (optional)
from pathlib import Path
xlsx_data = curtailment.export(
dataset="events",
format="xlsx",
limit=100,
)
save_dir = None # Set to a directory (e.g. "exports") to enable writing files.
if save_dir:
output_file = Path(save_dir) / "curtailment_events.xlsx"
output_file.parent.mkdir(parents=True, exist_ok=True)
with open(output_file, "wb") as f:
f.write(xlsx_data)
print(f"Saved {output_file} ({len(xlsx_data)} bytes)")
print("Open the file in Excel to view the data.")
else:
print(
f"Exported {len(xlsx_data)} bytes (file save skipped; set save_dir to enable writing)."
)
# Expected output with save_dir unset:
# Exported 12543 bytes (file save skipped; set save_dir to enable writing).
#
# Expected output with save_dir="exports":
# Saved exports/curtailment_events.xlsx (12543 bytes)
# Open the file in Excel to view the data.
Loading Exported Data into pandas¶
# Load exported CSV into pandas
try:
from io import BytesIO
import pandas as pd
# Export and load directly into pandas
csv_data = curtailment.export(dataset="events", format="csv", limit=200)
# Note: OpenDataSoft CSV uses semicolon separator
df = pd.read_csv(BytesIO(csv_data), sep=";")
print(f"DataFrame shape: {df.shape}")
print(f"\nColumns: {list(df.columns)}")
print("\nFirst 5 rows:")
display(df.head())
except ImportError:
print("pandas not installed. Install with: pip install pandas")
# Expected output:
# DataFrame shape: (200, 8)
#
# Columns: ['site_id', 'date', 'driver', 'energy_mwh', ...]
#
# First 5 rows:
# site_id date driver energy_mwh ...
# 0 SITE001 2024-07-15 Constraint 12.5 ...
# 1 SITE042 2024-07-14 Non-constraint 8.3 ...
# ...
Available Export Formats¶
| Format | Description | Use Case |
|---|---|---|
| `csv` | Comma-separated values (actually semicolon-separated) | General analysis, Excel |
| `json` | JSON format | Web applications, APIs |
| `xlsx` | Microsoft Excel | Sharing with non-technical users |
| `geojson` | GeoJSON format | Geographic visualization |
| `parquet` | Apache Parquet | Big data workflows |
| `kml` | Keyhole Markup Language | Google Earth |
| `shapefile` | Esri Shapefile | GIS applications |
Summary¶
You've learned how to:
- Understand curtailment - What it is, when it occurs, and why it matters
- Import the orchestrator - `from ukpyn import curtailment`
- List available datasets - `curtailment.available_datasets`
- Get curtailment events with filtering - using `curtailment.get_events()` with:
  - `site_id` - Filter by specific site
  - `start_date`/`end_date` - Filter by date range
  - `driver` - Filter by curtailment reason
  - `limit`/`offset` - Pagination
- Use the generic get function - `curtailment.get()` with full ODSQL support
- Export data - `curtailment.export()` to CSV, JSON, Excel, and more
Quick Reference¶
from ukpyn import curtailment
# List datasets
curtailment.available_datasets
# Get events with filtering
events = curtailment.get_events(
site_id="SITE001",
start_date="2024-01-01",
end_date="2024-12-31",
driver="Constraint",
limit=100
)
# Generic get with ODSQL
data = curtailment.get(
dataset="events",
where="driver = 'Constraint'",
    order_by="-start_time_local",
limit=50
)
# Export to file
csv_bytes = curtailment.export("events", format="csv")
Next Steps¶
- Explore the 03-analysis-patterns.ipynb tutorial for data analysis workflows
- Check the UK Power Networks Open Data Portal for dataset documentation
- Look at the examples folder for community contributions