LTDS Network Planning with ukpyn¶
This tutorial covers the Long Term Development Statement (LTDS) orchestrator for accessing network planning data from UK Power Networks.
What you'll learn:
- What is LTDS and why it's useful
- Using the LTDS orchestrator with simple imports
- Listing available datasets
- Getting Table 3a (observed peak demand)
- Getting Table 2a/2b (transformer data)
- Getting Table 5 (generation data)
- Getting Table 6 (connection interest)
- Getting infrastructure projects
- Exporting data to CSV
Prerequisites: Complete 01-getting-started.ipynb first.
- These tutorials require additional dependencies. Install them with
pip install "ukpyn[all]" — see Tutorial 01 for full setup instructions.
1. What is LTDS?¶
The Long Term Development Statement (LTDS) is a key regulatory document published by UK Power Networks twice yearly. It provides detailed information about the distribution network that is essential for:
- Network planners understanding infrastructure capacity
- Developers assessing connection opportunities
- Researchers analysing grid constraints and generation patterns
- Local authorities planning decarbonisation initiatives
LTDS Tables Overview¶
| Table | Description | Key Use Cases |
|---|---|---|
| Table 2a/2b | Transformer specifications | Capacity planning, asset modelling |
| Table 3a | Observed peak demand | Load forecasting, headroom analysis |
| Table 5 | Generation capacity | Renewable energy mapping, curtailment analysis |
| Table 6 | Connection interest | Development pipeline, constraint identification |
| Projects | Infrastructure schemes | Investment planning, reinforcement tracking |
Licence Areas¶
UK Power Networks operates across three licence areas:
- EPN - Eastern Power Networks (East of England)
- LPN - London Power Networks (Greater London)
- SPN - South Eastern Power Networks (South East England)
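Filters later in this tutorial use both the short code and the full licence-area name, and data fields such as licencearea store the full name. A tiny illustrative mapping (not part of ukpyn) keeps the two forms straight:

```python
# Illustrative only: map short licence-area codes to the full names that appear
# in LTDS data fields such as "licencearea".
LICENCE_AREAS = {
    "EPN": "Eastern Power Networks (EPN)",
    "LPN": "London Power Networks (LPN)",
    "SPN": "South Eastern Power Networks (SPN)",
}


def full_area_name(code: str) -> str:
    """Expand a short licence-area code to its full name."""
    return LICENCE_AREAS[code.upper()]


print(full_area_name("spn"))  # South Eastern Power Networks (SPN)
```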
2. Using the LTDS Orchestrator¶
The LTDS orchestrator provides a simple, ergonomic interface for accessing LTDS data. Import it directly from ukpyn:
import ukpyn
ukpyn.check_api_key()
print("API key configured!")
API key configured!
# Import the LTDS orchestrator
from ukpyn import ltds
print("LTDS orchestrator imported successfully!")
# A quick way to confirm the orchestrator is working: print its repr, which lists the available methods.
print(repr(ltds))
LTDS orchestrator imported successfully!
LTDSOrchestrator(methods=[export, get, get_async, get_cim, get_projects, get_table_1, get_table_2a, get_table_2b, get_table_3a, get_table_3b, get_table_4a, get_table_4b, get_table_5, get_table_6, get_table_7, get_table_8])
3. Listing Available Datasets¶
The ltds.available_datasets property shows all datasets you can access through the orchestrator.
# List all available LTDS datasets
print("Available LTDS datasets:")
print("-" * 40)
for dataset in ltds.available_datasets:
    print(f" - {dataset}")
print(f"\nTotal: {len(ltds.available_datasets)} datasets")
# Expected output:
# Available LTDS datasets:
# ----------------------------------------
#  - table_1
#  - circuit_data
#  - table_2a
#  - transformer_2w
#  - table_2b
#  - transformer_3w
#  - table_3a
#  - observed_demand
#  - observed_peak_demand
#  - table_3a_transposed
#  - table_3b
#  - forecast_demand
#  - table_4a
#  - fault_level_3ph
#  - table_4b
#  - fault_level_earth
#  - table_5
#  - generation
#  - table_6
#  - connection_interest
#  - table_7
#  - restrictions
#  - table_8
#  - fault_data
#  - projects
#  - infrastructure_projects
#  - cim
#
# Total: 27 datasets
Available LTDS datasets:
----------------------------------------
 - table_1
 - circuit_data
 - table_2a
 - transformer_2w
 - table_2b
 - transformer_3w
 - table_3a
 - observed_demand
 - observed_peak_demand
 - table_3a_transposed
 - table_3b
 - forecast_demand
 - table_4a
 - fault_level_3ph
 - table_4b
 - fault_level_earth
 - table_5
 - generation
 - table_6
 - connection_interest
 - table_7
 - restrictions
 - table_8
 - fault_data
 - projects
 - infrastructure_projects
 - cim
Total: 27 datasets
Using the Generic get() Function¶
The ltds.get() function allows you to fetch any LTDS dataset by name:
# Fetch data using the generic get() function
response = ltds.get("table_3a", limit=5)
print("Dataset: table_3a")
print(f"Total records: {response.total_count}")
print(f"Records returned: {len(response.records)}")
# Show first record fields
if response.records:
    print("\nSample record fields:")
    for key, value in list(response.records[0].fields.items())[:5]:
        print(f"  {key}: {value}")
# Expected output:
# Dataset: table_3a
# Total records: 1964
# Records returned: 5
#
# Sample record fields:
#   gridsupplypoint: Amersham
#   substation: Coldharbour Farm 11kV
#   season: Summer
#   maximum_demand_24_25_mw: 2.9
#   maximum_demand_24_25_pf: 0.96
Dataset: table_3a
Total records: 1964
Records returned: 5
Sample record fields:
  gridsupplypoint: Amersham
  substation: Coldharbour Farm 11kV
  season: Summer
  maximum_demand_24_25_mw: 2.9
  maximum_demand_24_25_pf: 0.96
4. Table 3a - Observed Peak Demand¶
Table 3a contains observed load data for primary substations. This is essential for:
- Understanding current network utilisation
- Identifying substations approaching capacity
- Load forecasting and trend analysis
Use the dedicated get_table_3a() function for convenient filtering:
# Get Table 3a data for a specific licence area
epn_demand = ltds.get_table_3a(licence_area="EPN", limit=10)
print("EPN Observed Peak Demand Data")
print(f"Total records: {epn_demand.total_count}")
print(f"Records returned: {len(epn_demand.records)}")
print("-" * 60)
for record in epn_demand.records[:5]:
    fields = record.fields
    substation = fields.get("substation", "Unknown")
    peak = fields.get(
        "maximum_demand_24_25_mw", "N/A"
    )  # Adjust field name based on actual dataset
    season = fields.get("season", "N/A")
    print(f" {substation}: {peak} MW (Season: {season})")
# Expected output:
# EPN Observed Peak Demand Data
# Total records: 1110
# Records returned: 10
# ------------------------------------------------------------
#  Ilmer Primary 11kV: 7.0 MW (Season: Summer)
#  Thame Primary 11kV: 9.4 MW (Season: Summer)
#  Wendover Primary 11kV: 11.4 MW (Season: Winter)
#  ...
EPN Observed Peak Demand Data
Total records: 1110
Records returned: 10
------------------------------------------------------------
 Ilmer Primary 11kV: 7.0 MW (Season: Summer)
 Thame Primary 11kV: 9.4 MW (Season: Summer)
 Wendover Primary 11kV: 11.4 MW (Season: Winter)
 Romford Primary 11kV: 23.2 MW (Season: Summer)
 Selinas Ln Primary 11kV: 9.5 MW (Season: Summer)
# Filter by licence area (full licence-area names also work)
lpn_demand = ltds.get_table_3a(
    licence_area="London Power Networks (LPN)", limit=10
)  # add other filters as needed
print("London Power Networks - Peak Demand (2024/25)")
print(f"Total records: {lpn_demand.total_count}")
print("-" * 60)
for record in lpn_demand.records[:5]:
    fields = record.fields
    print(
        f" {fields.get('substation', 'Unknown')}: {fields.get('maximum_demand_24_25_mw', 'N/A')} MW"
    )
# Expected output:
# London Power Networks - Peak Demand (2024/25)
# Total records: 258
# ------------------------------------------------------------
#  Axe St 11kV: 31.9 MW
#  Barking West 11kV: 14.2 MW
#  ...
London Power Networks - Peak Demand (2024/25)
Total records: 258
------------------------------------------------------------
 Axe St 11kV: 31.9 MW
 Barking West 11kV: 14.2 MW
 Whiston Rd 11kV: 27.9 MW
 Churchfields 11kV: 34.4 MW
 Forest Hill 11kV: 14.7 MW
5. Table 2a/2b - Transformer Data¶
Tables 2a and 2b contain transformer specifications:
- Table 2a: Two-winding transformers (1x HV, 1x LV)
- Table 2b: Three-winding transformers (1x HV, 2x LV)
This data is useful for capacity planning and network modelling.
# Get Table 2a - Two-winding transformer data
transformers_2w = ltds.get_table_2a(licence_area="EPN", limit=10)
print("EPN Two-Winding Transformers (Table 2a)")
print(f"Total records: {transformers_2w.total_count}")
print("-" * 60)
for record in transformers_2w.records[:5]:
    fields = record.fields
    substation = fields.get("gridsupplypoint", "Unknown")
    winter_rating = fields.get("transformer_rating_mva_winter", "N/A")
    summer_rating = fields.get("transformer_rating_mva_summer", "N/A")
    voltage_hv = fields.get("voltage_hv", "N/A")
    voltage_lv = fields.get("voltage_lv", "N/A")
    print(
        f" {substation}: {winter_rating}/{summer_rating} MVA ({voltage_hv}/{voltage_lv} kV)"
    )
# Expected output:
# EPN Two-Winding Transformers (Table 2a)
# Total records: 1153
# ------------------------------------------------------------
#  Amersham: 15/12 MVA (33/11.5 kV)
#  Barking 132kV: 117/90 MVA (132/33 kV)
#  ...
EPN Two-Winding Transformers (Table 2a)
Total records: 1153
------------------------------------------------------------
 Amersham: 15/12 MVA (33/11.5 kV)
 Amersham: 18/15 MVA (33/11 kV)
 Barking 132kV: 117/90 MVA (132/33 kV)
 Barking 132kV: 13/10 MVA (33/11 kV)
 Barking West 33kV: 24/18 MVA (33/11 kV)
# Get Table 2b - Three-winding transformer data
transformers_3w = ltds.get_table_2b(licence_area="EPN", limit=10)
print("EPN Three-Winding Transformers (Table 2b)")
print(f"Total records: {transformers_3w.total_count}")
print("-" * 60)
for record in transformers_3w.records[:5]:
    fields = record.fields
    substation = fields.get("hv_substation", "Unknown")
    summer_rating = fields.get("transformer_rating_mva_summer_hv", "N/A")
    winter_rating = fields.get("transformer_rating_mva_winter_hv", "N/A")
    print(f" {substation}: {winter_rating}/{summer_rating} MVA")
# Expected output:
# EPN Three-Winding Transformers (Table 2b)
# Total records: 31
# ------------------------------------------------------------
#  Chelmsford North Grid 132kV: 78/60 MVA
#  Waltham Park 132kV: 78/60 MVA
#  ...
EPN Three-Winding Transformers (Table 2b)
Total records: 31
------------------------------------------------------------
 Chelmsford North Grid 132kV: 78/60 MVA
 Chelmsford North Grid 132kV: 78/60 MVA
 Waltham Park 132kV: 78/60 MVA
 Bushey Mill Grid 132kV: 78/60 MVA
 EPN 0205: 117/90 MVA
# Combine an exact-match refine filter with a numeric where filter
# Transformer tables have both HV and LV substations, so when searching by
# name, use hv_substation or lv_substation as appropriate for the dataset.
search_results = ltds.get(
    "table_2a",
    refine={
        "licencearea": "South Eastern Power Networks (SPN)"
    },  # Exact match filter
    where="transformer_rating_mva_winter > 50",  # Numeric filter
    limit=20,
)
print("SPN Transformers > 50 MVA")
print(f"Found: {search_results.total_count} records")
print("-" * 60)
for record in search_results.records[:10]:
    fields = record.fields
    gsp = fields.get("gridsupplypoint", "N/A")
    winter_rating = fields.get("transformer_rating_mva_winter", "N/A")
    summer_rating = fields.get("transformer_rating_mva_summer", "N/A")
    voltage_hv = fields.get("voltage_hv", "N/A")
    voltage_lv = fields.get("voltage_lv", "N/A")
    print(
        f" {gsp}: {winter_rating}/{summer_rating} MVA ({voltage_hv}/{voltage_lv} kV)"
    )
# Expected output:
# SPN Transformers > 50 MVA
# Found: 0 records
# ------------------------------------------------------------
# Note: this query returned no records against the current data release; if you
# expected matches, check the field name and how the rating values are stored.
SPN Transformers > 50 MVA
Found: 0 records
------------------------------------------------------------
6. Table 5 - Generation Data¶
Table 5 shows the capacity of existing distributed generation connected at each primary substation. This includes:
- Solar PV installations
- Wind generation
- Battery storage
- Other distributed energy resources
Essential for understanding renewable energy penetration and export constraints.
# Get generation data for a licence area
generation = ltds.get_table_5(licence_area="EPN", limit=10)
print("EPN Generation Capacity (Table 5)")
print(f"Total records: {generation.total_count}")
print("-" * 60)
for record in generation.records[:5]:
    fields = record.fields
    gridsupplypoint = fields.get("gridsupplypoint", "Unknown")
    tech = fields.get("fuel_type", "Unknown")
    capacity = fields.get("installedcapacity_mva", "N/A")
    print(f" {gridsupplypoint} ({tech}): {capacity} MVA")
# Expected output:
# EPN Generation Capacity (Table 5)
# Total records: 493
# ------------------------------------------------------------
#  Amersham (Energy Storage (>=1MW)): 210.5 MVA
#  BIGGLESWADE 132KV (Photovoltaic (>=1MW)): 157.7 MVA
#  ...
EPN Generation Capacity (Table 5)
Total records: 493
------------------------------------------------------------
 Amersham (Energy Storage (>=1MW)): 210.5 MVA
 Amersham (Energy Storage (>=1MW)): 104.2 MVA
 BIGGLESWADE 132KV (Photovoltaic (>=1MW)): 157.7 MVA
 Bramford (Energy Storage (>=1MW)): 147.4 MVA
 Brimsdown (EPN) (Photovoltaic (>=1MW)): 157.8 MVA
# Filter by technology type - Solar generation
solar = ltds.get("table_5", where="fuel_type LIKE '%Photovoltaic%'", limit=10)
print("Solar Generation Across All Licence Areas")
print(f"Total records: {solar.total_count}")
print("-" * 60)
for record in solar.records[:5]:
    fields = record.fields
    area = fields.get("licencearea", "?")
    substation = fields.get("substation", "Unknown")
    capacity = fields.get("installedcapacity_mva", "N/A")
    print(f" [{area}] {substation}: {capacity} MVA")
# Expected output:
# Solar Generation Across All Licence Areas
# Total records: 292
# ------------------------------------------------------------
#  [South Eastern Power Networks (SPN)] None: 5.7 MVA
#  [South Eastern Power Networks (SPN)] Walton 11kV: 1.3 MVA
#  ...
Solar Generation Across All Licence Areas
Total records: 292
------------------------------------------------------------
 [South Eastern Power Networks (SPN)] None: 5.7 MVA
 [South Eastern Power Networks (SPN)] Walton 11kV: 1.3 MVA
 [Eastern Power Networks (EPN)] None: 157.7 MVA
 [Eastern Power Networks (EPN)] None: 157.8 MVA
 [Eastern Power Networks (EPN)] None: 404.0 MVA
# Combine filters: Wind generation in SPN
spn_wind = ltds.get(
"table_5",
where='licencearea="South Eastern Power Networks (SPN)" AND fuel_type LIKE "%Wind%"',
limit=10,
)
print("Wind Generation in South Eastern Power Networks")
print(f"Total records: {spn_wind.total_count}")
print("-" * 60)
for record in spn_wind.records:
    fields = record.fields
    print(f" {fields.get('substation')}: {fields.get('installedcapacity_mva')} MVA")
# Expected output:
# Wind Generation in South Eastern Power Networks
# Total records: 6
# ------------------------------------------------------------
#  Herne Bay Grid 33kV: 146.8 MVA
#  Sheerness Grid 33kV: 20.5 MVA
#  ...
Wind Generation in South Eastern Power Networks
Total records: 6
------------------------------------------------------------
 None: 331.6 MVA
 Herne Bay Grid 33kV: 146.8 MVA
 None: 67.4 MVA
 Eastchurch Prison 6.6kV: 4.8 MVA
 Sheerness Grid 33kV: 20.5 MVA
 Polegate Grid 33kV: 7.5 MVA
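Records like those above are easy to summarise. The sketch below totals installed capacity per technology; the sample dictionaries are hypothetical stand-ins for the record.fields values you would collect from ltds.get_table_5(...).records:

```python
from collections import defaultdict

# Hypothetical stand-ins for record.fields from Table 5
sample_fields = [
    {"fuel_type": "Energy Storage (>=1MW)", "installedcapacity_mva": 210.5},
    {"fuel_type": "Energy Storage (>=1MW)", "installedcapacity_mva": 104.2},
    {"fuel_type": "Photovoltaic (>=1MW)", "installedcapacity_mva": 157.7},
]

# Total installed capacity per technology, treating missing values as zero
capacity_by_tech = defaultdict(float)
for fields in sample_fields:
    capacity_by_tech[fields["fuel_type"]] += fields.get("installedcapacity_mva") or 0.0

for tech, mva in sorted(capacity_by_tech.items()):
    print(f"{tech}: {mva:.1f} MVA")
```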
7. Table 6 - Connection Interest¶
Table 6 indicates the level of new connection interest at each primary substation. This shows the pipeline of:
- Queued generation connections
- Queued demand connections
- Accepted but not yet connected projects
Critical for understanding future network constraints and development hotspots.
# Get connection interest data
connections = ltds.get_table_6(licence_area="LPN", limit=10)
print("London Power Networks - Connection Interest (Table 6)")
print(f"Total records: {connections.total_count}")
print("-" * 60)
for record in connections.records[:5]:
    fields = record.fields
    substation = fields.get("substation", "Unknown")
    gen_queue = fields.get("generation_numbers_received_total_capacity", "N/A")
    demand_queue = fields.get("demand_numbers_received_total_capacity", "N/A")
    print(f" {substation}:")
    print(f" Generation queue: {gen_queue} MW")
    print(f" Demand queue: {demand_queue} MW")
# Expected output:
# London Power Networks - Connection Interest (Table 6)
# Total records: 133
# ------------------------------------------------------------
# Barking 132kV GIS:
#   Generation queue: None MW
#   Demand queue: 50.0 MW
# ...
London Power Networks - Connection Interest (Table 6)
Total records: 133
------------------------------------------------------------
Barking 132kV GIS:
Generation queue: None MW
Demand queue: 50.0 MW
Nelson St 11kV:
Generation queue: None MW
Demand queue: 20.2 MW
Sydenham Park 11kV:
Generation queue: None MW
Demand queue: 1.8 MW
City Road 132kV:
Generation queue: None MW
Demand queue: 1.5 MW
Devonshire Square 11kV:
Generation queue: None MW
Demand queue: 5.8 MW
# Search for connection interest at specific substations
romford = ltds.get("table_6", where='substation LIKE "%Romford%"')
print("Connection Interest - Substations matching 'ROMFORD'")
print(f"Found: {romford.total_count} records")
print("-" * 60)
for record in romford.records:
    fields = record.fields
    print(f"\n {fields.get('substation')}")
    for key, value in fields.items():
        if key != "substation" and value is not None:
            print(f" {key}: {value}")
# Expected output:
# Connection Interest - Substations matching 'ROMFORD'
# Found: 2 records
# ------------------------------------------------------------
# Romford Primary 11kV
#   gridsupplypoint: Barking 132kV
#   proposed_connection_voltage_kv: 11
#   status_of_connection: Connection offers made (not yet accepted by customer)
#   ...
Connection Interest - Substations matching 'ROMFORD'
Found: 2 records
------------------------------------------------------------
Romford Primary 11kV
gridsupplypoint: Barking 132kV
proposed_connection_voltage_kv: 11
status_of_connection: Connection offers made (not yet accepted by customer)
demand_numbers_received_total_number: 1.0
demand_numbers_received_total_capacity: 4.4
spatial_coordinates: {'lon': 0.181853, 'lat': 51.574901}
sitefunctionallocation: EPN-S0000000H7150
licencearea: Eastern Power Networks (EPN)
Romford Primary 11kV
gridsupplypoint: Barking 132kV
proposed_connection_voltage_kv: 11
status_of_connection: Budget Estimates Provided
demand_numbers_received_total_number: 2.0
demand_numbers_received_total_capacity: 2.8
spatial_coordinates: {'lon': 0.181853, 'lat': 51.574901}
sitefunctionallocation: EPN-S0000000H7150
licencearea: Eastern Power Networks (EPN)
8. Infrastructure Projects¶
The infrastructure projects dataset contains information about UK Power Networks' network investment and reinforcement schemes, including:
- Network reinforcement projects
- New connection schemes
- Asset replacement programmes
- Innovation projects
# Get infrastructure projects with convenience function
projects = ltds.get_projects(limit=10)
print("UKPN Infrastructure Projects")
print(f"Total records: {projects.total_count}")
print("-" * 60)
for record in projects.records[:5]:
    fields = record.fields
    name = fields.get("ltds_name", "Unknown")
    dno = fields.get("dno", "N/A")
    asset_type = fields.get("asset_type_or_quantity", "Unknown")
    completion_year = fields.get("expected_completion_year", "TBD")
    justification = fields.get("justification_for_the_need", "N/A")
    print(f" {name}")
    print(f" DNO: {dno}")
    print(f" Asset: {asset_type}")
    print(f" Justification: {justification}")
    print(f" Expected Completion: {completion_year}")
    print()
# Expected output:
# UKPN Infrastructure Projects
# Total records: 41
# ------------------------------------------------------------
# Burwell Local Grid 33kV
#   DNO: EPN
#   Asset: 1 x 33kV CB, 2 x grid transformers uprated
#   Justification: Load related reinforcement
#   Expected Completion: 2026
#
# East Cambridge Grid
#   DNO: EPN
#   Asset: Establish new Grid substation
#   Justification: Load related reinforcement
#   Expected Completion: 2027
# ...
UKPN Infrastructure Projects
Total records: 41
------------------------------------------------------------
Burwell Local Grid 33kV
DNO: EPN
Asset: 1 x 33kV CB, 2 x grid transformers uprated
Justification: Load related reinforcement
Expected Completion: 2026
East Cambridge Grid
DNO: EPN
Asset: Establish new Grid substation
Justification: Load related reinforcement
Expected Completion: 2027
Fulbourn Grid - Sawston 33kV
DNO: EPN
Asset: Additional 33kV Circuit
Justification: Load related reinforcement
Expected Completion: 2026
Radnor Primary
DNO: EPN
Asset: Installation of a third 33kV circuit and third primary transformer
Justification: Load related reinforcement
Expected Completion: 2026
Brockenhurst Primary 11kV/Mill Hill Primary 11kV
DNO: EPN
Asset: Asset replacement. Increase in Transformer Capacity and 11kV Network Reinforcement
Justification: Asset replacement
Expected Completion: 2026
# Filter projects by local authority using convenience function
islington_projects = ltds.get_projects(local_authority="Islington", limit=10)
print("Islington Infrastructure Projects")
print(f"Total: {islington_projects.total_count}")
print("-" * 60)
for record in islington_projects.records[:5]:
    fields = record.fields
    name = fields.get("ltds_name", "Unknown")
    asset_type = fields.get("asset_type_or_quantity", "Unknown")
    completion = fields.get("expected_completion_year", "TBD")
    substation = fields.get("substation_or_circuit_ple_name", "Unknown")
    justification = fields.get("justification_for_the_need", "N/A")
    print(f" {name}")
    print(f" Substation: {substation}")
    print(f" Asset: {asset_type}")
    print(f" Justification: {justification}")
    print(f" Expected completion: {completion}")
    print()
# Expected output:
# Islington Infrastructure Projects
# Total: 1
# ------------------------------------------------------------
# Islington 132kV
#   Substation: Islington 132kV
#   Asset: Replacement of Super Grid Transformers supplying site
#   Justification: Asset replacement
#   Expected completion: 2028
Islington Infrastructure Projects
Total: 1
------------------------------------------------------------
Islington 132kV
Substation: Islington 132kV
Asset: Replacement of Super Grid Transformers supplying site
Justification: Asset replacement
Expected completion: 2028
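Project records can be summarised the same way, for example counting schemes by justification. The list below is a hypothetical stand-in for the justification_for_the_need values you would gather from ltds.get_projects().records:

```python
from collections import Counter

# Hypothetical justification values, mirroring the records printed above
justifications = [
    "Load related reinforcement",
    "Load related reinforcement",
    "Asset replacement",
]

# Count projects per justification, most common first
counts = Counter(justifications)
for reason, n in counts.most_common():
    print(f"{reason}: {n}")
```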
9. Exporting Data to CSV¶
The LTDS orchestrator includes an export() function for downloading data in various formats including CSV, JSON, and Excel.
# Export Table 3a to CSV
from pathlib import Path
csv_data = ltds.export("table_3a", format="csv", limit=100)
# Save to file only when save_dir is set
save_dir = None # Set to a directory (e.g. "exports") to enable writing files.
if save_dir:
    output_file = Path(save_dir) / "ltds_table_3a.csv"
    output_file.parent.mkdir(parents=True, exist_ok=True)
    with open(output_file, "wb") as f:
        f.write(csv_data)
    print(f"Exported {len(csv_data)} bytes to {output_file}")
else:
    print(
        f"Exported {len(csv_data)} bytes (file save skipped; set save_dir to enable writing)."
    )
# Preview first few lines
print("\nPreview (first 500 characters):")
print(csv_data.decode("utf-8")[:500])
# Expected output:
# Exported 15099 bytes (file save skipped; set save_dir to enable writing).
#
# Preview (first 500 characters):
# gridsupplypoint;substation;season;maximum_demand_24_25_mw;...
# Amersham;Coldharbour Farm 11kV;Summer;2.9;...
# ...
Exported 15099 bytes (file save skipped; set save_dir to enable writing).
Preview (first 500 characters):
gridsupplypoint;substation;season;maximum_demand_24_25_mw;maximum_demand_24_25_pf;forecast_m_d_mw_25_26;forecast_m_d_mw_26_27;forecast_m_d_mw_27_28;forecast_m_d_mw_28_29;forecast_m_d_mw_29_30;firm_capacity_mw;minimum_load_scaling_factor;unutilised_capacity_percent;functional_location;licencearea;id
Amersham;Coldharbour Farm 11kV;Summer;2.9;0.96;2.9;2.9;2.9;2.8;2.8;11.5;28.8%;74.78260869565217;EPN-S0000000H8009;Eastern Power Networks (EPN);4
Barking 132kV;Chase Cross Primary 11kV;Winter;14.6;0
# Export generation data to CSV
from pathlib import Path
gen_csv = ltds.export("table_5", format="csv", limit=200)
save_dir = None # Set to a directory (e.g. "exports") to enable writing files.
if save_dir:
    output_file = Path(save_dir) / "ltds_generation.csv"
    output_file.parent.mkdir(parents=True, exist_ok=True)
    with open(output_file, "wb") as f:
        f.write(gen_csv)
    print(f"Exported generation data: {len(gen_csv)} bytes to {output_file}")
else:
    print(
        f"Exported generation data: {len(gen_csv)} bytes (file save skipped; set save_dir to enable writing)."
    )
# Expected output:
# Exported generation data: 26627 bytes (file save skipped; set save_dir to enable writing).
Exported generation data: 26627 bytes (file save skipped; set save_dir to enable writing).
# Export to JSON format
import json
json_data = ltds.export("table_6", format="json", limit=10)
# Parse and pretty-print
data = json.loads(json_data)
print(f"Exported {len(data)} records as JSON")
print("\nFirst record:")
print(json.dumps(data[0], indent=2))
# Expected output:
# Exported 10 records as JSON
#
# First record:
# {
#   "gridsupplypoint": "Barking 132kV",
#   "substation": "Romford Primary 11kV",
#   "proposed_connection_voltage_kv": "11",
#   ...
# }
Exported 10 records as JSON
First record:
{
"gridsupplypoint": "Barking 132kV",
"substation": "Romford Primary 11kV",
"proposed_connection_voltage_kv": "11",
"status_of_connection": "Connection offers made (not yet accepted by customer)",
"demand_numbers_received_total_number": 1.0,
"demand_numbers_received_total_capacity": 4.4,
"generation_numbers_received_total_number": null,
"generation_numbers_received_total_capacity": null,
"spatial_coordinates": {
"lon": 0.181853,
"lat": 51.574901
},
"sitefunctionallocation": "EPN-S0000000H7150",
"licencearea": "Eastern Power Networks (EPN)",
"id": 4
}
Loading CSV into pandas¶
# Load exported CSV into pandas for analysis
try:
    from io import BytesIO

    import pandas as pd

    # Export and load directly
    csv_bytes = ltds.export("table_3a", format="csv", limit=500)
    df = pd.read_csv(BytesIO(csv_bytes), sep=";")
    print(f"Loaded DataFrame: {df.shape[0]} rows x {df.shape[1]} columns")
    print(f"\nColumns: {list(df.columns)}")
    print("\nFirst 5 rows:")
    display(df.head())
except ImportError:
    print("pandas not installed. Install with: pip install pandas")
# Expected output:
# Loaded DataFrame: 500 rows x 16 columns
#
# Columns: ['gridsupplypoint', 'substation', 'season', 'maximum_demand_24_25_mw', ...]
#
# First 5 rows:
#   (table of the first five records)
Loaded DataFrame: 500 rows x 16 columns
Columns: ['gridsupplypoint', 'substation', 'season', 'maximum_demand_24_25_mw', 'maximum_demand_24_25_pf', 'forecast_m_d_mw_25_26', 'forecast_m_d_mw_26_27', 'forecast_m_d_mw_27_28', 'forecast_m_d_mw_28_29', 'forecast_m_d_mw_29_30', 'firm_capacity_mw', 'minimum_load_scaling_factor', 'unutilised_capacity_percent', 'functional_location', 'licencearea', 'id']
First 5 rows:
| gridsupplypoint | substation | season | maximum_demand_24_25_mw | maximum_demand_24_25_pf | forecast_m_d_mw_25_26 | forecast_m_d_mw_26_27 | forecast_m_d_mw_27_28 | forecast_m_d_mw_28_29 | forecast_m_d_mw_29_30 | firm_capacity_mw | minimum_load_scaling_factor | unutilised_capacity_percent | functional_location | licencearea | id | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | Amersham | Coldharbour Farm 11kV | Summer | 2.9 | 0.96 | 2.9 | 2.9 | 2.9 | 2.8 | 2.8 | 11.5 | 28.8% | 74.782609 | EPN-S0000000H8009 | Eastern Power Networks (EPN) | 4 |
| 1 | Barking 132kV | Chase Cross Primary 11kV | Winter | 14.6 | 0.96 | 14.8 | 15.6 | 15.8 | 16.2 | 16.8 | 18.5 | NaN | 21.081081 | EPN-S0000000H7142 | Eastern Power Networks (EPN) | 15 |
| 2 | Barking 132kV | Crowlands Grid 33kV | Winter | 54.6 | 0.96 | 54.8 | 76.1 | 88.3 | 100.8 | 113.5 | 109.7 | NaN | 50.227894 | EPN-S0000000D7124 | Eastern Power Networks (EPN) | 17 |
| 3 | Barking 132kV | Romford Primary 11kV | Winter | 28.8 | 0.96 | 28.7 | 30.3 | 30.8 | 31.5 | 32.3 | 42.3 | NaN | 31.914894 | EPN-S0000000H7150 | Eastern Power Networks (EPN) | 19 |
| 4 | Barking 132kV | Selinas Ln Primary 11kV | Winter | 13.5 | 0.96 | 13.6 | 14.7 | 15.3 | 16.0 | 16.7 | 25.0 | NaN | 46.000000 | EPN-S0000000H7133 | Eastern Power Networks (EPN) | 21 |
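A common next step with these columns is a headroom estimate (firm capacity minus observed peak demand). The sketch below uses a small inline DataFrame with the same column names rather than a live export:

```python
import pandas as pd

# Inline sample mirroring the Table 3a columns shown above; in practice df
# would come from pd.read_csv on the exported bytes.
df = pd.DataFrame(
    {
        "substation": ["Coldharbour Farm 11kV", "Chase Cross Primary 11kV"],
        "maximum_demand_24_25_mw": [2.9, 14.6],
        "firm_capacity_mw": [11.5, 18.5],
    }
)

# Headroom: firm capacity minus observed peak demand
df["headroom_mw"] = df["firm_capacity_mw"] - df["maximum_demand_24_25_mw"]
print(df.sort_values("headroom_mw")[["substation", "headroom_mw"]])
```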
Working with Async (Advanced)¶
For Jupyter notebooks or async applications, you can use the async versions of the functions:
# Async version - works natively in Jupyter
response = await ltds.get_async("table_3a", limit=5)
print("Async fetch complete!")
print(f"Records: {len(response.records)}")
# Expected output:
# Async fetch complete!
# Records: 5
Async fetch complete!
Records: 5
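The main benefit of get_async is fetching several tables concurrently. The sketch below shows the asyncio.gather fan-out pattern with stub coroutines standing in for ltds.get_async calls, so it runs without an API key; in Jupyter, await the gather directly rather than calling asyncio.run:

```python
import asyncio


# Stub standing in for ltds.get_async(dataset, ...)
async def fake_get_async(dataset: str) -> dict:
    await asyncio.sleep(0)  # placeholder for network latency
    return {"dataset": dataset, "records": []}


async def fetch_all(datasets):
    # gather() runs the coroutines concurrently and preserves input order
    return await asyncio.gather(*(fake_get_async(d) for d in datasets))


results = asyncio.run(fetch_all(["table_3a", "table_5", "table_6"]))
print([r["dataset"] for r in results])
```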
Summary¶
You've learned how to:
- Import the LTDS orchestrator with from ukpyn import ltds
- List available datasets using ltds.available_datasets
- Fetch data using the generic ltds.get() or table-specific functions
- Filter by licence area (EPN, LPN, SPN) and other parameters
- Access Table 3a for peak demand data
- Access Tables 2a/2b for transformer specifications
- Access Table 5 for generation capacity by technology
- Access Table 6 for connection interest pipelines
- Get infrastructure projects and filter by local authority
- Export data to CSV, JSON, and other formats
Function Reference¶
| Function | Description |
|---|---|
| ltds.get(dataset, ...) | Generic fetch for any LTDS dataset |
| ltds.get_table_2a(...) | Transformer data (2-winding) |
| ltds.get_table_2b(...) | Transformer data (3-winding) |
| ltds.get_table_3a(...) | Observed peak demand |
| ltds.get_table_5(...) | Generation capacity |
| ltds.get_table_6(...) | Connection interest |
| ltds.get_projects(...) | Infrastructure projects |
| ltds.get_cim(...) | Common Information Model data |
| ltds.export(...) | Export data to CSV/JSON/Excel |
Next Steps¶
- Explore the 03-analysis-patterns.ipynb for data analysis with pandas
- Check out the examples folder for community contributions
- Visit the UK Power Networks Open Data Portal for more datasets