Sovereign AI infrastructure, built in Newcastle.

We operate frontier GPU hardware in the Hunter region, run the latest open-weights AI on top, and build vertical products for industries that need their data to stay in Australia. Locally owned. Tier III facility. Cutting-edge, today.

Mayfield West: Hunter region, NSW
Tier III: Leading Edge Data Centres
9 models: Open-weights AI, live now
< 48 hours: Typical deployment lag from release
The thesis

Three layers. One vertically integrated stack.

The economic moat in AI isn't selling GPU-hours — that's a commodity. It's compounding cutting-edge software on cutting-edge hardware in sovereign infrastructure, then building products on top that hyperscalers structurally can't sell into Australia.

Layer 1 · Foundation

GPU infrastructure

B200, H200, H100 and RTX PRO 6000 Blackwell racks at Mayfield West. Per-minute billing, Australian residency, no hyperscaler dependencies. Necessary, not sufficient — the foundation everything else stands on.
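Per-minute billing comes down to how runtime is rounded. A minimal sketch of the arithmetic, assuming a hypothetical rate (not a published price):

```python
import math

# Hypothetical per-minute rate in AUD -- illustrative only, not a published price.
H200_RATE_AUD_PER_MIN = 0.12

def job_cost_aud(runtime_seconds: float, rate_per_min: float) -> float:
    """Bill in whole minutes: round runtime up to the next started minute."""
    billed_minutes = math.ceil(runtime_seconds / 60)
    return round(billed_minutes * rate_per_min, 2)

# A 90-second job bills as 2 minutes.
print(job_cost_aud(90, H200_RATE_AUD_PER_MIN))  # 0.24
```

Under hourly billing the same 90-second job would bill as a full hour, which is the difference per-minute granularity is meant to remove.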

Layer 2 · Platform

Latest open AI, sovereign

The latest open-weights models — Qwen-3, DeepSeek-V3, Llama 3.3, Mistral Large — deployed within hours of release. OpenAI-compatible API. Fine-tuning service. Same sovereignty commitments at the model layer.
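An OpenAI-compatible API means existing client code only needs a different base URL. A stdlib-only sketch of the request shape; the endpoint and model id below are hypothetical placeholders, not published values:

```python
import json

# Hypothetical values -- the real base URL and model ids are not published here.
BASE_URL = "https://api.example-newcastle-compute.au/v1"

def chat_completion_body(model: str, prompt: str, temperature: float = 0.7) -> str:
    """Build an OpenAI-style /chat/completions request body as JSON."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    })

body = chat_completion_body("qwen3-235b-instruct", "Summarise this contract clause.")
print(body)
```

Because the schema matches, any OpenAI SDK can target the same endpoint by overriding its base URL, so existing tooling carries over unchanged.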

Layer 3 · Products

Vertical AI for regulated AU industries

Sovereign AI products for sectors that can't use US clouds — government, health, legal, defence-aligned, mining. Where Newcastle Compute's structural advantages translate into premium pricing.

Why this works

Hyperscalers can't run at our cadence. We can't run at their scale. The arbitrage is real.

AWS Bedrock takes months to add a new open-weights model. Azure runs at the speed of enterprise contracts. Even other neoclouds are infrastructure-first — they leave the model layer to customers. There's a gap between "model exists" and "model is usable on a major cloud" — and it's measured in weeks to months.

A small Australian team that runs on cycles measured in hours-to-days closes that gap structurally. Combined with the only thing hyperscalers literally cannot offer — Australian ownership and on-shore jurisdiction — it's a position that compounds.

[Image: GPU servers in a rack with copper coolant tubing against deep navy shadow]
Proof of cadence

Recently shipped.

The work, dated. This is how you can tell whether we're still moving.

2026-05-11 · Today

Newcastle Compute v1.0 — soft launch

Initial cohort of users on the API. B200 and AMD MI325X capacity online on request; H200, H100 and RTX PRO 6000 Blackwell available immediately.

2026-05-07 · 4 days ago

Qwen-3 235B Instruct deployed

Frontier MoE chat model live on H200. Multi-node tensor-parallel deployment with vLLM. Benchmarks published on the changelog.

2026-05-05 · 6 days ago

DeepSeek-V3.1 added

671B MoE serving from a multi-node H200 cluster. Strong reasoning and code performance — already a customer favourite for agentic workflows.

2026-05-02 · 9 days ago

Mayfield West cluster online

First production GPU capacity commissioned at Leading Edge Data Centres. Direct-to-chip liquid cooling, 50 kW racks, two carrier-diverse fibre paths.

Full changelog →
Why Newcastle, why now

The same GPU you rent from a US hyperscaler can sit thirty kilometres from where you live — billed in your currency, run by people you can put in a room.

Newcastle Compute is part of the Newcastle Rising initiative — a deliberate effort to build digital and physical infrastructure in regional Australia rather than re-exporting the work to hyperscaler regions abroad. AI sovereignty became a real procurement requirement for Australian government, health, and regulated industries in 2025–26. We exist because nobody else is filling that gap from the Hunter.

Phase 1: Live — infrastructure + platform
2026 H2: Vertical products in build
AUD: Locally owned. AU billing.
NSW: Data residency. Sovereign jurisdiction.