About

Why Newcastle, why now.

The short version: world-class AI infrastructure can be operated from regional Australia rather than re-exported to American cloud regions. We're putting that into practice from a Tier III facility in Mayfield West.

The context

Newcastle Rising

Newcastle Rising is the broader civic project this site belongs to: a deliberate effort to build digital and physical infrastructure in the Hunter that gets used by the people who live here, rather than serving as a node in someone else's network. The thesis is straightforward — most of what makes a city a good place to live and work is local; AI compute should be too.

Compute is one strand of that work. There are others — housing, civic data, transport, energy — that Newcastle Rising touches on or partners with. Newcastle Compute is the strand that puts a working business on the table: GPU infrastructure that is locally owned, locally accountable, and earns its way by being competitive on the merits.

[Image: Exterior of the Leading Edge Data Centre at Mayfield West — navy corrugated steel facade with a copper accent band, viewed at dusk with the Hunter River beyond]
The facility

Where the hardware lives

The compute runs from Leading Edge Data Centres at Mayfield West, on Ausgrid land. The site has substantial existing power and fibre that historically served other workloads and has been repurposed for higher-density AI use. The design is Tier III: N+1 power and cooling, two carrier-diverse fibre paths, and 24/7 staffed access control.

The mechanical and electrical design has been reworked for AI workload density: direct-to-chip liquid cooling on the highest-power trays, rear-door heat exchangers elsewhere, and an upgraded power distribution system that supports 50 kW+ racks where the workload calls for it.

None of this is novel — every hyperscaler has equivalents — but little of it yet exists at this scale in Australia outside Sydney and Melbourne. That gap is what we're closing.

The thesis

Why we think this works

Three things compound: cutting-edge software, cutting-edge hardware, and sovereign Australian colocation. Each is replicable in isolation. Stacked, they form a position hyperscalers cannot occupy and smaller competitors cannot match.

The software axis is the actual moat. Most clouds take months to add new open-weights models — they run at the speed of enterprise contracts and internal QA committees. A small Australian team deploys within hours. By the time AWS Bedrock has whatever shipped this week, our customers have been on it for two months.

The hardware axis is table stakes. B200, H200, H100, RTX PRO 6000 Blackwell. Necessary, not differentiating in isolation. It exists to make the software fast.

The colocation axis is the regulatory moat. An Australian-owned operator in a Tier III Australian facility is something no US hyperscaler can offer. It's narrow but durable — and increasingly valuable as sovereign-AI procurement requirements harden across government, health, and regulated finance.

For a longer read on strategic position, opportunity, and what we're looking for: the partners page.

Governance and ownership

How decisions get made

Newcastle Compute is locally owned and not venture-funded. That has practical consequences: pricing is set cost-plus rather than to maximise capture, surplus is reinvested in infrastructure rather than distributed to remote shareholders, and the people making decisions live within walking distance of the facility.

It also imposes constraints. We can't subsidise loss-leader pricing to capture market share, and we won't take on workloads that compromise the residency commitments we've made to existing customers. We'd rather grow slower than break either of those.

Where it matters — government work, regulated industries, anyone with a real data residency requirement — we can produce written attestations of where the workload lives, who has access, and what the backup arrangements look like. None of that is unusual; we just make it readable.

Roadmap

What's next

Phase 1 — now

GPU rental (B200, H200, H100, RTX PRO 6000 Blackwell; AMD MI325X on request), Qwen-3 inference and fine-tuning, custom engineering engagements. Invitation-based access while we shake out the self-serve flow.

Phase 2 — 6 to 12 months

Self-serve account creation and payments. Capacity expansion as utilisation justifies it. Embeddings and image-generation APIs. SDKs hardened. Status page connected to real telemetry instead of the placeholder it is today.

Phase 3 — 1 to 2 years

Possible expansion to adjacent regions (Central Coast, Northern Rivers) if the demand shape justifies it. Exploration of a cooperative or membership ownership model — making the locally-owned commitment durable rather than dependent on one operator's continued goodwill.

Open questions

Things we're still working out

It would be dishonest to pretend this is a finished thing. Some of what's still open:

  • The fine-tuning service is at a small enough scale that pricing reflects bespoke handling more than steady-state economics. We'll lower prices once the flow is automated.
  • The status page on this site reports infrastructure-level signals only. Per-instance health observability is on the roadmap.
  • We don't yet support multi-region failover. If you need that, talk to us — there are workarounds, just not push-button ones.

If something we've written here turns out to be wrong, we'll change it and say what changed.