Prerequisites
Before creating a Hetzner cluster, you need two credentials:

Hetzner API Credential
A Hetzner Cloud API token with read/write permissions. See Hetzner Credentials.
SSH Key Credentials
One or more SSH public keys for server access. You can provide your own or let Ankra generate one. Multiple keys can be attached to a single cluster. See SSH Key Credentials.
Creating a Hetzner Cluster
Via the Platform UI
Configure Cluster
Fill in the cluster configuration:
- Name — A unique name for your cluster
- Hetzner Credential — Select your Hetzner API credential
- SSH Keys — Select one or more SSH key credentials
- Location — Hetzner datacenter (e.g., fsn1, nbg1, hel1)
- Control Plane — Count and server type (e.g., cx33)
- Workers — Count and server type
- Distribution — Kubernetes distribution (k3s)
Via the CLI
Via the API
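The exact request shape may vary by Ankra version; as a hedged sketch built from the Cluster Configuration Options below, a create request might look like this (the base URL, auth header, and node-group field names such as server_type and count are assumptions):

```shell
# Sketch only: create a Hetzner cluster via the Ankra API.
# <ankra-api> is a placeholder for your Ankra API base URL; top-level
# field names follow the Cluster Configuration Options table below.
curl -X POST "https://<ankra-api>/api/v1/clusters/hetzner" \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-cluster",
    "credential_id": "<hetzner-credential-id>",
    "ssh_key_credential_ids": ["<ssh-key-credential-id>"],
    "location": "fsn1",
    "control_plane_count": 1,
    "control_plane_server_type": "cx33",
    "node_groups": [
      { "name": "default", "server_type": "cx33", "count": 2 }
    ],
    "distribution": "k3s"
  }'
```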
The worker_count and worker_server_type fields are still accepted for backward compatibility. If node_groups is provided, it takes precedence. The singular ssh_key_credential_id field is also still accepted; if both fields are provided, they are merged.

Cluster Configuration Options
| Parameter | Default | Description |
|---|---|---|
| name | required | Unique cluster name |
| credential_id | required | Hetzner API credential ID |
| ssh_key_credential_ids | required | Array of SSH key credential IDs |
| ssh_key_credential_id | — | Single SSH key credential ID (backward compatible; use ssh_key_credential_ids for multiple) |
| location | required | Hetzner datacenter location |
| network_ip_range | 10.0.0.0/16 | Private network IP range |
| subnet_range | 10.0.1.0/24 | Subnet range within the network |
| bastion_server_type | cx23 | Server type for the bastion host |
| control_plane_count | 1 | Number of control plane nodes |
| control_plane_server_type | cx33 | Server type for control planes |
| worker_count | 1 | Number of worker nodes (legacy; use node_groups instead) |
| worker_server_type | cx33 | Server type for workers (legacy; use node_groups instead) |
| node_groups | — | Array of node group definitions (see Node Groups) |
| distribution | k3s | Kubernetes distribution |
| kubernetes_version | latest | Kubernetes version (optional) |
Hetzner Locations
| Location | Region |
|---|---|
| fsn1 | Falkenstein, Germany |
| nbg1 | Nuremberg, Germany |
| hel1 | Helsinki, Finland |
| ash | Ashburn, USA |
| hil | Hillsboro, USA |
| sin | Singapore |
Access Settings
The Access tab in cluster settings provides SSH access commands and SSH key management for Hetzner clusters.

SSH Access
The Access page displays copy-pasteable commands for connecting to your cluster:
- SSH to the control plane via the bastion host
- kubectl access — an SSH tunnel through the bastion that lets you configure kubectl to use https://localhost:6443
The Access page also shows the network topology with the bastion host and all control plane nodes.
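The UI shows the exact commands for your cluster; as a hedged sketch (the root user and IPs are placeholder assumptions — copy the real commands from the Access page):

```shell
# SSH to a control plane node, jumping through the bastion host.
ssh -J root@<bastion-ip> root@<control-plane-ip>

# Forward the Kubernetes API through the bastion, then configure
# kubectl to use https://localhost:6443.
ssh -N -L 6443:<control-plane-ip>:6443 root@<bastion-ip>
```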
Managing SSH Keys
You can add or remove SSH key credentials from a running cluster in Settings > Access. Changes are synced to all servers on the next reconciliation — SSH keys are registered with the Hetzner API and authorized_keys is updated on all nodes.
SSH keys can also be managed via the API:
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/clusters/hetzner/{id}/ssh-keys | GET | List current and available SSH keys |
| /api/v1/clusters/hetzner/{id}/ssh-keys | PUT | Update SSH keys on the cluster |
| /api/v1/clusters/hetzner/{id}/access-info | GET | Get bastion and control plane IPs |
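As a hedged sketch of the endpoints above (the base URL and the request body field name are assumptions; the paths are from the table):

```shell
# Sketch: replace the SSH keys attached to a running cluster.
curl -X PUT "https://<ankra-api>/api/v1/clusters/hetzner/$CLUSTER_ID/ssh-keys" \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "ssh_key_credential_ids": ["<key-id-1>", "<key-id-2>"] }'

# Sketch: fetch bastion and control plane IPs.
curl "https://<ankra-api>/api/v1/clusters/hetzner/$CLUSTER_ID/access-info" \
  -H "Authorization: Bearer $ANKRA_API_TOKEN"
```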
Node Groups
Node groups let you organize worker nodes into logical groups with independent instance types, counts, labels, and taints. Each group can be scaled, upgraded, and configured independently.

Via the Platform UI
Navigate to cluster Settings > Nodes to manage node groups. From this tab you can:
- View all node groups with their instance type, count, labels, and taints
- Add new node groups with a name, instance type, count, and optional labels/taints
- Scale individual groups up or down (0–100 nodes)
- Upgrade the instance type (upgrade only — see Instance Type Changes)
- Edit labels and taints per group
- Delete a node group and all its nodes
List Node Groups
Add a Node Group
Scale a Node Group
Instance Type Changes
Delete a Node Group
Update Labels and Taints
Node Group API Reference
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/clusters/hetzner/{id}/node-groups | GET | List all node groups |
| /api/v1/clusters/hetzner/{id}/node-groups | POST | Add a node group |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/scale | PUT | Scale a node group |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/instance-type | PUT | Upgrade instance type |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/labels | PUT | Update labels |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/taints | PUT | Update taints |
| /api/v1/clusters/hetzner/{id}/node-groups/{name} | DELETE | Delete a node group |
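As a hedged sketch of the add and scale endpoints (the base URL and body field names are assumptions; the paths are from the table above):

```shell
# Sketch: add a node group with labels and a taint, then scale it.
curl -X POST "https://<ankra-api>/api/v1/clusters/hetzner/$CLUSTER_ID/node-groups" \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "workers-batch",
    "server_type": "cx33",
    "count": 2,
    "labels": { "workload": "batch" },
    "taints": [ { "key": "dedicated", "value": "batch", "effect": "NoSchedule" } ]
  }'

curl -X PUT "https://<ankra-api>/api/v1/clusters/hetzner/$CLUSTER_ID/node-groups/workers-batch/scale" \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "count": 5 }'
```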
Legacy Worker Scaling
The legacy scale-workers and worker-count endpoints still work for backward compatibility. They operate on all workers as a single pool.
For new clusters, prefer using Node Groups for more granular control.
Upgrading Kubernetes Version
You can upgrade the Kubernetes (k3s) version on all nodes in a Hetzner cluster. Upgrades are applied to control plane nodes first, then workers.

Check Current Version
Upgrade Version
Deprovisioning
Deprovisioning deletes all Hetzner resources (servers, networks, SSH keys) and removes the cluster from Ankra.

Architecture
A Hetzner cluster provisions the following infrastructure:

| Component | Description |
|---|---|
| Bastion Host | Jump server for secure SSH access to cluster nodes |
| Private Network | Isolated network for inter-node communication |
| Control Plane(s) | Kubernetes control plane nodes |
| Worker(s) | Kubernetes worker nodes organized in node groups, each with independent instance types, labels, and taints |
| SSH Keys | Deployed to all servers for access (multiple keys supported) |
| External Cloud Provider | k3s is configured with cloud-provider=external for Hetzner CCM compatibility |
External Cloud Provider
Hetzner clusters are provisioned with --kubelet-arg=cloud-provider=external and --disable-cloud-controller. This configures k3s to delegate node initialization to an external Cloud Controller Manager (CCM).
The node.cloudprovider.kubernetes.io/uninitialized taint is automatically removed during provisioning so the Ankra Agent can schedule immediately.
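To verify, you can list node taints with kubectl:

```shell
# List each node with the keys of any taints still present; the
# uninitialized taint should not appear after provisioning completes.
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```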
Hetzner Cloud Controller Manager (hcloud-ccm)
The Hetzner Cloud Controller Manager integrates your cluster with Hetzner Cloud APIs to provide:
- Node metadata — automatic zone, region, and instance type labels on nodes
- Load Balancers — Kubernetes LoadBalancer services backed by Hetzner Cloud Load Balancers
- Node lifecycle — automatic removal of deleted nodes from the cluster
- Route management — pod network routes via Hetzner Cloud Networks
Deploy with AI
The fastest way to set up hcloud-ccm is to ask the Ankra AI Assistant. Open the chat (⌘+J) on your cluster and ask it to deploy the Hetzner Cloud Controller Manager. The assistant generates:
- A Kubernetes Secret manifest with your Hetzner API token (placeholder value for you to fill in)
- The hcloud-ccm Helm chart configured for your cluster's network
Review the Draft
The AI creates the stack as a draft in the Stack Builder. Review the node diagram — you’ll see the Secret and the hcloud-ccm chart with a dependency arrow (the chart depends on the secret).
Add Your Hetzner Token
Click the Secret node in the diagram and replace the placeholder token value with your actual Hetzner Cloud API token (the same one used to provision the cluster, or a separate read/write token).
Manual Setup
If you prefer to configure it manually, add the hcloud-ccm Helm chart from the https://charts.hetzner.cloud registry:
- Add the Hetzner Helm registry in Registries (URL: https://charts.hetzner.cloud)
- Create a stack with:
  - A Secret in the kube-system namespace named hcloud with key token containing your Hetzner API token
  - The hcloud-cloud-controller-manager chart from the Hetzner registry, deployed to kube-system, with values:
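The Secret described above, as a minimal manifest sketch if you were applying it directly with kubectl (within Ankra it is created as part of the stack):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: hcloud
  namespace: kube-system
stringData:
  token: <your-hetzner-api-token>   # placeholder: your Hetzner Cloud API token
EOF
```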
Hetzner CSI Driver (hcloud-csi)
The Hetzner CSI Driver provides persistent storage for your workloads using Hetzner Cloud Volumes:
- Dynamic provisioning — create Hetzner Cloud Volumes on demand via PersistentVolumeClaim
- Volume expansion — resize volumes without downtime
- Storage classes — hcloud-volumes StorageClass available out of the box
Deploy with AI
Open the chat (⌘+J) and ask it to deploy the Hetzner CSI Driver. If you already created the hcloud secret for CCM, the AI will reference it automatically.
Manual Setup
Add the hcloud-csi chart from the Hetzner Helm registry (https://charts.hetzner.cloud) to a stack:
- Deploy to the kube-system namespace
- The chart reads the Hetzner API token from the same hcloud secret used by CCM
- Values — the chart provides the hcloud-volumes StorageClass; claims request it with storageClassName: hcloud-volumes
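Once the driver is running, a workload requests a Hetzner Cloud Volume through a standard PersistentVolumeClaim, for example:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: hcloud-volumes
  resources:
    requests:
      storage: 10Gi
EOF
```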
Recommended Stack: CCM + CSI Together
For a production Hetzner cluster, deploy both hcloud-ccm and hcloud-csi together. Ask the AI to build a stack containing:
- hcloud Secret — your Hetzner API token
- hcloud-ccm — Cloud Controller Manager (depends on the secret)
- hcloud-csi — CSI Driver (depends on the secret)
Troubleshooting
Common Issues
| Issue | Solution |
|---|---|
| Cluster stuck in provisioning | Check Hetzner API token permissions and quota |
| Cannot scale workers | Ensure no operations are running |
| Invalid API token | Re-validate at Hetzner Console |
| Server type unavailable | Try a different location or server type |
| Cannot downgrade instance type | Hetzner disks cannot be shrunk. Create a new node group with the desired type and delete the old one |
| 412: error during placement | Hetzner capacity issue at that location. Retry later or try a different server type |
| Node has uninitialized taint | The taint is removed automatically; if it persists, deploy hcloud-ccm or trigger a reconciliation |
| LoadBalancer service stuck in Pending | Deploy the Hetzner Cloud Controller Manager |
| PVCs stuck in Pending | Deploy the Hetzner CSI Driver |
| CCM/CSI pods in CrashLoopBackOff | Check that the hcloud secret in kube-system contains a valid Hetzner API token |
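For the CrashLoopBackOff case, a quick check (assumes kubectl access to the cluster):

```shell
# Confirm the hcloud secret exists in kube-system and decode its token
# to verify it is non-empty and matches a valid Hetzner API token.
kubectl -n kube-system get secret hcloud -o jsonpath='{.data.token}' | base64 -d
```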
Hetzner API Quota
Hetzner Cloud has default resource limits per project. If provisioning fails, check your quotas in the Hetzner Console:
- Servers
- Networks
- SSH Keys