Ankra supports provisioning fully managed Kubernetes clusters on Hetzner Cloud. You can create clusters with configurable control planes, workers, and networking — then scale workers up or down as needed.

Prerequisites

Before creating a Hetzner cluster, you need two credentials:

Hetzner API Credential

A Hetzner Cloud API token with read/write permissions. See Hetzner Credentials.

SSH Key Credentials

One or more SSH public keys for server access. You can provide your own or let Ankra generate one. Multiple keys can be attached to a single cluster. See SSH Key Credentials.

Creating a Hetzner Cluster

Via the Platform UI

1. Navigate to Clusters

   Go to Clusters in the Ankra dashboard and click Create Cluster.

2. Select Hetzner

   Choose Hetzner Cloud as the provider.

3. Configure Cluster

   Fill in the cluster configuration:
   • Name — A unique name for your cluster
   • Hetzner Credential — Select your Hetzner API credential
   • SSH Keys — Select one or more SSH key credentials
   • Location — Hetzner datacenter (e.g., fsn1, nbg1, hel1)
   • Control Plane — Count and server type (e.g., cx33)
   • Workers — Count and server type
   • Distribution — Kubernetes distribution (k3s)

4. Create

   Click Create to start provisioning. The cluster will appear in an offline state until provisioning completes.

Via the CLI

# Create credentials first
ankra credentials hetzner create --name my-hetzner-token  # securely prompts for token
ankra credentials hetzner ssh-key create --name my-ssh-key --generate

# Create the cluster with one SSH key
ankra cluster hetzner create \
  --name my-cluster \
  --credential-id <hetzner-credential-id> \
  --ssh-key-credential-id <ssh-key-credential-id> \
  --location fsn1 \
  --control-plane-count 1 \
  --control-plane-server-type cx33 \
  --worker-count 2 \
  --worker-server-type cx33

# Or with multiple SSH keys
ankra cluster hetzner create \
  --name my-cluster \
  --credential-id <hetzner-credential-id> \
  --ssh-key-credential-ids <key-id-1>,<key-id-2>,<key-id-3> \
  --location fsn1 \
  --control-plane-count 1 \
  --control-plane-server-type cx33 \
  --worker-count 2 \
  --worker-server-type cx33

Via the API

curl -X POST https://platform.ankra.app/api/v1/clusters/hetzner \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-cluster",
    "credential_id": "<hetzner-credential-id>",
    "ssh_key_credential_ids": ["<key-id-1>", "<key-id-2>"],
    "location": "fsn1",
    "control_plane_count": 1,
    "control_plane_server_type": "cx33",
    "node_groups": [
      {"name": "default", "instance_type": "cx33", "count": 2},
      {"name": "gpu-workers", "instance_type": "ccx33", "count": 1, "labels": {"gpu": "true"}, "taints": [{"key": "gpu", "value": "true", "effect": "NoSchedule"}]}
    ],
    "distribution": "k3s"
  }'
The worker_count and worker_server_type fields are still accepted for backward compatibility. If node_groups is provided, it takes precedence. The singular ssh_key_credential_id field is also still accepted. If both fields are provided, they are merged.
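As a sketch of the legacy shape (field names taken from the note above; IDs are placeholders), a backward-compatible request body looks like:

```json
{
  "name": "my-cluster",
  "credential_id": "<hetzner-credential-id>",
  "ssh_key_credential_id": "<ssh-key-credential-id>",
  "location": "fsn1",
  "control_plane_count": 1,
  "control_plane_server_type": "cx33",
  "worker_count": 2,
  "worker_server_type": "cx33",
  "distribution": "k3s"
}
```

A body like this provisions a single unnamed worker pool; new requests should use node_groups instead.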

Cluster Configuration Options

| Parameter | Default | Description |
| --- | --- | --- |
| name | required | Unique cluster name |
| credential_id | required | Hetzner API credential ID |
| ssh_key_credential_ids | required | Array of SSH key credential IDs |
| ssh_key_credential_id | — | Single SSH key credential ID (backward compatible; use ssh_key_credential_ids for multiple) |
| location | required | Hetzner datacenter location |
| network_ip_range | 10.0.0.0/16 | Private network IP range |
| subnet_range | 10.0.1.0/24 | Subnet range within the network |
| bastion_server_type | cx23 | Server type for the bastion host |
| control_plane_count | 1 | Number of control plane nodes |
| control_plane_server_type | cx33 | Server type for control planes |
| worker_count | 1 | Number of worker nodes (legacy; use node_groups instead) |
| worker_server_type | cx33 | Server type for workers (legacy; use node_groups instead) |
| node_groups | — | Array of node group definitions (see Node Groups) |
| distribution | k3s | Kubernetes distribution |
| kubernetes_version | latest | Kubernetes version (optional) |
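If you override network_ip_range or subnet_range, the subnet must fall inside the private network range. A quick local sanity check (a bash sketch, not part of the Ankra CLI; the values are the documented defaults):

```shell
# Check that subnet_range sits inside network_ip_range before creating
# the cluster.
net="10.0.0.0/16"
subnet="10.0.1.0/24"

ip_to_int() {  # dotted quad -> 32-bit integer
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

net_bits=${net#*/}
sub_bits=${subnet#*/}
mask=$(( (0xFFFFFFFF << (32 - net_bits)) & 0xFFFFFFFF ))

# The subnet is contained if it is at least as specific as the network
# and its masked address matches the network's masked address.
if (( sub_bits >= net_bits )) && \
   (( ($(ip_to_int "${subnet%/*}") & mask) == ($(ip_to_int "${net%/*}") & mask) )); then
  echo "ok: $subnet is inside $net"
else
  echo "error: $subnet is outside $net" >&2
fi
```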

Hetzner Locations

| Location | Region |
| --- | --- |
| fsn1 | Falkenstein, Germany |
| nbg1 | Nuremberg, Germany |
| hel1 | Helsinki, Finland |
| ash | Ashburn, USA |
| hil | Hillsboro, USA |
| sin | Singapore |

Access Settings

The Access tab in cluster settings provides SSH access commands and SSH key management for Hetzner clusters.

SSH Access

The Access page displays copy-pasteable commands for connecting to your cluster.

SSH to the control plane via the bastion host:
ssh -J root@<bastion-ip> root@<control-plane-ip>

Port-forward the Kubernetes API for local kubectl access:
ssh -L 6443:<control-plane-ip>:6443 -N root@<bastion-ip>

After running the port-forward command, configure kubectl to use https://localhost:6443. The Access page also shows the network topology with the bastion host and all control plane nodes.
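For reference, a minimal kubeconfig sketch pointing at the tunnel (cluster, context, and user names are placeholders; the client credentials come from your cluster's admin kubeconfig):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: hetzner-tunnel
    cluster:
      server: https://localhost:6443
      # The API server certificate may be issued for the control plane IP;
      # if verification fails over the tunnel, skip it or set tls-server-name.
      insecure-skip-tls-verify: true
contexts:
  - name: hetzner-tunnel
    context:
      cluster: hetzner-tunnel
      user: admin
current-context: hetzner-tunnel
users:
  - name: admin
    user: {}  # fill in client-certificate-data / client-key-data from your admin kubeconfig
```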

Managing SSH Keys

You can add or remove SSH key credentials from a running cluster in Settings > Access. Changes are synced to all servers on the next reconciliation — SSH keys are registered with the Hetzner API and authorized_keys is updated on all nodes. SSH keys can also be managed via the API:
| Endpoint | Method | Description |
| --- | --- | --- |
| /api/v1/clusters/hetzner/{id}/ssh-keys | GET | List current and available SSH keys |
| /api/v1/clusters/hetzner/{id}/ssh-keys | PUT | Update SSH keys on the cluster |
| /api/v1/clusters/hetzner/{id}/access-info | GET | Get bastion and control plane IPs |
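A sketch of the PUT call (the request body shape is assumed to mirror the create API's ssh_key_credential_ids field; the cluster and key IDs are placeholders):

```shell
# Replace the cluster's SSH keys with a new set of key credentials.
curl -X PUT https://platform.ankra.app/api/v1/clusters/hetzner/<cluster_id>/ssh-keys \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"ssh_key_credential_ids": ["<key-id-1>", "<key-id-2>"]}'
```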

Node Groups

Node groups let you organize worker nodes into logical groups with independent instance types, counts, labels, and taints. Each group can be scaled, upgraded, and configured independently.

Via the Platform UI

Navigate to cluster Settings > Nodes to manage node groups. From this tab you can:
  • View all node groups with their instance type, count, labels, and taints
  • Add new node groups with a name, instance type, count, and optional labels/taints
  • Scale individual groups up or down (0–100 nodes)
  • Upgrade the instance type (upgrade only — see Instance Type Changes)
  • Edit labels and taints per group
  • Delete a node group and all its nodes

List Node Groups

ankra cluster hetzner node-group list <cluster_id>
Response:
{
  "node_groups": [
    {
      "name": "default",
      "instance_type": "cx33",
      "count": 2,
      "min": 0,
      "max": 100,
      "labels": {},
      "taints": []
    }
  ]
}

Add a Node Group

ankra cluster hetzner node-group add <cluster_id> \
  --name gpu-workers \
  --instance-type ccx33 \
  --count 3

Scale a Node Group

ankra cluster hetzner node-group scale <cluster_id> default 4
Node groups can be scaled to 0 nodes. This keeps the group definition but removes all servers.

Instance Type Changes

Instance type upgrades are irreversible. Once upgraded, the server disk is enlarged and cannot be shrunk, so you cannot downgrade a node group to a smaller instance type. To use a smaller instance type, create a new node group with the desired type and delete the old one.
ankra cluster hetzner node-group upgrade <cluster_id> default cx43
Each node is powered off, resized, and powered back on. This causes brief downtime for workloads on those nodes.
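Because each node is restarted during the resize, you may want to move workloads off it first. A standard kubectl sketch (the node name is a placeholder; this is not an Ankra CLI step):

```shell
# Evict workloads from a node before its instance type is upgraded.
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# ... run the node-group upgrade ...

# Allow pods to schedule onto the node again after it comes back.
kubectl uncordon <node-name>
```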

Delete a Node Group

ankra cluster hetzner node-group delete <cluster_id> gpu-workers
Deleting a node group removes all its servers. Workloads running on those nodes will be evicted.

Update Labels and Taints

# Update labels
curl -X PUT https://platform.ankra.app/api/v1/clusters/hetzner/<cluster_id>/node-groups/default/labels \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"labels": {"env": "production", "tier": "backend"}}'

# Update taints
curl -X PUT https://platform.ankra.app/api/v1/clusters/hetzner/<cluster_id>/node-groups/default/taints \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"taints": [{"key": "dedicated", "value": "ml", "effect": "NoSchedule"}]}'

Node Group API Reference

| Endpoint | Method | Description |
| --- | --- | --- |
| /api/v1/clusters/hetzner/{id}/node-groups | GET | List all node groups |
| /api/v1/clusters/hetzner/{id}/node-groups | POST | Add a node group |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/scale | PUT | Scale a node group |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/instance-type | PUT | Upgrade instance type |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/labels | PUT | Update labels |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/taints | PUT | Update taints |
| /api/v1/clusters/hetzner/{id}/node-groups/{name} | DELETE | Delete a node group |

Legacy Worker Scaling

The legacy scale-workers and worker-count endpoints still work for backward compatibility. They operate on all workers as a single pool.
ankra cluster hetzner workers <cluster_id>
ankra cluster hetzner scale <cluster_id> 4
For new clusters, prefer using Node Groups for more granular control.

Upgrading Kubernetes Version

You can upgrade the Kubernetes (k3s) version on all nodes in a Hetzner cluster. Upgrades are applied to control plane nodes first, then workers.
  • Only k3s clusters are supported for version upgrades.
  • Downgrades are not supported — k3s downgrades require an etcd snapshot restore.
  • You can only upgrade one minor version at a time (e.g., v1.33.x to v1.34.x, not v1.33.x to v1.35.x).
  • The cluster must be online with no active operations.
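The one-minor-version rule can be checked locally before calling the upgrade. A bash sketch (the versions are illustrative, not from a live cluster):

```shell
# Pre-flight check: the target k3s version must be at most one minor
# version ahead of the current one.
current="v1.34.4+k3s1"
target="v1.35.1+k3s1"

minor() {  # extract the minor version, e.g. v1.34.4+k3s1 -> 34
  local v=${1#v}   # strip leading "v"
  v=${v#*.}        # drop the major component
  echo "${v%%.*}"  # keep digits up to the next dot
}

cur=$(minor "$current")
tgt=$(minor "$target")
if (( tgt == cur || tgt == cur + 1 )); then
  echo "ok: $current -> $target is a supported upgrade"
else
  echo "refusing: $current -> $target skips a minor version" >&2
fi
```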

Check Current Version

ankra cluster hetzner k8s-version <cluster_id>
Response:
{
  "current_version": "v1.34.4+k3s1",
  "distribution": "k3s"
}

Upgrade Version

ankra cluster hetzner upgrade <cluster_id> v1.35.1+k3s1
Response:
{
  "previous_version": "v1.34.4+k3s1",
  "new_version": "v1.35.1+k3s1",
  "nodes_affected": 3
}

Deprovisioning

Deprovisioning deletes all Hetzner resources (servers, networks, SSH keys) and removes the cluster from Ankra.
This action is irreversible. All data on the cluster will be permanently deleted.
ankra cluster hetzner deprovision <cluster_id>

Architecture

A Hetzner cluster provisions the following infrastructure:
| Component | Description |
| --- | --- |
| Bastion Host | Jump server for secure SSH access to cluster nodes |
| Private Network | Isolated network for inter-node communication |
| Control Plane(s) | Kubernetes control plane nodes |
| Worker(s) | Kubernetes worker nodes organized in node groups, each with independent instance types, labels, and taints |
| SSH Keys | Deployed to all servers for access (multiple keys supported) |
| External Cloud Provider | k3s is configured with cloud-provider=external for Hetzner CCM compatibility |
All nodes are deployed within a private Hetzner network. The bastion host provides the only external SSH access point.

External Cloud Provider

Hetzner clusters are provisioned with --kubelet-arg=cloud-provider=external and --disable-cloud-controller. This configures k3s to delegate node initialization to an external Cloud Controller Manager (CCM). The node.cloudprovider.kubernetes.io/uninitialized taint is automatically removed during provisioning so the Ankra Agent can schedule immediately.
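If you want to verify the taint state yourself, standard kubectl commands work (the node name is a placeholder):

```shell
# List any remaining taints on all nodes.
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'

# Remove the uninitialized taint manually if it lingers after provisioning.
kubectl taint nodes <node-name> node.cloudprovider.kubernetes.io/uninitialized-
```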

Hetzner Cloud Controller Manager (hcloud-ccm)

The Hetzner Cloud Controller Manager integrates your cluster with Hetzner Cloud APIs to provide:
  • Node metadata — automatic zone, region, and instance type labels on nodes
  • Load Balancers — Kubernetes LoadBalancer services backed by Hetzner Cloud Load Balancers
  • Node lifecycle — automatic removal of deleted nodes from the cluster
  • Route management — pod network routes via Hetzner Cloud Networks
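Once the CCM is running, a Service of type LoadBalancer is backed by a Hetzner Cloud Load Balancer. A sketch using hcloud-ccm's location annotation (annotation name per the hcloud-ccm documentation; the selector, ports, and location are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # Pin the Hetzner Load Balancer to the cluster's location.
    load-balancer.hetzner.cloud/location: fsn1
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```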

Deploy with AI

The fastest way to set up hcloud-ccm is to ask the Ankra AI Assistant. Open the chat (⌘+J) on your cluster and prompt:
Set up the Hetzner Cloud Controller Manager for this cluster with a Hetzner API token secret and the hcloud-ccm Helm chart.
The AI will create a draft stack containing:
  1. A Kubernetes Secret manifest with your Hetzner API token (placeholder value for you to fill in)
  2. The hcloud-ccm Helm chart configured for your cluster’s network
1. Review the Draft

   The AI creates the stack as a draft in the Stack Builder. Review the node diagram — you’ll see the Secret and the hcloud-ccm chart with a dependency arrow (the chart depends on the secret).

2. Add Your Hetzner Token

   Click the Secret node in the diagram and replace the placeholder token value with your actual Hetzner Cloud API token (the same one used to provision the cluster, or a separate read/write token).

3. Deploy

   Click Deploy to publish the draft. Ankra deploys the secret first, then the CCM chart.
You can also ask the AI more specific questions like “Set up hcloud-ccm with load balancer support using my existing Hetzner token secret” or “Configure CCM with the network name from my cluster.”

Manual Setup

If you prefer to configure it manually, add the hcloud-ccm Helm chart from the https://charts.hetzner.cloud registry:
  1. Add the Hetzner Helm registry in Registries (URL: https://charts.hetzner.cloud)
  2. Create a stack with:
    • A Secret in the kube-system namespace named hcloud with key token containing your Hetzner API token
    • The hcloud-cloud-controller-manager chart from the Hetzner registry, deployed to kube-system, with values:
      networking:
        enabled: true
        clusterCIDR: "10.244.0.0/16"
      env:
        HCLOUD_TOKEN:
          valueFrom:
            secretKeyRef:
              name: hcloud
              key: token
      

Hetzner CSI Driver (hcloud-csi)

The Hetzner CSI Driver provides persistent storage for your workloads using Hetzner Cloud Volumes:
  • Dynamic provisioning — create Hetzner Cloud Volumes on demand via PersistentVolumeClaim
  • Volume expansion — resize volumes without downtime
  • Storage classes — hcloud-volumes StorageClass available out of the box

Deploy with AI

Open the chat (⌘+J) and prompt:
Set up the Hetzner CSI driver for persistent storage on this cluster.
The AI will create a draft stack with the hcloud-csi Helm chart configured to use the same Hetzner API token secret as the CCM. If you’ve already deployed the hcloud secret for CCM, the AI will reference it automatically.

Manual Setup

Add the hcloud-csi chart from the Hetzner Helm registry (https://charts.hetzner.cloud) to a stack:
  • Deploy to the kube-system namespace
  • The chart reads the Hetzner API token from the same hcloud secret used by CCM
  • Values:
    storageClasses:
      - name: hcloud-volumes
        defaultStorageClass: true
        reclaimPolicy: Delete
    
Once deployed, create PersistentVolumeClaims with storageClassName: hcloud-volumes:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: hcloud-volumes
  resources:
    requests:
      storage: 10Gi

Deploying CCM and CSI Together

For a production Hetzner cluster, deploy both hcloud-ccm and hcloud-csi together. Ask the AI:
Set up a complete Hetzner cloud integration stack with CCM for load balancers and CSI for persistent volumes.
The AI creates a single stack draft with three components in the correct dependency order:
  1. hcloud Secret — your Hetzner API token
  2. hcloud-ccm — Cloud Controller Manager (depends on the secret)
  3. hcloud-csi — CSI Driver (depends on the secret)
Review the draft in the Stack Builder, add your token to the secret, and deploy. The dependency DAG ensures the secret is created before either chart is installed.
The AI Assistant has full context about your cluster’s configuration (network name, location, node groups) and uses it to pre-fill Helm values correctly. This is the recommended way to set up Hetzner integrations.

Troubleshooting

Common Issues

| Issue | Solution |
| --- | --- |
| Cluster stuck in provisioning | Check Hetzner API token permissions and quota |
| Cannot scale workers | Ensure no operations are running |
| Invalid API token | Re-validate at the Hetzner Console |
| Server type unavailable | Try a different location or server type |
| Cannot downgrade instance type | Hetzner disks cannot be shrunk. Create a new node group with the desired type and delete the old one |
| 412: error during placement | Hetzner capacity issue at that location. Retry later or try a different server type |
| Node has uninitialized taint | The taint is removed automatically; if it persists, deploy hcloud-ccm or trigger a reconciliation |
| LoadBalancer service stuck in Pending | Deploy the Hetzner Cloud Controller Manager |
| PVCs stuck in Pending | Deploy the Hetzner CSI Driver |
| CCM/CSI pods in CrashLoopBackOff | Check that the hcloud secret in kube-system contains a valid Hetzner API token |

Hetzner API Quota

Hetzner Cloud has default resource limits per project. If provisioning fails, check your quotas in the Hetzner Console:
  • Servers
  • Networks
  • SSH Keys
Contact Hetzner support to increase limits if needed.