Ankra supports provisioning fully managed Kubernetes clusters on Hetzner Cloud. You can create clusters with configurable control planes, workers, and networking, then scale workers up or down as needed.

Prerequisites

Before creating a Hetzner cluster, you need two credentials:

Hetzner API Credential

A Hetzner Cloud API token with read/write permissions. See Hetzner Credentials.

SSH Key Credentials

One or more SSH public keys for server access. You can provide your own or let Ankra generate one. Multiple keys can be attached to a single cluster. See SSH Key Credentials.

Creating a Hetzner Cluster

Via the Platform UI

1. Navigate to Clusters — go to Clusters in the Ankra dashboard and click Create Cluster.

2. Select Hetzner — choose Hetzner Cloud as the provider.

3. Configure Cluster — fill in the cluster configuration:
  • Name — a unique name for your cluster
  • Hetzner Credential — select your Hetzner API credential
  • SSH Keys — select one or more SSH key credentials
  • Location — Hetzner datacenter (e.g., fsn1, nbg1, hel1)
  • Control Plane — count and server type (e.g., cx33)
  • Workers — count and server type
  • Distribution — Kubernetes distribution (k3s)
  • Include Ingress — optionally deploy an ingress stack (ingress-nginx, cert-manager, Let’s Encrypt)

4. Create — click Create to start provisioning. The cluster will appear with an offline state until provisioning completes.

Via the CLI

# Create credentials first
ankra credentials hetzner create --name my-hetzner-token  # securely prompts for token
ankra credentials hetzner ssh-key create --name my-ssh-key --generate

# Create the cluster with one SSH key
ankra cluster hetzner create \
  --name my-cluster \
  --credential-id <hetzner-credential-id> \
  --ssh-key-credential-id <ssh-key-credential-id> \
  --location fsn1 \
  --control-plane-count 1 \
  --control-plane-server-type cx33 \
  --worker-count 2 \
  --worker-server-type cx33

# Or with multiple SSH keys
ankra cluster hetzner create \
  --name my-cluster \
  --credential-id <hetzner-credential-id> \
  --ssh-key-credential-ids <key-id-1>,<key-id-2>,<key-id-3> \
  --location fsn1 \
  --control-plane-count 1 \
  --control-plane-server-type cx33 \
  --worker-count 2 \
  --worker-server-type cx33

Via the API

curl -X POST https://platform.ankra.app/api/v1/clusters/hetzner \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-cluster",
    "credential_id": "<hetzner-credential-id>",
    "ssh_key_credential_ids": ["<key-id-1>", "<key-id-2>"],
    "location": "fsn1",
    "control_plane_count": 1,
    "control_plane_server_type": "cx33",
    "node_groups": [
      {"name": "default", "instance_type": "cx33", "count": 2},
      {"name": "gpu-workers", "instance_type": "ccx33", "count": 1, "labels": {"gpu": "true"}, "taints": [{"key": "gpu", "value": "true", "effect": "NoSchedule"}]}
    ],
    "distribution": "k3s",
    "include_ingress": true
  }'
The worker_count and worker_server_type fields are still accepted for backward compatibility; if node_groups is provided, it takes precedence. The singular ssh_key_credential_id field is also still accepted; if both it and ssh_key_credential_ids are provided, the keys are merged.
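As an illustration of the backward-compatible request shape, a body using the legacy flat worker fields (all values here are placeholders) might look like:

```json
{
  "name": "my-cluster",
  "credential_id": "<hetzner-credential-id>",
  "ssh_key_credential_id": "<ssh-key-credential-id>",
  "location": "fsn1",
  "control_plane_count": 1,
  "control_plane_server_type": "cx33",
  "worker_count": 2,
  "worker_server_type": "cx33"
}
```

On clusters created this way, the workers behave as a single pool (see Legacy Worker Scaling).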

Cluster Configuration Options

| Parameter | Default | Description |
| --- | --- | --- |
| name | required | Unique cluster name |
| credential_id | required | Hetzner API credential ID |
| ssh_key_credential_ids | required | Array of SSH key credential IDs |
| ssh_key_credential_id | | Single SSH key credential ID (backward compatible; use ssh_key_credential_ids for multiple) |
| location | required | Hetzner datacenter location |
| network_ip_range | 10.0.0.0/16 | Private network IP range |
| subnet_range | 10.0.1.0/24 | Subnet range within the network |
| bastion_server_type | cx23 | Server type for the bastion host |
| control_plane_count | 1 | Number of control plane nodes |
| control_plane_server_type | cx33 | Server type for control planes |
| worker_count | 1 | Number of worker nodes (legacy; use node_groups instead) |
| worker_server_type | cx33 | Server type for workers (legacy; use node_groups instead) |
| node_groups | | Array of node group definitions (see Node Groups) |
| distribution | k3s | Kubernetes distribution |
| kubernetes_version | latest | Kubernetes version (optional) |
| include_ingress | false | Deploy the ingress stack (ingress-nginx + cert-manager + Let’s Encrypt) |
| gitops_credential_name | | GitHub credential name for GitOps integration |
| gitops_repository | | GitHub repository for GitOps (e.g., org/repo) |
| gitops_branch | master | Branch for GitOps pushes |

Hetzner Locations

| Location | Region |
| --- | --- |
| fsn1 | Falkenstein, Germany |
| nbg1 | Nuremberg, Germany |
| hel1 | Helsinki, Finland |
| ash | Ashburn, USA |
| hil | Hillsboro, USA |
| sin | Singapore |

Access Settings

The Access tab in cluster settings provides SSH access commands and SSH key management for Hetzner clusters.

SSH Access

The Access page displays copy-pasteable commands for connecting to your cluster.

SSH to the control plane via the bastion host:
ssh -J root@<bastion-ip> root@<control-plane-ip>

Port-forward the Kubernetes API for local kubectl access:
ssh -L 6443:<control-plane-ip>:6443 -N root@<bastion-ip>

After running the port-forward command, configure kubectl to use https://localhost:6443. The Access page also shows the network topology with the bastion host and all control plane nodes.
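For local access through the forward, a minimal kubeconfig sketch is shown below — the cluster and user names are placeholders, and the client credentials should come from your cluster's actual kubeconfig:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      server: https://localhost:6443        # the port-forwarded API endpoint
      insecure-skip-tls-verify: true        # or supply certificate-authority-data instead
contexts:
  - name: my-cluster
    context:
      cluster: my-cluster
      user: my-cluster-admin
current-context: my-cluster
users:
  - name: my-cluster-admin
    user:
      client-certificate-data: <base64-client-cert>
      client-key-data: <base64-client-key>
```

Save it to a file and point kubectl at it with the KUBECONFIG environment variable or the --kubeconfig flag.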

Managing SSH Keys

You can add or remove SSH key credentials from a running cluster in Settings > Access. Changes are synced to all servers on the next reconciliation: SSH keys are registered with the Hetzner API, and authorized_keys is updated on all nodes. SSH keys can also be managed via the API:
| Endpoint | Method | Description |
| --- | --- | --- |
| /api/v1/clusters/hetzner/{id}/ssh-keys | GET | List current and available SSH keys |
| /api/v1/clusters/hetzner/{id}/ssh-keys | PUT | Update SSH keys on the cluster |
| /api/v1/clusters/hetzner/{id}/access-info | GET | Get bastion and control plane IPs |

Node Groups

Node groups let you organize worker nodes into logical groups with independent instance types, counts, labels, and taints. Each group can be scaled, upgraded, and configured independently.

Via the Platform UI

Navigate to cluster Settings > Nodes to manage node groups. From this tab you can:
  • View all node groups with their instance type, count, labels, and taints
  • Add new node groups with a name, instance type, count, and optional labels/taints
  • Scale individual groups up or down (0–100 nodes)
  • Upgrade the instance type (upgrade only; see Instance Type Changes)
  • Edit labels and taints per group
  • Delete a node group and all its nodes

List Node Groups

ankra cluster hetzner node-group list <cluster_id>
Response:
{
  "node_groups": [
    {
      "name": "default",
      "instance_type": "cx33",
      "count": 2,
      "min": 0,
      "max": 100,
      "labels": {},
      "taints": []
    }
  ]
}

Add a Node Group

ankra cluster hetzner node-group add <cluster_id> \
  --name gpu-workers \
  --instance-type ccx33 \
  --count 3

Scale a Node Group

ankra cluster hetzner node-group scale <cluster_id> default 4
Node groups can be scaled to 0 nodes. This keeps the group definition but removes all servers.

Instance Type Changes

Instance type upgrades are irreversible. Once upgraded, the server disk is enlarged and cannot be shrunk, so you cannot downgrade a node group to a smaller instance type. To use a smaller instance type, create a new node group with the desired type and delete the old one.
ankra cluster hetzner node-group upgrade <cluster_id> default cx43
Each node is powered off, resized, and powered back on. This causes brief downtime for workloads on those nodes.

Delete a Node Group

ankra cluster hetzner node-group delete <cluster_id> gpu-workers
Deleting a node group removes all its servers. Workloads running on those nodes will be evicted.

Update Labels and Taints

# Update labels
curl -X PUT https://platform.ankra.app/api/v1/clusters/hetzner/<cluster_id>/node-groups/default/labels \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"labels": {"env": "production", "tier": "backend"}}'

# Update taints
curl -X PUT https://platform.ankra.app/api/v1/clusters/hetzner/<cluster_id>/node-groups/default/taints \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"taints": [{"key": "dedicated", "value": "ml", "effect": "NoSchedule"}]}'
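Once a group carries labels and taints, workloads must opt in to land on it. As a sketch, a pod targeting a group labeled gpu: "true" and tainted with key gpu (matching the gpu-workers example earlier) could look like this — the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  nodeSelector:
    gpu: "true"             # schedule only onto nodes with the group label
  tolerations:
    - key: gpu
      operator: Equal
      value: "true"
      effect: NoSchedule    # tolerate the group's taint
  containers:
    - name: main
      image: busybox:1.36   # placeholder workload
      command: ["sleep", "3600"]
```

The nodeSelector attracts the pod to the group's nodes; the toleration allows it past the taint that keeps other workloads off.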

Node Group API Reference

| Endpoint | Method | Description |
| --- | --- | --- |
| /api/v1/clusters/hetzner/{id}/node-groups | GET | List all node groups |
| /api/v1/clusters/hetzner/{id}/node-groups | POST | Add a node group |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/scale | PUT | Scale a node group |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/instance-type | PUT | Upgrade instance type |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/labels | PUT | Update labels |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/taints | PUT | Update taints |
| /api/v1/clusters/hetzner/{id}/node-groups/{name} | DELETE | Delete a node group |

Legacy Worker Scaling

The legacy scale-workers and worker-count endpoints still work for backward compatibility. They operate on all workers as a single pool.
ankra cluster hetzner workers <cluster_id>
ankra cluster hetzner scale <cluster_id> 4
For new clusters, prefer using Node Groups for more granular control.

Upgrading Kubernetes Version

You can upgrade the Kubernetes (k3s) version on all nodes in a Hetzner cluster. Upgrades are applied to control plane nodes first, then workers.
  • Only k3s clusters are supported for version upgrades.
  • Downgrades are not supported; k3s downgrades require an etcd snapshot restore.
  • You can only upgrade one minor version at a time (e.g., v1.33.x to v1.34.x, not v1.33.x to v1.35.x).
  • The cluster must be online with no active operations.

Check Current Version

ankra cluster hetzner k8s-version <cluster_id>
Response:
{
  "current_version": "v1.34.4+k3s1",
  "distribution": "k3s"
}

Upgrade Version

ankra cluster hetzner upgrade <cluster_id> v1.35.1+k3s1
Response:
{
  "previous_version": "v1.34.4+k3s1",
  "new_version": "v1.35.1+k3s1",
  "nodes_affected": 3
}

Deprovisioning

Deprovisioning deletes all Hetzner resources (servers, networks, SSH keys) and removes the cluster from Ankra.
This action is irreversible. All data on the cluster will be permanently deleted.
Clean up Hetzner Cloud resources before deprovisioning. The Hetzner CCM and CSI driver create resources in your Hetzner Cloud project (Load Balancers, Volumes) that Ankra does not manage or track. These resources will not be automatically deleted when you deprovision the cluster and will continue to incur charges. Before deprovisioning, delete any Kubernetes resources that created Hetzner Cloud objects:
  • Delete all Service resources of type LoadBalancer (these create Hetzner Cloud Load Balancers via the CCM)
  • Delete all PersistentVolumeClaim resources using the hcloud-volumes StorageClass (these create Hetzner Cloud Volumes via the CSI driver)
  • Delete any addons or Helm releases that create LoadBalancer services or PVCs (e.g., ingress-nginx, databases, monitoring stacks)
Alternatively, check your Hetzner Console after deprovisioning and manually delete any orphaned Load Balancers and Volumes associated with the cluster.
ankra cluster hetzner deprovision <cluster_id>

Architecture

A Hetzner cluster provisions the following infrastructure:
| Component | Description |
| --- | --- |
| Bastion Host | Jump server for secure SSH access to cluster nodes |
| Private Network | Isolated network for inter-node communication |
| Control Plane(s) | Kubernetes control plane nodes |
| Worker(s) | Kubernetes worker nodes organized in node groups, each with independent instance types, labels, and taints |
| SSH Keys | Deployed to all servers for access (multiple keys supported) |
| External Cloud Provider | k3s is configured with cloud-provider=external for Hetzner CCM compatibility |
All nodes are deployed within a private Hetzner network. The bastion host provides the only external SSH access point.

Automatic Cloud Integration (hcloud Stack)

Ankra automatically deploys a hcloud stack during cluster provisioning. This stack includes:
  1. hcloud namespace — dedicated namespace for Hetzner cloud components
  2. hcloud-token secret — contains your Hetzner API token and network ID, sourced from the credential used to create the cluster
  3. hcloud-cloud-controller-manager — integrates the cluster with Hetzner Cloud APIs (node metadata, load balancers, node lifecycle)
  4. hcloud-csi — provides persistent storage using Hetzner Cloud Volumes
The hcloud stack is deployed automatically after the Ankra Agent is installed. The CCM and CSI charts both depend on the hcloud-token secret, and Ankra ensures the correct dependency order.
You do not need to manually set up the CCM or CSI driver — they are provisioned as part of cluster creation using the same Hetzner API credential you provided.

External Cloud Provider

Hetzner clusters are provisioned with --kubelet-arg=cloud-provider=external and --disable-cloud-controller. This configures k3s to delegate node initialization to the Hetzner Cloud Controller Manager. When nodes first join the cluster, they carry a node.cloudprovider.kubernetes.io/uninitialized taint that prevents workload scheduling. The CCM removes this taint after initializing each node with its Hetzner provider ID, zone labels, and instance metadata. The Ankra Agent tolerates this taint so it can schedule immediately and begin managing the cluster before the CCM is fully running.
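For reference, a workload that must schedule before CCM initialization can tolerate the taint the same way the Ankra Agent does. A minimal sketch (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: early-scheduling-example
spec:
  tolerations:
    # Allow scheduling onto nodes the CCM has not yet initialized
    - key: node.cloudprovider.kubernetes.io/uninitialized
      value: "true"
      effect: NoSchedule
  containers:
    - name: main
      image: busybox:1.36   # placeholder workload
      command: ["sleep", "3600"]
```

Most workloads should not tolerate this taint — it exists precisely so that pods wait until the node has its provider ID and zone labels.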

Hetzner Cloud Controller Manager (hcloud-ccm)

The Hetzner Cloud Controller Manager is automatically deployed as part of the hcloud stack and provides:
  • Node metadata — automatic zone, region, and instance type labels on nodes
  • Load Balancers — Kubernetes LoadBalancer services backed by Hetzner Cloud Load Balancers
  • Node lifecycle — automatic removal of deleted nodes from the cluster
  • Route management — pod network routes via Hetzner Cloud Networks
The CCM is deployed in the hcloud namespace with 3 replicas and a PodDisruptionBudget. It reads the Hetzner API token from the hcloud-token secret that Ankra creates automatically.

CCM Configuration

The default CCM values configured by Ankra:
replicaCount: 3
env:
  HCLOUD_TOKEN:
    valueFrom:
      secretKeyRef:
        name: hcloud-token
  HCLOUD_NETWORK_ROUTES_ENABLED:
    value: "false"
  HCLOUD_LOAD_BALANCERS_ENABLED:
    value: "true"
  HCLOUD_LOAD_BALANCERS_USE_PRIVATE_IP:
    value: "true"
  HCLOUD_LOAD_BALANCERS_DISABLE_PRIVATE_INGRESS:
    value: "true"
  HCLOUD_LOAD_BALANCERS_LOCATION:
    value: "<your-cluster-location>"
networking:
  enabled: true
  clusterCIDR: "10.0.0.0/16"
  network:
    valueFrom:
      secretKeyRef:
        name: hcloud-token
podDisruptionBudget:
  enabled: true
  minAvailable: 1
To customize the CCM configuration after creation, edit the hcloud-cloud-controller-manager addon in the hcloud stack via the Stack Builder.
The CCM creates Hetzner Cloud Load Balancers when you create Kubernetes Service resources of type LoadBalancer. These Load Balancers are managed by Hetzner Cloud, not by Ankra. Delete LoadBalancer services before deprovisioning to avoid orphaned resources and unexpected charges.
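To illustrate, a Service of type LoadBalancer that the CCM would back with a Hetzner Cloud Load Balancer might look like this — the name, selector, and ports are placeholders, and the annotation is an optional per-service override of the cluster-wide default location:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    load-balancer.hetzner.cloud/location: fsn1   # optional; overrides the default
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

When the Service is created, the CCM provisions the Load Balancer and writes its IP back to the Service's status; deleting the Service deletes the Load Balancer.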

Hetzner CSI Driver (hcloud-csi)

The Hetzner CSI Driver is automatically deployed as part of the hcloud stack and provides:
  • Dynamic provisioning — create Hetzner Cloud Volumes on demand via PersistentVolumeClaim
  • Volume expansion — resize volumes without downtime
  • Storage classes — the hcloud-volumes StorageClass is available out of the box

CSI Configuration

The default CSI values configured by Ankra:
controller:
  replicaCount: 3
  hcloudToken:
    existingSecret:
      name: hcloud-token
  podDisruptionBudget:
    enabled: true
node:
  hostNetwork: true
  hcloudToken:
    existingSecret:
      name: hcloud-token
      key: token
storageClasses:
  - name: hcloud-volumes
    defaultStorageClass: true
    reclaimPolicy: Retain
Once deployed, create PersistentVolumeClaims with storageClassName: hcloud-volumes:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: hcloud-volumes
  resources:
    requests:
      storage: 10Gi
The CSI driver creates Hetzner Cloud Volumes when you create PersistentVolumeClaims using the hcloud-volumes StorageClass. These Volumes are managed by Hetzner Cloud, not by Ankra. The default reclaimPolicy is Retain, meaning Hetzner Volumes are not deleted when PVCs are removed. Delete PVCs and their backing Hetzner Volumes before deprovisioning to avoid orphaned resources and unexpected charges.
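A pod can then mount the claim like any ordinary volume. Continuing the my-data claim from the example above (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-data-consumer
spec:
  containers:
    - name: main
      image: busybox:1.36   # placeholder workload
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-data   # the PVC from the example above
```

The CSI driver attaches the backing Hetzner Volume to whichever node the pod is scheduled on, which is why the claim uses ReadWriteOnce access.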

Ingress Stack (Optional)

When include_ingress is enabled during cluster creation, Ankra deploys an ingress stack alongside the hcloud stack. The ingress stack includes:
  1. ingress-nginx — NGINX-based Ingress controller with a Hetzner Cloud Load Balancer
  2. cert-manager — automated TLS certificate management
  3. Let’s Encrypt ClusterIssuer — a letsencrypt-prod ClusterIssuer configured for HTTP-01 validation

Ingress Configuration

The ingress-nginx controller is pre-configured with Hetzner Load Balancer annotations:
controller:
  replicaCount: 2
  service:
    annotations:
      load-balancer.hetzner.cloud/location: "<your-cluster-location>"
      load-balancer.hetzner.cloud/use-private-ip: "true"
  podDisruptionBudget:
    enabled: true
    minAvailable: 1
cert-manager is deployed with CRDs enabled and 2 replicas:
crds:
  enabled: true
replicaCount: 2
podDisruptionBudget:
  enabled: true
  minAvailable: 1

Using Ingress with TLS

Once the ingress stack is deployed, create Ingress resources with automatic TLS:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: my-app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
Point your DNS record for app.example.com to the Hetzner Load Balancer IP (created automatically by ingress-nginx). cert-manager will handle the Let’s Encrypt certificate issuance and renewal.

GitOps Integration

Hetzner clusters support optional GitOps integration with GitHub. When configured, Ankra pushes the cluster’s stack state to a Git repository, enabling version-controlled infrastructure. To enable GitOps during cluster creation, provide:
| Parameter | Description |
| --- | --- |
| gitops_credential_name | Name of a GitHub credential registered in Ankra |
| gitops_repository | GitHub repository (e.g., my-org/my-cluster-config) |
| gitops_branch | Branch to push to (default: master) |
curl -X POST https://platform.ankra.app/api/v1/clusters/hetzner \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-cluster",
    "credential_id": "<hetzner-credential-id>",
    "ssh_key_credential_ids": ["<ssh-key-id>"],
    "location": "fsn1",
    "control_plane_count": 3,
    "control_plane_server_type": "cx33",
    "node_groups": [
      {"name": "default", "instance_type": "cx33", "count": 2}
    ],
    "include_ingress": true,
    "gitops_credential_name": "my-github-token",
    "gitops_repository": "my-org/cluster-state",
    "gitops_branch": "main"
  }'
When GitOps is enabled, Ankra commits the hcloud stack (and ingress stack, if included) to the repository after creation.

Troubleshooting

Common Issues

| Issue | Solution |
| --- | --- |
| Cluster stuck in provisioning | Check Hetzner API token permissions and quota |
| Cannot scale workers | Ensure no operations are running |
| Invalid API token | Re-validate at Hetzner Console |
| Server type unavailable | Try a different location or server type |
| Cannot downgrade instance type | Hetzner disks cannot be shrunk; create a new node group with the desired type and delete the old one |
| 412: error during placement | Hetzner capacity issue at that location; retry later or try a different server type |
| Node has uninitialized taint | The CCM removes this taint automatically after initializing the node; if it persists, check that the CCM pods are running in the hcloud namespace |
| LoadBalancer service stuck in Pending | Verify the CCM is running (kubectl get pods -n hcloud) and check CCM logs for Hetzner API errors |
| PVCs stuck in Pending | Verify the CSI driver is running (kubectl get pods -n hcloud) and check that the hcloud-volumes StorageClass exists |
| CCM/CSI pods in CrashLoopBackOff | Check that the hcloud-token secret in the hcloud namespace contains a valid Hetzner API token: kubectl get secret -n hcloud hcloud-token -o yaml |
| Ankra Agent pod Pending | Check node taints; the agent tolerates the uninitialized taint, but if all nodes are unschedulable for other reasons, inspect node status with kubectl describe nodes |

Hetzner API Quota

Hetzner Cloud has default resource limits per project. If provisioning fails, check your quotas in the Hetzner Console:
  • Servers
  • Networks
  • SSH Keys
Contact Hetzner support to increase limits if needed.