Prerequisites
Before creating a Hetzner cluster, you need two credentials:
Hetzner API Credential
A Hetzner Cloud API token with read/write permissions. See Hetzner Credentials.
SSH Key Credentials
One or more SSH public keys for server access. You can provide your own or let Ankra generate one. Multiple keys can be attached to a single cluster. See SSH Key Credentials.
Creating a Hetzner Cluster
Via the Platform UI
Configure Cluster
Fill in the cluster configuration:
- Name: A unique name for your cluster
- Hetzner Credential: Select your Hetzner API credential
- SSH Keys: Select one or more SSH key credentials
- Location: Hetzner datacenter (e.g., fsn1, nbg1, hel1)
- Control Plane: Count and server type (e.g., cx33)
- Workers: Count and server type
- Distribution: Kubernetes distribution (k3s)
- Include Ingress: Optionally deploy an ingress stack (ingress-nginx, cert-manager, Let’s Encrypt)
Via the CLI
Via the API
The worker_count and worker_server_type fields are still accepted for backward compatibility. If node_groups is provided, it takes precedence. The singular ssh_key_credential_id field is also still accepted; if both fields are provided, they are merged.
Cluster Configuration Options
| Parameter | Default | Description |
|---|---|---|
| name | required | Unique cluster name |
| credential_id | required | Hetzner API credential ID |
| ssh_key_credential_ids | required | Array of SSH key credential IDs |
| ssh_key_credential_id | (none) | Single SSH key credential ID (backward compatible; use ssh_key_credential_ids for multiple) |
| location | required | Hetzner datacenter location |
| network_ip_range | 10.0.0.0/16 | Private network IP range |
| subnet_range | 10.0.1.0/24 | Subnet range within the network |
| bastion_server_type | cx23 | Server type for the bastion host |
| control_plane_count | 1 | Number of control plane nodes |
| control_plane_server_type | cx33 | Server type for control planes |
| worker_count | 1 | Number of worker nodes (legacy; use node_groups instead) |
| worker_server_type | cx33 | Server type for workers (legacy; use node_groups instead) |
| node_groups | (none) | Array of node group definitions (see Node Groups) |
| distribution | k3s | Kubernetes distribution |
| kubernetes_version | latest | Kubernetes version (optional) |
| include_ingress | false | Deploy the ingress stack (ingress-nginx + cert-manager + Let’s Encrypt) |
| gitops_credential_name | (none) | GitHub credential name for GitOps integration |
| gitops_repository | (none) | GitHub repository for GitOps (e.g., org/repo) |
| gitops_branch | master | Branch for GitOps pushes |
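To tie the options together, here is a sketch of a request body for creating a cluster via the API. The keys follow the parameter table above; the values (names, IDs, counts) are illustrative placeholders, and the exact keys inside a node-group entry are assumptions to verify against your Ankra API reference.

```python
import json

# Illustrative cluster-creation request body using the parameters
# documented above. Credential IDs are placeholders.
payload = {
    "name": "demo-cluster",                       # required, unique
    "credential_id": "<hetzner-credential-id>",   # required
    "ssh_key_credential_ids": ["<ssh-key-credential-id>"],  # required
    "location": "fsn1",                           # see the locations table
    "control_plane_count": 1,                     # default
    "control_plane_server_type": "cx33",          # default
    "node_groups": [                              # preferred over worker_count
        {
            "name": "default",
            "instance_type": "cx33",
            "count": 2,
            "labels": {"workload": "general"},
            "taints": [],
        }
    ],
    "distribution": "k3s",                        # default
    "include_ingress": True,                      # deploy the ingress stack
}
print(json.dumps(payload, indent=2))
```

Omitted fields (network ranges, bastion server type, GitOps settings) fall back to the defaults in the table.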
Hetzner Locations
| Location | Region |
|---|---|
| fsn1 | Falkenstein, Germany |
| nbg1 | Nuremberg, Germany |
| hel1 | Helsinki, Finland |
| ash | Ashburn, USA |
| hil | Hillsboro, USA |
| sin | Singapore |
Access Settings
The Access tab in cluster settings provides SSH access commands and SSH key management for Hetzner clusters.
SSH Access
The Access page displays copy-pasteable commands for connecting to your cluster: an SSH command to reach the control plane via the bastion host, and an SSH tunnel command for kubectl access that configures kubectl to use https://localhost:6443.
The Access page also shows the network topology with the bastion host and all control plane nodes.
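The commands on the Access page typically take the following shape. This is a sketch using standard OpenSSH jump-host syntax; the root user and the placeholder IPs are assumptions, so copy the exact commands from the Access page for your cluster.

```shell
# SSH to a control plane node via the bastion host
# (<bastion-ip> and <control-plane-ip> are placeholders).
ssh -J root@<bastion-ip> root@<control-plane-ip>

# Tunnel the Kubernetes API through the bastion so kubectl
# can use https://localhost:6443.
ssh -J root@<bastion-ip> -L 6443:localhost:6443 root@<control-plane-ip>
```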
Managing SSH Keys
You can add or remove SSH key credentials from a running cluster in Settings > Access. Changes are synced to all servers on the next reconciliation: SSH keys are registered with the Hetzner API, and authorized_keys is updated on all nodes.
SSH keys can also be managed via the API:
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/clusters/hetzner/{id}/ssh-keys | GET | List current and available SSH keys |
| /api/v1/clusters/hetzner/{id}/ssh-keys | PUT | Update SSH keys on the cluster |
| /api/v1/clusters/hetzner/{id}/access-info | GET | Get bastion and control plane IPs |
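As a sketch, a PUT to the ssh-keys endpoint might carry a body like the following. The ssh_key_credential_ids key mirrors the cluster-creation parameter, but treat the exact field name as an assumption to confirm against the API reference.

```python
import json

# Hypothetical body for PUT /api/v1/clusters/hetzner/{id}/ssh-keys:
# the full desired set of SSH key credential IDs for the cluster.
body = {
    "ssh_key_credential_ids": [
        "<existing-key-credential-id>",
        "<new-key-credential-id>",
    ]
}
print(json.dumps(body, indent=2))
```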
Node Groups
Node groups let you organize worker nodes into logical groups with independent instance types, counts, labels, and taints. Each group can be scaled, upgraded, and configured independently.
Via the Platform UI
Navigate to cluster Settings > Nodes to manage node groups. From this tab you can:
- View all node groups with their instance type, count, labels, and taints
- Add new node groups with a name, instance type, count, and optional labels/taints
- Scale individual groups up or down (0–100 nodes)
- Upgrade the instance type (upgrade only; see Instance Type Changes)
- Edit labels and taints per group
- Delete a node group and all its nodes
List Node Groups
Add a Node Group
Scale a Node Group
Instance Type Changes
Delete a Node Group
Update Labels and Taints
Node Group API Reference
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/clusters/hetzner/{id}/node-groups | GET | List all node groups |
| /api/v1/clusters/hetzner/{id}/node-groups | POST | Add a node group |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/scale | PUT | Scale a node group |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/instance-type | PUT | Upgrade instance type |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/labels | PUT | Update labels |
| /api/v1/clusters/hetzner/{id}/node-groups/{name}/taints | PUT | Update taints |
| /api/v1/clusters/hetzner/{id}/node-groups/{name} | DELETE | Delete a node group |
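To illustrate the endpoints above, here are sketched request bodies for adding and scaling a node group. The keys follow the node-group attributes documented in this section (name, instance type, count, labels, taints); the exact JSON key names are assumptions to verify against your Ankra API reference.

```python
import json

# Hypothetical body for POST /api/v1/clusters/hetzner/{id}/node-groups
add_group = {
    "name": "batch-workers",
    "instance_type": "cx33",
    "count": 2,
    "labels": {"workload": "batch"},
    "taints": [
        {"key": "dedicated", "value": "batch", "effect": "NoSchedule"}
    ],
}

# Hypothetical body for
# PUT /api/v1/clusters/hetzner/{id}/node-groups/batch-workers/scale
scale = {"count": 5}  # groups can be scaled between 0 and 100 nodes

print(json.dumps(add_group, indent=2))
print(json.dumps(scale, indent=2))
```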
Legacy Worker Scaling
The legacy scale-workers and worker-count endpoints still work for backward compatibility. They operate on all workers as a single pool.
For new clusters, prefer using Node Groups for more granular control.
Upgrading Kubernetes Version
You can upgrade the Kubernetes (k3s) version on all nodes in a Hetzner cluster. Upgrades are applied to control plane nodes first, then workers.
Check Current Version
Upgrade Version
Deprovisioning
Deprovisioning deletes all Hetzner resources (servers, networks, SSH keys) and removes the cluster from Ankra.
Architecture
A Hetzner cluster provisions the following infrastructure:
| Component | Description |
|---|---|
| Bastion Host | Jump server for secure SSH access to cluster nodes |
| Private Network | Isolated network for inter-node communication |
| Control Plane(s) | Kubernetes control plane nodes |
| Worker(s) | Kubernetes worker nodes organized in node groups, each with independent instance types, labels, and taints |
| SSH Keys | Deployed to all servers for access (multiple keys supported) |
| External Cloud Provider | k3s is configured with cloud-provider=external for Hetzner CCM compatibility |
Automatic Cloud Integration (hcloud Stack)
Ankra automatically deploys a hcloud stack during cluster provisioning. This stack includes:
- hcloud namespace — dedicated namespace for Hetzner cloud components
- hcloud-token secret — contains your Hetzner API token and network ID, sourced from the credential used to create the cluster
- hcloud-cloud-controller-manager — integrates the cluster with Hetzner Cloud APIs (node metadata, load balancers, node lifecycle)
- hcloud-csi — provides persistent storage using Hetzner Cloud Volumes
Both addons read the hcloud-token secret, and Ankra ensures the correct dependency order.
You do not need to manually set up the CCM or CSI driver — they are provisioned as part of cluster creation using the same Hetzner API credential you provided.
External Cloud Provider
Hetzner clusters are provisioned with --kubelet-arg=cloud-provider=external and --disable-cloud-controller. This configures k3s to delegate node initialization to the Hetzner Cloud Controller Manager.
When nodes first join the cluster, they carry a node.cloudprovider.kubernetes.io/uninitialized taint that prevents workload scheduling. The CCM removes this taint after initializing each node with its Hetzner provider ID, zone labels, and instance metadata. The Ankra Agent tolerates this taint so it can schedule immediately and begin managing the cluster before the CCM is fully running.
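For reference, a workload that must schedule before the CCM has initialized a node can declare the same toleration the Ankra Agent uses. This is a standard Kubernetes pod-spec fragment, not an Ankra-specific setting:

```yaml
# Pod spec fragment: tolerate the taint that new Hetzner nodes
# carry until the CCM initializes them and removes it.
tolerations:
  - key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
    effect: NoSchedule
```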
Hetzner Cloud Controller Manager (hcloud-ccm)
The Hetzner Cloud Controller Manager is automatically deployed as part of the hcloud stack and provides:
- Node metadata — automatic zone, region, and instance type labels on nodes
- Load Balancers — Kubernetes LoadBalancer services backed by Hetzner Cloud Load Balancers
- Node lifecycle — automatic removal of deleted nodes from the cluster
- Route management — pod network routes via Hetzner Cloud Networks
The CCM is deployed in the hcloud namespace with 3 replicas and a PodDisruptionBudget. It reads the Hetzner API token from the hcloud-token secret that Ankra creates automatically.
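As an example of the Load Balancer integration, a plain Service of type LoadBalancer is enough for the CCM to provision a Hetzner Cloud Load Balancer. The location annotation shown is a commonly used hcloud-cloud-controller-manager annotation, and the app names are placeholders; verify both against the CCM documentation for your version:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo
  annotations:
    # Ask the CCM to place the Load Balancer in a specific location
    load-balancer.hetzner.cloud/location: fsn1
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
```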
CCM Configuration
The default CCM values configured by Ankra can be customized by editing the hcloud-cloud-controller-manager addon in the hcloud stack via the Stack Builder.
The CCM creates Hetzner Cloud Load Balancers when you create Kubernetes Service resources of type LoadBalancer. These Load Balancers are managed by Hetzner Cloud, not by Ankra. Delete LoadBalancer services before deprovisioning to avoid orphaned resources and unexpected charges.
Hetzner CSI Driver (hcloud-csi)
The Hetzner CSI Driver is automatically deployed as part of the hcloud stack and provides:
- Dynamic provisioning — create Hetzner Cloud Volumes on demand via PersistentVolumeClaim
- Volume expansion — resize volumes without downtime
- Storage classes — hcloud-volumes StorageClass available out of the box
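For example, a claim against the out-of-the-box StorageClass looks like a standard PVC (the name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce          # hcloud Volumes attach to a single node
  storageClassName: hcloud-volumes
  resources:
    requests:
      storage: 10Gi
```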
CSI Configuration
The default CSI values configured by Ankra set the default StorageClass name to hcloud-volumes.
The CSI driver creates Hetzner Cloud Volumes when you create PersistentVolumeClaims using the hcloud-volumes StorageClass. These Volumes are managed by Hetzner Cloud, not by Ankra. The default reclaimPolicy is Retain, meaning Hetzner Volumes are not deleted when PVCs are removed. Delete PVCs and their backing Hetzner Volumes before deprovisioning to avoid orphaned resources and unexpected charges.
Ingress Stack (Optional)
When include_ingress is enabled during cluster creation, Ankra deploys an ingress stack alongside the hcloud stack. The ingress stack includes:
- ingress-nginx — NGINX-based Ingress controller with a Hetzner Cloud Load Balancer
- cert-manager — automated TLS certificate management
- Let’s Encrypt ClusterIssuer — a letsencrypt-prod ClusterIssuer configured for HTTP-01 validation
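Once the stack is deployed, an Ingress can request a certificate from the letsencrypt-prod issuer using standard cert-manager annotations. A sketch (hostname, service name, and secret name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    # Tell cert-manager which ClusterIssuer to use
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: demo-tls   # cert-manager stores the certificate here
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
```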
Ingress Configuration
The ingress-nginx controller is pre-configured with Hetzner Load Balancer annotations.
Using Ingress with TLS
Once the ingress stack is deployed, create Ingress resources with automatic TLS.
GitOps Integration
Hetzner clusters support optional GitOps integration with GitHub. When configured, Ankra pushes the cluster’s stack state to a Git repository, enabling version-controlled infrastructure. To enable GitOps during cluster creation, provide:
| Parameter | Description |
|---|---|
| gitops_credential_name | Name of a GitHub credential registered in Ankra |
| gitops_repository | GitHub repository (e.g., my-org/my-cluster-config) |
| gitops_branch | Branch to push to (default: master) |
Troubleshooting
Common Issues
| Issue | Solution |
|---|---|
| Cluster stuck in provisioning | Check Hetzner API token permissions and quota |
| Cannot scale workers | Ensure no operations are running |
| Invalid API token | Re-validate at Hetzner Console |
| Server type unavailable | Try a different location or server type |
| Cannot downgrade instance type | Hetzner disks cannot be shrunk. Create a new node group with the desired type and delete the old one |
| 412: error during placement | Hetzner capacity issue at that location. Retry later or try a different server type |
| Node has uninitialized taint | The CCM removes this taint automatically after initializing the node. If it persists, check that the CCM pods are running in the hcloud namespace |
| LoadBalancer service stuck in Pending | Verify the CCM is running: kubectl get pods -n hcloud. Check CCM logs for Hetzner API errors |
| PVCs stuck in Pending | Verify the CSI driver is running: kubectl get pods -n hcloud. Check that the hcloud-volumes StorageClass exists |
| CCM/CSI pods in CrashLoopBackOff | Check that the hcloud-token secret in the hcloud namespace contains a valid Hetzner API token: kubectl get secret -n hcloud hcloud-token -o yaml |
| Ankra Agent pod Pending | Check node taints — the agent tolerates the uninitialized taint, but if all nodes are unschedulable for other reasons, verify node status with kubectl describe nodes |
Hetzner API Quota
Hetzner Cloud has default resource limits per project. If provisioning fails, check your quotas in the Hetzner Console:
- Servers
- Networks
- SSH Keys