Ankra supports provisioning fully managed Kubernetes clusters on OVH Cloud. You can create clusters with configurable control planes, workers, and networking — then scale workers up or down as needed.

Prerequisites

Before creating an OVH cluster, you need two credentials:

OVH API Credential

OVH Cloud API credentials (application key, application secret, consumer key, and project ID). See OVH API Credentials.

SSH Key Credential

An SSH public key for server access. You can provide your own or let Ankra generate one. See SSH Key Credentials.

Creating an OVH Cluster

Via the Platform UI

A guided wizard walks you through creating an OVH cluster — select credentials, pick a region, choose instance flavors (general purpose, CPU-optimized, or RAM-optimized), set control plane and worker counts, and launch.
1. Navigate to Clusters

Go to Clusters in the Ankra dashboard and click Create Cluster.
2. Select OVH Cloud

Choose OVH Cloud as the provider.
3. Select Credentials

Pick your OVH API credential and SSH key credential from the dropdowns. You can also create new credentials directly from the wizard.
4. Choose Region

Select an OVH Cloud region (e.g., Gravelines, Strasbourg, Beauharnois, Warsaw, London, Frankfurt). Each region shows the location and country.
5. Configure Nodes

Set your cluster topology:
  • Gateway — Instance flavor for the SSH gateway (e.g., b2-7)
  • Control Plane — Count and flavor (e.g., 1x b2-15)
  • Workers — Count and flavor (e.g., 2x b2-15)
The wizard shows vCPUs, RAM, disk, and hourly cost for each flavor to help you choose.
6. Create & Track Progress

Click Create to start provisioning. A live progress view tracks every step — network creation, gateway setup, control plane provisioning, worker provisioning, k3s installation, and Ankra Agent setup. The cluster appears with an offline state until provisioning completes, then transitions to online.

Managing from the Dashboard

Once your OVH cluster is online, you can manage it directly from the Ankra dashboard:
  • Scale workers — Go to Cluster Settings → General to scale worker nodes up or down
  • Upgrade Kubernetes — Upgrade the k3s version from cluster settings
  • Deprovision — Delete the cluster and all OVH resources from the Danger Zone in cluster settings

Via the CLI

# Create credentials first
ankra credentials ovh create --name my-ovh-cred --project-id <project-id>
# You will be prompted for application key, application secret, and consumer key

ankra credentials ovh ssh-key create --name my-ssh-key --generate

# Create the cluster
ankra cluster ovh create \
  --name my-cluster \
  --credential-id <ovh-credential-id> \
  --ssh-key-credential-id <ssh-key-credential-id> \
  --region GRA7 \
  --control-plane-count 1 \
  --control-plane-flavor-id b2-15 \
  --worker-count 2 \
  --worker-flavor-id b2-15

Via the API

curl -X POST https://platform.ankra.app/api/v1/clusters/ovh \
  -H "Authorization: Bearer $ANKRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-cluster",
    "credential_id": "<ovh-credential-id>",
    "ssh_key_credential_id": "<ssh-key-credential-id>",
    "region": "GRA7",
    "control_plane_count": 1,
    "control_plane_flavor_id": "b2-15",
    "worker_count": 2,
    "worker_flavor_id": "b2-15",
    "distribution": "k3s"
  }'
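
When scripting this call, you can build the request body separately and validate it before sending — a minimal sketch reusing the field names from the request above (python3 is used here only as a JSON checker):

```shell
# Assemble the request body in a variable so it can be inspected first.
payload='{
  "name": "my-cluster",
  "region": "GRA7",
  "control_plane_count": 1,
  "worker_count": 2,
  "distribution": "k3s"
}'

# Fail fast on malformed JSON before issuing the POST.
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload OK"
```

If validation passes, hand the variable to curl with `-d "$payload"`.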

Cluster Configuration Options

| Parameter | Default | Description |
| --- | --- | --- |
| name | required | Unique cluster name |
| credential_id | required | OVH API credential ID |
| ssh_key_credential_id | required | SSH key credential ID |
| region | required | OVH Cloud region |
| network_vlan_id | 0 | VLAN ID for the private network |
| subnet_cidr | 10.0.1.0/24 | Subnet CIDR range |
| dhcp_start | 10.0.1.100 | DHCP allocation range start |
| dhcp_end | 10.0.1.200 | DHCP allocation range end |
| gateway_flavor_id | b2-7 | Instance flavor for the gateway |
| control_plane_count | 1 | Number of control plane nodes |
| control_plane_flavor_id | b2-15 | Instance flavor for control planes |
| worker_count | 1 | Number of worker nodes (1–10) |
| worker_flavor_id | b2-15 | Instance flavor for workers |
| distribution | k3s | Kubernetes distribution |
| kubernetes_version | latest | Kubernetes version (optional) |
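
If you override the network defaults, dhcp_start and dhcp_end must fall inside subnet_cidr. A quick sanity check for the defaults above (pure shell; the prefix match assumes a /24 subnet):

```shell
subnet="10.0.1.0/24"
prefix="${subnet%.*}"   # strips the final ".0/24", leaving "10.0.1"

for ip in 10.0.1.100 10.0.1.200; do
  case "$ip" in
    "$prefix".*) echo "$ip is inside $subnet" ;;
    *)           echo "$ip is OUTSIDE $subnet" ;;
  esac
done
```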

OVH Cloud Regions

| Region | Location |
| --- | --- |
| GRA7 | Gravelines, France |
| GRA9 | Gravelines, France |
| GRA11 | Gravelines, France |
| SBG5 | Strasbourg, France |
| BHS5 | Beauharnois, Canada |
| WAW1 | Warsaw, Poland |
| DE1 | Frankfurt, Germany |
| UK1 | London, United Kingdom |
| SGP1 | Singapore |
| SYD1 | Sydney, Australia |

OVH Instance Flavors

| Flavor | vCPUs | RAM | Description |
| --- | --- | --- | --- |
| b2-7 | 2 | 7 GB | Suitable for gateways |
| b2-15 | 4 | 15 GB | General purpose, good for control planes and workers |
| b2-30 | 8 | 30 GB | Higher performance workloads |
| b2-60 | 16 | 60 GB | Memory-intensive workloads |
Available flavors vary by region. Check the OVH Cloud catalog for your region’s offerings.
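
A small helper for choosing a flavor from the table above by minimum RAM — a sketch only, limited to the four flavors listed, since real availability varies by region:

```shell
# Return the smallest listed flavor with at least the requested RAM (GB).
pick_flavor() {
  need=$1
  for entry in "b2-7 7" "b2-15 15" "b2-30 30" "b2-60 60"; do
    set -- $entry           # $1 = flavor name, $2 = RAM in GB
    if [ "$2" -ge "$need" ]; then
      echo "$1"
      return 0
    fi
  done
  echo "none"
}

pick_flavor 8    # prints b2-15
```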

Scaling Workers

You can scale worker nodes between 1 and 10 for any online OVH cluster. Scaling up provisions new instances and installs Kubernetes on them. Scaling down removes workers starting from the highest index.
The cluster must be online with no active operations before scaling.
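
If you script scaling, it is worth validating the requested count against the 1–10 bound noted above before calling the CLI — a minimal sketch:

```shell
# Reject worker counts outside the supported 1–10 range.
validate_worker_count() {
  count=$1
  if [ "$count" -ge 1 ] && [ "$count" -le 10 ]; then
    echo "ok"
  else
    echo "error: worker count must be between 1 and 10" >&2
    return 1
  fi
}

validate_worker_count 4    # prints ok
```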

Via the Dashboard

Go to your cluster → Settings → General and adjust the worker count. The new count is applied immediately, and you can track progress in the Operations tab.

Check Current Workers

ankra cluster ovh workers <cluster_id>
Response:
{
  "worker_count": 2,
  "min": 1,
  "max": 10
}

Scale Workers

ankra cluster ovh scale <cluster_id> 4
Response:
{
  "previous_count": 2,
  "new_count": 4
}

Upgrading Kubernetes Version

You can upgrade the Kubernetes (k3s) version on all nodes in an OVH cluster. Upgrades are applied to control plane nodes first, then workers.
  • Only k3s clusters are supported for version upgrades.
  • Downgrades are not supported — k3s downgrades require an etcd snapshot restore.
  • You can only upgrade one minor version at a time (e.g., v1.33.x to v1.34.x, not v1.33.x to v1.35.x).
  • The cluster must be online with no active operations.
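
The one-minor-version rule can be checked before triggering an upgrade. A sketch that compares only the minor component of two k3s version strings (ignoring the patch level is an assumption in this sketch):

```shell
# Extract the minor component: v1.34.4+k3s1 -> 34
minor() { v=${1#v}; v=${v#*.}; echo "${v%%.*}"; }

# Allow only forward moves of zero or one minor version.
can_upgrade() {
  d=$(( $(minor "$2") - $(minor "$1") ))
  if [ "$d" -ge 0 ] && [ "$d" -le 1 ]; then echo yes; else echo no; fi
}

can_upgrade v1.34.4+k3s1 v1.35.1+k3s1   # yes: one minor version forward
can_upgrade v1.33.0+k3s1 v1.35.0+k3s1   # no: skips a minor version
can_upgrade v1.35.0+k3s1 v1.34.0+k3s1   # no: downgrades are not supported
```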

Via the Dashboard

Go to your cluster → Settings → General to see the current k3s version and trigger an upgrade.

Check Current Version

ankra cluster ovh k8s-version <cluster_id>
Response:
{
  "current_version": "v1.34.4+k3s1",
  "distribution": "k3s"
}

Upgrade Version

ankra cluster ovh upgrade <cluster_id> v1.35.1+k3s1
Response:
{
  "previous_version": "v1.34.4+k3s1",
  "new_version": "v1.35.1+k3s1",
  "nodes_affected": 3
}

Deprovisioning

Deprovisioning deletes all OVH resources (instances, networks, SSH keys) and removes the cluster from Ankra.
This action is irreversible. All data on the cluster will be permanently deleted.

Via the Dashboard

Go to your cluster → Settings → General → Danger Zone and click Deprovision Cluster. You will be asked to confirm before the operation begins.

Via CLI or API

ankra cluster ovh deprovision <cluster_id>
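
This page does not show the deprovision API route, so the call below is an assumption only — a DELETE mirroring the create path, which you should verify against the Ankra API reference before relying on it:

```shell
# Hypothetical endpoint — verify the actual route in the Ankra API reference.
curl -X DELETE "https://platform.ankra.app/api/v1/clusters/<cluster_id>" \
  -H "Authorization: Bearer $ANKRA_API_TOKEN"
```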

Architecture

An OVH cluster provisions the following infrastructure:
| Component | Description |
| --- | --- |
| Gateway | Jump server for secure SSH access to cluster nodes |
| Private Network | Isolated VLAN for inter-node communication |
| Control Plane(s) | Kubernetes control plane instances |
| Worker(s) | Kubernetes worker instances for running workloads |
| SSH Keys | Deployed to all instances for access |
All nodes are deployed within a private OVH network. The gateway provides the only external SSH access point.
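
Since the gateway is the only external entry point, reaching a node over SSH typically goes through it. A sketch using OpenSSH's ProxyJump option — the user name and private address are placeholders, not values this page specifies:

```shell
# Jump through the gateway to a node on the private network.
# "ubuntu" and 10.0.1.101 are illustrative placeholders.
ssh -J ubuntu@<gateway-public-ip> ubuntu@10.0.1.101
```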

Troubleshooting

Common Issues

| Issue | Solution |
| --- | --- |
| Cluster stuck in provisioning | Check OVH API credentials and project quota |
| Cannot scale workers | Ensure the cluster is online and no operations are running |
| Invalid API credentials | Re-validate at the OVH API Console |
| Flavor unavailable | Try a different region or flavor |

OVH Cloud Quotas

OVH Cloud has default resource limits per project. If provisioning fails, check your quotas in the OVH Control Panel:
  • Instances
  • Networks / VLANs
  • SSH Keys
Contact OVH support to increase limits if needed.
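
When checking the instance quota, remember that the gateway counts as an instance alongside the control planes and workers — for example:

```shell
# Total instances a cluster consumes: gateway + control planes + workers.
required_instances() { echo $((1 + $1 + $2)); }

required_instances 1 2   # prints 4 (1 control plane, 2 workers, plus the gateway)
```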