Press ⌘+J (Mac) or Ctrl+J (Windows/Linux) to open the AI Assistant from anywhere in Ankra.
AI Assistant interface
The AI Assistant combines everything Ankra knows about your cluster (logs, Kubernetes manifests, Stack deployments, resource states, and events) into a unified context for intelligent troubleshooting. This makes it exceptionally powerful for incident triangulation: connecting symptoms across multiple layers to find root causes fast.

Page-Aware

Automatically focuses on what you’re viewing: open a pod and the AI knows its logs, manifest, and status.

Unified Context

Correlates logs, manifests, Stack configurations, and resource states in one conversation.

Incident Triangulation

Connects symptoms across pods, services, and deployments to pinpoint root causes.

Stack-Aware

Understands your CD pipeline: which Stacks deployed what, when, and with which values.

Page-Aware Context

The AI Assistant automatically knows what you’re looking at. When you open the chat, it focuses on your current view:
What you’re viewing, and what the AI automatically knows:
  • A Pod: its logs, manifest, events, resource usage, and parent deployment
  • A Deployment: all replicas, rollout status, associated services, and recent changes
  • A Stack: installed add-ons, Helm values, deployment history, and dependencies
  • Logs View: the filtered logs, error patterns, and related resources
  • A Service: endpoints, selectors, connected pods, and ingress rules
This means you don’t need to explain context; just ask your question:
  • Looking at a crashing pod: “Why is this failing?” → AI already sees the logs and events
  • Viewing a deployment: “Scale this to 5 replicas” → AI knows which deployment
  • On the Stack page: “Add Redis to this stack” → AI knows the current stack configuration
The AI focuses on what you focus on. Navigate to a resource before asking about it for the most relevant answers.

The Superpower: Combined Context

What makes Ankra’s AI different is the unified environment. When you ask a question, the AI has access to:
Each context layer, and what the AI sees:
  • Pod Logs: real-time and historical container logs with error patterns
  • Kubernetes Manifests: your actual deployed YAML, not just documentation
  • Stack Deployments: which Helm charts were deployed, their values, and versions
  • Resource States: current status, events, conditions, and health
  • Relationships: Service → Deployment → Pod → Container dependencies
  • Timeline: when deployments happened and what changed
This combined view enables questions like:
  • “The API is returning 500 errors. Was anything deployed recently that could cause this?”
  • “Compare the current nginx config to what was running yesterday”
  • “Which Stack change caused the database connection failures?”

Building Stacks with AI

The AI Assistant helps you build and optimize Stacks:
  1. Describe what you need: tell the AI what you want to deploy, e.g. “I need a monitoring stack with Prometheus, Grafana, and alerting”.
  2. Get recommendations: the AI suggests add-ons, configurations, and dependencies based on best practices.
  3. Configure values: ask for help with Helm values, e.g. “How should I configure Prometheus retention for a 50-node cluster?”
  4. Troubleshoot deployments: when something fails, e.g. “The Grafana pod won’t start after I deployed the monitoring stack”.

Example Stack Building Conversations

You: “I need to set up a production-ready ingress with TLS”
AI: “I recommend adding these components to your Stack:
  1. cert-manager - For automatic TLS certificate management
  2. ingress-nginx - Production-grade ingress controller
Here’s the dependency order and suggested values for your cluster size…”
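A setup like the one the AI describes typically boils down to two manifests: a cert-manager issuer and an Ingress annotated to use it. The sketch below is illustrative only; the issuer name, email address, domain, and service names are hypothetical placeholders, and your actual values would come from the Stack configuration:

```yaml
# Hypothetical cert-manager ClusterIssuer (email and name are placeholders)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
---
# Ingress served by ingress-nginx, requesting a certificate via the issuer above
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts: [api.example.com]
      secretName: api-tls   # cert-manager stores the issued certificate here
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
```

The dependency order matters: cert-manager must be running before the Ingress is applied, otherwise the certificate request sits pending.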
You: “My monitoring stack is using too much memory”
AI: “Looking at your current Prometheus configuration in the ‘observability’ Stack, I see retention is set to 30 days with no resource limits. Based on your cluster’s 200 pods, I recommend:
  • Set prometheus.retention to 15d
  • Add resources.limits.memory: 4Gi
  • Enable remote_write to offload historical data…”
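As Helm values, those three recommendations might look roughly like this. The exact key paths depend on which Prometheus chart the Stack uses (the nesting below follows the kube-prometheus-stack convention, and the remote-write URL is a placeholder):

```yaml
# Hypothetical Helm values sketch; key paths vary by chart
prometheus:
  prometheusSpec:
    retention: 15d              # down from 30d
    resources:
      limits:
        memory: 4Gi             # cap memory so the pod can't grow unbounded
    remoteWrite:
      - url: https://metrics-store.example.com/api/v1/write  # offload history
```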

Incident Triangulation

When something goes wrong, the AI correlates signals across your entire stack:
  1. Identify symptoms: “Users are reporting slow API responses”
  2. Cross-reference logs: the AI checks pod logs for errors, timeouts, and latency patterns.
  3. Check recent deployments: “I see the ‘backend’ Stack was updated 2 hours ago with a new database connection pool setting…”
  4. Analyze resource states: “The postgres pod is showing high CPU and connection queue buildup…”
  5. Provide root cause: “The connection pool was reduced from 100 to 10 in the last Stack deployment, causing connection exhaustion under load.”
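The kind of value change behind a root cause like this is often a one-line edit in the Stack’s Helm values. The snippet below is a made-up illustration (the key names are hypothetical) of the before/after the AI would surface from Stack history:

```yaml
# Illustrative only; key names are hypothetical
# Previous release of the 'backend' Stack:
database:
  pool:
    maxConnections: 100

# Current release (the regression): the pool shrank tenfold,
# so connections exhaust under the same load
database:
  pool:
    maxConnections: 10
```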

What the AI Triangulates

Each signal, and how it’s used:
  • Error Logs: pattern matching across all pods in the affected service chain
  • Stack History: recent deployments and value changes that correlate with incident timing
  • Resource Events: Kubernetes events showing restarts, OOMs, and scheduling failures
  • Dependencies: service mesh, database connections, and external integrations
  • Configuration Drift: differences between current manifests and last known good state

What Can You Ask?

  • “Why is the checkout service timing out?”
  • “What changed in the last hour that could cause this?”
  • “Compare pod logs before and after the deployment”
  • “Which upstream service is causing the 503 errors?”
  • “Help me create a logging stack with Loki and Promtail”
  • “What’s the best way to configure ingress for multiple domains?”
  • “How should I set up database backups in my Stack?”
  • “Add monitoring to my existing application Stack”
  • “Is my resource limit configuration correct for this workload?”
  • “Why is this HPA not scaling?”
  • “Explain the network policies affecting this service”
  • “What secrets does this deployment need?”
  • “Why did this pod get OOMKilled?”
  • “What’s causing intermittent connection resets?”
  • “The deployment rollout is stuck; what’s blocking it?”
  • “Why are requests failing only to certain pod replicas?”
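Several of these questions hinge on resource configuration. For instance, an HPA that scales on CPU utilization only works if the target pods declare CPU requests, and missing memory limits are a common cause of OOMKills. A minimal sketch under those assumptions (all names and numbers are hypothetical):

```yaml
# Hypothetical Deployment: the HPA below requires these CPU requests to compute utilization
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels: {app: api}
  template:
    metadata:
      labels: {app: api}
    spec:
      containers:
        - name: api
          image: example/api:1.0
          resources:
            requests:
              cpu: 250m        # baseline the HPA measures against
              memory: 256Mi
            limits:
              memory: 512Mi    # exceeding this triggers an OOMKill
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale up when average CPU exceeds 70% of requests
```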

Getting Started

  1. Open the AI Assistant: click the chat icon in the bottom-right corner of any cluster page, or use the Command Palette (⌘+K / Ctrl+K) and search for “AI Chat”.
  2. Ask your question: type it in natural language. The assistant understands context about your current cluster and can help with:
    • “Why is my pod crashing?”
    • “Explain this deployment configuration”
    • “How do I set up ingress?”
    • “What’s wrong with this service?”
  3. Review the response: the AI provides detailed explanations, code examples, and actionable steps. You can ask follow-up questions to dive deeper.
  4. Provide feedback: use the thumbs up/down buttons to rate responses. Your feedback helps improve the assistant over time.

Key Features

Context-Aware

The assistant understands your current cluster, namespace, and the resources you’re viewing for more relevant answers.

Troubleshooting Mode

Click “Troubleshoot” on any failing resource to get an AI-powered analysis of what’s wrong and how to fix it.

Chat History

Your conversations are saved and searchable. Access previous chats from the Command Palette or chat panel.

Multiple AI Models

Choose from different AI models based on your needs: faster responses or more detailed analysis.

Troubleshooting Resources

When viewing a Kubernetes resource (pod, deployment, service, etc.), you can click the Troubleshoot button to start an AI-assisted diagnosis:
  1. Navigate to the resource in the Kubernetes browser
  2. Click Troubleshoot in the resource details
  3. The AI will analyze:
    • Current resource state and events
    • Related resources and dependencies
    • Recent changes and logs
  4. Receive actionable recommendations

Chat History

Access your previous conversations:
  • Command Palette: Press ⌘+K and search for “Chat History”
  • Chat Panel: Click the history icon in the chat header
  • Search: Find past conversations by keyword
Conversations are organized by date and can be resumed at any time.

Best Practices

Be Specific: Include error messages, resource names, and namespaces for more accurate help.
Use Context: When you’re viewing a specific resource and open the chat, the AI already knows what you’re looking at.
Ask Follow-ups: The AI remembers the conversation context, so you can ask clarifying questions.

Privacy & Data

  • Conversations are stored securely and associated with your account
  • Cluster metadata may be shared for context (resource names, states, events)
  • Sensitive data like secrets or credentials are never sent to the AI
  • You can delete your chat history at any time

Still have questions? Join our Slack community and we’ll help out.