The Logs feature provides real-time log streaming and analysis for all your Kubernetes workloads, helping you debug issues and monitor application behavior.

Overview

Ankra’s Logs feature allows you to:
  • Stream real-time logs from any workload in your cluster
  • View logs from multiple pods simultaneously
  • Filter and search within log output
  • Switch between containers in multi-container pods
  • Customize display with font size, line wrapping, and color highlighting

Accessing Logs

There are two ways to access logs in Ankra:

Cluster Logs

Navigate to Logs in the cluster sidebar to access the cluster-wide log viewer. This allows you to:
  1. Select any workload (Deployment, StatefulSet, or DaemonSet)
  2. View logs from all pods belonging to that workload
  3. Switch between pods without leaving the page
Cluster Logs view with workload selector

Pod Logs

Navigate to Kubernetes → Pods, click on a pod, and select the Logs tab to view logs for that specific pod.

Log Viewer Features

Workload Selection

In the Cluster Logs view, use the workload selector to choose which application’s logs to view:
Workload Type | Badge Color | Description
Deployment    | Blue        | Stateless applications
StatefulSet   | Purple      | Stateful applications
DaemonSet     | Green       | Node-level services

Pod Selection

When viewing a workload with multiple pods, you can:
  • View a single pod: Select one pod from the dropdown
  • View all pods: Select multiple pods to see combined logs
  • Identify log sources: Each log line shows which pod it came from

Container Selection

For pods with multiple containers:
  • Use the container dropdown to switch between containers
  • Select multiple containers to view combined logs
  • Init containers and sidecar containers are also available
Pod and container selection dropdowns

Streaming Controls

Play/Pause

Button | Description
Pause  | Stop receiving new log lines (existing logs remain visible)
Play   | Resume streaming new log lines

Auto-scroll

The log viewer automatically scrolls to show new logs as they arrive. When you scroll up to view older logs:
  • Auto-scroll disables immediately and your scroll position is preserved
  • New logs continue to arrive in the background without disturbing your view
  • A Scroll to bottom button appears at the bottom of the viewer
  • Click the button to jump to the latest logs and resume auto-scrolling
  • Auto-scroll only re-engages when you explicitly click the button or resume streaming
This behavior holds even under high-throughput streams where new lines arrive every few milliseconds. Your scroll position is never forced back to the bottom while you’re investigating.
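The auto-scroll rules above can be sketched as a small state machine. This is a hypothetical illustration of the described behavior, not Ankra’s actual implementation:

```python
class LogViewport:
    """Sketch of the auto-scroll rules: follow the stream until the user
    scrolls up; re-engage only on an explicit button click."""

    def __init__(self):
        self.lines = []          # all received log lines
        self.auto_scroll = True  # are we following the newest line?
        self.position = 0        # index of the line pinned at the bottom of view

    def on_new_line(self, line):
        self.lines.append(line)
        if self.auto_scroll:
            # Follow the stream: keep the viewport pinned to the newest line.
            self.position = len(self.lines) - 1
        # Otherwise the new line accumulates without disturbing the view.

    def on_scroll_up(self, to_index):
        # Any upward scroll disables auto-scroll immediately.
        self.auto_scroll = False
        self.position = to_index

    def on_scroll_to_bottom_clicked(self):
        # Auto-scroll only re-engages on an explicit user action.
        self.auto_scroll = True
        self.position = len(self.lines) - 1
```

The key design point is that incoming data never changes `position` while `auto_scroll` is off, which is what keeps your view stable during investigation.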

Load Older Logs

Scroll to the top of the log viewer to automatically load older logs. The viewer will:
  1. Fetch the previous time window of logs
  2. Prepend them to the current view
  3. Maintain your scroll position
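The three steps above amount to time-window pagination. A minimal sketch, where `fetch_window` stands in for a hypothetical backend call returning log lines between two timestamps (the five-minute window size is also an assumption for illustration):

```python
from datetime import datetime, timedelta

def load_older(lines, oldest_ts, fetch_window, window=timedelta(minutes=5)):
    """Fetch the previous time window and prepend it to the current view.

    `fetch_window(start, end)` is a hypothetical callable returning the
    log lines written between the two timestamps.
    """
    start = oldest_ts - window
    older = fetch_window(start, oldest_ts)  # previous time window
    return older + lines, start             # prepend; track the new oldest timestamp
```

Prepending rather than appending is what lets the viewer keep your scroll anchored on the line you were reading.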

Search

Log viewer with search and level filters

Use the search box to filter logs:
  • Type any text to filter log lines in real-time
  • Only matching lines are displayed; both existing and newly arriving logs are filtered
  • The match count is displayed
  • Clearing the search restores all log lines
Click the .* button to enable regex mode:
  • Use regular expressions for advanced filtering
  • Syntax errors are shown inline
  • Common patterns: error|warn, \d{3}, "status":\s*\d+
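The patterns listed above behave like standard regular expressions. Here is a quick illustration using Python’s `re` module; the sample lines and the case-insensitive matching are assumptions for the demo, and the viewer’s actual regex dialect may differ slightly:

```python
import re

lines = [
    'INFO  request handled',
    'WARN  slow query detected',
    'ERROR connection refused (code 503)',
    '{"status": 500, "msg": "upstream timeout"}',
]

# The three example patterns from the docs above.
for pattern in [r'error|warn', r'\d{3}', r'"status":\s*\d+']:
    rx = re.compile(pattern, re.IGNORECASE)
    matches = [line for line in lines if rx.search(line)]
    print(pattern, '->', len(matches), 'match(es)')
```

`error|warn` picks up the WARN and ERROR lines, `\d{3}` catches any three-digit run such as the 503 and 500 codes, and `"status":\s*\d+` isolates structured JSON entries.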

Log Level Filtering

Use the level filter buttons to show/hide logs by severity:
Level | Color  | Examples
Error | Red    | ERROR, FATAL, CRITICAL
Warn  | Yellow | WARN, WARNING
Info  | Blue   | INFO
Debug | Gray   | DEBUG, TRACE
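Conceptually, level filtering classifies each line by the severity keywords in the table above and keeps only the enabled levels. The sketch below is hypothetical; Ankra’s real detection logic isn’t documented here:

```python
# Keyword groups from the level table above; checked in priority order.
LEVEL_KEYWORDS = {
    "error": ("ERROR", "FATAL", "CRITICAL"),
    "warn":  ("WARN", "WARNING"),
    "info":  ("INFO",),
    "debug": ("DEBUG", "TRACE"),
}

def classify(line):
    """Return the first severity level whose keyword appears in the line."""
    upper = line.upper()
    for level, keywords in LEVEL_KEYWORDS.items():
        if any(keyword in upper for keyword in keywords):
            return level
    return "unknown"

def apply_level_filter(lines, enabled_levels):
    """Keep only lines whose detected level is enabled."""
    return [line for line in lines if classify(line) in enabled_levels]
```

For example, `apply_level_filter(lines, {"error", "warn"})` would hide INFO and DEBUG noise while investigating a failure.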

Display Settings

Click the Settings button (gear icon) to customize the log viewer.
Log viewer settings panel

Tail Lines

Choose how many recent log lines to fetch when connecting:
  • 100 lines (default)
  • 500 lines
  • 1000 lines
  • 5000 lines
  • All available logs

Font Size

Adjust the log text size:
  • 10px to 20px range
  • Use the slider or +/- buttons

Line Wrapping

Toggle line wrapping on/off:
  • On: Long lines wrap to multiple lines
  • Off: Long lines scroll horizontally

Color Levels

Toggle syntax highlighting for log levels:
  • On: Log levels are color-coded
  • Off: Plain text display

Timestamps

Toggle timestamp display:
  • On: Show Kubernetes timestamps at the start of each line
  • Off: Hide timestamps for cleaner output

Fullscreen Mode

Click the Fullscreen button (expand icon) to enter fullscreen mode:
  • Log viewer fills the entire screen
  • All controls remain accessible
  • Press Escape or click the button again to exit

Downloading Logs

Click the Download button to save logs to a file:
  • Logs are saved as a .txt file
  • Filename includes pod name and timestamp
  • All currently loaded logs are included

Offline Clusters & Cached Data

Log streaming requires an active connection to your cluster through the Ankra agent.

When the Cluster is Online

  • Logs stream in real-time from the Kubernetes API
  • New log lines appear as they are written
  • You can load older logs by scrolling up

When the Cluster Goes Offline

  • Existing logs remain visible: Logs already loaded in the viewer stay accessible
  • New logs cannot be fetched: The stream pauses until the connection is restored
  • A warning banner appears: Indicates the cluster is offline and data may be stale

Reconnection

When the cluster comes back online:
  • The log stream automatically reconnects
  • New logs resume streaming
  • You may need to refresh to clear cached state
If you navigate away from the Logs page while the cluster is offline, you will not be able to view logs again until the connection is restored.

AI-Assisted Troubleshooting

Ankra’s AI assistant can analyze logs to help you diagnose issues faster.

Using AI with Logs

  1. Navigate to the pod experiencing issues
  2. Click the Troubleshoot button in the pod details
  3. The AI analyzes recent logs, events, and resource status
  4. You receive actionable insights and suggested fixes

What AI Analyzes

The AI assistant examines:
  • Error patterns: Identifies recurring errors and their root causes
  • Stack traces: Parses exception details and suggests fixes
  • Resource issues: Detects OOM kills, CPU throttling, and probe failures
  • Configuration problems: Spots misconfigured environment variables or mounts

Best Practices

Provide context: When chatting with AI, mention what you’ve already tried and any recent changes to help it give more targeted advice.
For more details on AI-powered debugging, see Kubernetes AI Troubleshooting.

Performance Tips

The log viewer uses virtualized rendering and batched updates (150ms flush interval) to handle high-volume output efficiently. Here are additional strategies:
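The batching idea is straightforward: buffer incoming lines and flush them to the renderer at a fixed interval rather than triggering a render per line. A minimal sketch; the 150ms figure comes from the paragraph above, while the class and method names are illustrative:

```python
import time

class BatchedLogBuffer:
    """Buffer incoming log lines; flush to the renderer at most once per interval."""

    def __init__(self, flush_interval=0.150):
        self.flush_interval = flush_interval
        self.pending = []    # lines received since the last flush
        self.rendered = []   # lines already handed to the renderer
        self.last_flush = time.monotonic()

    def push(self, line):
        # Incoming lines are buffered, not rendered one by one.
        self.pending.append(line)
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        # One render pass per interval, regardless of how many lines arrived.
        self.rendered.extend(self.pending)
        self.pending.clear()
        self.last_flush = time.monotonic()
```

Under a burst of thousands of lines, this caps render work at roughly one pass per 150ms instead of one per line.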

Reduce Initial Load

  • Use smaller tail lines: Start with 100-500 lines instead of “All” for busy workloads
  • Select specific containers: Avoid streaming all containers if you only need one

Filter Aggressively

  • Apply log level filters: Hide DEBUG/INFO logs when investigating errors
  • Use search early: Filters apply in real-time to both existing and incoming logs, reducing visual noise immediately
  • Enable regex for precision: Narrow results with patterns like error|exception|failed

Manage the Stream

  • Scroll up freely: Your position is preserved even during high-throughput streams; new logs accumulate in the background
  • Pause when needed: Pause streaming to stop new data from arriving entirely
  • Download for analysis: Export logs to a file for grep, analysis tools, or sharing
  • Clear when needed: Clear accumulated logs and start fresh if the buffer grows very large

Multi-Pod Considerations

When viewing logs from multiple pods:
  • Start with one pod, then add more as needed
  • High-traffic pods can overwhelm the combined view
  • Consider using log level filters to reduce cross-pod noise

Large Clusters

For clusters with hundreds of pods:
  • Pods are loaded on-demand when you select a workload, so the initial page load stays fast regardless of cluster size
  • Pod-to-workload matching uses Kubernetes label selectors for accurate results
  • There is no hard cap on the number of pods; workloads with hundreds of replicas are fully supported
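The equality-based part of label-selector matching is a subset test: a pod matches when every key/value pair in the workload’s selector appears in the pod’s labels. A sketch (set-based `matchExpressions` are omitted, and the pod names and labels are made up for the demo):

```python
def matches_selector(pod_labels, match_labels):
    """Equality-based Kubernetes label selector: every selector pair
    must be present, with the same value, in the pod's labels."""
    return all(pod_labels.get(key) == value for key, value in match_labels.items())

pods = [
    {"name": "api-7d4f8", "labels": {"app": "api", "tier": "backend"}},
    {"name": "web-66c9b", "labels": {"app": "web", "tier": "frontend"}},
]
selector = {"app": "api"}  # e.g. a Deployment's spec.selector.matchLabels

matched = [pod["name"] for pod in pods if matches_selector(pod["labels"], selector)]
print(matched)  # ['api-7d4f8']
```

Note the direction of the test: pods may carry extra labels beyond the selector and still match.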

Common Tasks

Debugging a Crashing Pod

  1. Navigate to Logs in the sidebar
  2. Select the workload containing the failing pod
  3. Look for ERROR or FATAL messages
  4. Scroll up to see logs from before the crash
  5. Check previous container logs if the pod restarted

Monitoring Multiple Replicas

  1. Open Logs for your workload
  2. Select all pods in the pod dropdown
  3. Each log line shows which pod it came from
  4. Use search to filter across all pods

Finding Specific Errors

  1. Enter your search term in the filter box
  2. Enable regex mode for complex patterns
  3. Use log level filters to show only errors
  4. The match count helps you assess frequency

Tips

Use the Command Palette: Press ⌘+K and type “logs” to quickly jump to the Logs page.
Pause for Investigation: When debugging, pause the stream to prevent new logs from pushing important lines out of view.
Regex for JSON Logs: Use patterns like "error":\s*true or "status":\s*[45]\d\d to find specific JSON log entries.
Download for Sharing: Download logs before sharing with team members or attaching to issue reports.
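To make the JSON-log tip concrete, here are those two patterns applied with Python’s `re` module; the sample log lines are invented for illustration:

```python
import re

json_lines = [
    '{"level": "info", "status": 200, "path": "/health"}',
    '{"level": "error", "status": 502, "error": true}',
    '{"level": "warn", "status": 404}',
]

error_flag = re.compile(r'"error":\s*true')       # entries flagged as errors
http_errors = re.compile(r'"status":\s*[45]\d\d') # 4xx/5xx status codes

print([line for line in json_lines if error_flag.search(line)])
print([line for line in json_lines if http_errors.search(line)])
```

The first pattern matches only the line with `"error": true`; the second matches both the 502 and the 404 entries, since `[45]\d\d` covers any three-digit code starting with 4 or 5.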

Keyboard Shortcuts

Shortcut     | Action
⌘+K → “logs” | Open Logs page
Escape       | Exit fullscreen mode

Still have questions? Join our Slack community and we’ll help out.