The Logs feature provides real-time log streaming and analysis for all your Kubernetes workloads, helping you debug issues and monitor application behavior.
Overview
Ankra’s Logs feature allows you to:
- Stream real-time logs from any workload in your cluster
- View logs from multiple pods simultaneously
- Filter and search within log output
- Switch between containers in multi-container pods
- Customize display with font size, line wrapping, and color highlighting
Accessing Logs
There are two ways to access logs in Ankra:
Cluster Logs
Navigate to Logs in the cluster sidebar to access the cluster-wide log viewer. This allows you to:
- Select any workload (Deployment, StatefulSet, or DaemonSet)
- View logs from all pods belonging to that workload
- Switch between pods without leaving the page
Pod Logs
Navigate to Kubernetes → Pods, click on a pod, and select the Logs tab to view logs for that specific pod.
Log Viewer Features
Workload Selection
In the Cluster Logs view, use the workload selector to choose which application’s logs to view:
| Workload Type | Badge Color | Description |
|---|---|---|
| Deployment | Blue | Stateless applications |
| StatefulSet | Purple | Stateful applications |
| DaemonSet | Green | Node-level services |
Pod Selection
When viewing a workload with multiple pods, you can:
- View a single pod - Select one pod from the dropdown
- View all pods - Select multiple pods to see combined logs
- Identify log sources - Each log line shows which pod it came from
Container Selection
For pods with multiple containers:
- Use the container dropdown to switch between containers
- Select multiple containers to view combined logs
- Init containers and sidecar containers are also available
Streaming Controls
Play/Pause
| Button | Description |
|---|---|
| Pause | Stop receiving new log lines (existing logs remain visible) |
| Play | Resume streaming new log lines |
The log viewer automatically scrolls to show new logs. When you scroll up to view older logs:
- Auto-scroll pauses automatically
- A Follow button appears at the bottom
- Click Follow to resume auto-scrolling
Load Older Logs
Scroll to the top of the log viewer to automatically load older logs. The viewer will:
- Fetch the previous time window of logs
- Prepend them to the current view
- Maintain your scroll position
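If you want the same behavior outside the UI, a minimal sketch with the Kubernetes Python client is shown below; the pod name, namespace, and one-hour window are placeholder assumptions, and Ankra's own fetching logic may differ.

```python
from kubernetes import client, config

# Minimal sketch: widen the time window and prepend what wasn't already shown.
# Pod name and namespace are placeholders; assumes a local kubeconfig.
config.load_kube_config()
v1 = client.CoreV1Api()

older_window = v1.read_namespaced_pod_log(
    name="web-abc123",
    namespace="default",
    since_seconds=3600,  # widen this window to reach further back in time
    timestamps=True,
)
# Keep only the lines older than the earliest line currently on screen,
# then prepend them to the view.
```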
Filtering & Search
Text Search
Use the search box to filter logs:
- Type any text to filter log lines
- Matching lines are highlighted
- Match count is displayed
Regex Search
Click the .* button to enable regex mode:
- Use regular expressions for advanced filtering
- Syntax errors are shown inline
- Common patterns (see the sketch below): error|warn, \d{3}, "status":\s*\d+
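To make those patterns concrete, here is a small, hypothetical Python sketch of regex-based filtering; the sample lines are invented, and the viewer's own matching rules may differ in detail.

```python
import re

# Invented sample log lines; the patterns mirror the examples above.
lines = [
    '2024-05-01T12:00:00Z WARN retrying request',
    '2024-05-01T12:00:01Z ERROR upstream returned 503',
    '{"level": "info", "status": 200, "msg": "ok"}',
]

pattern = re.compile(r"error|warn", re.IGNORECASE)  # also try r"\d{3}" or r'"status":\s*\d+'
matches = [line for line in lines if pattern.search(line)]

print(f"{len(matches)} matching lines")  # analogous to the match count in the viewer
for line in matches:
    print(line)
```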
Log Level Filtering
Use the level filter buttons to show/hide logs by severity:
| Level | Color | Examples |
|---|---|---|
| Error | Red | ERROR, FATAL, CRITICAL |
| Warn | Yellow | WARN, WARNING |
| Info | Blue | INFO |
| Debug | Gray | DEBUG, TRACE |
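As a rough illustration of keyword-based severity detection (an assumption for clarity, not Ankra's actual classification code), the following sketch maps lines to the levels in the table above:

```python
import re

# Assumed keyword-based classification; not Ankra's actual implementation.
LEVEL_PATTERNS = {
    "error": re.compile(r"\b(ERROR|FATAL|CRITICAL)\b"),
    "warn": re.compile(r"\b(WARN|WARNING)\b"),
    "info": re.compile(r"\bINFO\b"),
    "debug": re.compile(r"\b(DEBUG|TRACE)\b"),
}

def classify(line: str) -> str:
    """Return the first severity whose keywords appear in the line."""
    for level, pattern in LEVEL_PATTERNS.items():
        if pattern.search(line):
            return level
    return "unknown"

print(classify("2024-05-01 ERROR connection refused"))  # error
print(classify("2024-05-01 DEBUG cache hit"))           # debug
```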
Display Settings
Click the Settings button (gear icon) to customize the log viewer:
Tail Lines
Choose how many recent log lines to fetch when connecting:
- 100 lines (default)
- 500 lines
- 1000 lines
- 5000 lines
- All available logs
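If you fetch logs outside the UI, the same idea maps onto the tail_lines parameter of the Kubernetes Python client; the pod name and namespace below are placeholders, and a local kubeconfig is assumed.

```python
from kubernetes import client, config

# Placeholder pod/namespace; mirrors the tail-lines setting described above.
config.load_kube_config()
v1 = client.CoreV1Api()

recent = v1.read_namespaced_pod_log(
    name="web-abc123",
    namespace="default",
    tail_lines=100,  # 100 / 500 / 1000 / 5000; omit to fetch all available logs
)
print(recent)
```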
Font Size
Adjust the log text size:
- 10px - 20px range
- Use the slider or +/- buttons
Line Wrapping
Toggle line wrapping on/off:
- On - Long lines wrap to multiple lines
- Off - Long lines scroll horizontally
Color Levels
Toggle syntax highlighting for log levels:
- On - Log levels are color-coded
- Off - Plain text display
Timestamps
Toggle timestamp display:
- On - Show Kubernetes timestamps at the start of each line
- Off - Hide timestamps for cleaner output
Fullscreen Mode
Click the Fullscreen button (expand icon) to enter fullscreen mode:
- Log viewer fills the entire screen
- All controls remain accessible
- Press Escape or click the button again to exit
Downloading Logs
Click the Download button to save logs to a file:
- Logs are saved as a .txt file
- Filename includes pod name and timestamp
- All currently loaded logs are included
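Because the download is plain text, it is easy to post-process. Here is a small, hypothetical example; the filename is an assumption based on the pod-name-plus-timestamp convention described above.

```python
from collections import Counter
from pathlib import Path

# Hypothetical filename following the pod-name + timestamp convention.
log_file = Path("web-abc123-2024-05-01T12-00-00.txt")

counts = Counter()
for line in log_file.read_text().splitlines():
    for level in ("ERROR", "WARN", "INFO", "DEBUG"):
        if level in line:
            counts[level] += 1
            break

print(counts.most_common())  # e.g. [('INFO', 950), ('WARN', 40), ('ERROR', 10)]
```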
Offline Clusters & Cached Data
Log streaming requires an active connection to your cluster through the Ankra agent.
When the Cluster is Online
- Logs stream in real-time from the Kubernetes API
- New log lines appear as they are written
- You can load older logs by scrolling up
When the Cluster Goes Offline
- Existing logs remain visible - Logs already loaded in the viewer stay accessible
- New logs cannot be fetched - The stream pauses until connection is restored
- A warning banner appears - Indicates the cluster is offline and data may be stale
Reconnection
When the cluster comes back online:
- The log stream automatically reconnects
- New logs resume streaming
- You may need to refresh to clear cached state
If you navigate away from the Logs page while the cluster is offline, you will not be able to view logs again until the connection is restored.
AI-Assisted Troubleshooting
Ankra’s AI assistant can analyze logs to help you diagnose issues faster.
Using AI with Logs
- Navigate to the pod experiencing issues
- Click the Troubleshoot button in the pod details
- The AI analyzes recent logs, events, and resource status
- You receive actionable insights and suggested fixes
What AI Analyzes
The AI assistant examines:
- Error patterns - Identifies recurring errors and their root causes
- Stack traces - Parses exception details and suggests fixes
- Resource issues - Detects OOM kills, CPU throttling, and probe failures
- Configuration problems - Spots misconfigured environment variables or mounts
Best Practices
Provide context: When chatting with AI, mention what you’ve already tried and any recent changes to help it give more targeted advice.
For more details on AI-powered debugging, see Kubernetes AI Troubleshooting.
Handling High-Volume Logs
When working with high-volume log output, use these strategies to keep the viewer responsive:
Reduce Initial Load
- Use smaller tail lines - Start with 100-500 lines instead of “All” for busy workloads
- Select specific containers - Avoid streaming all containers if you only need one
Filter Aggressively
- Apply log level filters - Hide DEBUG/INFO logs when investigating errors
- Use search early - Filter to relevant keywords before logs accumulate
- Enable regex for precision - Narrow results with patterns like error|exception|failed
Manage the Stream
- Pause when investigating - Stop new logs from pushing important lines out of view
- Download for analysis - Export logs to a file for grep, analysis tools, or sharing
- Refresh periodically - Clear accumulated logs and start fresh if the viewer slows down
Multi-Pod Considerations
When viewing logs from multiple pods:
- Start with one pod, then add more as needed
- High-traffic pods can overwhelm the combined view
- Consider using log level filters to reduce cross-pod noise
The log viewer maintains a buffer of recent lines. Very high-volume workloads may cause older lines to be discarded to maintain performance.
Common Tasks
Debugging a Crashing Pod
- Navigate to Logs in the sidebar
- Select the workload containing the failing pod
- Look for ERROR or FATAL messages
- Scroll up to see logs from before the crash
- Check previous container logs if the pod restarted (see the sketch below)
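The last step has a direct counterpart in the Kubernetes API. As a hedged sketch with the Python client (pod name and namespace are placeholders, and a local kubeconfig is assumed), the crashed container's output is fetched with previous=True:

```python
from kubernetes import client, config

# Placeholder pod/namespace; fetch logs from the container instance that crashed.
config.load_kube_config()
v1 = client.CoreV1Api()

crash_logs = v1.read_namespaced_pod_log(
    name="web-abc123",
    namespace="default",
    previous=True,   # logs from the previous (terminated) container
    tail_lines=500,
)
print(crash_logs)
```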
Monitoring Multiple Replicas
- Open Logs for your workload
- Select all pods in the pod dropdown
- Each log line shows which pod it came from
- Use search to filter across all pods
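Outside the UI, a comparable combined view can be approximated with the Kubernetes Python client by listing the workload's pods with a label selector and prefixing each line with its pod name; the namespace and label below are placeholder assumptions.

```python
from kubernetes import client, config

# Placeholder namespace and label selector for the workload's pods.
config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(namespace="default", label_selector="app=web")
for pod in pods.items:
    logs = v1.read_namespaced_pod_log(
        name=pod.metadata.name, namespace="default", tail_lines=50
    )
    for line in logs.splitlines():
        print(f"[{pod.metadata.name}] {line}")  # prefix shows which pod a line came from
```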
Finding Specific Errors
- Enter your search term in the filter box
- Enable regex mode for complex patterns
- Use log level filters to show only errors
- The match count helps you assess frequency
Tips
Use the Command Palette: Press ⌘+K and type “logs” to quickly jump to the Logs page.
Pause for Investigation: When debugging, pause the stream to prevent new logs from pushing important lines out of view.
Regex for JSON Logs: Use patterns like "error":\s*true or "status":\s*[45]\d\d to find specific JSON log entries.
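As a quick, hypothetical illustration of those JSON patterns (the sample lines are invented):

```python
import re

# Invented JSON log lines to illustrate the patterns in the tip above.
lines = [
    '{"level": "info", "status": 200, "msg": "ok"}',
    '{"level": "error", "error": true, "status": 502, "msg": "bad gateway"}',
]

server_errors = re.compile(r'"status":\s*[45]\d\d')
flagged = re.compile(r'"error":\s*true')

for line in lines:
    if server_errors.search(line) or flagged.search(line):
        print(line)
```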
Download for Sharing: Download logs before sharing with team members or attaching to issue reports.
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| ⌘+K → “logs” | Open Logs page |
| Escape | Exit fullscreen mode |
Still have questions? Join our Slack community and we’ll help out.