Help & Guides
Features
A quick overview of what Cova can do. Each feature links to a detailed step-by-step guide below.
Monitor Scan
Analyze your monitoring tool configurations and score your coverage.
Repo Scan
Scan your codebase for endpoints, databases, and services that need monitoring.
Incident Autopilot
AI-powered incident commander that investigates production problems across your stack.
PR Guard
Auto-scan pull requests for new infrastructure that needs monitoring coverage.
Deploy Monitor
Push AI-generated monitor configs directly to Datadog with one click.
Ask Cova
AI chat that understands your monitoring setup and codebase architecture. Ask about gaps, incidents, on-call policies, and more.
Monitor Scan
What is it?
Monitor Scan connects to your monitoring tools (PagerDuty, Datadog, Grafana, Sentry, New Relic, Sumo Logic, Splunk), pulls your live configuration, and runs a rule-based analysis that scores your monitoring coverage across multiple dimensions. When AI is enabled, it also generates prioritized recommendations and natural-language summaries.
What it checks
- Alert coverage - Are your services and infrastructure covered by alert conditions?
- Notification routing - Do alerts reach the right people via the right channels?
- Escalation policies - Are there proper escalation paths and timeouts?
- Dashboard health - Are dashboards organized and maintained?
- Error tracking - Are errors captured and triaged?
- Release tracking - Are deployments instrumented for observability?
What you get
- Per-dimension coverage scores (0-100) with penalty breakdowns
- Prioritized findings by severity (critical, warning, info)
- Specific "fix first" recommendations per area
- AI-powered recommendations for areas scoring below 80%
- Natural-language executive summary
- Exportable HTML report with full findings
- One-click Deploy Monitor to push generated configs directly to Datadog PRO
- Deployed Monitors tab tracking all monitors pushed to Datadog via Cova
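Cova doesn't publish the exact formula behind its per-dimension scores, but the penalty breakdowns suggest the general shape: a base coverage percentage reduced by severity-weighted penalties. A minimal sketch, where the severity weights are illustrative assumptions rather than Cova's actual values:

```python
# Hypothetical penalty-based scoring -- weights are illustrative, not Cova's.
SEVERITY_PENALTY = {"critical": 15, "warning": 5, "info": 1}

def dimension_score(covered: int, total: int, findings: list[str]) -> int:
    """Base coverage percentage minus per-finding severity penalties, clamped to 0-100."""
    base = 100.0 * covered / total if total else 0.0
    penalty = sum(SEVERITY_PENALTY.get(sev, 0) for sev in findings)
    return max(0, min(100, round(base - penalty)))
```

Under this sketch, a dimension with 8 of 10 services covered plus one critical and one warning finding would score 60, which is why a dimension can sit below 100% even with full coverage.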
Requirements
- At least one monitoring tool connected
- API key/token for each tool (see Scopes & Permissions for required access levels)
- Pro plan required for AI-powered features PRO
Repo Scan
What is it?
Repo Scan analyzes your actual codebase to discover endpoints, databases, message queues, and services - then cross-references them against your connected monitoring tools to identify what should be monitored but isn't. It bridges the gap between "what's configured" (Monitor Scan) and "what exists in code" (Repo Scan).
Three ways to scan
- Upload - Drag and drop a ZIP or TAR.GZ of your repo
- GitHub - Select a repo and branch from your connected GitHub account
- GitLab - Select a project and branch from your connected GitLab account
What it finds
- API endpoints - REST routes across frameworks (FastAPI, Express, Spring Boot, NestJS, Go, etc.)
- Databases - PostgreSQL, MySQL, MongoDB, Redis, Elasticsearch, DynamoDB + ORM usage
- Message queues - RabbitMQ, Kafka, Celery, Bull, SQS, Pub/Sub, NATS
- Services - Docker Compose services, Dockerfiles, Kubernetes deployments
- Architecture patterns - Framework detection, service boundaries, infrastructure dependencies
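Detection is pattern-based rather than a full compile of your code. As an illustration only (not Cova's actual engine), route discovery for a couple of frameworks can be sketched with regular expressions:

```python
import re

# Illustrative patterns -- a real engine covers many more frameworks
# and edge cases (routers, f-string paths, multi-line decorators, etc.).
ROUTE_PATTERNS = [
    re.compile(r'@app\.(get|post|put|delete)\(\s*"([^"]+)"'),              # FastAPI-style decorator
    re.compile(r"\bapp\.(get|post|put|delete)\(\s*['\"]([^'\"]+)['\"]"),   # Express-style call
]

def find_endpoints(source: str) -> list[tuple[str, str]]:
    """Return deduplicated (METHOD, path) pairs detected in source text."""
    found = set()
    for pattern in ROUTE_PATTERNS:
        for match in pattern.finditer(source):
            found.add((match.group(1).upper(), match.group(2)))
    return sorted(found)
```

Running this over a diff instead of the whole tree is essentially what the PR Guard variant of the engine does.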
What you get
- Architecture summary with detected frameworks and languages
- Full inventory of endpoints, databases, queues, and services
- Monitoring gap analysis cross-referenced with your connected tools
- AI-generated monitoring recommendations specific to your stack
- Architecture context added to Ask Cova chat
Requirements
- A repository (uploaded, GitHub, or GitLab)
- Pro plan required for AI recommendations PRO
- GitHub/GitLab scan requires connected SCM account
Incident Autopilot
What is it?
Incident Autopilot is an AI-powered incident commander. Describe what's happening ("API latency spike on checkout service" or "users reporting 500 errors"), and it generates a structured investigation playbook pulling real-time data from all your connected tools - what to check first, which dashboards to open, who's on-call, and the blast radius.
What you get
- Investigation timeline - Ordered steps based on your specific incident description
- Cross-tool correlation - Pulls live data from PagerDuty, Datadog, Grafana, Sentry, New Relic simultaneously
- On-call identification - Who's on-call right now via PagerDuty
- Blast radius assessment - Which services and teams are affected
- Runbook-style output - Copy-pasteable investigation steps and dashboard links
- Context-aware - If you've run a repo scan, it factors in your architecture
Data sources
- PagerDuty - recent incidents, on-call schedules, escalation policies
- Datadog - monitors, dashboards, service catalog
- Grafana - alert rules, dashboards, contact points
- Sentry - error events, issue trends, releases
- New Relic - entities, alert conditions, violations
- Sumo Logic - active monitors, connections
- Splunk - fired alerts from saved searches
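Because each tool is queried live and "simultaneously," the fan-out is naturally parallel, and one slow or failing integration shouldn't block the rest of the investigation. A hedged sketch of that pattern (function and field names are assumptions, not Cova internals):

```python
from concurrent.futures import ThreadPoolExecutor

def gather_context(fetchers: dict) -> dict:
    """Query every connected tool in parallel; keep partial results on failure."""
    results = {}
    with ThreadPoolExecutor(max_workers=len(fetchers) or 1) as pool:
        futures = {name: pool.submit(fn) for name, fn in fetchers.items()}
        for name, future in futures.items():
            try:
                results[name] = future.result(timeout=30)
            except Exception as exc:  # a failed tool becomes an error entry, not a crash
                results[name] = {"error": str(exc)}
    return results
```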
Requirements
- Pro plan required PRO
- At least one monitoring tool connected (more tools = richer investigations)
- Run a monitor analysis first for best context
PR Observability Guard
What is it?
PR Guard automatically scans every pull request for new endpoints, databases, services, and message queues that need monitoring. It posts a comment directly on the PR with risk-scored findings, coverage status, and suggested monitor configurations - before the code gets merged. No AI activation required.
How it works
- GitHub/GitLab webhook fires when a PR is opened or updated
- Cova analyzes the diff using pattern matching (same engine as Repo Scan)
- Each detected endpoint gets a risk score (critical, high, medium, low) with a business-impact reason
- A comment is posted (or updated) on the PR with findings, coverage gaps, and monitor configs
- Comment is deduplicated - one Cova comment per PR, updated on new commits
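The dedupe step can be pictured as an upsert keyed on a hidden marker embedded in the comment body. A sketch, where the marker string is a hypothetical placeholder (the real one isn't documented):

```python
# Hypothetical hidden marker used to recognize Cova's own comment on a PR.
COVA_MARKER = "<!-- cova-pr-guard -->"

def plan_comment(existing_comments: list, report: str):
    """Return ('update', id, body) if a Cova comment already exists on the PR,
    otherwise ('create', None, body) -- keeping exactly one comment per PR."""
    body = f"{COVA_MARKER}\n{report}"
    for comment in existing_comments:
        if COVA_MARKER in comment.get("body", ""):
            return ("update", comment["id"], body)
    return ("create", None, body)
```

On each new commit the webhook re-fires, the diff is re-scanned, and the same comment is updated in place.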
What it detects
- New endpoints - REST routes across 10+ frameworks
- New databases - Connection strings, ORM models, client initializations
- New queues - Message broker producers and consumers
- New services - Docker, Kubernetes, infrastructure definitions
Coverage status
PR Guard checks your connected tools and reports coverage gaps in three categories:
- HTTP monitoring - Datadog, Grafana, New Relic
- Alerting - PagerDuty, Datadog, Grafana, New Relic
- Error tracking - Sentry, New Relic
Requirements
- GitHub App installed with pull_requests:write permission, or GitLab webhook configured
- PR Guard enabled in Cova's sidebar settings
- Monitored repositories selected (all or specific repos)
Deploy Monitor
What is it?
Deploy Monitor lets you push AI-generated monitor configurations directly to your Datadog account from within Cova - no copy-pasting or manual setup required. When you click Generate Fix on a coverage gap or quality issue, Cova produces a ready-to-deploy config and gives you a one-click button to create or update monitors in Datadog.
How it works
- Generate Fix on any finding to produce one or more monitor configs
- Each config shows a NEW badge (will create a new monitor) or UPDATE badge (will improve an existing monitor)
- For multi-service gaps, Cova generates configs for each missing service in a single batch
- Click Deploy Monitor to push to Datadog. A confirmation dialog shows exactly what will be created or updated before anything happens
- After deployment, a direct View in Datadog link appears, and the monitor is tracked in your Deployed Monitors tab
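The NEW/UPDATE decision can be sketched as a lookup against your existing Datadog monitors. Matching on monitor name here is an assumption for illustration; the example payload follows the field names of Datadog's Monitor API, with the service and query values invented:

```python
# Sketch only -- Cova's actual matching heuristic is not documented.
def plan_deploy(config: dict, existing_monitors: list):
    """Return ('UPDATE', id) if a monitor with the same name exists, else ('NEW', None)."""
    for monitor in existing_monitors:
        if monitor["name"] == config["name"]:
            return ("UPDATE", monitor["id"])
    return ("NEW", None)

# Shape of a Datadog metric-monitor payload (field names per the Monitor API).
generated_config = {
    "name": "High error rate on checkout-service",
    "type": "query alert",
    "query": "avg(last_5m):sum:trace.servlet.request.errors{service:checkout-service}.as_rate() > 5",
    "message": "Error rate elevated on checkout. @pagerduty-checkout",
    "options": {"thresholds": {"critical": 5.0}},
}
```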
What it supports
- Regular Datadog monitors (metric, query, composite, log, etc.)
- Synthetic API tests (HTTP checks, SSL, DNS)
- Smart update detection - updates existing monitors instead of creating duplicates
- Multi-region support (US1, US3, US5, EU, AP1)
- 3-level config sanitization for API compatibility
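Multi-region support comes down to resolving the right Datadog API host for your site before any request is made. The site-to-host mapping below uses Datadog's published domains; the helper function itself is illustrative:

```python
# Datadog API hosts per site (these are Datadog's published domains).
DATADOG_SITES = {
    "US1": "api.datadoghq.com",
    "US3": "api.us3.datadoghq.com",
    "US5": "api.us5.datadoghq.com",
    "EU": "api.datadoghq.eu",
    "AP1": "api.ap1.datadoghq.com",
}

def monitor_endpoint(region: str) -> str:
    """Resolve the monitor-creation URL for the selected Datadog region."""
    return f"https://{DATADOG_SITES[region]}/api/v1/monitor"
```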
Requirements
- Datadog connected with an API key that has write permissions
- Pro plan (or admin-granted access) PRO
- Generate Fix requires Pro plan PRO
Ask Cova
What is it?
Ask Cova is an AI chat assistant that understands your monitoring configuration and (if scanned) your codebase architecture. It helps with incident triage, monitoring improvements, on-call questions, and architecture-aware troubleshooting.
What you can ask
- Risk assessment - "What are my biggest monitoring gaps?"
- Escalation review - "Do my alerts reach the right people?"
- Coverage gaps - "Which services have no alert coverage?"
- Quick fixes - "What should I fix first to improve my score?"
- Architecture questions - "How is authentication handled in my codebase?" (requires repo scan)
- Tool-specific questions - "How do I set up a Datadog monitor for this endpoint?"
Context awareness
- Knows your connected tools and their configurations
- References your actual monitoring data (alert rules, dashboards, services)
- If you've run a repo scan, it understands your architecture (endpoints, databases, frameworks)
- Quick prompt categories adapt based on what tools and scans are available
When is it available?
- Appears as a floating chat button after your first monitor analysis
- Available on all plans
- Richer answers when both monitor scan and repo scan have been run
Getting Started (First-Time Setup)
New to Cova? Follow these steps to go from zero to your first monitoring analysis in about five minutes.
Navigate to getcova.ai and sign up. You have four options:
- Email & password - create an account, then verify your email via a code sent to your inbox
- Google - one-click sign-in with your Google account
- GitHub - one-click sign-in with your GitHub account (recommended if you plan to use Repo Scan or PR Guard)
- Access code - quick access for demos and evaluations (data stored in browser only)
Click Integrations in the left sidebar. You'll see cards for each supported monitoring tool.
Click any tool card to expand it. Enter your API key or token, then click Connect. A green checkmark confirms the connection. See Connecting Tools for detailed credential instructions per tool.
Click the Run Analysis button in the sidebar. Cova will connect to your tools, pull configuration data, and evaluate your monitoring setup. This takes 15-60 seconds depending on data volume.
Your dashboard now shows a Health Score, individual findings with severity ratings, and coverage breakdowns. See Reading Your Dashboard for a detailed walkthrough.
Try Demo Mode
Want to see what Cova does before connecting your real tools? Demo mode loads sample data so you can explore every feature risk-free.
The purple Try Demo button is at the top of the sidebar, right below the Cova logo.
Cova plays through the same analysis animation you'd see with real tools, then loads sample results from all seven monitoring tools (PagerDuty, Datadog, Grafana, Sentry, New Relic, Sumo Logic, and Splunk).
After the animation, the Monitor Scan page loads with sample data. All pages are available with realistic demo content.
Purple dots next to nav items indicate demo-populated pages. The Investigate page comes pre-loaded with a sample incident investigation. You can click Run Analysis again to replay the animation.
Click the Exit Demo button in the purple banner at the top of the content area, or click the Exit Demo button in the sidebar (same location as Try Demo). Your real connections and data are restored exactly as they were.
Connecting Monitoring Tools
Cova analyzes your monitoring tools by connecting to their APIs. Each tool needs a specific credential. All credentials are encrypted at rest using AES-128 encryption.
Currently Supported Tools
More integrations are on the way. Today, Cova connects to:
| Tool | Credential Needed | Where to Find It |
|---|---|---|
| PagerDuty | REST API Key (read-only) | PagerDuty → Integrations → API Access Keys → Create New API Key |
| Datadog | API Key + Application Key + Site/Region (includes write scopes for Deploy Monitor) | Datadog → Organization Settings → API Keys / Application Keys. Select your Datadog site from the dropdown (US1, US3, US5, EU, or AP1). See Scopes & Permissions |
| Grafana | Service Account Token (Viewer) + Instance URL | Grafana → Administration → Service Accounts → Add Token |
| Sentry | Auth Token + Organization Slug | Sentry → Settings → Developer Settings → Custom Integrations (scopes: project:read, org:read, event:read, alerts:read) |
| New Relic | User API Key (NRAK-xxx) + Account ID | New Relic → User menu → API Keys → Create a key (type: User) |
| Sumo Logic | Access ID + Access Key + Region | Sumo Logic → Administration → Security → Access Keys → Add Access Key |
| Splunk | Auth Token + Instance URL | Splunk → Settings → Tokens → New Token. Instance URL is your Splunk management endpoint (default port 8089, e.g. https://splunk.example.com:8089) |
Coming Soon
The following integration is in development. Click the "Coming Soon" tool card in the app to request early access.
Dynatrace - Software intelligence
How to Connect
Click Integrations in the left sidebar. You'll see cards for each supported monitoring tool.
Click a tool card to expand it and reveal input fields for that tool's credentials.
Cova validates the credentials by making a test API call. If successful, a green checkmark appears and the card shows "Connected." If validation fails, you'll see an error message - double-check your key and try again.
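A "test API call" is typically the cheapest read-only request each vendor offers. A sketch of how such validation requests might be built (the endpoints and header formats shown are real vendor APIs; the function itself is an illustration, not Cova's code):

```python
def validation_request(tool: str, credentials: dict) -> dict:
    """Build a lightweight read-only request to verify credentials (sketch)."""
    if tool == "datadog":
        # Datadog exposes a dedicated key-validation endpoint.
        site = credentials.get("site", "datadoghq.com")
        return {"url": f"https://api.{site}/api/v1/validate",
                "headers": {"DD-API-KEY": credentials["api_key"]}}
    if tool == "pagerduty":
        # PagerDuty REST API uses 'Token token=...' authorization.
        return {"url": "https://api.pagerduty.com/users?limit=1",
                "headers": {"Authorization": f"Token token={credentials['api_key']}"}}
    raise ValueError(f"unsupported tool: {tool}")
```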
Disconnecting a Tool
To disconnect, expand the tool card and click Disconnect. An inline confirmation will appear - confirm to remove the credentials. You can reconnect at any time by entering new credentials.
Scopes & Permissions
Each tool requires specific API permissions for Cova to scan your configuration. Most tools only need read access. Datadog is the exception - write scopes are needed for Deploy Monitor (pushing AI-generated configs directly to Datadog).
| Tool | Required Permissions | What They Access | Why |
|---|---|---|---|
| PagerDuty | Read-only API Key | Services, incidents, escalation policies | Analyze alert routing and on-call coverage |
| Datadog | monitors_read, monitors_write, dashboards_read, metrics_read, events_read, synthetics_read, synthetics_write | Monitors, dashboards, metrics, events, synthetic tests | Read scopes for scanning; monitors_write and synthetics_write enable Deploy Monitor |
| Grafana | Service Account with Viewer role | Datasources, dashboards, alert rules | Read-only analysis of alerting setup |
| Sentry | project:read, org:read, event:read, alerts:read | Projects, org info, events, alert rules | Analyze error tracking and alert configuration |
| New Relic | User API Key (read access via NerdGraph) | Alert policies, conditions, notification channels, synthetics | Analyze alert coverage and notification routing |
| Sumo Logic | Access ID + Access Key | Monitors, connections, dashboards | Analyze log-based alerting and dashboard health |
| Splunk | Auth Token (read access) | Saved searches, alerts, dashboards | Analyze alert coverage and notification routing |
The only write scopes Cova uses are monitors_write and synthetics_write on Datadog. All other tools are read-only by default.
Running a Monitoring Analysis
Prerequisites
You need at least one monitoring tool connected (see Connecting Tools). The more tools you connect, the more comprehensive the analysis.
How to Run
Make sure you have at least one monitoring tool connected (see Connecting Tools). Once connected, the Run Analysis button appears in the sidebar.
The gradient button appears at the bottom of the sidebar when tools are connected. Clicking it starts the analysis pipeline.
You'll see step-by-step progress indicators as Cova works through each connected tool. Typical steps include: connecting to APIs, fetching configurations, analyzing patterns, and generating findings.
When complete, the dashboard populates with your Health Score, findings, and coverage breakdowns. If AI is enabled, you'll also see a narrative summary explaining the key takeaways.
How Long Does It Take?
| Scenario | Typical Duration |
|---|---|
| 1 tool, rule-based only | 15-30 seconds |
| 2-3 tools, rule-based only | 30-60 seconds |
| With AI summary enabled | Add ~10-20 seconds for AI processing |
If the Analysis Fails
- Invalid or expired credentials - Go to Settings and reconnect the tool with fresh credentials
- Rate limiting - Wait a minute and try again; Cova respects API rate limits
- Network timeout - The backend or the external API may be temporarily unavailable; retry shortly
Each analysis is saved to your History tab (within Monitor Scan), so you can always go back and compare results over time.
Reading Your Dashboard
After running an analysis, your dashboard displays a single scrollable view with everything you need. Here's what each section means.
Hero Row: Health Score + Coverage Score
The top of the dashboard shows two score rings side by side:
- Health Score (left) - Your overall monitoring health (0-100), reflecting the number and severity of issues found across connected tools.
- Coverage Score (right) - How complete your monitoring setup is (0-100). Measures how well each area (alerting, escalation, etc.) is covered.
The Health Score is color-coded:
| Score Range | Color | Label | Meaning |
|---|---|---|---|
| 0 - 39 | Red | Critical | Significant gaps in monitoring coverage or configuration |
| 40 - 64 | Orange | Needs Attention / Fair | Several areas need improvement, but there's a solid foundation to build on |
| 65 - 100 | Green | Fair / Good / Excellent | Well-configured monitoring setup with strong coverage |
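In code form, the banding above is a simple threshold check. A minimal sketch:

```python
def health_color(score: int) -> str:
    """Map a 0-100 health score to its color band, per the table above."""
    if score <= 39:
        return "red"
    if score <= 64:
        return "orange"
    return "green"
```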
AI Summary + Recent Trend
If AI is enabled, an AI-generated narrative summarizes your monitoring posture with per-tool breakdowns. If you've run multiple analyses, a score delta shows whether your health score improved or declined.
Coverage Dimension Cards (with Findings)
The main body of the dashboard groups everything by coverage dimension - sorted worst-first so you focus on the biggest gaps. Each card shows a scored area of your monitoring with a tool logo, label, score bar, and percentage. Findings are embedded directly inside their related coverage cards, so you see issues in context.
Coverage scores follow the same color-coding as health scores:
| Score Range | Color | Meaning |
|---|---|---|
| 0 - 39% | Red | Critical gaps - This area needs immediate attention |
| 40 - 69% | Orange | Needs improvement - Partial coverage with notable gaps |
| 70 - 100% | Green | Well covered - Good coverage, minor improvements possible |
Click a coverage card to expand it and see:
- Criteria - What Cova measures for this dimension
- AI recommendation - Specific guidance for improving this area (when AI is enabled)
- Issues - Related finding cards with severity, impact, action, status, and Generate Fix button
- Covered - Green-tagged list of items that have monitoring configured
- Missing - Red-tagged list of items that lack monitoring coverage
- Generate Fix - AI-powered button to create monitor configs for the gap. When multiple services are missing, generates one config per service. When quality warnings exist on covered monitors, generates improved configs for each warning
Score penalties: A coverage dimension can show less than 100% even when all services are covered. Quality issues on existing monitors (missing notification targets, no recovery thresholds, no tags) reduce the score. The card shows "N warnings reducing score" to explain the gap. Clicking Generate Fix on these produces targeted improvements for each warning.
Which coverage dimensions appear depends on which tools you have connected:
PagerDuty
- Escalation Routing - Do incidents follow a clear chain of escalation with backup responders?
- Alert Quality - Are escalation policies properly configured with multiple levels and reasonable timeouts?
Datadog
- Error Rate Monitoring - Are you alerted when services start returning errors above normal baselines?
- Latency / Performance - Are response times monitored with thresholds that catch degradation before users notice?
- Business Flow Coverage - Are critical user journeys monitored end-to-end with synthetic tests?
Grafana
- Alert Coverage - Do your datasources have alert rules configured to catch issues?
- Notification Routing - Are alerts routed to the right people through contact points and policies?
- Dashboard Health - Do your dashboards have panels configured and not sitting empty?
Sentry
- Issue Tracking - Are your projects sending events with active SDKs?
- Alert Configuration - Do your projects have alert rules to catch spikes and regressions?
- Release Tracking - Are your projects deploying with Sentry releases for regression detection?
New Relic
- Alert Coverage - Do your entities (APM, infrastructure, browser, mobile) have alert conditions?
- Notification Routing - Are alert policies routed to active notification destinations?
- Synthetic Monitoring - Are your synthetic monitors actively reporting?
Sumo Logic
- Monitor Coverage - Are your Sumo Logic monitors enabled and actively detecting issues?
- Notification Routing - Do your monitors have notification actions so alerts reach the right people?
- Collector Health - Are all collectors alive and ingesting data?
Splunk
- Alert Coverage - Are your scheduled saved searches configured with alerting conditions?
- Notification Routing - Do your alert-enabled searches have notification actions configured?
- Dashboard Health - Are user dashboards present and organized across your Splunk apps?
Each area shows a percentage score. Scores start from a base (how much is covered) and can be penalized by related issues - for example, having 10 escalation policies but 5 of them misconfigured will reduce your Alert Quality score. Click any area to expand it and see what's covered, what's missing, and which specific issues are affecting the score.
What Cova Checks Per Tool
When you run an analysis, Cova inspects your tool configurations and flags issues at three severity levels: Critical (things that will cause missed incidents), Warning (risks that weaken your response), and Info (improvements worth considering).
PagerDuty
Cova fetches your services, escalation policies, schedules, and on-call rotations. It checks for:
- Services missing an escalation policy (alerts go nowhere)
- Escalation policies with only one level (no backup if the first responder misses it)
- Schedules with only 1-2 people (burnout risk, single point of failure)
- Escalation delays over 30 minutes (too slow for critical incidents)
- Policy levels targeting pending/uninvited users
- Services with no integrations connected
- Nobody on-call for a policy in the next 7 days
Datadog
Cova fetches your monitors, SLOs, synthetics, and downtimes. It checks for:
- Monitors with no notification targets (alerts fire but nobody knows)
- Muted monitors that might be hiding real problems
- Monitors stuck in Alert or No Data state
- No monitors routed to an incident management tool (PagerDuty, OpsGenie, etc.)
- All notifications going through a single channel
- Missing SLOs (no formal reliability targets)
- Paused synthetic tests (user journeys going unmonitored)
- Excessive active downtimes creating monitoring blind spots
- Monitors missing tags, recovery thresholds, or re-alert settings
Grafana
Cova fetches your datasources, dashboards, alert rules, contact points, notification policies, and mute timings. It checks for:
- Datasources with no alert rules (data flowing in but nobody watching it)
- No alert rules or contact points configured at all
- Paused alert rules that should be active
- All contact points using the same notification type (no redundancy)
- Contact points not wired into notification policies
- Empty dashboards with no panels
- Dashboards with panels but no alert thresholds
- Excessive or unused mute timings
- Alert rules missing summary or description annotations
Sentry
Cova fetches your projects, unresolved issues, alert rules, and releases. It checks for:
- Projects with no recent events (broken or missing SDK)
- Projects with no alert rules configured
- Alert rules with no actions (fire silently)
- Projects with excessive unresolved issues (issue fatigue)
- Old critical issues that have never been resolved
- Projects with no recent releases (no regression detection)
- Projects using default alert rules only
New Relic
Cova queries NerdGraph to analyze entities, alert policies, NRQL conditions, notification destinations, and synthetic monitors. It checks for:
- Entities with no alert conditions (alertSeverity = NOT_CONFIGURED)
- Alert policies with no notification destinations (fire silently)
- Disabled NRQL alert conditions
- Entities stuck in CRITICAL alert state (stale alerts)
- Policies with only a single notification destination (no redundancy)
- Synthetic monitors not actively reporting
- No synthetic monitors configured at all
- Entities with no tags (poor organization)
- Large percentage of disabled conditions
Splunk
Cova fetches your saved searches, fired alerts, dashboards, indexes, and alert actions. It checks for:
- No alert rules configured despite having saved searches
- Disabled scheduled searches leaving monitoring gaps
- Alerts missing threshold configuration (comparator or threshold)
- Many unscheduled saved searches that cannot trigger automatic alerts
- Alerts with no notification actions (fire silently, nobody notified)
- Majority of alerts using a single notification channel (no redundancy)
- All alerts using the same action type
- No user dashboards outside system apps
- All dashboards in the default search app (poor organization)
- Very few user indexes configured (data in default indexes)
When two or more tools are connected, filter buttons appear so you can view coverage from a specific tool.
Per-Tool Summary
Below the hero row, each connected tool gets a collapsible summary section. Click to expand and see:
- Tool logo and name - Which system the findings came from
- Severity counts - Badges showing critical, warning, and info counts at a glance
- Top Risk - The most critical finding highlighted in a red card (if any critical issues exist)
- Warnings summary - Count of warnings to address
- All checks passed - Green card shown when no issues are found for a tool
This gives you a quick per-tool overview before diving into the full coverage dimension cards below.
Other Findings
Any findings that don't belong to a specific coverage dimension appear in an "Other Findings" section below the coverage cards.
Managing Findings
Findings are the actionable output of each analysis. Cova lets you track their status, filter them, and export them.
Status Workflow
- Open - Default status. The issue has been identified but not addressed yet.
- In Progress - You're actively working on fixing this issue.
- Resolved - The issue has been fixed. It will be verified in the next analysis run.
- Dismissed - You've reviewed this and decided it's not applicable or not a priority.
Changing Status
There are two ways to update a finding's status:
Quick Toggle (Checkbox)
Click the checkbox next to any finding to quickly mark it as resolved. Click again to revert to open.
Expanded Status Buttons
Click a finding to expand it, then use the status buttons (In Progress, Resolved, Dismissed) for more granular control.
Filtering & Views
- Filter by severity - Click the severity pills (Critical, Warning, Info) in the filter bar to show only findings of that level
- Filter by tool - Click a tool pill to show findings from that tool only
- Filters apply everywhere - Active filters affect findings inside all coverage dimension cards simultaneously
Exporting a Report
Cova generates a branded report containing your Health Score, AI summary, coverage breakdowns, and all findings with severity levels, impact descriptions, and recommended actions.
The report opens in a preview modal. Use the Print button to send it to your printer (or save as PDF via your browser's print dialog), or click Download to save the HTML file. Share it with your team or attach it to a Jira ticket for tracking remediation work.
Scanning a Repository
Repo scanning lets Cova understand your actual codebase architecture and make context-aware monitoring recommendations - suggesting what should be monitored based on your code, not just what is configured.
Three Ways to Scan
Upload a ZIP or TAR.GZ
Drag and drop (or click to browse) a compressed archive of your repository. Supported formats: .zip, .tar.gz, .tgz. Maximum file size depends on server configuration.
Best for: quick one-off scans, repos not hosted on GitHub/GitLab, or when you want to scan a specific snapshot.
Scan from GitHub
Connect your GitHub account first (see GitHub & GitLab), then select a repository and branch from the dropdown.
Best for: ongoing monitoring, teams using GitHub, and pairing with PR Guard for automatic PR scanning.
Scan from GitLab
Connect your GitLab account first (see GitHub & GitLab), then select a project and branch.
Best for: teams using GitLab for source control.
Running a Scan
Click Repo Scan in the sidebar.
Select the Upload, GitHub, or GitLab tab at the top of the page.
For Upload, drag and drop a .zip or .tar.gz file. For GitHub or GitLab, you must connect your account first.
Once connected, choose a repository from the dropdown, enter the branch name (defaults to main), and click Scan.
Cova downloads the code, identifies the tech stack, maps endpoints and services, then (if AI is enabled) generates architecture-aware recommendations. A progress indicator shows each stage as it completes.
Once the scan finishes, results appear below, starting with an architecture overview. Below the overview you'll find detailed endpoints, recommendations with severity and rationale, and gap badges highlighting areas where monitoring is missing.
Automatic PR Scanning
If you've connected GitHub, enable PR Guard to automatically scan every pull request for new endpoints, databases, and services. Cova posts a GitHub comment flagging monitoring gaps directly on the PR.
Generating an Incident Runbook
After a scan, you can generate a runbook from the results - a document that combines your monitoring findings with architecture context for incident response. Look for the Generate Runbook option in the scan results view.
Using Ask Cova (AI Chat)
Ask Cova is an AI chat assistant that understands both your monitoring configuration and (if scanned) your codebase architecture. It can help with incident triage, monitoring improvements, and on-call questions.
When Is It Available?
Ask Cova appears as a floating chat button after your first monitor analysis. It's available on all plans, and answers are richer when both a monitor scan and a repo scan have been run.
Quick Prompts
When you open Ask Cova, you'll see prompt categories tailored to your setup. Four base categories are always available:
Risk Assessment
Questions about your biggest monitoring gaps, what's most likely to cause an undetected outage, and where coverage is weakest.
Escalation Policies
Questions about whether alerts reach the right people, escalation timing, and policy gaps.
On-Call Health
Questions about on-call rotation balance, burnout risk, and schedule coverage.
Improvement Plans
Questions about prioritizing fixes, quick wins, and building a remediation roadmap.
Contextual Prompts
Additional prompt categories appear automatically based on your connected tools and history:
| Category | Appears When | Example Prompts |
|---|---|---|
| Datadog Insights | Datadog connected | Monitor health, SLO gaps, synthetic test coverage |
| Grafana Insights | Grafana connected | Datasource alert gaps, dashboard cleanup, notification routing |
| Sentry Insights | Sentry connected | Error tracking gaps, alert configuration, release health |
| New Relic Insights | New Relic connected | Entity alert gaps, notification routing, synthetic coverage |
| Sumo Logic Insights | Sumo Logic connected | Monitor coverage, notification routing, collector health |
| Splunk Insights | Splunk connected | Alert coverage, saved search health, notification routing |
| Cross-Tool Analysis | 2+ tools connected | Cross-tool coverage gaps, end-to-end incident flow |
| Incident Triage | Repo scanned | User errors, latency spikes, stuck jobs, connection failures |
| Trends & History | 2+ analyses run | "Has my coverage improved?", "What changed since last analysis?" |
Scoping to a Specific Tool
Use the scope pills above the input box to focus your question on a specific tool. When scoped, your question is automatically prefixed with the tool name so the AI focuses its answer on that tool's data. Click "All" to remove the scope.
Trend Questions
After running two or more analyses, you can ask Cova about trends. It has access to your last 5 analysis snapshots and will cite specific numbers in its answers - for example: "Your health score improved from 45 to 62 (+17 points). Critical findings dropped from 8 to 3."
Copying Responses
Each AI response has a copy button. Use it to paste answers into Slack, Jira tickets, or runbooks.
Tips for Getting Useful Answers
- Be specific - "What's wrong with my PagerDuty escalation policy for the payments team?" works better than "How's my monitoring?"
- Describe symptoms - For triage, describe what users are experiencing rather than what you think the technical cause is
- Ask follow-ups - The chat maintains context, so you can drill down: "Tell me more about that third recommendation"
- Reference the scan - If you've scanned a repo, ask questions that bridge monitoring and code: "Which API endpoints in my repo don't have corresponding alerts?"
- Ask about trends - After multiple analyses, ask "Has my coverage improved?" or "What changed since last time?"
Incident Autopilot
Incident Autopilot is an AI-powered incident commander that investigates production problems across all your connected tools. Describe what's happening and Cova generates a structured investigation playbook - what to check first, which dashboards to open, which logs to grep, who's on-call, and the blast radius.
Setup
Go to Integrations and connect one or more tools (PagerDuty, Datadog, Grafana, Sentry, New Relic, Sumo Logic, or Splunk). The Autopilot pulls live data from all connected tools during an investigation - the more tools connected, the more comprehensive the results.
Go to Repo Scan and scan the repository you want to investigate against. This gives the Autopilot your codebase architecture (services, endpoints, databases) so it can map symptoms to specific components and assess blast radius. Without a scan, investigations are based on tool data alone.
Click Investigate in the sidebar. You'll see a search card at the top and a repository selection card below.
Expand the Repository Selection card and choose the GitHub or GitLab tab. Select a repo from the list, enter a branch, and click Scan Repository (if not already scanned).
Once scanned, the card collapses and the search card shows: "Investigating against your-org/your-repo via GitHub - 12 endpoints, 3 databases, 5 services"
Type a description of the production problem in the search bar and click Investigate. Cova cross-references your symptom against live tool data and codebase architecture to generate a structured playbook.
What to Type
Describe what users are experiencing, not what you think the technical cause is. Good examples:
- "Users can't transfer money"
- "Checkout is slow - 30% of transactions timing out"
- "Users can't log in since 2pm"
- "API returning 500 errors on the /payments endpoint"
What You Get Back
The investigation modal shows a structured playbook with 7 sections:
| Section | Description |
|---|---|
| Summary | 2-3 sentence assessment of the situation based on live data and architecture |
| Severity | Critical, High, Medium, or Low - based on user impact and blast radius |
| Blast Radius | Affected services, endpoints, databases, and a human-readable impact statement |
| What to Check First | 3-7 prioritized investigation steps, each referencing a specific tool |
| Related Alerts | Active alerts from connected tools that relate to the reported symptom |
| Who's On-Call | Current on-call responders from PagerDuty, prioritized by relevance to affected services |
| Logs to Check | Specific grep patterns or log sources to investigate |
Service-Relevant Filtering
The Autopilot automatically filters results to prioritize items related to the affected services:
- Related Alerts - Only alerts that match the blast radius services are shown at full opacity. Unrelated active alerts are dimmed but still visible.
- On-Call - Responders whose escalation policy or team matches the affected services appear first. Others are dimmed as fallback context.
- Service matching - Uses PagerDuty service names, Datadog service tags, Sentry project slugs, New Relic entity names, and Grafana alert labels to correlate alerts with impacted services.
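This correlation can be sketched as a normalized name comparison. The function below is an illustrative approximation (names and data shapes are hypothetical, not Cova's actual implementation):

```python
def normalize(name: str) -> str:
    """Lowercase and strip separators so 'payments-api' matches 'Payments API'."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def partition_alerts(alerts, blast_radius_services):
    """Split alerts into 'related' (shown at full opacity) and 'dimmed'.

    An alert counts as related when its service tag matches any affected
    service after normalization.
    """
    targets = {normalize(s) for s in blast_radius_services}
    related, dimmed = [], []
    for alert in alerts:
        tag = normalize(alert.get("service", ""))
        if tag and any(tag in t or t in tag for t in targets):
            related.append(alert)
        else:
            dimmed.append(alert)
    return related, dimmed
```

The substring match is deliberately loose: a Datadog tag like `payments-api` should still correlate with a PagerDuty service named "Payments API Production".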
Download as PDF
Click the Download PDF button in the investigation modal to generate a branded document. The browser's print dialog opens, where you can save as PDF or print directly. The PDF includes all 7 sections formatted for sharing with your team.
Data Sources
The investigation pulls from all connected tools simultaneously:
| Tool | Data Used |
|---|---|
| PagerDuty | Open incidents, on-call schedules, escalation policies |
| Datadog | Alerting monitors with service tags |
| Grafana | Firing alert rules with labels |
| Sentry | Unresolved issues by project |
| New Relic | Entities with critical/warning alert severity |
| Sumo Logic | Active monitors |
| Splunk | Fired alerts from saved searches |
PR Observability Guard
PR Observability Guard automatically scans every pull request (GitHub) and merge request (GitLab) for new endpoints, databases, services, and message queues, then posts a comment flagging monitoring gaps - with risk scoring and suggested monitor configs. No AI activation required - it works using pure pattern matching.
What It Detects
| Category | Examples |
|---|---|
| Endpoints | FastAPI/Flask routes, Express routes, NestJS decorators, Spring Boot mappings, Go router handlers |
| Databases | PostgreSQL, MySQL, MongoDB, Redis, Elasticsearch, DynamoDB + ORMs (SQLAlchemy, Prisma, TypeORM, etc.) |
| Message Queues | RabbitMQ, Kafka, Celery, Bull, AWS SQS, Google Pub/Sub, NATS |
| Services | Docker Compose services, Dockerfiles, Kubernetes deployments |
Risk Scoring
Each detected endpoint is assigned a risk level based on what it does:
| Risk | Criteria | Example |
|---|---|---|
| 🔴 Critical | Payment, auth, or financial endpoints with state-changing methods (POST/PUT/DELETE) | POST /api/payments/charge |
| 🟠 High | Critical-path keywords with GET, or any DELETE endpoint | DELETE /api/users/{id} |
| 🟡 Medium | State-changing methods on non-critical paths | POST /api/comments |
| 🟢 Low | Read-only endpoints | GET /api/health |
Each risk assessment includes a business-impact reason (e.g., "Unmonitored payment endpoints can cause silent revenue loss") so engineers understand why monitoring matters for that specific endpoint.
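The risk table above reduces to a small decision function. This is a hedged sketch (the keyword list is hypothetical, not Cova's actual detector):

```python
# Illustrative keyword list - the real detector's vocabulary may differ.
CRITICAL_KEYWORDS = ("payment", "auth", "charge", "billing", "transfer", "login")
STATE_CHANGING = ("POST", "PUT", "PATCH", "DELETE")

def risk_level(method: str, path: str) -> str:
    """Approximate the risk table: critical-path + state-changing -> critical;
    critical-path GET or any DELETE -> high; other writes -> medium; reads -> low."""
    method = method.upper()
    critical_path = any(k in path.lower() for k in CRITICAL_KEYWORDS)
    if critical_path and method in STATE_CHANGING:
        return "critical"
    if critical_path or method == "DELETE":
        return "high"
    if method in STATE_CHANGING:
        return "medium"
    return "low"
```

The ordering matters: a `DELETE /api/payments/refund` hits the critical branch first, so it never falls through to the generic DELETE rule.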
Setup - GitHub App Permissions
Navigate to GitHub.com → Settings → Developer settings → GitHub Apps → cova-monitoring. Click Permissions & events in the sidebar.
Under Repository permissions, ensure these are set:
| Permission | Access Level | Why |
|---|---|---|
| Pull requests | Read & write | Read PR diffs, post review comments |
| Contents | Read-only | Fetch changed file contents |
| Metadata | Read-only | Basic repo info (required by all GitHub Apps) |
Under Subscribe to events, check:
- Pull request - triggers PR Guard on open, update, reopen
- Push - (optional) for push-triggered repo scans
Click Save changes. If you already have the app installed on repositories, GitHub will notify those installations about the new permissions. The org/account owner needs to approve the updated permissions in their Settings → Installations page.
Verify that the GitHub App's webhook URL is set to https://getcova.ai/github/webhook. If not, update it in the GitHub App's General settings.
Enable PR Guard in Cova
Click PR Guard in the sidebar. You'll see the status card showing "Inactive" and a toggle to enable it.
Click the toggle switch. The status card changes to "Active" with a green bar. This setting persists across server restarts.
By default, PR Guard scans all repositories the GitHub App has access to. To limit scanning to specific repos, select Select repositories and check the ones you want monitored.
What the PR Comment Looks Like
When a PR introduces new infrastructure, Cova's GitHub bot posts a comment with:
- Endpoints table - Risk level, HTTP method, path, file, and monitoring coverage status
- Coverage note - If no HTTP monitoring tool is connected, a note explains how to get accurate coverage checks
- "Why these endpoints need monitoring" - Expandable section with business-impact reasons for critical/high-risk endpoints
- "Suggested monitor configs" - Expandable section with ready-to-use JSON configs (Datadog, Grafana, or generic) with thresholds scaled by risk
- Database connections - New databases/ORMs found with recommended monitoring (connection pool, query latency)
- Message queues - New queues found with recommended monitoring (queue depth, consumer lag, dead letters)
Coverage Status
The "Status" column in the endpoints table adapts based on which monitoring tools you have connected to Cova:
| Connected Tools | Status Shown |
|---|---|
| Datadog, Grafana, or New Relic + PagerDuty | Needs monitor + alert rule |
| Datadog, Grafana, or New Relic only | Needs HTTP monitor |
| PagerDuty only | No HTTP monitoring identified* |
| Sentry only | Error tracking only* |
| No tools connected | No monitoring tools connected* |
Asterisked (*) statuses include a note under the table encouraging you to connect HTTP monitoring tools for more accurate findings.
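The status column logic amounts to a small decision function. A sketch of the table above, with tool combinations the table doesn't list handled by a hedged fallback (function name is ours):

```python
HTTP_MONITORING = {"datadog", "grafana", "newrelic"}

def coverage_status(connected: set[str]) -> str:
    """Map the set of connected tools to the status shown in the PR comment."""
    has_http = bool(connected & HTTP_MONITORING)
    if has_http and "pagerduty" in connected:
        return "Needs monitor + alert rule"
    if has_http:
        return "Needs HTTP monitor"
    if connected == {"pagerduty"}:
        return "No HTTP monitoring identified*"
    if connected == {"sentry"}:
        return "Error tracking only*"
    if not connected:
        return "No monitoring tools connected*"
    # Combinations outside the table (e.g. Sentry + PagerDuty): assumption only.
    return "No HTTP monitoring identified*"
```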
Troubleshooting
| Issue | Cause | Fix |
|---|---|---|
| No comment on PR | PR Guard toggle is off | Enable in Cova → PR Guard |
| No comment on PR | PR only changed non-code files | Expected behavior - no gaps to report |
| 403 error in activity log | Missing pull_requests:write | Update GitHub App permissions (step 2 above) |
| Webhook not received | Missing Pull request event subscription | Check GitHub App → Permissions & events (step 3 above) |
| Repo not being scanned | Repo filter is set to specific repos | Check PR Guard → Repository Filter settings |
| Duplicate endpoints in table | Unlikely - deduplication is built in | Check if endpoints have different HTTP methods |
| No comment on GitLab MR | Missing project webhook | Add webhook in GitLab project → Settings → Webhooks (see GitLab Setup above) |
| GitLab 401 error | OAuth token expired | Reconnect GitLab in Repo Scan (Cova auto-refreshes tokens, but they can expire if unused) |
GitLab Setup
For GitLab merge requests, you need to add a project webhook manually (GitLab doesn't have an "App" model like GitHub).
Go to Repo Scan → GitLab → Connect and authorize Cova via OAuth. This gives Cova access to read your merge request diffs and post comments.
Go to your GitLab project → Settings → Webhooks → Add new webhook.
| Field | Value |
|---|---|
| URL | https://getcova.ai/gitlab/webhook |
| Secret token | Must match the GITLAB_WEBHOOK_SECRET env var on the server |
| Trigger | Check Merge request events |
| SSL verification | Enable (recommended) |
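For context on the secret token: GitLab echoes it back verbatim in the X-Gitlab-Token request header on every delivery, and the receiving server compares it against its own configured secret. A minimal sketch of that check (illustrative only; Cova's actual handler may differ):

```python
import hmac
import os

def verify_gitlab_webhook(headers: dict) -> bool:
    """Validate a GitLab webhook delivery via the X-Gitlab-Token header.
    Unlike GitHub, GitLab does not sign the payload by default - it just
    replays the configured secret."""
    expected = os.environ.get("GITLAB_WEBHOOK_SECRET", "")
    received = headers.get("X-Gitlab-Token", "")
    # Constant-time comparison avoids timing side channels.
    return bool(expected) and hmac.compare_digest(received, expected)
```

This is why the secret token field must match the `GITLAB_WEBHOOK_SECRET` env var exactly: a mismatch makes every delivery fail validation silently.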
Toggle PR Guard on in the PR Guard page. The same toggle controls both GitHub and GitLab scanning.
Connecting GitHub & GitLab
Connecting a source control platform lets Cova scan your repositories, power Incident Autopilot investigations, and enable PR Guard for automatic PR scanning. You can connect from Repo Scan, Incident Autopilot, or PR Guard - the steps are the same.
Connecting GitHub
GitHub uses a GitHub App installation flow. This grants Cova read access to your repositories and enables webhook-based PR scanning.
Open Repo Scan, Incident Autopilot, or PR Guard from the sidebar. Select the GitHub tab. You'll see a "Not connected" state with a connect button.
Install the Cova GitHub App to connect your repositories.
A popup opens on github.com with the GitHub App installation page. You'll go through these steps on GitHub's side:
- Select an account or organization to install the app on. If you belong to multiple orgs, pick the one with the repos you want to scan.
- Choose repository access - select "All repositories" or "Only select repositories". You can limit access to specific repos for security.
- Click Install & Authorize to complete the installation.
Permissions: Cova requests read-only access to repository contents (for scanning) and write access to pull requests (for PR Guard comments). No code is ever modified. You can change repository access later in GitHub Settings → Applications → Cova Monitoring → Configure.
The popup closes automatically and Cova refreshes to show your connected status. A green checkmark appears next to GitHub and your repository list is loaded.
Click a repository from the list to select it (radio-style selection). Enter a branch name if different from main, then click Scan Repository. Cova downloads and analyzes the codebase to map your architecture.
Connecting GitLab
GitLab uses an OAuth2 authorization flow. This grants Cova read access to your projects and the ability to post merge request comments for PR Guard.
Open Repo Scan, Incident Autopilot, or PR Guard from the sidebar. Select the GitLab tab. You'll see a connect prompt.
Connect your GitLab account to scan repositories.
A popup opens on gitlab.com with the OAuth consent screen. GitLab asks you to authorize the Cova application. The requested scopes are:
- api - Read repositories and post merge request comments
- read_repository - Download repository archives for scanning
Click Authorize to grant access. Unlike GitHub, GitLab grants access to all projects you have access to (no per-repo selection during OAuth).
The popup closes automatically and Cova refreshes. A green checkmark appears next to GitLab, your username is shown, and your project list is loaded.
Click a project from the list to select it. Enter a branch name if different from main, then click Scan Repository.
Where You Can Connect From
GitHub and GitLab connections are shared across all features. Connect once, and it's available everywhere:
| Feature | What it uses the connection for |
|---|---|
| Repo Scan | Scan repositories to map services, endpoints, and databases for monitoring recommendations |
| Incident Autopilot | Investigate incidents against your codebase architecture |
| PR Guard | Automatically scan pull requests/merge requests for new infrastructure that needs monitoring |
What Happens After Connection
- Your repository list loads automatically - visible on Repo Scan, Incident Autopilot, and PR Guard
- For GitHub, PR Guard webhooks are configured automatically via the GitHub App
- For GitLab, PR Guard webhooks need to be configured manually per project
- The connection persists across browser sessions via localStorage
What Happens After a Deploy or Restart
After a deploy or server restart, Cova automatically restores your GitHub and GitLab connections via the /scm/restore endpoint. You don't need to reconnect manually. If you encounter a "not connected" error after a restart, the restore happens automatically on the next action.
Disconnecting
To disconnect GitHub or GitLab, go to the Repo Scan page:
On the Repo Scan page, find the connected platform card and click Disconnect. An inline confirmation appears: "Are you sure?"
Click Yes in the red confirmation prompt. The connection is removed from both your browser and the backend. You can reconnect at any time by going through the flow again.
Setting Up Webhooks for PR Guard
Webhooks allow Cova to automatically scan pull requests and merge requests when they're opened or updated. GitHub and GitLab handle webhooks differently.
GitHub Webhooks (Automatic)
GitHub webhooks are configured automatically when you install the Cova GitHub App - no manual setup is needed. To verify webhooks are working:
Create a new PR (or push a commit to an existing one) that includes code changes.
Go to PR Guard in the sidebar. The Recent Activity section at the bottom should show the scan result. The webhook status under the GitHub tab will show "Last received: just now".
GitLab Webhooks (Manual Setup)
GitLab doesn't have an "App" model like GitHub, so webhooks must be added to each project manually. This tells GitLab to notify Cova when merge requests are opened or updated.
In GitLab, go to your project → Settings (left sidebar) → Webhooks.
Enter the same webhook settings described in GitLab Setup above: the URL https://getcova.ai/gitlab/webhook, a secret token matching the GITLAB_WEBHOOK_SECRET env var, and the Merge request events trigger.
After adding the webhook, GitLab shows it in the list. Click the Test dropdown and select Merge request events to send a test payload. Check PR Guard → Recent Activity in Cova to confirm it was received.
GitLab webhooks are per-project. Add the same webhook to every project you want PR Guard to scan. For organizations on GitLab Premium, you can add a group-level webhook instead.
Open the PR Guard page and switch to the GitLab tab. Once a webhook has been received, you'll see a green "Webhook active" indicator.
Troubleshooting
| Issue | Solution |
|---|---|
| Popup blocked by browser | Allow popups for the Cova domain, or click the blocked popup notification in your browser's address bar |
| "No GitHub installation connected" | This usually resolves itself - Cova auto-restores connections. If it persists, disconnect and reconnect from Repo Scan |
| Repos not showing after connect | For GitHub, check which repos you granted access to during installation. You can modify this in GitHub Settings → Applications → Cova Monitoring → Configure |
| PR Guard comments not posting (GitHub) | The app needs pull_requests: write permission. Uninstall and reinstall the GitHub App to update permissions |
| PR Guard comments not posting (GitLab) | Ensure the OAuth scope includes api. Disconnect and reconnect GitLab from Repo Scan |
| GitLab webhook not received | Verify the webhook URL is correct (https://getcova.ai/gitlab/webhook), SSL verification is enabled, and "Merge request events" is checked |
| GitLab token expired | Cova automatically refreshes expired GitLab tokens. If issues persist, disconnect and reconnect from Repo Scan |
Billing & Plans
Cova offers a free tier for evaluation and a Pro plan for teams that need unlimited access.
Free vs Pro
| Feature | Free | Pro ($49/mo) |
|---|---|---|
| Monitor scans | 5 / month | Unlimited |
| AI chat messages | 15 / month | Unlimited |
| Repo scans | 3 / month | Unlimited |
| Tool integrations | 2 | Unlimited |
| Generate Fix | 3 lifetime | Unlimited |
| Runbook generation | 3 lifetime | Unlimited |
| Report export | 3 lifetime | Unlimited |
| Incident Autopilot | 2 lifetime | Unlimited |
| Deploy to Datadog | - | Included |
| PR Guard | - | Included |
Understanding Usage Limits
Monthly limits (scans, chat, repo scans, integrations) reset on the 1st of each calendar month. Lifetime limits (Generate Fix, Runbook, Report Export, Incident Autopilot) are a total allocation that does not reset.
When you're approaching a limit, you'll see an "X left" badge on the relevant button. When a limit is reached, the button shows a limit message and you'll be prompted to upgrade.
View your current usage at any time from the Billing page in the sidebar.
Upgrading to Pro
The upgrade button appears on the Billing page, in usage limit warnings, and on feature-locked screens. Clicking it takes you to a secure Stripe checkout page.
Enter your payment details on the Stripe checkout page. New subscribers get a 14-day free trial - you won't be charged until the trial ends.
After checkout you're redirected back to Cova with all Pro features unlocked. Your Billing page now shows your plan status, next invoice date, and a link to manage your subscription.
Managing Your Subscription
Go to Billing in the sidebar and click "Manage Subscription". This opens the Stripe Customer Portal where you can:
- Update your payment method
- View and download past invoices
- Cancel your subscription
Team Management
Invite teammates to your Cova workspace so they can access shared analysis results and monitoring insights.
Sending Invites
Click Settings in the left sidebar and find the Send an Invite section.
Type your teammate's email address and click Send Invite. They'll receive a branded email with a link to join.
Pending invites appear below the input field. You can Resend (if they missed the email) or Revoke (to cancel the invitation).
Accepting an Invite
Open the email from Cova and click the invitation link. You'll be taken to the login page.
Log in using any method (email, Google, GitHub, or access code). The invite is automatically accepted when you sign in - it's matched by the invite token, not your email address.
Viewing Your Team
Accepted team members appear in the Settings page under the invite section. You can see who has joined and when they accepted.
How to Apply Generated Fix in Datadog
Cova generates Datadog monitor configs as JSON. If Datadog is connected, you can deploy directly from Cova. Otherwise, paste the config manually or use the API.
Option A: Deploy Monitor (one-click)
Make sure Datadog is connected in Integrations with an API key and Application key that has write permissions.
If the gap has multiple missing services (e.g., 4 uncovered services for Latency), Cova generates one config per service in a single click. If quality warnings exist (e.g., "Monitor X has no recovery threshold"), it generates improved configs for each warning.
When multiple configs are generated, a checklist appears with each service name. Select all, deselect all, or pick specific ones. The Preview, JSON, and Terraform tabs show all selected configs combined.
Each config displays a NEW badge (creating a monitor) or UPDATE badge (improving an existing one). Click Deploy Monitor, Deploy All, or Deploy Selected to proceed.
A confirmation dialog appears showing exactly what will happen - for example, "This will create 2 new and update 1 existing monitor(s) in Datadog." Click Continue to proceed or Cancel to go back.
On success, a green confirmation appears with a View in Datadog link that opens each monitor directly in your Datadog console. Deployed monitors also appear in the Deployed Monitors tab on the Monitor Scan page.
Smart updates: If the fix is for an existing monitor (e.g. "Monitor X has no recovery threshold"), Deploy Monitor updates the original monitor instead of creating a duplicate. For new coverage gaps, it creates a new monitor. Cova shows NEW or UPDATE badges before you deploy so you always know what will happen.
Option B: Paste in Datadog UI
Click Copy Config in the Cova modal to copy the JSON to your clipboard.
Go to Monitors → New Monitor. Choose the monitor type that matches the config (usually Metric or APM).
Open the Edit tab, then toggle to Edit as JSON. Paste the config and click Save.
Option C: Datadog API
Send the JSON config as the request body to POST https://api.datadoghq.com/api/v1/monitor with your API and Application keys as headers (DD-API-KEY and DD-APPLICATION-KEY).
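The API call above can be scripted with nothing but the standard library. This sketch builds the request so you can inspect it before sending; the helper name is ours, not part of any SDK:

```python
import json
import urllib.request

def build_datadog_request(config: dict, api_key: str, app_key: str):
    """Build (but don't send) the monitor-creation POST described above,
    with the DD-API-KEY and DD-APPLICATION-KEY headers."""
    return urllib.request.Request(
        "https://api.datadoghq.com/api/v1/monitor",
        data=json.dumps(config).encode(),
        headers={
            "DD-API-KEY": api_key,
            "DD-APPLICATION-KEY": app_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually create the monitor:
#   with urllib.request.urlopen(build_datadog_request(cfg, api_key, app_key)) as r:
#       print(json.load(r))
```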
If you don't have an API key with write permissions, Datadog's JSON editor (Option B) is the easiest way to apply a generated config - your regular Datadog account access is enough.
How to Apply Generated Fix in Grafana
Cova generates Grafana alert rule or contact point configs as JSON. You can apply them through the UI or API.
Option A: Import via Grafana UI
Click Copy Config in the Cova modal.
Go to Alerting → Alert Rules → New Alert Rule. For contact points, go to Alerting → Contact Points → New Contact Point.
Most Grafana forms have a JSON/code view. Switch to it and paste the config, then save.
Option B: Grafana API
For alert rules: POST /api/v1/provisioning/alert-rules. For contact points: POST /api/v1/provisioning/contact-points. Authenticate with a service account token that has Editor or Admin role.
If you use Grafana Cloud, the API base URL is https://your-instance.grafana.net. For self-hosted, use your instance URL.
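Both provisioning endpoints take the same request shape, so one small stdlib helper covers them. A sketch (helper name is ours; adjust the base URL per the note above):

```python
import json
import urllib.request

def build_grafana_request(base_url: str, path: str, config: dict, token: str):
    """Build a Grafana provisioning API POST (alert rule or contact point),
    authenticated with a service account token."""
    return urllib.request.Request(
        base_url.rstrip("/") + path,
        data=json.dumps(config).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Alert rule:    build_grafana_request(base, "/api/v1/provisioning/alert-rules", cfg, token)
# Contact point: build_grafana_request(base, "/api/v1/provisioning/contact-points", cfg, token)
```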
How to Apply Generated Fix in PagerDuty
PagerDuty does not have a JSON import UI. Generated configs are applied via the PagerDuty REST API.
Creating a Service
Go to Integrations → API Access Keys → Create New API Key. The key you use for Cova (read-only) won't work here - you need a key with write access.
Send the config to POST https://api.pagerduty.com/services with the header Authorization: Token token=YOUR_KEY.
Creating an Escalation Policy
Send the config to POST https://api.pagerduty.com/escalation_policies with the same authorization header.
Creating Event Orchestration Rules
List orchestrations with GET https://api.pagerduty.com/event_orchestrations and note the ID of the one you want to add rules to.
Send the config to PUT https://api.pagerduty.com/event_orchestrations/{id}/router.
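All three PagerDuty calls above share the same shape, so a single helper covers them. A hedged sketch (helper name is ours; the Content-Type header is a standard-JSON-API assumption):

```python
import json
import urllib.request

def build_pagerduty_request(path: str, config: dict, api_key: str,
                            method: str = "POST"):
    """Build a PagerDuty REST API request using the Token auth scheme above."""
    return urllib.request.Request(
        "https://api.pagerduty.com" + path,
        data=json.dumps(config).encode(),
        headers={
            "Authorization": f"Token token={api_key}",
            "Content-Type": "application/json",
        },
        method=method,
    )

# Service:            build_pagerduty_request("/services", cfg, key)
# Escalation policy:  build_pagerduty_request("/escalation_policies", cfg, key)
# Orchestration:      build_pagerduty_request(f"/event_orchestrations/{oid}/router", cfg, key, "PUT")
```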
PagerDuty requires a separate read/write API key to create resources. The read-only key you connected to Cova cannot be used to apply configs.
How to Apply Generated Fix in Sentry
Sentry configs are applied via the Sentry REST API. There is no JSON import in the Sentry UI.
Creating an Alert Rule
Go to Settings → Auth Tokens → Create New Token with scopes project:write and alerts:write.
Find your organization slug and project slug in the Sentry URL: sentry.io/organizations/{org}/projects/{project}/.
Send the config to POST https://sentry.io/api/0/projects/{org}/{project}/rules/ with the header Authorization: Bearer YOUR_TOKEN.
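The steps above can be scripted with the standard library. A sketch that builds the request for inspection before sending (helper name is ours):

```python
import json
import urllib.request

def build_sentry_rule_request(org: str, project: str, config: dict, token: str):
    """Build the alert-rule creation POST from the steps above,
    using the org and project slugs from your Sentry URL."""
    url = f"https://sentry.io/api/0/projects/{org}/{project}/rules/"
    return urllib.request.Request(
        url,
        data=json.dumps(config).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```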
The auth token you use for Cova may only have read scopes. You'll need a separate token with write access to create alert rules.
How to Apply Generated Fix in New Relic
New Relic uses the NerdGraph GraphQL API to create alert conditions and notification destinations.
Creating an Alert Condition
Go to api.newrelic.com/graphiql (US) or api.eu.newrelic.com/graphiql (EU). Sign in with your account.
Paste the generated config into a mutation like:
mutation { alertsNrqlConditionStaticCreate(accountId: YOUR_ACCOUNT_ID, policyId: YOUR_POLICY_ID, condition: { ...PASTE_CONFIG... }) { id name } }
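If you'd rather script the mutation than run it in the GraphiQL explorer, a NerdGraph call is a plain GraphQL POST. A sketch (the API-Key header is New Relic's convention for User keys; verify against your region's docs):

```python
import json
import urllib.request

def build_nerdgraph_request(mutation: str, user_api_key: str, region: str = "US"):
    """Wrap a NerdGraph mutation (like the one above) in a GraphQL POST."""
    host = "api.newrelic.com" if region == "US" else "api.eu.newrelic.com"
    return urllib.request.Request(
        f"https://{host}/graphql",
        data=json.dumps({"query": mutation}).encode(),
        headers={
            "API-Key": user_api_key,  # NRAK-xxx User key
            "Content-Type": "application/json",
        },
        method="POST",
    )
```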
Creating a Notification Destination
The generated config will include the destination type and properties. Paste into the NerdGraph explorer with your account ID.
The NerdGraph API Explorer lets you test mutations interactively before running them. Your existing User API Key (NRAK-xxx) works for both reads and writes.
How to Apply Generated Fix in Sumo Logic
Sumo Logic configs are applied via the Sumo Logic REST API.
Creating a Monitor
You can use the same Access ID and Access Key you connected to Cova, provided your role has the manageMonitorsV2 capability.
Send the config to POST https://api.{region}.sumologic.com/api/v1/monitors using Basic Auth with your Access ID and Access Key. Replace {region} with your deployment (e.g., us1, eu, au).
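The Basic Auth call above can be sketched with the standard library (helper name is ours; both the monitors and connections endpoints use the same shape):

```python
import base64
import json
import urllib.request

def build_sumo_request(region: str, path: str, config: dict,
                       access_id: str, access_key: str):
    """Build a Sumo Logic API POST with Basic Auth from your Access ID/Key."""
    credentials = base64.b64encode(f"{access_id}:{access_key}".encode()).decode()
    return urllib.request.Request(
        f"https://api.{region}.sumologic.com/api/v1{path}",
        data=json.dumps(config).encode(),
        headers={
            "Authorization": f"Basic {credentials}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Monitor:     build_sumo_request("us1", "/monitors", cfg, access_id, access_key)
# Connection:  build_sumo_request("us1", "/connections", cfg, access_id, access_key)
```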
Creating a Notification Connection
Send the config to POST https://api.{region}.sumologic.com/api/v1/connections with the same authentication.
Sumo Logic uses different API base URLs per region. Make sure you use the same region you selected when connecting to Cova. The full list: us1, us2, eu, au, de, jp, ca, in, fed.
How to Apply Generated Fix in Splunk
Cova generates Splunk saved search configurations as JSON. You can apply them through the Splunk Web UI or the REST API.
Option A: Splunk Web UI
In the Generate Fix modal, click Copy Config to copy the JSON to your clipboard. Note the key fields: search (the SPL query), alert_type, alert.comparator, alert.threshold, and actions.
In Splunk Web, go to Settings → Searches, Reports & Alerts → New Alert. Paste the SPL query from the search field, set the schedule, and configure the trigger conditions using the alert.comparator and alert.threshold values.
Under Trigger Actions, add the actions specified in the config (email, webhook, Slack, etc.). Save the alert.
Option B: Splunk REST API
Use the Splunk management API to create saved searches programmatically:
curl -k -u admin:password \
https://your-splunk:8089/services/saved/searches \
-d name="Your Alert Name" \
-d search="index=main sourcetype=access_combined status>=500" \
-d alert_type="number of events" \
-d alert.comparator="greater than" \
-d alert.threshold=10 \
-d alert.severity=4 \
-d is_scheduled=1 \
-d cron_schedule="*/5 * * * *" \
-d actions="email" \
  -d "action.email.to=ops@example.com"
The Splunk REST API uses port 8089 (the management port) by default, not the web UI port (8000). If you're on Splunk Cloud, check with your admin for the correct management endpoint. The -k flag skips SSL verification for self-signed certificates - remove it in production if you have a valid certificate.
How to Apply Generated Fix with Terraform
Every generated fix includes a Terraform tab with ready-to-use HCL code. This is the recommended approach for teams that manage infrastructure as code.
Supported Providers
| Tool | Terraform Provider | Resource Type |
|---|---|---|
| Datadog | datadog/datadog | datadog_monitor, datadog_synthetics_test |
| Grafana | grafana/grafana | grafana_rule_group, grafana_contact_point |
| PagerDuty | PagerDuty/pagerduty | pagerduty_service, pagerduty_escalation_policy |
| Sentry | jianyuan/sentry | sentry_issue_alert, sentry_metric_alert |
| New Relic | newrelic/newrelic | newrelic_nrql_alert_condition |
| Sumo Logic | SumoLogic/sumologic | sumologic_monitor, sumologic_connection |
| Splunk | splunk/splunk | splunk_saved_searches |
Steps
After generating a fix, toggle from JSON to Terraform to see the HCL code. Click Copy Config to copy it.
Paste the HCL block into your .tf file (e.g., monitoring.tf). Make sure the corresponding provider is already configured in your terraform { required_providers { } } block.
The generated HCL maps JSON keys directly to Terraform attributes. Some providers use slightly different attribute names - check the Terraform Registry docs for your provider if you get validation errors.
Run terraform plan to preview the changes, then terraform apply to create the resource. Terraform will show you exactly what will be created before you confirm.
The Terraform output is generated by converting the JSON config to HCL. It gives you a strong starting point, but you may need to adjust attribute names or add provider-specific fields. Always run terraform plan before applying.
Using Terraform means your monitors are version-controlled, reviewable in PRs, and reproducible across environments. This is the recommended approach for production infrastructure.