Help & Guides

Step-by-step guides for getting the most out of Cova.

Features

A quick overview of what Cova can do. Each feature links to a detailed step-by-step guide below.

Ask Cova

AI chat that understands your monitoring setup and codebase architecture. Ask about gaps, incidents, on-call policies, and more.

Monitor Scan

What is it?

Monitor Scan connects to your monitoring tools (PagerDuty, Datadog, Grafana, Sentry, New Relic, Sumo Logic, Splunk), pulls your live configuration, and runs a rule-based analysis that scores your monitoring coverage across multiple dimensions. When AI is enabled, it also generates prioritized recommendations and natural-language summaries.

What it checks

  • Alert coverage - Are your services and infrastructure covered by alert conditions?
  • Notification routing - Do alerts reach the right people via the right channels?
  • Escalation policies - Are there proper escalation paths and timeouts?
  • Dashboard health - Are dashboards organized and maintained?
  • Error tracking - Are errors captured and triaged?
  • Release tracking - Are deployments instrumented for observability?
  • Alert noise behavior - Are your alerts actually useful, or are they noisy, ignored, or stale? (Analyzes 14 days of real alert history)

What you get

  • Health score badge with expandable breakdown showing exactly what's affecting your score
  • Per-dimension coverage scores (0-100) with penalty breakdowns
  • Prioritized findings by severity (critical, warning, low)
  • Specific "fix first" recommendations per area
  • AI-powered recommendations for areas scoring below 80%
  • Natural-language executive summary
  • Exportable HTML report with full findings
  • One-click Deploy Monitor to push generated configs directly to Datadog, Sentry, Grafana, or New Relic PRO
  • Alert noise findings - noisy, flapping, stale, ignored, and storm patterns detected from real alert history
  • Deployed Monitors tab tracking all monitors pushed to your tools via Cova
  • Alert Noise Analysis - flapping monitors, off-hours pages, noisy services (PagerDuty & Datadog)
  • Scheduled Scans - automate recurring analyses on a daily, weekly, or monthly cadence PRO

Requirements

  • At least one monitoring tool connected
  • API key/token for each tool (see Scopes & Permissions for required access levels)
  • Pro plan required for AI-powered features PRO
Full step-by-step guide →

Repo Scan

What is it?

Repo Scan analyzes your actual codebase to discover endpoints, databases, message queues, and services - then cross-references them against your connected monitoring tools to identify what should be monitored but isn't. It bridges the gap between "what's configured" (Monitor Scan) and "what exists in code" (Repo Scan).

Three ways to scan

  • Upload - Drag and drop a ZIP or TAR.GZ of your repo
  • GitHub - Select a repo and branch from your connected GitHub account
  • GitLab - Select a project and branch from your connected GitLab account

What it finds

  • API endpoints - REST routes across frameworks (FastAPI, Express, Spring Boot, NestJS, Go, etc.)
  • Databases - PostgreSQL, MySQL, MongoDB, Redis, Elasticsearch, DynamoDB + ORM usage
  • Message queues - RabbitMQ, Kafka, Celery, Bull, SQS, Pub/Sub, NATS
  • Services - Docker Compose services, Dockerfiles, Kubernetes deployments
  • Architecture patterns - Framework detection, service boundaries, infrastructure dependencies

What you get

  • Architecture summary with detected frameworks and languages
  • Full inventory of endpoints, databases, queues, and services
  • Monitoring gap analysis cross-referenced with your connected tools
  • AI-generated monitoring recommendations specific to your stack
  • Architecture context added to Ask Cova chat

Requirements

  • A repository (uploaded, GitHub, or GitLab)
  • Pro plan required for AI recommendations PRO
  • GitHub/GitLab scan requires connected SCM account
Full step-by-step guide →

Incident Autopilot

What is it?

Incident Autopilot is an AI-powered incident commander. Describe what's happening ("API latency spike on checkout service" or "users reporting 500 errors"), and it generates a structured investigation playbook pulling real-time data from all your connected tools - what to check first, which dashboards to open, who's on-call, and the blast radius.

What you get

  • Investigation timeline - Ordered steps based on your specific incident description
  • Cross-tool correlation - Pulls live data from PagerDuty, Datadog, Grafana, Sentry, New Relic simultaneously
  • On-call identification - Who's on-call right now via PagerDuty
  • Blast radius assessment - Which services and teams are affected
  • Runbook-style output - Copy-pasteable investigation steps and dashboard links
  • Context-aware - If you've run a repo scan, it factors in your architecture

Data sources

  • PagerDuty - recent incidents, on-call schedules, escalation policies
  • Datadog - monitors, dashboards, service catalog
  • Grafana - alert rules, dashboards, contact points
  • Sentry - error events, issue trends, releases
  • New Relic - entities, alert conditions, violations
  • Sumo Logic - active monitors, connections
  • Splunk - fired alerts from saved searches

Requirements

  • Pro plan required PRO
  • At least one monitoring tool connected (more tools = richer investigations)
  • Run a monitor analysis first for best context
Full step-by-step guide →

PR Observability Guard

What is it?

PR Guard automatically scans every pull request for new endpoints, databases, services, and message queues that need monitoring. It posts a comment directly on the PR with risk-scored findings, coverage status, and suggested monitor configurations - before the code gets merged. No AI activation required.

How it works

  • GitHub/GitLab webhook fires when a PR is opened or updated
  • Cova analyzes the diff using pattern matching (same engine as Repo Scan)
  • Low-value endpoints (health checks, docs, metrics) are filtered out automatically
  • Each detected endpoint gets a risk score (critical, high, medium, low) with a business-impact reason
  • Endpoints are cross-referenced against your latest Monitor Scan results - only genuinely uncovered ones are flagged
  • A comment is posted (or updated) on the PR with findings, coverage status, and suggested monitor configs
  • If all endpoints are already monitored, no comment is posted - no noise on clean PRs
  • You can re-scan from the Cova dashboard after connecting tools or running a new Monitor Scan

What it detects

  • New endpoints - REST routes across 10+ frameworks
  • New databases - Connection strings, ORM models, client initializations
  • New queues - Message broker producers and consumers
  • New services - Docker, Kubernetes, infrastructure definitions

Smart coverage checks

If you've run a Monitor Scan, PR Guard cross-references detected endpoints against your actual monitoring data. Each endpoint gets one of three statuses:

  • Monitored - Matched to an existing monitor in your connected tools (e.g. "Monitored (payment-api)")
  • Not monitored - No matching monitor found - includes a suggested config
  • No tools connected - Falls back to generic recommendations

Low-value endpoints (health checks, docs, metrics, static files) are automatically filtered out to reduce noise.
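
As an illustration, the filtering step can be sketched as a simple prefix check. The prefix list and function names below are assumptions for illustration, not Cova's actual rules:

```python
# Hypothetical sketch of a low-value endpoint filter; the real matching
# rules may be more sophisticated (regex, framework-aware paths, etc.).
LOW_VALUE_PREFIXES = ("/health", "/healthz", "/docs", "/metrics", "/static")

def is_low_value(path: str) -> bool:
    # Paths like /health or /metrics rarely need dedicated monitors
    return path.lower().startswith(LOW_VALUE_PREFIXES)

def filter_endpoints(paths):
    # Keep only endpoints worth flagging in the PR comment
    return [p for p in paths if not is_low_value(p)]
```

For example, `filter_endpoints(["/health", "/api/orders"])` keeps only `/api/orders`.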

Requirements

  • GitHub App installed with pull_requests:write permission, or GitLab webhook configured
  • PR Guard enabled in Cova's sidebar settings
  • Monitored repositories selected (all or specific repos)
Full setup guide →

Deploy Monitor

What is it?

Deploy Monitor lets you push AI-generated monitor configurations directly to your monitoring tools from within Cova - no copy-pasting or manual setup required. When you click Generate Fix on a coverage gap or quality issue, Cova produces a ready-to-deploy config and gives you a one-click button to create or update monitors.

How it works

  • Generate Fix on any finding to produce one or more monitor configs
  • Each config shows a NEW badge (will create a new monitor) or UPDATE badge (will improve an existing monitor)
  • For multi-service gaps, Cova generates configs for each missing service in a single batch
  • Click Deploy to push to your tool. A confirmation dialog shows exactly what will be created or updated before anything happens
  • After deployment, a direct link to the created monitor appears, and it's tracked in your Deployed Monitors tab

Supported tools

  • Datadog - monitors, synthetic tests. Multi-region (US1, US3, US5, EU, AP1)
  • Sentry - issue alerts and metric alerts. Requires alerts:write scope
  • Grafana - provisioned alert rules. Creates a "Cova Generated Alerts" folder. Requires Editor role
  • New Relic - NRQL alert conditions. Creates a "Cova Generated Alerts" policy. Requires User API key with alerting write access

For PagerDuty, Splunk, and Sumo Logic, Generate Fix produces JSON configs that can be imported manually or applied via their APIs.

Key features

  • Smart update detection - updates existing monitors by name instead of creating duplicates
  • Config sanitization and validation before sending to APIs
  • Enum normalization (auto-fixes lowercase values AI may generate)
  • Range clamping on numeric fields to stay within API limits
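
The normalization and clamping steps can be sketched roughly as follows. The field names, allowed values, and limits here are illustrative assumptions, not Cova's actual code:

```python
# Hypothetical sketch of config sanitization before an API call:
# enum normalization plus range clamping, as described above.
ALLOWED_PRIORITIES = {"P1", "P2", "P3", "P4", "P5"}

def sanitize_config(config, min_threshold=0, max_threshold=100):
    cfg = dict(config)
    # Enum normalization: auto-fix lowercase values AI may generate
    if "priority" in cfg:
        p = str(cfg["priority"]).upper()
        cfg["priority"] = p if p in ALLOWED_PRIORITIES else "P3"
    # Range clamping: keep numeric fields within assumed API limits
    if "threshold" in cfg:
        cfg["threshold"] = max(min_threshold, min(max_threshold, cfg["threshold"]))
    return cfg
```

So a generated config with `"priority": "p1"` and an out-of-range threshold would be corrected to `"P1"` and clamped before deployment.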

Requirements

  • A supported tool connected with write permissions (see Scopes & Permissions)
  • Pro plan (or admin-granted access) PRO
Step-by-step guides per tool →

Alert Noise Analysis

What is it?

Alert Noise Analysis examines your alert history from PagerDuty and Datadog to identify patterns that contribute to alert fatigue - noisy monitors, flapping alerts, off-hours pages, and low-signal notifications that waste your team's time.

What it analyzes

  • Alert volume trends - Daily and weekly patterns, spikes, and sustained noise periods
  • Flapping monitors - Alerts that trigger and resolve repeatedly in short windows
  • Noisy services - Which services or monitors generate the most alerts relative to incidents
  • Off-hours pages - Pages that wake people up but rarely require immediate action
  • Auto-resolved alerts - Alerts that resolve before anyone acknowledges them
  • Duplicate alerts - Multiple monitors firing for the same underlying issue

Supported tools

  • PagerDuty - Incident history, urgency distribution, service breakdown
  • Datadog - Monitor alert history, muting patterns, alert type distribution

How to use it

  1. Connect PagerDuty and/or Datadog from the Integrations page
  2. Go to Monitor Scan and run an analysis
  3. The alert noise section appears in your results, showing noise metrics and AI recommendations for reducing alert fatigue

AI recommendations

After analyzing your alert patterns, Cova generates actionable recommendations - such as adjusting thresholds, muting low-value alerts, consolidating duplicate monitors, or re-routing off-hours pages to lower-urgency channels.

Scheduled Scans

What is it?

Scheduled Scans let you set up automated, recurring monitor analyses on a daily, weekly, or monthly cadence. Instead of manually clicking "Run Analysis," Cova runs it for you on a schedule and tracks score changes over time.

How to set it up

  1. Go to Settings in the sidebar
  2. Find the Scheduled Scans section
  3. Choose a frequency: Daily, Weekly, or Monthly
  4. Enable the schedule - Cova will run analyses automatically and store results in your history

What happens on each scan

  • All connected monitoring tools are scanned, same as a manual "Run Analysis"
  • Results appear in your analysis history with a timestamp
  • Score delta tracking shows how your coverage changes between scans

Requirements

  • At least one monitoring tool must be connected
  • Available on Pro plan

CLI

What is it?

The Cova CLI lets you scan repos, connect monitoring tools, run analyses, and check PRs for monitoring gaps - all from the terminal or CI/CD pipelines. Same backend, same analysis engine as the web app.

Key commands

  • cova login - Authenticate with your API token
  • cova status - Check account info and connected tools
  • cova connect <tool> - Connect a monitoring tool (PagerDuty, Datadog, Grafana, Sentry, New Relic, Sumo Logic, Splunk)
  • cova scan <path> - Scan a local repo for monitoring recommendations
  • cova analyze - Run a monitor analysis across connected tools
  • cova pr-check - Check a git diff for monitoring gaps (great for CI gates)

CI/CD integration

Set the COVA_TOKEN environment variable in your CI platform and use cova pr-check --diff origin/main..HEAD as a pipeline step. Returns exit code 0 if clean, 1 if gaps are found.
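
For example, a minimal GitHub Actions job might look like the following. The job name and step layout are assumptions; only the COVA_TOKEN variable and the cova pr-check command come from the CLI reference above:

```yaml
# Hypothetical GitHub Actions workflow; COVA_TOKEN comes from repo secrets.
name: cova-pr-check
on: pull_request
jobs:
  monitoring-gaps:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0          # full history so the diff range resolves
      - run: pip install cova-cli
      - run: cova pr-check --diff origin/main..HEAD
        env:
          COVA_TOKEN: ${{ secrets.COVA_TOKEN }}
```

Because pr-check exits non-zero when gaps are found, the job fails the pipeline on uncovered endpoints.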

Installation

Install via pip: pip install cova-cli. Requires Python 3.9+.

Full CLI reference →

Ask Cova

What is it?

Ask Cova is an AI chat assistant that understands your monitoring configuration and (if scanned) your codebase architecture. It helps with incident triage, monitoring improvements, on-call questions, and architecture-aware troubleshooting.

What you can ask

  • Risk assessment - "What are my biggest monitoring gaps?"
  • Escalation review - "Do my alerts reach the right people?"
  • Coverage gaps - "Which services have no alert coverage?"
  • Quick fixes - "What should I fix first to improve my score?"
  • Architecture questions - "How is authentication handled in my codebase?" (requires repo scan)
  • Tool-specific questions - "How do I set up a Datadog monitor for this endpoint?"

Context awareness

  • Knows your connected tools and their configurations
  • References your actual monitoring data (alert rules, dashboards, services)
  • If you've run a repo scan, it understands your architecture (endpoints, databases, frameworks)
  • Quick prompt categories adapt based on what tools and scans are available

When is it available?

  • Appears as a floating chat button after your first monitor analysis
  • Available on all plans
  • Richer answers when both monitor scan and repo scan have been run
Full usage guide →

Getting Started (First-Time Setup)

New to Cova? Follow these steps to go from zero to your first monitoring analysis in about five minutes. Every new account starts with a 14-day Pro trial with full access - no credit card required.

1
Open Cova and create an account

Navigate to getcova.ai and sign up. You have three options:

  • Email & password - create an account, then verify your email via a code sent to your inbox
  • Google - one-click sign-in with your Google account
  • GitHub - one-click sign-in with your GitHub account (recommended if you plan to use Repo Scan or PR Guard)
Login options
Email address
Password
Create Account
or continue with
Google
GitHub
14-day Pro trial: Every new account starts with full Pro access for 14 days - no credit card required. Your data is stored in the cloud and follows you across devices.
2
Navigate to Integrations

Click Integrations in the left sidebar. You'll see cards for each supported monitoring tool.

Integrations page
PagerDuty
Datadog
Grafana
Sentry
New Relic
Sumo Logic
Splunk
3
Connect your first monitoring tool

Click any tool card to expand it. Enter your API key or token, then click Connect. A green checkmark confirms the connection. See Connecting Tools for detailed credential instructions per tool.

Expanded tool card (example: PagerDuty)
PagerDuty Connected
API Key
••••••••••••••••
Connected
4
Run your first analysis

Click the Run Analysis button in the sidebar. Cova will connect to your tools, pull configuration data, and evaluate your monitoring setup. This takes 15-60 seconds depending on data volume.

Sidebar - Run Analysis button
PagerDuty
Run Analysis  ← Click
5
Explore your results

Your dashboard now shows a Health Score, individual findings with severity ratings, and coverage breakdowns. See Reading Your Dashboard for a detailed walkthrough.

Monitor Scan results
62 health
Needs Work
14 findings across 3 tools
3 critical 7 warnings 4 info
Alert Coverage 78%
Escalation Routing 45%
Notification Routing 60%
You're set up! From here you can connect more tools, scan a repository for architecture-aware recommendations, or ask Cova questions about your monitoring setup.
Guided tour: On your first login, Cova walks you through the key features with an interactive product tour. You can restart the tour at any time from Settings.

Try Demo Mode

Want to see what Cova does before connecting your real tools? Demo mode loads sample data so you can explore every feature risk-free.

1
Click "Try Demo" in the sidebar

The purple Try Demo button is at the top of the sidebar, right below the Cova logo.

Sidebar
cova
Try Demo ← Click this
Monitor Scan
Repo Scan
Incident Autopilot
2
Watch the analysis animation

Cova plays through the same analysis animation you'd see with real tools, then loads sample results from all seven monitoring tools (PagerDuty, Datadog, Grafana, Sentry, New Relic, Sumo Logic, and Splunk).

3
Explore the dashboard

After the animation, the Monitor Scan page loads with sample data. All pages are available with realistic demo content:

Demo mode - banner + sidebar
Demo Mode - Viewing sample data to preview Cova's capabilities Exit Demo
Monitor Scan
Repo Scan
Incident Autopilot
PR Guard

Purple dots next to nav items indicate demo-populated pages. The Incident Autopilot page comes pre-loaded with a sample incident investigation. You can click Run Analysis again to replay the animation.

4
Exit demo mode

Click the Exit Demo button in the purple banner at the top of the content area, or click the Exit Demo button in the sidebar (same location as Try Demo). Your real connections and data are restored exactly as they were.

Exit options
Option 1: Banner
Demo Mode - Viewing sample data... Exit Demo  
Option 2: Sidebar
Exit Demo  
Demo mode is view-only. No data is saved, no API calls are made, and no AI usage is consumed. Settings and Integrations are hidden during demo mode.

Connecting Monitoring Tools

Cova analyzes your monitoring tools by connecting to their APIs. Each tool needs a specific credential. All credentials are encrypted at rest using AES-128 encryption.

Currently Supported Tools

More integrations are on the way. Today, Cova connects to:

Tool | Credential Needed | Where to Find It
PagerDuty | REST API Key (read-only) | PagerDuty → Integrations → API Access Keys → Create New API Key
Datadog | API Key + Application Key + Site/Region (includes write scopes for Deploy Monitor) | Datadog → Organization Settings → API Keys / Application Keys. Select your Datadog site from the dropdown (US1, US3, US5, EU, or AP1). See Scopes & Permissions
Grafana | Service Account Token (Editor for Deploy, Viewer for analysis only) + Instance URL | Grafana → Administration → Service Accounts → Add Token
Sentry | Auth Token + Organization Slug | Sentry → Settings → Developer Settings → Custom Integrations (scopes: project:read, org:read, event:read, alerts:read, alerts:write)
New Relic | User API Key (NRAK-xxx) + Account ID | New Relic → User menu → API Keys → Create a key (type: User; needs alerting write access for Deploy)
Sumo Logic | Access ID + Access Key + Region | Sumo Logic → Administration → Security → Access Keys → Add Access Key
Splunk | Auth Token + Instance URL | Splunk → Settings → Tokens → New Token. Instance URL is your Splunk management endpoint (default port 8089, e.g. https://splunk.example.com:8089)

How to Connect

1
Go to Integrations

Click Integrations in the left sidebar. You'll see cards for each supported monitoring tool.

2
Click a tool card to expand it

The card expands to show input fields for that tool's credentials.

Expanded tool card (example: Datadog)
Datadog
API Key
Paste your API key...
Application Key
Paste your application key...
Datadog Site
US1 (datadoghq.com)
Options: US1, US3, US5, EU, AP1
Connect  ← Click
3
Enter your credentials and click Connect

Cova validates the credentials by making a test API call. If successful, a green checkmark appears and the card shows "Connected." If validation fails, you'll see an error message - double-check your key and try again.

Connected state
Datadog Connected
Credentials survive restarts. Your tool credentials are encrypted and stored in the database - they persist across devices and deploys.

Disconnecting a Tool

To disconnect, expand the tool card and click Disconnect. An inline confirmation will appear - confirm to remove the credentials. You can reconnect at any time by entering new credentials.


Scopes & Permissions

Each tool requires specific API permissions for Cova to scan your configuration. Most tools only need read access. Four tools support Deploy Monitor (one-click push of AI-generated configs), which requires write permissions.

Tool | Required Permissions | What They Access | Why
PagerDuty | Read-only API Key | Services, incidents, escalation policies | Analyze alert routing and on-call coverage
Datadog | monitors_read, monitors_write, dashboards_read, metrics_read, events_read, synthetics_read, synthetics_write | Monitors, dashboards, metrics, events, synthetic tests | Read scopes for scanning; monitors_write and synthetics_write enable Deploy Monitor
Grafana | Service Account with Viewer role (Editor for Deploy) | Datasources, dashboards, alert rules | Viewer for analysis; Editor enables Deploy Monitor to create alert rules
Sentry | project:read, org:read, event:read, alerts:read, alerts:write | Projects, org info, events, alert rules | Read scopes for analysis; alerts:write enables Deploy Monitor
New Relic | User API Key with alerting write access | Alert policies, conditions, notification channels, synthetics | Read access for analysis; alerting write enables Deploy Monitor to create NRQL conditions
Sumo Logic | Access ID + Access Key | Monitors, connections, dashboards | Analyze log-based alerting and dashboard health
Splunk | Auth Token (read access) | Saved searches, alerts, dashboards | Analyze alert coverage and notification routing
Scan-only mode: If you only plan to scan and analyze (no Deploy Monitor), you can use read-only permissions on all tools. Write scopes are only needed for Deploy: monitors_write / synthetics_write on Datadog, alerts:write on Sentry, Editor role on Grafana, and alerting write access on New Relic.

Running a Monitoring Analysis

Prerequisites

You need at least one monitoring tool connected (see Connecting Tools). The more tools you connect, the more comprehensive the analysis.

How to Run

1
Connect at least one tool

Make sure you have at least one monitoring tool connected (see Connecting Tools). Once connected, the Run Analysis button appears in the sidebar.

Sidebar with connected tool
Connected
PagerDuty
2
Click "Run Analysis" in the sidebar

The gradient button appears at the bottom of the sidebar when tools are connected. Clicking it starts the analysis pipeline.

Run Analysis button
Run Analysis  ← Click
3
Watch the progress

You'll see step-by-step progress indicators as Cova works through each connected tool. Typical steps include: connecting to APIs, fetching configurations, analyzing patterns, and generating findings.

Analysis in progress
Authenticating API connections...
Fetching services and escalation policies...
Reading alert rules and schedules...
Mapping on-call coverage...
Identifying coverage gaps...
Scoring monitoring health...
4
Review your results

When complete, the dashboard populates with your Health Score, findings, and coverage breakdowns. If AI is enabled, you'll also see a narrative summary explaining the key takeaways.

How Long Does It Take?

Scenario | Typical Duration
1 tool, rule-based only | 15-30 seconds
2-3 tools, rule-based only | 30-60 seconds
With AI summary enabled | Add ~10-20 seconds for AI processing

If the Analysis Fails

Common failure causes:
  • Invalid or expired credentials - Go to Integrations and reconnect the tool with fresh credentials
  • Rate limiting - Wait a minute and try again; Cova respects API rate limits
  • Network timeout - The backend or the external API may be temporarily unavailable; retry shortly

Each analysis is saved to your History tab (within Monitor Scan), so you can always go back and compare results over time.


Reading Your Dashboard

After running an analysis, your dashboard displays a single scrollable view with everything you need. Here's what each section means.

Health Score Badge

Next to the "Monitor Scan" title, a compact badge shows your overall health score (0-100). Click it to expand a dropdown showing exactly what's affecting your score - each deduction is listed with a severity dot and reason.

How the health score is calculated: Your score starts at 100 and is reduced by findings across all connected tools:

  • Critical finding: -8 points each
  • Warning finding: -4 points each
  • Low finding: -1 point each
  • Coverage dimensions below 50%: up to -3 points each
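
The deduction model above can be sketched in a few lines. Note the per-dimension penalty is "up to -3", so this sketch applies it at its maximum; the actual scoring may differ:

```python
# Sketch of the documented health-score deductions (assumed simplification).
def health_score(critical, warning, low, dims_below_50=0):
    score = (100
             - 8 * critical          # -8 per critical finding
             - 4 * warning           # -4 per warning finding
             - 1 * low               # -1 per low finding
             - 3 * dims_below_50)    # up to -3 per dimension below 50%
    return max(0, min(100, score))   # clamp to the 0-100 badge range
```

For example, 3 critical, 7 warning, and 4 low findings yield 100 - 24 - 28 - 4 = 44 before any dimension penalties.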

The badge is color-coded:

Score Range | Color | Label | Meaning
0-39 | Red | Critical / Needs Attention | Significant gaps in monitoring coverage or configuration
40-69 | Orange | Fair | Several areas need improvement
70-100 | Green | Good / Excellent | Well-configured monitoring setup

AI Summary + Recent Trend

If AI is enabled, an AI-generated narrative summarizes your monitoring posture with per-tool breakdowns. If you've run multiple analyses, a score delta shows whether your health score improved or declined.

Coverage Dimension Cards (with Findings)

The main body of the dashboard groups everything by coverage dimension - sorted worst-first so you focus on the biggest gaps. Each card shows a scored area of your monitoring with a tool logo, label, score bar, and percentage. Findings are embedded directly inside their related coverage cards, so you see issues in context.

Coverage scores are color-coded:

Score Range | Color | Meaning
0-39% | Red | Critical gaps - This area needs immediate attention
40-69% | Orange | Needs improvement - Partial coverage with notable gaps
70-100% | Green | Well covered - Good coverage, minor improvements possible

Click a coverage card to expand it and see:

  • Criteria - What Cova measures for this dimension
  • AI recommendation - Specific guidance for improving this area (when AI is enabled)
  • Issues - Related finding cards with severity, impact, action, status, and Generate Fix button
  • Covered - Green-tagged list of items that have monitoring configured
  • Missing - Red-tagged list of items that lack monitoring coverage
  • Generate Fix - AI-powered button to create monitor configs for the gap. When multiple services are missing, generates one config per service. When quality warnings exist on covered monitors, generates improved configs for each warning

Score penalties: A coverage dimension can show less than 100% even when all items are covered. Each critical finding on that dimension deducts 15 points and each warning deducts 7 points. For example, if all 56 services have escalation policies (100% base) but 10 have configuration issues, the score drops accordingly. The card shows "N warnings reducing score" to explain the gap. Clicking Generate Fix on these produces targeted improvements for each issue.

Which coverage dimensions appear depends on which tools you have connected:

PagerDuty
  • Escalation Routing - Do incidents follow a clear chain of escalation with backup responders?
  • Alert Quality - Are escalation policies properly configured with multiple levels and reasonable timeouts?
  • Alert Noise - Are your alerts actionable, or are they noisy, ignored, auto-dismissed, or causing alert storms?

Datadog
  • Error Rate Monitoring - Are you alerted when services start returning errors above normal baselines?
  • Latency / Performance - Are response times monitored with thresholds that catch degradation before users notice?
  • Business Flow Coverage - Are critical user journeys monitored end-to-end with synthetic tests?
  • Alert Noise - Are your monitors well-behaved, or are they flapping between states, sitting stale, or generating excessive events?

Grafana
  • Alert Coverage - Do your datasources have alert rules configured to catch issues?
  • Notification Routing - Are alerts routed to the right people through contact points and policies?
  • Dashboard Health - Do your dashboards have panels configured and not sitting empty?

Sentry
  • Issue Tracking - Are your projects sending events with active SDKs?
  • Alert Configuration - Do your projects have alert rules to catch spikes and regressions?
  • Release Tracking - Are your projects deploying with Sentry releases for regression detection?

New Relic
  • Alert Coverage - Do your entities (APM, infrastructure, browser, mobile) have alert conditions?
  • Notification Routing - Are alert policies routed to active notification destinations?
  • Synthetic Monitoring - Are your synthetic monitors actively reporting?

Sumo Logic
  • Monitor Coverage - Are your Sumo Logic monitors enabled and actively detecting issues?
  • Notification Routing - Do your monitors have notification actions so alerts reach the right people?
  • Collector Health - Are all collectors alive and ingesting data?

Splunk
  • Alert Coverage - Are your scheduled saved searches configured with alerting conditions?
  • Notification Routing - Do your alert-enabled searches have notification actions configured?
  • Dashboard Health - Are user dashboards present and organized across your Splunk apps?

Each area shows a percentage score. Scores start from a base (how much is covered) and are penalized by related findings - each critical finding deducts 15 points, each warning deducts 7 points. For example, having 10 escalation policies but 5 of them misconfigured will reduce your Alert Quality score below 100% even though all policies exist. Click any area to expand it and see what's covered, what's missing, and which specific issues are affecting the score.
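
The per-dimension formula can be sketched as a base coverage ratio minus finding penalties. The rounding and clamping details are assumptions:

```python
# Illustrative sketch of the dimension-score penalty model described above.
def dimension_score(covered, total, criticals=0, warnings=0):
    base = 100.0 * covered / total if total else 0.0
    # -15 per critical finding, -7 per warning on this dimension
    return max(0, round(base - 15 * criticals - 7 * warnings))
```

Using the example above: 10 escalation policies all present (100% base) but 5 flagged as warnings gives 100 - 35 = 65%.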

What Cova Checks Per Tool

When you run an analysis, Cova inspects your tool configurations and flags issues at three severity levels: Critical (things that will cause missed incidents), Warning (risks that weaken your response), and Low (improvements worth considering).

PagerDuty

Cova fetches your services, escalation policies, schedules, and on-call rotations. It checks for:

  • Services missing an escalation policy (alerts go nowhere)
  • Escalation policies with only one level (no backup if the first responder misses it)
  • Schedules with only 1-2 people (burnout risk, single point of failure)
  • Escalation delays over 30 minutes (too slow for critical incidents)
  • Policy levels targeting pending/uninvited users
  • Services with no integrations connected
  • Nobody on-call for a policy in the next 7 days
  • Noisy services - high incident volume with short durations (auto-resolved noise)
  • Ignored alerts - incidents acknowledged within seconds (auto-dismissed)
  • After-hours pages - high-urgency incidents firing disproportionately outside business hours
  • Alert storms - multiple incidents clustering within minutes on the same service

Datadog

Cova fetches your monitors, SLOs, synthetics, and downtimes. It checks for:

  • Monitors with no notification targets (alerts fire but nobody knows)
  • Muted monitors that might be hiding real problems
  • Monitors stuck in Alert or No Data state
  • No monitors routed to an incident management tool (PagerDuty, OpsGenie, etc.)
  • All notifications going through a single channel
  • Missing SLOs (no formal reliability targets)
  • Paused synthetic tests (user journeys going unmonitored)
  • Excessive active downtimes creating monitoring blind spots
  • Monitors missing tags, recovery thresholds, or re-alert settings
  • Flapping monitors - rapid state transitions (OK/Alert/OK) indicating unstable thresholds
  • Stale monitors - active monitors with zero events in the last 14 days
  • High-volume monitors - monitors generating excessive events, drowning out real signals

Grafana

Cova fetches your datasources, dashboards, alert rules, contact points, notification policies, and mute timings. It checks for:

  • Datasources with no alert rules (data flowing in but nobody watching it)
  • No alert rules or contact points configured at all
  • Paused alert rules that should be active
  • All contact points using the same notification type (no redundancy)
  • Contact points not wired into notification policies
  • Empty dashboards with no panels
  • Dashboards with panels but no alert thresholds
  • Excessive or unused mute timings
  • Alert rules missing summary or description annotations
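The "contact points not wired into notification policies" check can be sketched by walking the notification policy tree and collecting every referenced receiver. The data shape below is a simplification of Grafana's policy tree, for illustration only:

```python
def unwired_contact_points(contact_points, policies):
    """Find Grafana contact points that no notification policy routes to.

    `contact_points` is a list of names; `policies` is a list of dicts with a
    "receiver" field and optional nested "routes" (Grafana policies form a tree).
    """
    used = set()

    def walk(routes):
        for r in routes:
            used.add(r.get("receiver"))
            walk(r.get("routes", []))  # recurse into nested routes

    for p in policies:
        used.add(p.get("receiver"))
        walk(p.get("routes", []))
    return [cp for cp in contact_points if cp not in used]
```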

Sentry

Cova fetches your projects, unresolved issues, alert rules, and releases. It checks for:

  • Projects with no recent events (broken or missing SDK)
  • Projects with no alert rules configured
  • Alert rules with no actions (fire silently)
  • Projects with excessive unresolved issues (issue fatigue)
  • Old critical issues that have never been resolved
  • Projects with no recent releases (no regression detection)
  • Projects using default alert rules only

New Relic

Cova queries NerdGraph to analyze entities, alert policies, NRQL conditions, notification destinations, and synthetic monitors. It checks for:

  • Entities with no alert conditions (alertSeverity = NOT_CONFIGURED)
  • Alert policies with no notification destinations (fire silently)
  • Disabled NRQL alert conditions
  • Entities stuck in CRITICAL alert state (stale alerts)
  • Policies with only a single notification destination (no redundancy)
  • Synthetic monitors not actively reporting
  • No synthetic monitors configured at all
  • Entities with no tags (poor organization)
  • Large percentage of disabled conditions

Splunk

Cova fetches your saved searches, fired alerts, dashboards, indexes, and alert actions. It checks for:

  • No alert rules configured despite having saved searches
  • Disabled scheduled searches leaving monitoring gaps
  • Alerts missing threshold configuration (comparator or threshold)
  • Many unscheduled saved searches that cannot trigger automatic alerts
  • Alerts with no notification actions (fire silently, nobody notified)
  • Majority of alerts using a single notification channel (no redundancy)
  • All alerts using the same action type
  • No user dashboards outside system apps
  • All dashboards in the default search app (poor organization)
  • Very few user indexes configured (data in default indexes)

When two or more tools are connected, filter buttons appear so you can view coverage from a specific tool.

Per-Tool Summary

Below the hero row, each connected tool gets a collapsible summary section. Click to expand and see:

  • Tool logo and name - Which system the findings came from
  • Severity counts - Badges showing critical, warning, and info counts at a glance
  • Top Risk - The most critical finding highlighted in a red card (if any critical issues exist)
  • Warnings summary - Count of warnings to address
  • All checks passed - Green card shown when no issues are found for a tool

This gives you a quick per-tool overview before diving into the full coverage dimension cards below.

Other Findings

Any findings that don't belong to a specific coverage dimension appear in an "Other Findings" section below the coverage cards.


Managing Findings

Findings are the actionable output of each analysis. Cova lets you track their status, filter them, and export them.

Status Workflow

Open

Default status. The issue has been identified but not addressed yet.

In Progress

You're actively working on fixing this issue.

Resolved

The issue has been fixed. It will be verified in the next analysis run.

Dismissed

You've reviewed this and decided it's not applicable or not a priority.

Changing Status

There are two ways to update a finding's status:

Quick Toggle (Checkbox)

Click the checkbox next to any finding to quickly mark it as resolved. Click again to revert to open.

Expanded Status Buttons

Click a finding to expand it, then use the status buttons (In Progress, Resolved, Dismissed) for more granular control.

Filtering & Views

  • Filter by severity - Click the severity pills (Critical, Warning, Info) in the filter bar to show only findings of that level
  • Filter by tool - Click a tool pill to show findings from that tool only
  • Filters apply everywhere - Active filters affect findings inside all coverage dimension cards simultaneously

Exporting a Report

1
Click the "Export Report" button above the findings list

Cova generates a branded report containing your Health Score, AI summary, coverage breakdowns, and all findings with severity levels, impact descriptions, and recommended actions.

Export button location
Critical (3) Warning (7) Info (4)
Export Report  ← Click
2
Preview, print, or download

The report opens in a preview modal. Use the Print button to send it to your printer (or save as PDF via your browser's print dialog), or click Download to save the HTML file. Share it with your team or attach it to a Jira ticket for tracking remediation work.

Report preview modal
Monitoring Report
Print
Download
Close
Report preview renders here

Understanding Alert Noise

Cova doesn't just check your monitoring configuration - it also analyzes 14 days of real alert history to detect behavioral patterns that erode trust in your alerts. These noise findings appear alongside config findings in your Monitor Scan results.

What it is

When you run a Monitor Scan with PagerDuty or Datadog connected, Cova pulls recent incident and event history from those tools and runs pattern detection across the data. It identifies alerts that fire too often, get ignored, flip-flop between states, or sit completely silent - all signs that your alert configuration needs tuning.

Graceful degradation: If the alert history API is unavailable or rate-limited, the scan still completes with config-only findings. You won't lose your config analysis just because the history endpoint timed out.

Noise Types

Each noise type maps to a specific behavioral pattern. Here's what they mean and what to do about them:

Noisy critical warning

Tool: PagerDuty

Trigger: Service has more than 5 incidents per week AND the average incident duration is under 10 minutes. Over 15 per week is critical.

What to do: These are alerts that fire constantly and resolve almost immediately - classic auto-resolve noise. Raise the alert threshold, add a sustained-duration condition (e.g., "only fire if the condition persists for 5 minutes"), or suppress transient spikes with composite conditions.
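The trigger above reduces to two thresholds, which can be expressed directly (a sketch of the stated rule, not Cova's internal code):

```python
def classify_noisy(incidents_per_week, avg_duration_minutes):
    """Apply the noisy-service rule: more than 5 incidents/week with an
    average duration under 10 minutes is noisy; over 15/week is critical.
    Returns "critical", "warning", or None."""
    if incidents_per_week > 5 and avg_duration_minutes < 10:
        return "critical" if incidents_per_week > 15 else "warning"
    return None
```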

Ignored warning

Tool: PagerDuty

Trigger: Over 80% of a service's incidents are acknowledged within 30 seconds.

What to do: When responders ack alerts in seconds, they're dismissing them reflexively - not investigating. Lower the priority, convert to a dashboard metric, or remove the alert entirely if it's not driving action.

After-Hours warning

Tool: PagerDuty

Trigger: Over 50% of a service's high-urgency incidents fire between 10pm and 7am.

What to do: Review whether these truly require immediate paging. Consider downgrading to low-urgency overnight, routing to a follow-the-sun schedule, or batching non-critical overnight pages into a morning digest.

Storm warning

Tool: PagerDuty

Trigger: 3 or more incidents fire within a 5-minute window, and this pattern occurs at least twice in 14 days.

What to do: Alert storms overwhelm responders. Use PagerDuty's Event Orchestration or alert grouping to consolidate related incidents into a single actionable alert. Consider dependent/composite alerts to suppress downstream cascades.
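Storm detection reduces to finding clusters of incident start times. A rough sketch of the rule above, with the window anchored to the first incident in each cluster (a simplification; Cova's exact windowing may differ):

```python
def detect_storms(timestamps, window_s=300, min_cluster=3, min_occurrences=2):
    """True if >= min_occurrences clusters of >= min_cluster incidents each
    occur within `window_s` seconds. `timestamps` are incident start times
    in epoch seconds."""
    ts = sorted(timestamps)
    clusters = 0
    i = 0
    while i < len(ts):
        j = i
        # Extend the cluster while incidents fall within the window
        # anchored at the first incident.
        while j + 1 < len(ts) and ts[j + 1] - ts[i] <= window_s:
            j += 1
        if j - i + 1 >= min_cluster:
            clusters += 1
        i = j + 1  # skip past this cluster
    return clusters >= min_occurrences
```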

Flapping warning

Tool: Datadog

Trigger: Monitor has 3 or more state transitions within a 30-minute window, and this occurs at least twice in 14 days.

What to do: Flapping monitors usually have thresholds set too close to normal operating values. Widen the gap between alert and recovery thresholds, use min() or avg() over a longer window instead of last(), or add a recovery threshold with hysteresis.
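The flapping rule can be sketched by bucketing a monitor's state history into 30-minute windows and counting transitions per window. The bucketing shape is an assumption for illustration; Cova's exact windowing may differ:

```python
def count_transitions(states):
    """Count state changes (e.g. OK -> Alert) in a sequence of monitor states."""
    return sum(1 for a, b in zip(states, states[1:]) if a != b)

def is_flapping(windowed_states, min_transitions=3, min_windows=2):
    """Apply the flapping rule: 3+ transitions inside a 30-minute window,
    occurring in at least 2 windows over the lookback period.

    `windowed_states` is one list of states per 30-minute bucket.
    """
    noisy = sum(1 for w in windowed_states if count_transitions(w) >= min_transitions)
    return noisy >= min_windows
```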

Stale low

Tool: Datadog

Trigger: Monitor is active and in OK state, but has zero events in the last 14 days. Capped at 10 findings to avoid noise.

What to do: A monitor that never fires might be covering infrastructure that no longer exists, or its query may no longer match any metrics. Verify the monitor's query still returns data. If the underlying service was decommissioned, delete the monitor to reduce clutter.

High Volume critical warning

Tool: Datadog

Trigger: Monitor generates more than 20 events per week (warning). Over 50 events per week is critical.

What to do: High-volume monitors drown out real signals. If the monitor is legitimately noisy, tighten the alert condition or increase the evaluation window. If it's an aggregate monitor, consider splitting it into per-service monitors so the noise is isolated.

How to Read the Alert Noise Card

Alert noise appears as an "Alert Noise" coverage dimension card in your scan results, alongside your other coverage dimensions like Escalation Routing or Error Rate Monitoring. The card works the same way:

  • Score - Starts at 100 and is penalized by each noise finding (critical -15, warning -7, low -2)
  • Findings - Click to expand and see individual noise patterns with severity, description, and recommended actions
  • Generate Fix - Available on noise findings to generate improved monitor configs that address the specific noise pattern
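The scoring rule above can be expressed directly. The floor at 0 is an assumption; the doc only specifies the per-severity penalties:

```python
# Penalties per noise finding, as documented: critical -15, warning -7, low -2.
PENALTY = {"critical": 15, "warning": 7, "low": 2}

def noise_score(findings):
    """Score starts at 100 and each finding subtracts its severity penalty
    (floored at 0 here, which is an assumption)."""
    return max(0, 100 - sum(PENALTY[sev] for sev in findings))
```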

Currently supported: Alert noise analysis is available for PagerDuty and Datadog. Support for Grafana, New Relic, and Sentry is planned for a future release.


Scanning a Repository

Repo scanning lets Cova understand your actual codebase architecture and make context-aware monitoring recommendations - suggesting what should be monitored based on your code, not just what is configured.

Three Ways to Scan

Upload  Upload a ZIP or TAR.GZ

Drag and drop (or click to browse) a compressed archive of your repository. Supported formats: .zip, .tar.gz, .tgz. Maximum file size depends on server configuration.

Best for: quick one-off scans, repos not hosted on GitHub/GitLab, or when you want to scan a specific snapshot.

GitHub  Scan from GitHub

Connect your GitHub account first (see GitHub & GitLab), then select a repository and branch from the dropdown.

Best for: ongoing monitoring, teams using GitHub, and pairing with PR Guard for automatic PR scanning.

GitLab  Scan from GitLab

Connect your GitLab account first (see GitHub & GitLab), then select a project and branch.

Best for: teams using GitLab for source control.

Running a Scan

1
Navigate to the Repo Scan page

Click Repo Scan in the sidebar.

2
Choose your scan method

Select the Upload, GitHub, or GitLab tab at the top of the page.

Source tabs
Upload
GitHub  ← Selected
GitLab
Bitbucket Soon

For Upload, drag and drop a .zip or .tar.gz file. For GitHub or GitLab, you must connect your account first.

3
Select a repository, branch, and scan

Once connected, choose a repository from the dropdown, enter the branch name (defaults to main), and click Scan.

GitHub connected state
GitHub Connected
4 repositories accessible
Repository
your-org/your-repo  ← Select
Branch
main
Scan  ← Click
4
Wait for the scan to complete

Cova downloads the code, identifies the tech stack, maps endpoints and services, then (if AI is enabled) generates architecture-aware recommendations. A progress indicator shows each stage as it completes.

5
Review results

Once the scan finishes, results appear below with:

Codebase Overview
47
Files
12
Endpoints
3
Services
2
Databases
8
Dependencies
5
Recommendations
Python (12) FastAPI PostgreSQL

Below the overview you'll find detailed endpoints, recommendations with severity and rationale, and gap badges highlighting areas where monitoring is missing.

Automatic PR Scanning

If you've connected GitHub, enable PR Guard to automatically scan every pull request for new endpoints, databases, and services. Cova posts a GitHub comment flagging monitoring gaps directly on the PR.


Using Ask Cova (AI Chat)

Ask Cova is an AI chat assistant that understands both your monitoring configuration and (if scanned) your codebase architecture. It can help with incident triage, monitoring improvements, and on-call questions.

When Is It Available?

Ask Cova becomes available after your first analysis. The AI needs monitoring data to provide useful answers. If you've also scanned a repo, it can answer architecture-aware questions too.

Quick Prompts

When you open Ask Cova, you'll see prompt categories tailored to your setup. Four base categories are always available:

Risk  Risk Assessment

Questions about your biggest monitoring gaps, what's most likely to cause an undetected outage, and where coverage is weakest.

Escalation  Escalation Policies

Questions about whether alerts reach the right people, escalation timing, and policy gaps.

On-Call  On-Call Health

Questions about on-call rotation balance, burnout risk, and schedule coverage.

Improvement  Improvement Plans

Questions about prioritizing fixes, quick wins, and building a remediation roadmap.

Contextual Prompts

Additional prompt categories appear automatically based on your connected tools and history:

Category | Appears When | Example Prompts
Datadog Insights | Datadog connected | Monitor health, SLO gaps, synthetic test coverage
Grafana Insights | Grafana connected | Datasource alert gaps, dashboard cleanup, notification routing
Sentry Insights | Sentry connected | Error tracking gaps, alert configuration, release health
New Relic Insights | New Relic connected | Entity alert gaps, notification routing, synthetic coverage
Sumo Logic Insights | Sumo Logic connected | Monitor coverage, notification routing, collector health
Splunk Insights | Splunk connected | Alert coverage, saved search health, notification routing
Cross-Tool Analysis | 2+ tools connected | Cross-tool coverage gaps, end-to-end incident flow
Incident Triage | Repo scanned | User errors, latency spikes, stuck jobs, connection failures
Trends & History | 2+ analyses run | "Has my coverage improved?", "What changed since last analysis?"

Scoping to a Specific Tool

Use the scope pills above the input box to focus your question on a specific tool. When scoped, your question is automatically prefixed with the tool name so the AI focuses its answer on that tool's data. Click "All" to remove the scope.

Trend Questions

After running two or more analyses, you can ask Cova about trends. It has access to your last 5 analysis snapshots and will cite specific numbers in its answers - for example: "Your health score improved from 45 to 62 (+17 points). Critical findings dropped from 8 to 3."

Copying Responses

Each AI response has a copy button. Use it to paste answers into Slack, Jira tickets, or runbooks.

Tips for Getting Useful Answers

  • Be specific - "What's wrong with my PagerDuty escalation policy for the payments team?" works better than "How's my monitoring?"
  • Describe symptoms - For triage, describe what users are experiencing rather than what you think the technical cause is
  • Ask follow-ups - The chat maintains context, so you can drill down: "Tell me more about that third recommendation"
  • Reference the scan - If you've scanned a repo, ask questions that bridge monitoring and code: "Which API endpoints in my repo don't have corresponding alerts?"
  • Ask about trends - After multiple analyses, ask "Has my coverage improved?" or "What changed since last time?"

Incident Autopilot

Incident Autopilot is an AI-powered incident commander that investigates production problems across all your connected tools. Describe what's happening and Cova generates a structured investigation playbook - what to check first, which dashboards to open, which logs to grep, who's on-call, and the blast radius.

Setup

1
Connect at least one monitoring tool

Go to Integrations and connect one or more tools (PagerDuty, Datadog, Grafana, Sentry, New Relic, Sumo Logic, or Splunk). The Autopilot pulls live data from all connected tools during an investigation - the more tools connected, the more comprehensive the results.

Integrations page
PagerDuty PagerDuty
Datadog Datadog
Grafana Grafana
Sentry Sentry
2
Scan a repository (recommended)

Go to Repo Scan and scan the repository you want to investigate against. This gives the Autopilot your codebase architecture (services, endpoints, databases) so it can map symptoms to specific components and assess blast radius. Without a scan, investigations are based on tool data alone.

3
Navigate to Incident Autopilot

Click Incident Autopilot in the sidebar. You'll see a search card at the top and a repository selection card below.

Incident Autopilot page layout
e.g. "checkout is slow" or "users can't log in"
Investigate
2 monitoring tools connected for live incident data
Select and scan the repository to investigate against
Cova maps your incident to services, endpoints, and databases found in your codebase.
4
Select a repository to investigate against

Expand the Repository Selection card and choose the GitHub or GitLab tab. Select a repo from the list, enter a branch, and click Scan Repository (if not already scanned).

Repository selection card (expanded)
GitHub
GitLab
Connection
Connected via Cova GitHub App
Repository
your-org/your-repo ← Selected
your-org/another-repo
Branch
main
Scan Repository  ← Click

Once scanned, the card collapses and the search card shows: "Investigating against your-org/your-repo via GitHub - 12 endpoints, 3 databases, 5 services"

5
Describe the incident and investigate

Type a description of the production problem in the search bar and click Investigate. Cova cross-references your symptom against live tool data and codebase architecture to generate a structured playbook.

Search card with input
Checkout is slow and users are timing out
Investigate  ← Click
2 monitoring tools connected for live incident data
Investigating against your-org/your-repo via GitHub - 12 endpoints, 3 databases, 5 services

What to Type

Describe what users are experiencing, not what you think the technical cause is. Good examples:

  • "Users can't transfer money"
  • "Checkout is slow - 30% of transactions timing out"
  • "Users can't log in since 2pm"
  • "API returning 500 errors on the /payments endpoint"

What You Get Back

The investigation modal shows a structured playbook with 7 sections:

Section | Description
Summary | 2-3 sentence assessment of the situation based on live data and architecture
Severity | Critical, High, Medium, or Low - based on user impact and blast radius
Blast Radius | Affected services, endpoints, databases, and a human-readable impact statement
What to Check First | 3-7 prioritized investigation steps, each referencing a specific tool
Related Alerts | Active alerts from connected tools that relate to the reported symptom
Who's On-Call | Current on-call responders from PagerDuty, prioritized by relevance to affected services
Logs to Check | Specific grep patterns or log sources to investigate

Service-Relevant Filtering

The Autopilot automatically filters results to prioritize items related to the affected services:

  • Related Alerts - Only alerts that match the blast radius services are shown at full opacity. Unrelated active alerts are dimmed but still visible.
  • On-Call - Responders whose escalation policy or team matches the affected services appear first. Others are dimmed as fallback context.
  • Service matching - Uses PagerDuty service names, Datadog service tags, Sentry project slugs, New Relic entity names, and Grafana alert labels to correlate alerts with impacted services.

Download as PDF

Click the Download PDF button in the investigation modal to generate a branded document. The browser's print dialog opens, where you can save as PDF or print directly. The PDF includes all 7 sections formatted for sharing with your team.

Data Sources

The investigation pulls from all connected tools simultaneously:

Tool | Data Used
PagerDuty | Open incidents, on-call schedules, escalation policies
Datadog | Alerting monitors with service tags
Grafana | Firing alert rules with labels
Sentry | Unresolved issues by project
New Relic | Entities with critical/warning alert severity
Sumo Logic | Active monitors
Splunk | Fired alerts from saved searches

The more tools you have connected, the more comprehensive the investigation. With no connected tools, the Autopilot card is hidden.

PR Observability Guard

PR Observability Guard automatically scans every pull request (GitHub) and merge request (GitLab) for new endpoints, databases, services, and message queues, then posts a comment flagging monitoring gaps - with risk scoring and suggested monitor configs. No AI activation required - it works using pure pattern matching.

What It Detects

Category | Examples
Endpoints | FastAPI/Flask routes, Express routes, NestJS decorators, Spring Boot mappings, Go router handlers
Databases | PostgreSQL, MySQL, MongoDB, Redis, Elasticsearch, DynamoDB + ORMs (SQLAlchemy, Prisma, TypeORM, etc.)
Message Queues | RabbitMQ, Kafka, Celery, Bull, AWS SQS, Google Pub/Sub, NATS
Services | Docker Compose services, Dockerfiles, Kubernetes deployments

Risk Scoring

Each detected endpoint is assigned a risk level based on what it does:

Risk | Criteria | Example
Critical | Payment, auth, or financial endpoints with state-changing methods (POST/PUT/DELETE) | POST /api/payments/charge
High | Critical-path keywords with GET, or any DELETE endpoint | DELETE /api/users/{id}
Medium | State-changing methods on non-critical paths | POST /api/comments
Low | Read-only endpoints | GET /api/status

Each risk assessment includes a business-impact reason (e.g. "Unmonitored payment endpoints can cause silent revenue loss") so engineers understand why monitoring matters for that specific endpoint.
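The risk table above maps cleanly to a small classifier. The critical-path keyword list below is illustrative, not Cova's exact list:

```python
# Hypothetical keyword list for "payment, auth, or financial" paths.
CRITICAL_KEYWORDS = ("payment", "charge", "auth", "login", "transfer", "billing")
STATE_CHANGING = {"POST", "PUT", "DELETE"}

def risk_level(method, path):
    """Mirror the risk table: critical-path + state-changing -> critical;
    critical-path GET or any DELETE -> high; other state-changing -> medium;
    read-only -> low."""
    method = method.upper()
    critical_path = any(k in path.lower() for k in CRITICAL_KEYWORDS)
    if critical_path and method in STATE_CHANGING:
        return "critical"
    if critical_path or method == "DELETE":
        return "high"
    if method in STATE_CHANGING:
        return "medium"
    return "low"
```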

Setup - GitHub App Permissions

Prerequisite: You must have the Cova GitHub App installed and connected via the Repo Scan page before enabling PR Guard. The GitHub App needs specific permissions to read PR diffs and post comments.
1
Go to your GitHub App settings

Navigate to GitHub.com → Settings → Developer settings → GitHub Apps → cova-monitoring. Click Permissions & events in the sidebar.

2
Set required repository permissions

Under Repository permissions, ensure these are set:

Permission | Access Level | Why
Pull requests | Read & write | Read PR diffs, post review comments
Contents | Read-only | Fetch changed file contents
Metadata | Read-only | Basic repo info (required by all GitHub Apps)
3
Subscribe to webhook events

Under Subscribe to events, check:

  • Pull request - triggers PR Guard on open, update, reopen
  • Push - (optional) for push-triggered repo scans
4
Save and approve on existing installations

Click Save changes. If you already have the app installed on repositories, GitHub will notify those installations about the new permissions. The org/account owner needs to approve the updated permissions in their Settings → Installations page.

Webhook URL: Your GitHub App's webhook URL should already be set to https://getcova.ai/github/webhook. If not, update it in the GitHub App's General settings.

Enable PR Guard in Cova

1
Navigate to PR Guard

Click PR Guard in the sidebar. You'll see the status card showing "Inactive" and a toggle to enable it.

PR Guard - Inactive state
Inactive
Toggle on to start scanning PRs and MRs
2
Toggle PR Guard on

Click the toggle switch. The status card changes to "Active" with a green bar. This setting persists across server restarts.

PR Guard - Active state
Active
Scanning PRs and MRs on incoming webhooks
4
Scanned
3
Comments Posted
9
Gaps Detected
3
Configure repository filter (optional)

By default, PR Guard scans all repositories the GitHub App has access to. To limit scanning to specific repos, select Select repositories and check the ones you want monitored.

Repository filter options
All repositories
Select repositories

What the PR Comment Looks Like

When a PR introduces new infrastructure, Cova's GitHub bot posts a comment with:

  • Summary line - e.g. "Found 4 endpoints - 2 need monitoring, 2 already covered."
  • Endpoints table - Risk level, HTTP method, path, file, and coverage status (Monitored / Not monitored)
  • "Suggested monitor configs" - Expandable section with ready-to-use JSON configs (Datadog, Grafana, or generic) for uncovered endpoints only
  • Database connections - New databases/ORMs found with recommended monitoring (connection pool, query latency)
  • Message queues - New queues found with recommended monitoring (queue depth, consumer lag, dead letters)
No spam: If a PR doesn't introduce any new endpoints, databases, services, or queues, no comment is posted. If all detected endpoints are already monitored, no comment is posted either. New commits to the same PR update the existing comment instead of posting a new one.

Smart Coverage Checks

PR Guard cross-references each detected endpoint against your latest Monitor Scan results. The "Status" column adapts based on what data is available:

Scenario | Status Shown
Monitor Scan has been run and endpoint matches an existing monitor | Monitored (service-name)
Monitor Scan has been run but no matching monitor found | Not monitored
Tools connected but no Monitor Scan run yet | Add {tool} monitor
No tools connected | No tools connected

For the most accurate coverage checks, connect your monitoring tools and run a Monitor Scan before opening PRs. PR Guard will then only flag endpoints that genuinely lack monitoring.

Automatic Filtering

PR Guard automatically filters out low-value endpoints that rarely need dedicated monitoring. These are excluded from the PR comment entirely:

  • Health checks (/health, /healthz, /ready, /ping)
  • Documentation routes (/docs, /redoc, /swagger, /openapi)
  • Metrics endpoints (/metrics, /prometheus)
  • Static file paths (/static, /favicon)
  • Debug routes (/debug, /__debug__)
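If you're curious which of your routes would be excluded, the list above can be approximated with a prefix match (the exact matching rules may differ):

```python
# Paths PR Guard treats as low-value, per the list above.
LOW_VALUE_PREFIXES = (
    "/health", "/healthz", "/ready", "/ping",   # health checks
    "/docs", "/redoc", "/swagger", "/openapi",  # documentation
    "/metrics", "/prometheus",                  # metrics
    "/static", "/favicon",                      # static files
    "/debug", "/__debug__",                     # debug routes
)

def is_low_value(path):
    """True for endpoints that would be filtered out (prefix match is an
    assumption about the matching rule)."""
    return path.lower().startswith(LOW_VALUE_PREFIXES)

def filter_endpoints(paths):
    return [p for p in paths if not is_low_value(p)]
```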

Re-scanning a PR

If you connect monitoring tools or run a new Monitor Scan after the initial PR comment was posted, you can re-scan from the Cova dashboard:

  1. Go to PR Guard in the sidebar
  2. Find the PR in the Recent Activity list
  3. Click Re-scan

The existing PR comment on GitHub/GitLab is updated in place with the new coverage data. This is useful when you want the comment to reflect monitors you've added since the PR was first scanned.

Troubleshooting

Issue | Cause | Fix
No comment on PR | PR Guard toggle is off | Enable in Cova → PR Guard
No comment on PR | PR only changed non-code files | Expected behavior - no gaps to report
No comment on PR | All detected endpoints are already monitored | Expected behavior - PR Guard stays silent when everything is covered
No comment on PR | Only health check / docs / metrics endpoints detected | Expected behavior - low-value endpoints are filtered out
403 error in activity log | Missing pull_requests:write | Update GitHub App permissions (step 2 above)
Webhook not received | Missing Pull request event subscription | Check GitHub App → Permissions & events (step 3 above)
Repo not being scanned | Repo filter is set to specific repos | Check PR Guard → Repository Filter settings
Duplicate endpoints in table | Unlikely - deduplication is built in | Check if endpoints have different HTTP methods
No comment on GitLab MR | Missing project webhook | Add webhook in GitLab project → Settings → Webhooks (see GitLab Setup above)
GitLab 401 error | OAuth token expired | Reconnect GitLab in Repo Scan (Cova auto-refreshes tokens, but they can expire if unused)

GitLab Setup

For GitLab merge requests, you need to add a project webhook manually (GitLab doesn't have an "App" model like GitHub).

1
Connect GitLab in Cova

Go to Repo Scan → GitLab → Connect and authorize Cova via OAuth. This gives Cova access to read your merge request diffs and post comments.

2
Add a project webhook in GitLab

Go to your GitLab project → Settings → Webhooks → Add new webhook.

Field | Value
URL | https://getcova.ai/gitlab/webhook
Secret token | Must match the GITLAB_WEBHOOK_SECRET env var on the server
Trigger | Check Merge request events
SSL verification | Enable (recommended)
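On the server side, GitLab sends the configured secret verbatim in the X-Gitlab-Token header on every delivery, and the receiver compares it against its own copy. A minimal verification sketch (the handler shape is illustrative; GITLAB_WEBHOOK_SECRET matches the env var named above):

```python
import hmac
import os

def verify_gitlab_webhook(headers):
    """Check the X-Gitlab-Token header against the server's configured secret."""
    expected = os.environ.get("GITLAB_WEBHOOK_SECRET", "")
    received = headers.get("X-Gitlab-Token", "")
    # Constant-time comparison avoids leaking the secret via timing.
    return bool(expected) and hmac.compare_digest(received, expected)
```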
3
Enable PR Guard in Cova

Toggle PR Guard on from the PR Guard page. The same toggle controls both GitHub and GitLab scanning.

Per-project webhooks: Unlike GitHub (where the App covers all installed repos), GitLab webhooks must be added to each project individually. Group-level webhooks are available on GitLab Premium.

Connecting GitHub & GitLab

Connecting a source control platform lets Cova scan your repositories, power Incident Autopilot investigations, and enable PR Guard for automatic PR scanning. You can connect from Repo Scan, Incident Autopilot, or PR Guard - the steps are the same.

Connecting GitHub

GitHub uses a GitHub App installation flow. This grants Cova read access to your repositories and enables webhook-based PR scanning.

1
Navigate to any page with GitHub integration

Open Repo Scan, Incident Autopilot, or PR Guard from the sidebar. Select the GitHub tab. You'll see a "Not connected" state with a connect button.

What you'll see
GitHub
GitLab
Connection

Install the Cova GitHub App to connect your repositories.

Install GitHub App  ← Click this
2
Authorize on GitHub (popup window)

A popup opens on github.com with the GitHub App installation page. You'll go through these steps on GitHub's side:

  • Select an account or organization to install the app on. If you belong to multiple orgs, pick the one with the repos you want to scan.
  • Choose repository access - select "All repositories" or "Only select repositories". You can limit access to specific repos for security.
  • Click Install & Authorize to complete the installation.

Permissions: Cova requests read-only access to repository contents (for scanning) and write access to pull requests (for PR Guard comments). No code is ever modified. You can change repository access later in GitHub Settings → Applications → Cova Monitoring → Configure.

3
Popup closes - GitHub is connected

The popup closes automatically and Cova refreshes to show your connected status. A green checkmark appears next to GitHub and your repository list is loaded.

Connected state
GitHub
GitLab
Connection
Connected via Cova GitHub App
Repository
your-org/your-repo
your-org/another-repo private
4
Select a repository and scan

Click a repository from the list to select it (radio-style selection). Enter a branch name if different from main, then click Scan Repository. Cova downloads and analyzes the codebase to map your architecture.

Connecting GitLab

GitLab uses an OAuth2 authorization flow. This grants Cova read access to your projects and the ability to post merge request comments for PR Guard.

1
Navigate to any page with GitLab integration

Open Repo Scan, Incident Autopilot, or PR Guard from the sidebar. Select the GitLab tab. You'll see a connect prompt.

What you'll see
GitHub
GitLab
Connection

Connect your GitLab account to scan repositories.

Connect GitLab  ← Click this
2
Authorize on GitLab (popup window)

A popup opens on gitlab.com with the OAuth consent screen. GitLab asks you to authorize the Cova application. The requested scopes are:

  • api - Read repositories and post merge request comments
  • read_repository - Download repository archives for scanning

Click Authorize to grant access. Unlike GitHub, the OAuth grant covers every project your account can access - there is no per-repo selection during OAuth.

3
Popup closes - GitLab is connected

The popup closes automatically and Cova refreshes. A green checkmark appears next to GitLab, your username is shown, and your project list is loaded.

Connected state
Connected as @your-username
Repository
your-group/your-project
your-group/another-project private
4
Select a project and scan

Click a project from the list to select it. Enter a branch name if different from main, then click Scan Repository.

Where You Can Connect From

GitHub and GitLab connections are shared across all features. Connect once, and it's available everywhere:

Feature | What it uses the connection for
Repo Scan | Scan repositories to map services, endpoints, and databases for monitoring recommendations
Incident Autopilot | Investigate incidents against your codebase architecture
PR Guard | Automatically scan pull requests/merge requests for new infrastructure that needs monitoring

What Happens After Connection

  • Your repository list loads automatically - visible on Repo Scan, Incident Autopilot, and PR Guard
  • For GitHub, PR Guard webhooks are configured automatically via the GitHub App
  • For GitLab, PR Guard webhooks need to be configured manually per project
  • The connection persists across browser sessions via localStorage

What Happens After a Deploy or Restart

Auto-restore: SCM connections are stored in your browser's localStorage. When the backend restarts (e.g. after a deploy), Cova automatically restores your connections using the /scm/restore endpoint. You don't need to reconnect manually. If you briefly see a "not connected" error after a restart, simply retry - the restore runs automatically on your next action.

Disconnecting

To disconnect GitHub or GitLab, go to the Repo Scan page:

1
Click Disconnect on the platform card

On the Repo Scan page, find the connected platform card and click Disconnect. An inline confirmation appears: "Are you sure?"

2
Confirm the disconnection

Click Yes in the red confirmation prompt. The connection is removed from both your browser and the backend. You can reconnect at any time by going through the flow again.

Setting Up Webhooks for PR Guard

Webhooks allow Cova to automatically scan pull requests and merge requests when they're opened or updated. GitHub and GitLab handle webhooks differently.

GitHub Webhooks (Automatic)

No manual setup needed. When you install the Cova GitHub App (steps above), webhooks are automatically configured. The GitHub App handles PR event delivery, so PR Guard works immediately after connecting.

To verify webhooks are working:

1
Open a pull request on any connected repository

Create a new PR (or push a commit to an existing one) that includes code changes.

2
Check PR Guard activity in Cova

Go to PR Guard in the sidebar. The Recent Activity section at the bottom should show the scan result. The webhook status under the GitHub tab will show "Last received: just now".

Webhook status (GitHub tab)
Automatically configured via the GitHub App
Last received: just now

GitLab Webhooks (Manual Setup)

GitLab doesn't have an "App" model like GitHub, so webhooks must be added to each project manually. This tells GitLab to notify Cova when merge requests are opened or updated.

Prerequisite: You must have connected GitLab via OAuth first. The webhook delivers events, but Cova needs the OAuth token to read MR diffs and post comments.
1
Navigate to your GitLab project's webhook settings

In GitLab, go to your project → Settings (left sidebar) → Webhooks.

GitLab sidebar navigation
Repository
Issues
Merge requests
Settings
General
Integrations
Webhooks  
Access Tokens
2
Click "Add new webhook" and fill in the configuration

Enter the following settings:

Webhook configuration form
URL
https://getcova.ai/gitlab/webhook
Secret token
(leave blank)
Trigger
Merge request events ← Check this
Push events (optional)
SSL verification
Enable SSL verification
Add webhook
3
Test the webhook

After adding the webhook, GitLab shows it in the list. Click the Test dropdown and select Merge request events to send a test payload. Check PR Guard → Recent Activity in Cova to confirm it was received.

4
Repeat for each project

GitLab webhooks are per-project. Add the same webhook to every project you want PR Guard to scan. For organizations on GitLab Premium, you can add a group-level webhook instead.

5
Verify in Cova

Open the PR Guard page and switch to the GitLab tab. Once a webhook has been received, you'll see a green "Webhook active" indicator.

Webhook status (GitLab tab)
Webhook active
Last received: 2m ago (5 total)

Troubleshooting

Issue | Solution
Popup blocked by browser | Allow popups for the Cova domain, or click the blocked popup notification in your browser's address bar
"No GitHub installation connected" | This usually resolves itself - Cova auto-restores connections. If it persists, disconnect and reconnect from Repo Scan
Repos not showing after connect | For GitHub, check which repos you granted access to during installation. You can modify this in GitHub Settings → Applications → Cova Monitoring → Configure
PR Guard comments not posting (GitHub) | The app needs pull_requests: write permission. Uninstall and reinstall the GitHub App to update permissions
PR Guard comments not posting (GitLab) | Ensure the OAuth scope includes api. Disconnect and reconnect GitLab from Repo Scan
GitLab webhook not received | Verify the webhook URL is correct (https://getcova.ai/gitlab/webhook), SSL verification is enabled, and "Merge request events" is checked
GitLab token expired | Cova automatically refreshes expired GitLab tokens. If issues persist, disconnect and reconnect from Repo Scan

Setting Up Slack

Connect Slack to get scan digests, score drop alerts, and AI-generated fix approvals posted directly to a channel. You can also chat with Cova via DM or @mention in any channel.

What you'll get: After connecting, Cova posts scan results as rich digests with health scores, findings grouped by tool, and "Generate Fix" buttons. When a fix is generated, an approval message with Approve/Reject buttons lets you deploy monitors without leaving Slack.

Part 1: Create a Slack App

Before connecting from Cova, you need a Slack app in your workspace. If your team already has the Cova Slack app installed, skip to Part 2.

1
Go to the Slack API dashboard

Open api.slack.com/apps and click Create New App. Choose "From scratch" when prompted.

  • App Name: Cova (or whatever you prefer)
  • Workspace: Select the workspace you want to receive notifications in
2
Configure OAuth & Permissions

In the left sidebar, go to OAuth & Permissions. Scroll to Bot Token Scopes and add these scopes:

Scope | What it's for
chat:write | Post scan digests and approval messages
chat:write.public | Post to channels the bot hasn't been invited to
channels:join | Auto-join channels when posting
channels:history | Read channel messages for context
groups:history | Read private channel messages
im:history | Read DMs for Ask Cova chat
im:read | List DM conversations
commands | Slash commands (/cova scan, /cova help)
app_mentions:read | Respond when someone @mentions the bot
incoming-webhook | Webhook fallback for notifications
3
Enable Interactivity

In the left sidebar, go to Interactivity & Shortcuts. Turn it on and set the Request URL to:

https://nocta-backend.onrender.com/slack/interactions

This is required for buttons to work - "Generate Fix", "Approve & Deploy", "Reject", and "Run New Scan" all send interactions to this URL.

4
Enable Event Subscriptions

In the left sidebar, go to Event Subscriptions. Turn it on and set the Request URL to:

https://nocta-backend.onrender.com/slack/events

Slack will verify this URL automatically. Under Subscribe to bot events, add:

  • app_mention - triggers when someone @mentions Cova in a channel
  • message.im - triggers when someone DMs the bot

These power the Ask Cova chat - you can ask monitoring questions directly in Slack.

5
Add Slash Commands (optional)

In the left sidebar, go to Slash Commands and create these commands:

Command | Request URL | Description
/cova | https://nocta-backend.onrender.com/slack/commands | Run scans and interact with Cova

Usage: /cova scan triggers a monitor scan, /cova help shows available commands.

6
Install the app to your workspace

Go to Install App in the left sidebar and click Install to Workspace. Authorize the permissions when prompted. You'll see a Bot User OAuth Token starting with xoxb-. You don't need to copy it; Cova obtains it automatically via OAuth.

7
Copy your app credentials

Go to Basic Information in the left sidebar. Under App Credentials, note your:

  • Client ID
  • Client Secret
  • Signing Secret

These are set as environment variables on the Cova backend (SLACK_CLIENT_ID, SLACK_CLIENT_SECRET, SLACK_SIGNING_SECRET). If you're self-hosting, add them to your environment. For the hosted version, these are already configured.

Part 2: Connect Slack from Cova

Once the Slack app exists in your workspace, connect it from Cova's Settings page.

1
Open Settings

Click Settings in the sidebar. Scroll to the Slack Notifications card.

What you'll see
Slack Notifications

Get scan digests and alerts posted directly to a Slack channel.

Add to Slack  ← Click this
2
Authorize in the popup

A popup opens on slack.com asking you to authorize the Cova app. Select the channel where you want notifications posted (e.g. #monitoring, #ops, or #cova-alerts). Click Allow.

3
Verify the connection

Back in Cova, the Slack card updates to show your workspace name and channel. Click Test to send a test message to the channel and confirm everything works.

Connected state
Your Workspace • #monitoring
Connected
Test
Disconnect

Part 3: Configure Notifications

Once connected, toggle which notifications Cova sends to your Slack channel:

Setting | What it does | Default
Scheduled scan results | Posts a full scan digest after each scheduled scan completes | On
Manual scan results | Posts a digest when you manually run a scan from the dashboard | On
Score drop alerts | Alerts when your health score drops by more than the threshold | On
Score drop threshold | How many points the score must drop to trigger an alert (5, 10, 15, 20, or 25) | 10 points

What Slack Messages Look Like

Cova posts several types of messages to your channel:

Scan Digest

After each scan, a color-coded message shows your health score, findings grouped by tool (criticals first), and a "Run New Scan" button. Each tool section shows its icon and severity counts.

Repo Scan Digest

After a repo scan, shows the repo name, scan stats, and top recommendations. Each recommendation has a "Generate Fix" button that creates a deployable monitor config.

Approval Messages

When you click "Generate Fix", Cova generates a monitor config and posts an approval message with the tool logo, a config preview, and Approve & Deploy / Reject buttons. Clicking Approve deploys the monitor directly to your tool. Approvals expire after 24 hours if not acted on.

Ask Cova (DM or @mention)

DM the bot or @mention it in any channel to ask monitoring questions. Same AI chat as the dashboard, but in Slack.

Troubleshooting

Problem | Fix
"This app is not configured to handle interactive responses" | Go to your Slack app settings → Interactivity & Shortcuts → turn it on and set the Request URL to https://nocta-backend.onrender.com/slack/interactions
Buttons don't respond | Same fix - Interactivity must be enabled with the correct Request URL
Bot doesn't respond to DMs or @mentions | Check Event Subscriptions is on with the correct Request URL, and app_mention + message.im events are subscribed
Messages not posting to channel | Click Test in Settings. If it fails, try disconnecting and reconnecting Slack
"Generate Fix" shows wrong tool | Cova picks the tool based on your most recent scan results. Run a new scan to refresh which tools are detected as connected

Billing & Plans

Every new account starts with a 14-day Pro trial - full access, no credit card required. When the trial ends, you'll be moved to the Free plan. All your data (scans, analyses, connections, history) is preserved. Upgrade to Pro anytime to keep unlimited access.

Free vs Pro

Feature | Free | Pro ($49/mo)
Monitor scans | 5 / month | Unlimited
AI chat messages | 15 / month | Unlimited
Repo scans | 3 / month | Unlimited
Tool integrations | 2 | Unlimited
Generate Fix | 3 lifetime | Unlimited
Report export | 3 lifetime | Unlimited
Incident Autopilot | 2 lifetime | Unlimited
Deploy Monitor | - | Included (Datadog, Sentry, Grafana, New Relic)
PR Guard | - | Included
Scheduled Scans | - | Included
Alert Noise Analysis | Included | Included
CLI & API access | Included | Included

Understanding Usage Limits

Monthly limits (scans, chat, repo scans, integrations) reset on the 1st of each calendar month. Lifetime limits (Generate Fix, Report Export, Incident Autopilot) are a total allocation that does not reset.

When you're approaching a limit, you'll see an "X left" badge on the relevant button. When a limit is reached, the button shows a limit message and you'll be prompted to upgrade.

View your current usage at any time from the Billing page in the sidebar.

Upgrading to Pro

1
Click "Upgrade to Pro"

The upgrade button appears on the Billing page, in usage limit warnings, and on feature-locked screens. Clicking it takes you to a secure Stripe checkout page.

2
Complete payment

Enter your payment details on the Stripe checkout page. New subscribers get a 14-day free trial - you won't be charged until the trial ends.

3
Pro is active immediately

After checkout you're redirected back to Cova with all Pro features unlocked. Your Billing page now shows your plan status, next invoice date, and a link to manage your subscription.

Managing Your Subscription

Go to Billing in the sidebar and click "Manage Subscription". This opens the Stripe Customer Portal where you can:

  • Update your payment method
  • View and download past invoices
  • Cancel your subscription
Cancellation: If you cancel, your account reverts to Free plan limits at the end of the current billing period. All your existing data, connections, and analysis history are preserved - you just won't be able to exceed Free tier limits.

Team Management

Invite teammates to your Cova workspace so they can access shared analysis results and monitoring insights.

Sending Invites

1
Go to Settings

Click Settings in the left sidebar and find the Send an Invite section.

2
Enter their email and send

Type your teammate's email address and click Send Invite. They'll receive a branded email with a link to join.

3
Manage pending invites

Pending invites appear below the input field. You can Resend (if they missed the email) or Revoke (to cancel the invitation).

Accepting an Invite

1
Click the invite link

Open the email from Cova and click the invitation link. You'll be taken to the login page.

2
Sign in or create an account

Log in using any method (email, Google, or GitHub). The invite is automatically accepted when you sign in - it's matched by the invite token, not your email address.

Cross-email accept: If you receive an invite at work@company.com but sign in with your personal Google account (personal@gmail.com), the invite is still accepted. Cova matches invites by token, not email address. The inviter will see which email you actually signed up with.

Viewing Your Team

Accepted team members appear in the Settings page under the invite section. You can see who has joined and when they accepted.


Security & Audit Log

Cova logs all significant actions in your account so you can see exactly what happened and when. Audit logs are available on Pro and Enterprise plans.

What Gets Logged

Every write action is recorded with a timestamp, action type, and relevant details:

  • Tool connections and disconnections - which tool was connected or removed
  • Monitor deployments - monitor name, target tool, whether it was created or updated
  • Config generation - which finding triggered the generated fix
  • Admin actions - feature grants, revocations, account deletions, impersonation sessions

Viewing Your Audit Log

1
Go to Settings

Click Settings in the left sidebar.

2
Click the Audit Log tab

The Settings page has two tabs: General and Audit Log. Switch to the Audit Log tab to see your activity history.

3
Filter by action type

Use the dropdown filter to narrow results to specific actions like tool connections, monitor deployments, or config generation.

Retention

Plan | Audit Log Retention
Free | Not available
Pro | 30 days
Enterprise | 1 year
Free plan users: The Audit Log tab shows an upgrade prompt. All actions are still logged server-side - upgrading to Pro will surface your full 30-day history immediately.

Security Practices

For a full overview of how Cova handles encryption, authentication, API key storage, and data retention, see our Security page.

Datadog

How to Apply Generated Fix in Datadog

Cova generates Datadog monitor configs as JSON. If Datadog is connected, you can deploy directly from Cova. Otherwise, paste the config manually or use the API.

Option A: Deploy Monitor (one-click)

1
Connect Datadog

Make sure Datadog is connected in Integrations with an API key and Application key that has write permissions.

2
Click Generate Fix on a coverage gap

If the gap has multiple missing services (e.g. 4 uncovered services for Latency), Cova generates one config per service in a single click. If quality warnings exist (e.g. "Monitor X has no recovery threshold"), it generates improved configs for each warning.

3
Select which monitors to deploy

When multiple configs are generated, a checklist appears with each service name. Select all, deselect all, or pick specific ones. The Preview, JSON, and Terraform tabs show all selected configs combined.

4
Click Deploy Monitor or Deploy Selected

Each config displays a NEW badge (creating a monitor) or UPDATE badge (improving an existing one). Click Deploy Monitor, Deploy All, or Deploy Selected to proceed.

5
Confirm the deployment

A confirmation dialog appears showing exactly what will happen - for example, "This will create 2 new and update 1 existing monitor(s) in Datadog." Click Continue to proceed or Cancel to go back.

6
View in Datadog

On success, a green confirmation appears with a View in Datadog link that opens each monitor directly in your Datadog console. Deployed monitors also appear in the Deployed Monitors tab on the Monitor Scan page.

Smart updates: If the fix is for an existing monitor (e.g. "Monitor X has no recovery threshold"), Deploy Monitor updates the original monitor instead of creating a duplicate. For new coverage gaps, it creates a new monitor. Cova shows NEW or UPDATE badges before you deploy so you always know what will happen.

Option B: Paste in Datadog UI

1
Copy the config

Click Copy Config in the Cova modal to copy the JSON to your clipboard.

2
Open Datadog Monitors

Go to Monitors → New Monitor. Choose the monitor type that matches the config (usually Metric or APM).

3
Switch to JSON editor

Click Edit tab, then toggle to Edit as JSON. Paste the config and click Save.

Option C: Datadog API

1
POST to the Monitors API

Send the JSON config as the request body to POST https://api.datadoghq.com/api/v1/monitor with your API and Application keys as headers (DD-API-KEY and DD-APPLICATION-KEY).
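As a concrete sketch - the monitor name, query, and thresholds below are illustrative placeholders; your real JSON comes from Cova's Generate Fix modal, and the keys are read from hypothetical environment variables:

```shell
# Deploy a generated config via the Datadog Monitors API.
# DD_API_KEY / DD_APP_KEY are placeholder env vars; the monitor
# JSON below is a hypothetical example.
DD_API_KEY="${DD_API_KEY:-}"
DD_APP_KEY="${DD_APP_KEY:-}"

payload='{
  "name": "High error rate on checkout-api",
  "type": "metric alert",
  "query": "avg(last_5m):sum:trace.http.request.errors{service:checkout-api}.as_count() > 10",
  "message": "Error rate elevated on checkout-api.",
  "options": {"thresholds": {"critical": 10}}
}'

# Only sends when both keys are set.
if [ -n "$DD_API_KEY" ] && [ -n "$DD_APP_KEY" ]; then
  curl -s -X POST "https://api.datadoghq.com/api/v1/monitor" \
    -H "Content-Type: application/json" \
    -H "DD-API-KEY: $DD_API_KEY" \
    -H "DD-APPLICATION-KEY: $DD_APP_KEY" \
    -d "$payload"
fi
```

On success, Datadog returns the created monitor as JSON, including its id.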

If you don't want to manage write-scoped API keys, the JSON editor (Option B) is the easiest way to apply a generated config - your regular Datadog account access is enough.

Grafana

How to Apply Generated Fix in Grafana

Cova generates Grafana alert rule configs as JSON. If Grafana is connected with an Editor-role token, you can deploy directly from Cova. Otherwise, apply manually via the UI or API.

Option A: Deploy Monitor (one-click)

With Grafana connected using an Editor service account token, click Deploy in the Generate Fix modal. Cova creates a "Cova Generated Alerts" folder and provisions alert rules directly. If a rule with the same name already exists, it updates it instead of creating a duplicate. A direct link to the rule in Grafana appears on success.

Option B: Import via Grafana UI

1
Copy the config

Click Copy Config in the Cova modal.

2
Open Grafana Alerting

Go to Alerting → Alert Rules → New Alert Rule. For contact points, go to Alerting → Contact Points → New Contact Point.

3
Use the JSON editor

Most Grafana forms have a JSON/code view. Switch to it and paste the config, then save.

Option C: Grafana API

1
POST to the appropriate endpoint

For alert rules: POST /api/v1/provisioning/alert-rules. For contact points: POST /api/v1/provisioning/contact-points. Authenticate with a service account token that has Editor or Admin role.

If you use Grafana Cloud, the API base URL is https://your-instance.grafana.net. For self-hosted, use your instance URL.
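A minimal curl sketch under those assumptions - the instance URL and token are placeholder env vars, and alert-rule.json stands in for the config you copied from Cova:

```shell
# Provision an alert rule via the Grafana provisioning API.
# GRAFANA_URL / GRAFANA_TOKEN are placeholders; alert-rule.json is
# the config copied from Cova's Generate Fix modal.
GRAFANA_URL="${GRAFANA_URL:-https://your-instance.grafana.net}"
GRAFANA_TOKEN="${GRAFANA_TOKEN:-}"

# Only sends when a token is set. For contact points, POST the
# config to /api/v1/provisioning/contact-points instead.
if [ -n "$GRAFANA_TOKEN" ]; then
  curl -s -X POST "$GRAFANA_URL/api/v1/provisioning/alert-rules" \
    -H "Authorization: Bearer $GRAFANA_TOKEN" \
    -H "Content-Type: application/json" \
    -d @alert-rule.json
fi
```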

PagerDuty

How to Apply Generated Fix in PagerDuty

PagerDuty does not have a JSON import UI. Generated configs are applied via the PagerDuty REST API.

Creating a Service

1
Get a read/write API key

Go to Integrations → API Access Keys → Create New API Key. The key you use for Cova (read-only) won't work here - you need a key with write access.

2
POST to the Services API

Send the config to POST https://api.pagerduty.com/services with the header Authorization: Token token=YOUR_KEY.
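For example - PD_API_KEY is a placeholder for your read/write key, and the service JSON below is a hypothetical stand-in for Cova's generated config:

```shell
# Create a service via the PagerDuty REST API.
# PD_API_KEY is a placeholder read/write key; the payload is a
# hypothetical example (real configs come from Cova).
PD_API_KEY="${PD_API_KEY:-}"

payload='{
  "service": {
    "type": "service",
    "name": "checkout-api",
    "escalation_policy": {"id": "PABC123", "type": "escalation_policy_reference"}
  }
}'

# Only sends when a key is set.
if [ -n "$PD_API_KEY" ]; then
  curl -s -X POST "https://api.pagerduty.com/services" \
    -H "Authorization: Token token=$PD_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$payload"
fi
```

The same pattern applies to the escalation policies endpoint below - only the URL and payload change.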

Creating an Escalation Policy

1
POST to the Escalation Policies API

Send the config to POST https://api.pagerduty.com/escalation_policies with the same authorization header.

Creating Event Orchestration Rules

1
Find your Orchestration ID

List orchestrations with GET https://api.pagerduty.com/event_orchestrations and note the ID of the one you want to add rules to.

2
Update the router

Send the config to PUT https://api.pagerduty.com/event_orchestrations/{id}/router.
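A sketch of the router update - PD_API_KEY and ORCH_ID are placeholders, and router.json stands in for the config copied from Cova:

```shell
# Replace the router rules of an Event Orchestration.
# PD_API_KEY and ORCH_ID are placeholders; router.json is the
# config copied from Cova.
PD_API_KEY="${PD_API_KEY:-}"
ORCH_ID="${ORCH_ID:-YOUR_ORCH_ID}"
ROUTER_URL="https://api.pagerduty.com/event_orchestrations/$ORCH_ID/router"

# Only sends when a key is set. Because this is a PUT, the payload
# should include any existing rules you want to keep.
if [ -n "$PD_API_KEY" ]; then
  curl -s -X PUT "$ROUTER_URL" \
    -H "Authorization: Token token=$PD_API_KEY" \
    -H "Content-Type: application/json" \
    -d @router.json
fi
```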

PagerDuty requires a separate read/write API key to create resources. The read-only key you connected to Cova cannot be used to apply configs.

Sentry

How to Apply Generated Fix in Sentry

If Sentry is connected with an auth token that includes alerts:write, you can deploy directly from Cova. Otherwise, apply manually via the API.

Option A: Deploy Monitor (one-click)

With Sentry connected using a token that has alerts:write scope, click Deploy in the Generate Fix modal. Cova creates issue alerts or metric alerts in your Sentry project. If an alert with the same name already exists, it updates it. A direct link to the alert in Sentry appears on success.

Option B: Sentry REST API

1
Get an auth token with write access

Go to Settings → Auth Tokens → Create New Token with scopes project:write and alerts:write.

2
Identify your project slug

Find your organization slug and project slug in the Sentry URL: sentry.io/organizations/{org}/projects/{project}/.

3
POST to the Alert Rules API

Send the config to POST https://sentry.io/api/0/projects/{org}/{project}/rules/ with the header Authorization: Bearer YOUR_TOKEN.
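Putting the pieces together - the token and slugs are placeholder env vars, and rule.json stands in for the config copied from Cova:

```shell
# Create an issue alert rule via the Sentry API.
# SENTRY_TOKEN and the org/project slugs are placeholders;
# rule.json is the config copied from Cova.
SENTRY_TOKEN="${SENTRY_TOKEN:-}"
SENTRY_ORG="${SENTRY_ORG:-your-org}"
SENTRY_PROJECT="${SENTRY_PROJECT:-your-project}"
RULES_URL="https://sentry.io/api/0/projects/$SENTRY_ORG/$SENTRY_PROJECT/rules/"

# Only sends when a token is set.
if [ -n "$SENTRY_TOKEN" ]; then
  curl -s -X POST "$RULES_URL" \
    -H "Authorization: Bearer $SENTRY_TOKEN" \
    -H "Content-Type: application/json" \
    -d @rule.json
fi
```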

If your Cova token already includes alerts:write, you can use one-click Deploy instead. Otherwise, create a separate token with write access for manual API import.

New Relic

How to Apply Generated Fix in New Relic

If New Relic is connected with a User API key that has alerting write access, you can deploy directly from Cova. Otherwise, apply manually via NerdGraph.

Option A: Deploy Monitor (one-click)

With New Relic connected using a User API key with alerting write access, click Deploy in the Generate Fix modal. Cova creates a "Cova Generated Alerts" policy (or finds an existing one) and provisions NRQL alert conditions directly via the NerdGraph API. If a condition with the same name already exists, it updates it. A direct link to the condition in New Relic appears on success.

Option B: NerdGraph API Explorer

1
Open the NerdGraph API Explorer

Go to api.newrelic.com/graphiql (US) or api.eu.newrelic.com/graphiql (EU). Sign in with your account.

2
Use the alertsNrqlConditionStaticCreate mutation

Paste the generated config into a mutation like:

mutation {
  alertsNrqlConditionStaticCreate(
    accountId: YOUR_ACCOUNT_ID,
    policyId: YOUR_POLICY_ID,
    condition: { ...PASTE_CONFIG... }
  ) {
    id
    name
  }
}
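If you prefer the command line to the explorer, the same mutation can be sent to the NerdGraph endpoint with curl. NR_API_KEY is a placeholder User key, and the YOUR_ACCOUNT_ID / YOUR_POLICY_ID / PASTE_CONFIG placeholders must be filled in as above:

```shell
# Send the NerdGraph mutation via curl instead of the API Explorer.
# NR_API_KEY is a placeholder; use api.eu.newrelic.com for EU accounts.
NR_API_KEY="${NR_API_KEY:-}"

query='{"query": "mutation { alertsNrqlConditionStaticCreate(accountId: YOUR_ACCOUNT_ID, policyId: YOUR_POLICY_ID, condition: { ...PASTE_CONFIG... }) { id name } }"}'

# Only sends when a key is set.
if [ -n "$NR_API_KEY" ]; then
  curl -s "https://api.newrelic.com/graphql" \
    -H "Content-Type: application/json" \
    -H "API-Key: $NR_API_KEY" \
    -d "$query"
fi
```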

Option C: Creating a Notification Destination

1
Use the aiNotificationsDestinationCreate mutation

The generated config will include the destination type and properties. Paste into the NerdGraph explorer with your account ID.

The NerdGraph API Explorer lets you test mutations interactively before running them. Your existing User API Key (NRAK-xxx) works for both reads and writes.

Sumo Logic

How to Apply Generated Fix in Sumo Logic

Sumo Logic configs are applied via the Sumo Logic REST API.

Creating a Monitor

1
Use your existing credentials

You can use the same Access ID and Access Key you connected to Cova, provided your role has the manageMonitorsV2 capability.

2
POST to the Monitors API

Send the config to POST https://api.{region}.sumologic.com/api/v1/monitors using Basic Auth with your Access ID and Access Key. Replace {region} with your deployment (e.g. us1, eu, au).
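As a sketch - the region and credentials are placeholder env vars, and monitor.json stands in for the config copied from Cova:

```shell
# Create a monitor via the Sumo Logic Monitors API.
# Region and credentials are placeholders; monitor.json is the
# config copied from Cova.
SUMO_REGION="${SUMO_REGION:-us1}"
SUMO_ACCESS_ID="${SUMO_ACCESS_ID:-}"
SUMO_ACCESS_KEY="${SUMO_ACCESS_KEY:-}"
MONITORS_URL="https://api.$SUMO_REGION.sumologic.com/api/v1/monitors"

# Only sends when credentials are set (HTTP Basic Auth).
if [ -n "$SUMO_ACCESS_ID" ]; then
  curl -s -X POST "$MONITORS_URL" \
    -u "$SUMO_ACCESS_ID:$SUMO_ACCESS_KEY" \
    -H "Content-Type: application/json" \
    -d @monitor.json
fi
```

The connections endpoint below follows the same pattern - only the path changes to /api/v1/connections.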

Creating a Notification Connection

1
POST to the Connections API

Send the config to POST https://api.{region}.sumologic.com/api/v1/connections with the same authentication.

Sumo Logic uses different API base URLs per region. Make sure you use the same region you selected when connecting to Cova. The full list: us1, us2, eu, au, de, jp, ca, in, fed.

Splunk

How to Apply Generated Fix in Splunk

Cova generates Splunk saved search configurations as JSON. You can apply them through the Splunk Web UI or the REST API.

Option A: Splunk Web UI

1
Copy the generated config from Cova

In the Generate Fix modal, click Copy Config to copy the JSON to your clipboard. Note the key fields: search (the SPL query), alert_type, alert.comparator, alert.threshold, and actions.

2
Create a new alert in Splunk

In Splunk Web, go to Settings → Searches, Reports & Alerts → New Alert. Paste the SPL query from the search field, set the schedule, and configure the trigger conditions using the alert.comparator and alert.threshold values.

3
Configure notification actions

Under Trigger Actions, add the actions specified in the config (email, webhook, Slack, etc.). Save the alert.

Option B: Splunk REST API

Use the Splunk management API to create saved searches programmatically:

curl -k -u admin:password \
  https://your-splunk:8089/services/saved/searches \
  -d name="Your Alert Name" \
  -d search="index=main sourcetype=access_combined status>=500" \
  -d alert_type="number of events" \
  -d alert.comparator="greater than" \
  -d alert.threshold=10 \
  -d alert.severity=4 \
  -d is_scheduled=1 \
  -d cron_schedule="*/5 * * * *" \
  -d actions="email" \
  -d "action.email.to=ops@example.com"

The Splunk REST API uses port 8089 (management port) by default, not the web UI port (8000). If using Splunk Cloud, check your admin for the correct management endpoint. The -k flag skips SSL verification for self-signed certificates - remove it in production if you have a valid certificate.

Terraform

How to Apply Generated Fix with Terraform

Every generated fix includes a Terraform tab with ready-to-use HCL code. This is the recommended approach for teams that manage infrastructure as code.

Supported Providers

Tool | Terraform Provider | Resource Type
Datadog | datadog/datadog | datadog_monitor, datadog_synthetics_test
Grafana | grafana/grafana | grafana_rule_group, grafana_contact_point
PagerDuty | PagerDuty/pagerduty | pagerduty_service, pagerduty_escalation_policy
Sentry | jianyuan/sentry | sentry_issue_alert, sentry_metric_alert
New Relic | newrelic/newrelic | newrelic_nrql_alert_condition
Sumo Logic | SumoLogic/sumologic | sumologic_monitor, sumologic_connection
Splunk | splunk/splunk | splunk_saved_searches

Steps

1
Click the Terraform tab in the Generate Fix modal

After generating a fix, toggle from JSON to Terraform to see the HCL code. Click Copy Config to copy it.

2
Add the resource to your Terraform configuration

Paste the HCL block into your .tf file (e.g. monitoring.tf). Make sure the corresponding provider is already configured in your terraform { required_providers { } } block.

3
Review and adjust attribute names

The generated HCL maps JSON keys directly to Terraform attributes. Some providers use slightly different attribute names - check the Terraform Registry docs for your provider if you get validation errors.

4
Plan and apply

Run terraform plan to preview the changes, then terraform apply to create the resource. Terraform will show you exactly what will be created before you confirm.

The Terraform output is generated by converting the JSON config to HCL. It gives you a strong starting point, but you may need to adjust attribute names or add provider-specific fields. Always run terraform plan before applying.

Using Terraform means your monitors are version-controlled, reviewable in PRs, and reproducible across environments. This is the recommended approach for production infrastructure.