Application Monitoring with Sentry: From Bugs to Performance

Why Application Monitoring Matters

When HelloC++ first launched, debugging production issues felt like detective work in the dark. A user would report "the site is broken" and I'd scramble through logs, trying to piece together what happened. Was it a code error? A database timeout? A browser compatibility issue?

Without proper monitoring, every bug report became a time-consuming investigation. I'd add log statements, deploy, wait for the issue to occur again, check logs, repeat. This reactive approach was exhausting and slow.

Then I integrated Sentry, and everything changed. Suddenly, I had:

  • Real-time error notifications with full stack traces
  • Performance metrics showing which endpoints were slow
  • User context showing exactly what users were doing when errors occurred
  • Release tracking to identify which deployment introduced bugs

This article shares how we use Sentry at HelloC++ for comprehensive application monitoring, from bug tracking to performance optimization to understanding user behavior.

What is Sentry?

Sentry is an application monitoring platform that provides:

Error Tracking

  • Automatic error capture and reporting
  • Stack traces with source code context
  • Error grouping and deduplication
  • Release tracking and regression detection

Performance Monitoring

  • Transaction tracing across your application
  • Database query performance tracking
  • API endpoint latency measurement
  • Frontend performance metrics

User Context

  • User identification and tracking
  • Breadcrumbs showing user actions before errors
  • Session replay (optional)
  • Custom context and tags

Think of Sentry as your application's flight recorder - it captures everything happening in your app so when something goes wrong, you have the full story.

Setting Up Sentry

We use the official @sentry/node SDK, which integrates seamlessly with Node.js applications.

Installation

npm install @sentry/node @sentry/profiling-node

Configuration

First, configure your Sentry DSN in .env:

# Sentry Configuration
SENTRY_DSN=https://your-dsn@sentry.io/project-id
SENTRY_TRACES_SAMPLE_RATE=0.2
SENTRY_PROFILES_SAMPLE_RATE=0.2

Why 20% sample rate?

We sample 20% of transactions for performance monitoring. This gives us enough data to identify trends while keeping costs reasonable. For a production app with thousands of requests per hour, sampling is essential.
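
If a flat sample rate is too coarse, the SDK also accepts a tracesSampler callback that decides per transaction. A minimal sketch (the endpoint name here is just an example):

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampler: ({ name }) => {
    // Always trace the code execution endpoint, sample everything else at 20%
    if (name === "POST /api/code/execute") {
      return 1.0;
    }
    return 0.2;
  },
});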

Automatic Error Reporting

Initialize Sentry as early as possible, before requiring the rest of your application, so its integrations can hook into the modules you use:

const Sentry = require("@sentry/node");
const { nodeProfilingIntegration } = require("@sentry/profiling-node");

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  tracesSampleRate: parseFloat(process.env.SENTRY_TRACES_SAMPLE_RATE) || 0.2,
  profilesSampleRate: parseFloat(process.env.SENTRY_PROFILES_SAMPLE_RATE) || 0.2,
  integrations: [nodeProfilingIntegration()],
});

With this in place, unhandled exceptions are captured automatically and sent to Sentry, whether they originate in controllers, background jobs, or CLI commands.
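
One caveat: frameworks like Express catch errors thrown in route handlers themselves, so those errors also need Sentry's Express error handler to be reported. A minimal sketch, assuming an Express app:

// Sentry.init() from the previous snippet should already have run before this file loads
const Sentry = require("@sentry/node");
const express = require("express");

const app = express();

// ...routes are registered here...

// Register Sentry's error handler after all routes,
// but before any custom error-handling middleware
Sentry.setupExpressErrorHandler(app);

app.listen(3000);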

Error Tracking in Production

Real-Time Error Notifications

When an error occurs in production, Sentry immediately:

  1. Captures the exception with full stack trace
  2. Groups similar errors together (same root cause)
  3. Sends notifications via email, Slack, or other integrations
  4. Shows user context - who was affected, what they were doing

Example: Catching a Production Bug

Here's a real error we caught with Sentry:

RuntimeException: Temporary directory not found
/app/services/CodeExecutionService.js:32

The stack trace showed:

CodeExecutionService.ensureTempDirectory()
CodeExecutionService.executeCode()
CodeExecutionController.execute()

Sentry provided additional context:

{
  "user": {
    "id": 1234,
    "email": "student@example.com"
  },
  "request": {
    "url": "/api/code/execute",
    "method": "POST"
  },
  "environment": "production",
  "release": "v1.2.3"
}

Within minutes, I identified:

  • The issue: A permissions problem preventing directory creation
  • Who it affected: 3 users in the last hour
  • When it started: After deploying v1.2.3
  • How to reproduce: Run code execution on a fresh server

Without Sentry, this would have taken hours to debug. With Sentry, it took minutes.

Context-Rich Error Reporting

For critical operations like code execution, we add custom context using Sentry's withScope:

const Sentry = require("@sentry/node");
const logger = require("./logger");

try {
  this.validateCode(files);
  const compileResult = this.compileCode(executionId, executionDir);
  const executeResult = this.runExecutable(executionId, testInputs);

  return executeResult;
} catch (error) {
  logger.error("Code execution failed", {
    execution_id: executionId,
    error: error.message,
  });

  // Report to Sentry with additional context
  Sentry.withScope((scope) => {
    scope.setContext("code_execution", {
      execution_id: executionId,
      execution_dir: executionDir,
      files_count: Object.keys(files).length,
      file_names: Object.keys(files),
      file_sizes: Object.values(files).map((f) => f.length),
    });
    Sentry.captureException(error);
  });

  return {
    success: false,
    error: "validation_error",
    message: error.message,
  };
}

Why add context?

When debugging a code execution error, I need to know:

  • How many files were submitted
  • File names and sizes
  • The execution ID (to find temporary files)
  • Which directory was used

This context appears in Sentry's error details, making debugging trivial.

Error Grouping and Fingerprinting

Sentry automatically groups similar errors. For example, these three errors are grouped as one issue:

Error: Compilation timeout
  at CodeExecutionService.compileCode() line 476

Error: Compilation timeout
  at CodeExecutionService.compileCode() line 476

Error: Compilation timeout
  at CodeExecutionService.compileCode() line 476

This prevents notification spam and shows trends. If an issue affects 100 users, you see one grouped issue with a count of 100, not 100 separate alerts.
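
If the automatic grouping is ever too coarse or too fine, you can override it with a custom fingerprint. For example, a sketch that splits compilation timeouts into one issue per exercise (exerciseId is just an illustration):

Sentry.withScope((scope) => {
  // Group timeouts per exercise instead of one combined issue
  scope.setFingerprint(["compilation-timeout", String(exerciseId)]);
  Sentry.captureException(error);
});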

Performance Monitoring

Beyond error tracking, Sentry's performance monitoring reveals bottlenecks and slow operations.

Transaction Tracing

With the right integrations enabled, Sentry traces:

  • HTTP requests (incoming routes and outgoing calls)
  • Database queries (with query text)
  • Redis and cache operations
  • Queue jobs and other background work (with a little manual instrumentation)

Our Sentry configuration enables comprehensive tracing:

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  integrations: [
    // Trace database queries
    Sentry.prismaIntegration(),   // or postgresIntegration(), mysql2Integration()

    // Trace HTTP requests
    Sentry.httpIntegration(),

    // Trace Express/Fastify routes
    Sentry.expressIntegration(),

    // Trace Redis operations
    Sentry.redisIntegration(),
  ],
  tracesSampleRate: 0.2,
});

Example: Identifying a Slow Endpoint

Sentry's performance dashboard showed:

POST /api/code/execute
Average: 2.4s (95th percentile: 8.5s)
Throughput: 120 requests/min

Clicking into the transaction revealed the breakdown:

Total: 2.4s
├─ Database Queries: 0.8s (33%)
│  ├─ SELECT * FROM users WHERE id = ?          (120ms)
│  ├─ SELECT * FROM exercises WHERE id = ?      (95ms)
│  └─ INSERT INTO submissions (...)             (585ms)
├─ Docker Execution: 1.5s (62%)
└─ View Rendering: 0.1s (5%)

Insights:

  • The INSERT INTO submissions query was slow (585ms)
  • Docker execution took 1.5s (expected for code compilation)
  • Overall, database queries consumed 33% of request time

Action taken:

I optimized the submissions insert by:

  • Adding a database index on user_id and exercise_id
  • Batching multiple inserts when possible
  • Moving post-processing to a queue job

Result: Request time dropped from 2.4s to 1.7s (29% improvement).

SQL Query Performance

Sentry shows every SQL query executed during a request, along with:

  • Query text and parameters
  • Execution time
  • Where in the code it originated

This is invaluable for identifying N+1 queries:

Before optimization:

GET /courses/cpp-programming-fundamentals

Queries executed: 47
Total query time: 1.2s

SELECT * FROM courses WHERE slug = ?                   (15ms)
SELECT * FROM chapters WHERE course_id = ?             (25ms)
SELECT * FROM lessons WHERE chapter_id = 1             (18ms)  ← N+1 problem!
SELECT * FROM lessons WHERE chapter_id = 2             (16ms)
SELECT * FROM lessons WHERE chapter_id = 3             (22ms)
... (20 more similar queries)

After adding eager loading:

GET /courses/cpp-programming-fundamentals

Queries executed: 3
Total query time: 68ms

SELECT * FROM courses WHERE slug = ?                   (15ms)
SELECT * FROM chapters WHERE course_id = ?             (25ms)
SELECT * FROM lessons WHERE chapter_id IN (1,2,3...)   (28ms)  ← Single query!

Performance improvement: 94% reduction in query time.
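
The eager-loading fix itself is usually a one-line change in the ORM. A sketch of what it might look like with Prisma (the model and field names are hypothetical):

// Fetch the course together with its chapters and lessons,
// instead of querying lessons once per chapter
const course = await prisma.course.findUnique({
  where: { slug },
  include: {
    chapters: {
      include: { lessons: true },
    },
  },
});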

Tracking Custom Performance Metrics

For critical operations, we can create custom spans manually:

const compileResult = await Sentry.startSpan(
  { op: "code.compile", name: "Compile C++ Code" },
  () =>
    // Child span for the Docker compilation step
    Sentry.startSpan(
      { op: "docker.compile", name: "Docker compilation step" },
      () => this.compileCode(executionId, executionDir)
    )
);

This creates custom spans that appear in Sentry's performance view, perfect for tracking Docker operations, external API calls, or complex business logic.

Breadcrumbs: Understanding User Flows

Breadcrumbs are the trail of actions a user took before an error occurred. They're like a timeline showing exactly what happened.

Automatic Breadcrumbs

Sentry's Node.js SDK automatically captures:

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  integrations: [
    // Capture console logs as breadcrumbs
    Sentry.consoleIntegration(),

    // Capture HTTP requests
    Sentry.httpIntegration({ breadcrumbs: true }),

    // Capture database queries
    Sentry.prismaIntegration(),
  ],
  // Maximum breadcrumbs to capture
  maxBreadcrumbs: 100,
});

Example Breadcrumb Trail

When a user encounters an error, Sentry shows their journey:

Timeline (last 10 actions before error):

1. [12:30:15] Navigation → GET /courses/cpp-programming-fundamentals
2. [12:30:16] Database → SELECT * FROM courses WHERE slug = ?
3. [12:30:17] Cache Hit → course:cpp-programming-fundamentals
4. [12:30:18] Navigation → GET /lesson/variables-and-types
5. [12:30:19] Database → SELECT * FROM lessons WHERE slug = ?
6. [12:30:21] User Action → Clicked "Run Code" button
7. [12:30:22] HTTP Request → POST /api/code/execute
8. [12:30:23] Log → Code execution started (execution_id: abc123)
9. [12:30:24] Database → INSERT INTO code_execution_logs
10. [12:30:25] ERROR → RuntimeException: Temporary directory not found

This timeline reveals exactly what the user was doing when the error occurred. No more "I can't reproduce the bug" - you have the full context.

Custom Breadcrumbs

For application-specific events, add custom breadcrumbs:

Sentry.addBreadcrumb({
  category: "code_execution",
  message: "Compiling C++ code",
  level: "info",
  data: {
    execution_id: executionId,
    files_count: Object.keys(files).length,
  },
});

These appear in the breadcrumb trail, providing domain-specific context.

Ignoring Noise

Not every error needs attention. Sentry lets you filter out noise.

Ignoring Specific Transactions

We ignore high-traffic, low-value endpoints:

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 0.2,
  ignoreTransactions: [
    // Ignore health check URL
    "/up",
    "/api/health",

    // Ignore achievement notifications endpoint (high volume)
    "/api/achievements/notifications",
  ],
});

This prevents these endpoints from consuming our performance monitoring quota.

Ignoring Specific Exceptions

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  ignoreErrors: [
    // Ignore validation exceptions (expected errors)
    "ValidationError",

    // Ignore "Not Found" exceptions from routing
    /^NotFoundError/,
  ],
});

Validation errors aren't bugs - they're expected user input errors. No need to track them.
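
For anything the static lists can't express, a beforeSend hook can inspect and drop events programmatically. A small sketch that drops the same kind of expected error:

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  beforeSend(event, hint) {
    const error = hint.originalException;
    // Drop expected user-input errors; keep everything else
    if (error && error.name === "ValidationError") {
      return null;
    }
    return event;
  },
});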

Environment-Based Filtering

Never send errors from local development or testing:

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  // Only enable in production/staging
  enabled: ["production", "staging"].includes(process.env.NODE_ENV),
});

Release Tracking and Deployment Monitoring

Sentry's release tracking connects errors to specific deployments.

Configuring Releases

Set the release in your deployment script:

# Deploy script
RELEASE=$(git rev-parse --short HEAD)

# Export for application
export SENTRY_RELEASE="hellocpp@${RELEASE}"

# Deploy application
npm run build
npm run db:migrate

Reference in your Sentry initialization:

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  release: process.env.SENTRY_RELEASE,
});

Benefits of Release Tracking

1. Identify regression sources

When a new error appears, Sentry shows which release introduced it:

Error: Division by zero in StatisticsService
First seen: v1.3.5 (deployed 2 hours ago)
Affected users: 12

Immediately, you know the deployment that caused the issue.

2. Track error trends across releases

Compare error rates between releases:

v1.3.4: 5 errors/hour
v1.3.5: 47 errors/hour  ← Regression!
v1.3.6: 3 errors/hour   ← Fixed

3. Create release-specific alerts

Set up alerts for:

  • New errors introduced in a release
  • Error rate increases after deployment
  • Performance regressions

Alerts and Notifications

Sentry integrates with communication tools for real-time alerts.

Slack Integration

We connect Sentry to our engineering Slack channel:

#engineering channel

🚨 Sentry Alert: RuntimeException in CodeExecutionService
Environment: production
Users affected: 3
First seen: 2 minutes ago

View in Sentry →

Alert rules:

  • Critical errors: Immediate Slack notification
  • High-frequency errors: Alert after 10 occurrences in 5 minutes
  • New errors: Alert on first occurrence in new releases
  • Performance degradation: Alert when P95 latency exceeds 3s

Email Notifications

For lower-priority issues:

  • Daily digest: Summary of new errors
  • Weekly report: Trends and statistics
  • Release reports: Errors introduced in recent releases

Custom Alert Rules

Create fine-grained rules:

Rule: "Code Execution Failures"
Conditions:
  - Event type: Error
  - Environment: production
  - Tags: service = code_execution
  - Frequency: More than 5 times in 1 hour
Actions:
  - Send Slack notification to #engineering
  - Send email to on-call engineer

Best Practices We Follow

1. Use Environment Variables for Configuration

Never hardcode Sentry configuration. Use environment variables:

SENTRY_DSN=https://your-dsn@sentry.io/project-id
SENTRY_TRACES_SAMPLE_RATE=0.2
SENTRY_PROFILES_SAMPLE_RATE=0.2
SENTRY_ENVIRONMENT=production

This allows different configurations per environment (local, staging, production).

2. Add User Context to Errors

When a user is authenticated, attach their information:

Sentry.setUser({
  id: user.id,
  email: user.email,
  username: user.name,
});

This appears in every error report, showing which user was affected.
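
In an Express app, this typically lives in an authentication middleware so that each request's scope carries the user. A sketch, assuming a hypothetical req.user populated by your auth layer:

app.use((req, res, next) => {
  if (req.user) {
    // Attach the authenticated user to the current request's Sentry scope
    Sentry.setUser({
      id: req.user.id,
      email: req.user.email,
      username: req.user.name,
    });
  }
  next();
});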

3. Tag Errors for Easy Filtering

Use tags to categorize errors:

Sentry.setTag("service", "code_execution");
Sentry.setTag("feature", "docker_compilation");

Later, filter in Sentry by service:code_execution to see only code execution errors.

4. Set Appropriate Sample Rates

For high-traffic applications:

  • Error tracking: 100% (capture all errors)
  • Performance monitoring: 10-20% (sample for trends)
  • Profiling: 10-20% (expensive, sample sparingly)

This balances observability with cost and performance.

5. Use Breadcrumbs for Application Flow

Enable all relevant breadcrumb types:

  • SQL queries (understand database interactions)
  • Cache operations (identify cache misses)
  • Queue jobs (track background processing)
  • HTTP requests (external API calls)
  • Custom application events

Breadcrumbs transform debugging from guesswork to investigation.

6. Review Errors Weekly

Make error review a habit:

  • Monday morning: Review weekend errors
  • Post-deployment: Check for new errors in latest release
  • Weekly sprint review: Track error trends and resolve high-frequency issues

This proactive approach prevents small issues from becoming major problems.

Sentry's Impact on Our Development

Since adopting Sentry, we've seen:

Faster Bug Resolution

Before Sentry:

  • Average bug resolution: 2-3 days
  • Reproduction: Hours of trial and error
  • Context: Limited to user reports

After Sentry:

  • Average bug resolution: 4-6 hours
  • Reproduction: Immediate with stack trace and context
  • Context: Full user journey, environment, release

Proactive Error Detection

We now catch errors before users report them. Sentry alerts us within minutes of an issue appearing, often before a single user complains.

Performance Optimization

Performance monitoring revealed bottlenecks we didn't know existed:

  • N+1 database queries in course listings
  • Slow Docker compilation on specific code patterns
  • Unoptimized cache usage in achievement checks

Fixing these improved overall application performance by 35%.

Better Release Confidence

Every deployment includes:

  1. Deploy new release
  2. Monitor Sentry for 30 minutes
  3. Check for new errors or performance regressions
  4. Rollback if issues detected

This safety net makes deployments stress-free.

Cost-Effective Monitoring

Sentry offers generous free tiers and affordable paid plans:

Free tier:

  • 5,000 errors/month
  • 10,000 performance units/month
  • 30-day data retention

Paid tier (Team plan):

  • $26/month base + usage-based pricing
  • 50,000 errors/month
  • 100,000 performance units/month
  • 90-day data retention

For HelloC++, the Team plan costs roughly $40-60/month, well worth it for the value provided.

Optimizing Costs

To stay within budget:

  1. Sample performance monitoring (20% sample rate)
  2. Ignore noisy endpoints (health checks, metrics endpoints)
  3. Filter expected errors (validation errors, 404s)
  4. Use release tracking to focus on new issues

Alternatives to Sentry

While we love Sentry, other monitoring tools exist:

Application Performance Monitoring (APM):

  • New Relic - Comprehensive APM with distributed tracing
  • Datadog - Infrastructure and application monitoring
  • AppSignal - Developer-focused APM for Ruby, Elixir, and Node.js

Error Tracking:

  • Rollbar - Error tracking with real-time alerts
  • Bugsnag - Error monitoring for mobile and web
  • Airbrake - Error tracking and performance monitoring

Open Source:

  • GlitchTip - Open-source error tracking (Sentry-compatible)
  • Elastic APM - Part of the Elastic Stack

For Node.js applications, Sentry's official SDK and ease of use make it our top choice.

Conclusion

Application monitoring with Sentry transformed how we build and maintain HelloC++. We went from reactive debugging ("why did this break?") to proactive monitoring ("let's fix this before users notice").

Key benefits:

  • Real-time error tracking with full context
  • Performance monitoring revealing bottlenecks
  • Breadcrumbs showing exact user flows
  • Release tracking connecting errors to deployments
  • Alerts keeping us informed instantly

Investment required:

  • Setup time: 1-2 hours
  • Monthly cost: $40-60 (for our scale)
  • Maintenance: Minimal (automatic updates)

Return on investment:

  • 70% faster bug resolution
  • 35% performance improvement
  • Proactive issue detection
  • Confident deployments
  • Better user experience

If you're building a web application, comprehensive monitoring isn't optional - it's essential. Sentry makes it easy, affordable, and incredibly valuable.

Start monitoring today:

  1. Sign up at sentry.io
  2. Install @sentry/node
  3. Configure your DSN
  4. Enable performance monitoring
  5. Deploy and watch errors disappear

Your future self (and your users) will thank you.


Questions About Sentry?

Every application's monitoring needs are unique. If you have questions about implementing Sentry or want to share your experiences, reach out - I'd love to hear from you.

Part of the Building Software at Scale series.



About the Author

Imran Bajerai

Software engineer and C++ educator passionate about making programming accessible to beginners. With years of experience in software development and teaching, Imran creates practical, hands-on lessons that help students master C++ fundamentals.
