
Performance Testing and Optimization - Comprehensive Guide 2025

Load testing (k6, JMeter, Gatling), profiling, database optimization, caching strategies, CDN, Core Web Vitals. Practical performance guide based on real-world production experience and industry best practices.

Author: Michał Wojciechowski · 16 min read

Why does application performance decide business success?

Imagine losing one million dollars annually just because your application loads one second too slowly.

This is not science fiction. Google found that 53% of mobile users abandon a page that takes longer than 3 seconds to load. Amazon estimated that every 100ms of delay cost them about 1% of revenue. Walmart found that improving load time by 1 second increased conversions by up to 2%.

Performance testing is not optional - it's a necessity. It means systematically testing application behavior under load and identifying bottlenecks before your users ever see them.

In this guide, I'll show you proven tools (k6, JMeter, Gatling) and optimization techniques - from database optimization through caching strategies with Redis, to Core Web Vitals. Everything based on real projects serving millions of users. If you're looking for a comprehensive infrastructure approach, check out our article Azure AKS Production.

Key performance optimization areas:

  • Load testing tools – k6, JMeter, Gatling for different scenarios and team profiles
  • Database optimization – indexing, query optimization, connection pooling, partitioning
  • Caching strategies – Redis, cache-aside pattern, cache invalidation, CDN
  • Frontend performance – Core Web Vitals, image optimization, code splitting, lazy loading
  • Monitoring & APM – New Relic, Datadog, Application Insights, Prometheus

Load Testing Tools - k6, JMeter, Gatling

Imagine Black Friday. Thousands of users simultaneously hitting "Buy now". Your server crashes.

Load testing tools simulate exactly this scenario - thousands of concurrent users attacking your application. They identify bottlenecks, memory leaks, database connection pool exhaustion, before real users discover them. Tool choice depends on your tech stack, team skills, and testing requirements.

k6 - Modern Developer-Friendly Tool

k6 is an open-source load testing tool from Grafana Labs. JavaScript/TypeScript scripting, minimal footprint, great CI/CD integration. Best choice for modern DevOps teams.

// k6 load test example
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 100 },  // ramp up to 100 users
    { duration: '5m', target: 100 },  // stay at 100 users
    { duration: '2m', target: 200 },  // spike to 200 users
    { duration: '5m', target: 200 },  // stay at 200 users
    { duration: '2m', target: 0 },    // ramp down to 0
  ],
  thresholds: {
    http_req_duration: ['p(95)<500', 'p(99)<1000'], // 95% < 500ms, 99% < 1s
    http_req_failed: ['rate<0.01'],  // error rate < 1%
  },
};

export default function () {
  const res = http.get('https://api.example.com/products');

  check(res, {
    'status is 200': (r) => r.status === 200,
    'response time < 500ms': (r) => r.timings.duration < 500,
  });

  sleep(1);
}

Advantages: JavaScript scripting (easy for web devs), minimal resource usage, built-in metrics, Grafana integration. Disadvantages: No GUI (code only), younger community vs JMeter.

Apache JMeter - Industry Standard

JMeter is a mature open-source tool from the Apache Foundation. GUI-based test creation, huge community, support for legacy protocols (SOAP, FTP, LDAP). Ideal for enterprises with complex requirements.

Configuration: Thread Groups (users), Samplers (HTTP requests), Listeners (results visualization), Assertions (validation), Timers (think time). XML-based test plans versioned in Git.

Advantages: GUI for non-coders, plugins ecosystem, distributed testing, wide protocol support. Disadvantages: Heavy resource consumption, XML complexity, slow UI.

Gatling - High-Performance Scala Tool

Gatling is an open-source tool written in Scala. Asynchronous non-blocking architecture, DSL for test scenarios, excellent reporting. Best for high-load testing (10k+ concurrent users).

// Gatling Scala DSL example
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class ApiLoadTest extends Simulation {

  val httpProtocol = http
    .baseUrl("https://api.example.com")
    .acceptHeader("application/json")

  val scn = scenario("API Load Test")
    .exec(
      http("Get Products")
        .get("/products")
        .check(status.is(200))
        .check(responseTimeInMillis.lte(500))
    )
    .pause(1)

  setUp(
    scn.inject(
      rampUsers(100) during (2.minutes),
      constantUsersPerSec(50) during (5.minutes)
    )
  ).protocols(httpProtocol)
   .assertions(
     global.responseTime.percentile3.lt(1000),
     global.failedRequests.percent.lt(1)
   )
}

Advantages: Highest throughput, reactive architecture, beautiful HTML reports, efficient resource usage. Disadvantages: Scala learning curve, smaller community vs JMeter.

Which tool to choose?

k6 for DevOps teams with JavaScript skills, CI/CD integration priority, cloud-native apps. JMeter for enterprises with legacy systems, GUI requirement, QA teams without coding experience. Gatling for Scala/Java teams, extreme high load (10k+ users), performance-critical applications. In 90% of projects we recommend k6 - the best balance of functionality and ease of use.

Application profiling and performance metrics

Profiling - Find bottlenecks in your application

Your application is slow. But where exactly is the problem?

Profiling is the process of analyzing execution time of each function in your application. It's like an X-ray for your code - identifies where the application spends most of its time. Slow database queries? Inefficient algorithms? Memory leaks? Excessive object allocations? The profiler will show you all of this with millisecond precision.

.NET Performance Profiling

Tools for .NET ecosystem:

  • dotTrace (JetBrains) - visual profiler, timeline view, call tree analysis
  • dotMemory (JetBrains) - memory profiler, heap snapshots, memory leaks detection
  • Visual Studio Profiler - built-in, CPU/memory/database profiling
  • BenchmarkDotNet - micro-benchmarking library, accurate performance measurement

Node.js/JavaScript Profiling

Tools for Node.js/JavaScript:

  • Chrome DevTools - Performance tab, flame graphs, memory snapshots
  • Node.js --inspect - built-in profiler, Chrome DevTools integration
  • Clinic.js - Doctor, Bubbleprof, Flame - diagnose performance issues
  • 0x - flame graph profiler for Node.js, low overhead

Database Query Profiling

Profiling slow database queries:

  • EXPLAIN/EXPLAIN ANALYZE (PostgreSQL) - query execution plan
  • Query Store (SQL Server) - query performance tracking
  • Slow Query Log (MySQL) - log queries exceeding threshold
  • pg_stat_statements (PostgreSQL) - track execution statistics

Production APM Tools

Continuous profiling in production:

  • New Relic - distributed tracing, transaction profiling, code-level visibility
  • Datadog APM - flame graphs, continuous profiler, anomaly detection
  • Application Insights - Azure-native, dependency tracking, profiler
  • Pyroscope - open-source continuous profiling, minimal overhead

Profiling Best Practices

Profile the production environment, not just local - production has real data volumes, network latency, and concurrent load. Use sampling profilers in production (low overhead, 1-5%). Profile before and after optimization - measurements prove improvement. Focus on P95/P99 percentiles, not averages - tail latency matters for user experience. Profile database queries separately - they are often the biggest bottleneck, accounting for 60-80% of response time.

Database optimization - highest ROI in performance

Here's a striking pattern from the field: roughly 60-70% of application performance problems turn out to be database problems.

Slow queries. Missing indexes. Exhausted connection pool. These are the most common bottlenecks. Good news? Database optimization gives the highest return on investment in application performance improvement. One index change can speed up a query from 5000ms to 5ms - a 1000x improvement.

Database indexing strategy

Indexes are the most important database optimization. Missing index on WHERE/JOIN columns = full table scan instead of fast lookup. Difference? 5ms vs 5000ms for a million-row table. That's a thousand-fold difference in application performance.

-- PostgreSQL: Find missing indexes
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE customer_id = 123 AND status = 'active';
-- Output: Seq Scan on orders (cost=0.00..25000.00)  <- BAD!

-- Add composite index
CREATE INDEX idx_orders_customer_status
ON orders (customer_id, status);

-- After adding index
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE customer_id = 123 AND status = 'active';
-- Output: Index Scan using idx_orders_customer_status (cost=0.42..8.44)  <- GOOD!

-- Index column order matters!
-- Most selective column first (customer_id before status)
-- Index on (A, B) supports queries on A or (A, B), but not B alone

Eliminating N+1 Queries

N+1 problem is a classic ORM trap. Instead of 1 query you execute 101 - one for the list, then one for each item. Entity Framework, Hibernate, TypeORM without eager loading do this automatically. Result? Your application makes 100 times more queries than it should.

// ❌ BAD: N+1 queries (1 + 100 = 101 queries)
const orders = await db.orders.findMany(); // 1 query
for (const order of orders) {
  const customer = await db.customers.findUnique({
    where: { id: order.customerId }
  }); // N queries (100x!)
  console.log(customer.name);
}

// ✅ GOOD: Eager loading (2 queries or 1 JOIN query)
const orders = await db.orders.findMany({
  include: { customer: true } // Eager load relationship
});
orders.forEach(order => {
  console.log(order.customer.name);
});

// ✅ ALTERNATIVE: DataLoader pattern (batching + caching)
const customerLoader = new DataLoader(async (ids) => {
  const rows = await db.customers.findMany({ where: { id: { in: [...ids] } } });
  const byId = new Map(rows.map((r) => [r.id, r]));
  return ids.map((id) => byId.get(id)); // DataLoader requires results in key order
});
const orders = await db.orders.findMany();
const customers = await Promise.all(
  orders.map(o => customerLoader.load(o.customerId))
);

Connection Pooling

Creating a new database connection is expensive. TCP handshake, authentication, memory allocation - all this takes time. Connection pool reuses connections instead of creating and destroying them with each query. PgBouncer for PostgreSQL, HikariCP for Java - the standard in production environments.

// Node.js pg connection pool configuration
const { Pool } = require('pg');

const pool = new Pool({
  host: 'postgres.example.com',
  database: 'myapp',
  user: 'apiuser',
  password: process.env.DB_PASSWORD,
  max: 20,              // Max connections in pool
  min: 5,               // Min idle connections
  idleTimeoutMillis: 30000,    // Close idle connections after 30s
  connectionTimeoutMillis: 2000, // Timeout getting connection
  maxUses: 7500,        // Recycle connection after 7500 uses
});

// Use pool.query() instead of client.connect()
app.get('/api/orders', async (req, res) => {
  const result = await pool.query('SELECT * FROM orders LIMIT 100');
  res.json(result.rows);
});

// PgBouncer config (external pooler before PostgreSQL)
// pool_mode = transaction  (best for microservices)
// max_client_conn = 1000   (frontend connections)
// default_pool_size = 25   (backend connections per database)

Query Optimization & Denormalization

Sometimes a normalized database schema is not optimal for read-heavy applications. Denormalization, materialized views, read-only replicas - you trade consistency for application performance. This is a conscious trade-off that can speed up your application 10-100 times.

  • Materialized Views - precomputed query results refreshed periodically
  • Denormalization - duplicate data to avoid JOINs (product.category_name instead of JOIN categories)
  • Read Replicas - read queries on replicas, writes on primary (PostgreSQL streaming replication)
  • Partitioning - horizontal partition large tables by date/range (orders partitioned by month)
  • Computed Columns - store calculated values (total_price = quantity * unit_price)

Cache strategies - Redis and caching patterns

Want to speed up your application 50 times with one change? Implement caching.

Caching is the fastest way to improve application performance. Response time drops from 200-500ms to 5-10ms. Database load decreases by 70-90%. Redis is the industry standard - in-memory cache with sub-millisecond latency, data persistence, pub/sub, advanced data structures. Perfect fit for cloud solutions.

Cache-Aside Pattern

Most popular caching pattern: application checks cache before database query. Cache miss → query DB → save to cache with TTL.

// Cache-aside implementation with Redis
async function getProduct(productId) {
  // 1. Try cache first
  const cacheKey = `product:${productId}`;
  const cached = await redis.get(cacheKey);

  if (cached) {
    console.log('Cache HIT');
    return JSON.parse(cached);
  }

  // 2. Cache miss - query database
  console.log('Cache MISS - querying DB');
  const product = await db.products.findUnique({
    where: { id: productId }
  });

  // 3. Set cache with TTL (5 minutes)
  await redis.setex(cacheKey, 300, JSON.stringify(product));

  return product;
}

// Cache invalidation on update
async function updateProduct(productId, data) {
  const product = await db.products.update({
    where: { id: productId },
    data: data
  });

  // Invalidate cache
  await redis.del(`product:${productId}`);

  return product;
}

CDN Caching - performance at the network edge

CDN (Content Delivery Network) caches static assets (images, CSS, JS, fonts) in data centers close to users. Cloudflare, Azure CDN, AWS CloudFront reduce latency from 200ms to 20ms for users worldwide. It's like having a copy of your site in every city.

  • Static assets - long TTL (1 year), immutable files with hash in filename
  • API responses - short TTL (1-5 min), Cache-Control headers, purge on update
  • Image optimization - CDN automatic format conversion (WebP, AVIF), resizing, compression
  • Edge computing - Cloudflare Workers, AWS Lambda@Edge - compute at edge
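The split above between long-TTL hashed assets and short-TTL API responses can be expressed as a small header-selection helper. A sketch - the hash-in-filename convention is an assumption about your build output, and the exact TTLs are illustrative:

```javascript
// Choose a Cache-Control header based on what is being served.
// Assumes the bundler emits content-hashed filenames like app.3f9a2c.js.
const HASHED_ASSET = /\.[0-9a-f]{6,}\.(js|css|woff2|webp|avif)$/;

function cacheControlFor(path) {
  if (HASHED_ASSET.test(path)) {
    // Content hash changes when the file changes, so cache "forever"
    return 'public, max-age=31536000, immutable';
  }
  if (path.startsWith('/api/')) {
    // Short CDN TTL for API responses; purge explicitly on update
    return 'public, max-age=60, stale-while-revalidate=30';
  }
  // HTML documents: always revalidate so deploys show up immediately
  return 'no-cache';
}
```

Wire it into whatever serves responses, e.g. `res.setHeader('Cache-Control', cacheControlFor(req.url))` in an Express middleware.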

Cache Invalidation Strategies

"There are only two hard things in Computer Science: cache invalidation and naming things" - Phil Karlton. Stale cache = wrong data shown to users.

  • TTL-based - cache expires after fixed time (300s). Simple, eventual consistency
  • Event-based - invalidate on data change (update/delete triggers cache delete)
  • Tag-based - associate cache entries with tags, invalidate all by tag (product:123 tagged with category:electronics)
  • Write-through - update cache synchronously during write (consistency guarantee, slower writes)
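The tag-based strategy above can be sketched with Redis sets: each tag keeps a set of the cache keys it covers, and invalidating the tag deletes them all. The client is assumed to expose the standard `setex`/`sadd`/`smembers`/`del` commands (as ioredis does); the function names are illustrative:

```javascript
// Store a value under `key` and register it with each tag's key set
async function cacheWithTags(redis, key, value, tags, ttlSeconds) {
  await redis.setex(key, ttlSeconds, JSON.stringify(value));
  for (const tag of tags) {
    await redis.sadd(`tag:${tag}`, key);
  }
}

// Delete every cache entry registered under a tag, then the tag set itself
async function invalidateTag(redis, tag) {
  const keys = await redis.smembers(`tag:${tag}`);
  if (keys.length > 0) await redis.del(...keys);
  await redis.del(`tag:${tag}`);
}
```

Usage: `await cacheWithTags(redis, 'product:123', product, ['category:electronics'], 300)`, then a single `await invalidateTag(redis, 'category:electronics')` drops every cached product in that category.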

Redis Production Best Practices

Use Redis Cluster for high availability - automatic failover, sharding. Redis Sentinel for master-replica setup. Persistence: RDB snapshots + AOF log (trade durability for performance). Maxmemory policy: allkeys-lru for pure cache, noeviction for sessions. Monitor cache hit ratio (target 80%+) - low ratio = wrong TTL or wrong keys cached. Connection pooling (ioredis) - don't create new connection per request.

Web performance and Core Web Vitals optimization

Frontend Performance - Core Web Vitals impacts SEO

Since 2021, Google penalizes slow sites in search results.

Core Web Vitals are an official Google ranking factor. LCP (Largest Contentful Paint), INP (Interaction to Next Paint), CLS (Cumulative Layout Shift) - these metrics define user experience and affect search rankings. Lighthouse, WebPageTest, Chrome DevTools - measurement tools. The Next.js framework provides out-of-the-box optimization.

LCP - Largest Contentful Paint (target: <2.5s)

LCP measures loading performance - time until largest visible element renders. Usually hero image or main content block. Slow LCP = users bounce.

  • Image optimization - WebP/AVIF format, srcset responsive images, lazy loading (loading="lazy")
  • Preload critical resources - <link rel="preload" as="image" href="hero.webp">
  • Server-side rendering - Next.js SSR delivers HTML fast, no client-side fetch wait
  • CDN delivery - serve images/fonts from edge locations near users
  • Remove render-blocking resources - async/defer scripts, inline critical CSS

INP - Interaction to Next Paint (target: <200ms)

INP (replaced FID in 2024) measures responsiveness - time from user interaction to visual response. Long tasks (>50ms) blocking main thread = poor INP.

  • Code splitting - dynamic imports, load code on-demand (React.lazy, Next.js dynamic)
  • Minimize JavaScript - tree shaking, remove unused code, bundle size budget
  • Web Workers - offload heavy computation from main thread (data processing, image manipulation)
  • Debounce/throttle - limit high-frequency events (scroll, resize, input)
  • React optimization - useMemo, useCallback, React.memo, virtualization (react-window)
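A throttle from the list above fits in a few lines. A sketch - the injectable `now` parameter exists only to make the behavior easy to test and is not part of typical library APIs like lodash's:

```javascript
// Run `fn` at most once per `intervalMs`; extra calls in between are dropped.
// `now` is injectable for testing (defaults to Date.now).
function throttle(fn, intervalMs, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      fn(...args);
    }
  };
}
```

Usage: `window.addEventListener('scroll', throttle(updateHeader, 100))` keeps a scroll handler from firing more than ten times per second, leaving the main thread free to respond to input.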

CLS - Cumulative Layout Shift (target: <0.1)

CLS measures visual stability - how much content jumps during loading. Layout shifts frustrate users - clicking button → ad loads → button moves → wrong click.

  • Image dimensions - always width/height attributes (reserve space before load)
  • Font loading - font-display: swap, preload fonts, system fonts fallback
  • Ad/embed containers - fixed height for dynamic content (ads, embeds, iframes)
  • Avoid inserting content - no content injected above existing (banners, notifications)
  • CSS contain - contain: layout for isolated components (prevents reflow propagation)

Core Web Vitals Monitoring

Use Real User Monitoring (RUM) for production data - Google Analytics 4, New Relic Browser, SpeedCurve. Lab data (Lighthouse) ≠ field data (real users). Chrome User Experience Report (CrUX) - official Google field data. Monitor P75 percentiles (75% of users experience this or better). Set performance budgets - fail the build if LCP > 2.5s. Continuous monitoring - performance degrades over time without vigilance.
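The P75 figure can be computed from RUM samples with a simple percentile helper. A sketch - collecting the samples themselves would use the web-vitals library or your RUM vendor's SDK, and the sample values below are invented for illustration:

```javascript
// Nearest-rank percentile: value at or below which p% of samples fall
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Hypothetical LCP samples in milliseconds from real users
const lcpSamples = [1200, 1800, 2100, 2400, 3100, 1500, 2600, 1900];
const p75 = percentile(lcpSamples, 75);

// Compare against the 2.5s "good" threshold
const passes = p75 < 2500;
```

For this sample set P75 is 2400ms, which squeaks under the threshold even though the worst user saw 3100ms - exactly why averages hide problems that percentiles expose.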

Monitoring & APM - observe production performance

You can't improve what you don't measure.

APM (Application Performance Monitoring) is your eyes in production. You see response times, error rates, database queries, dependencies, user experience - all in real-time. New Relic, Datadog, Application Insights - industry leaders that show you exactly what's happening with your application under load.

New Relic - All-in-One APM

  • Distributed tracing - trace request across microservices
  • Transaction profiling - code-level visibility, slow methods
  • Real User Monitoring - actual user experience metrics (Core Web Vitals)
  • Infrastructure monitoring - servers, containers, Kubernetes
  • AI-powered insights - anomaly detection, alert correlation

Datadog APM - Cloud-Native

  • Kubernetes integration - pod metrics, service mesh visibility
  • Log aggregation - logs + metrics + traces in one place
  • Flame graphs - continuous profiler, CPU/memory profiles
  • Synthetic monitoring - scheduled health checks from global locations
  • Custom metrics - business metrics (orders/min, revenue)

Application Insights - Azure Native

  • Automatic instrumentation - .NET, Node.js, Java, Python auto-tracked
  • Dependency tracking - HTTP, database, external APIs
  • Live metrics stream - real-time telemetry (requests, failures)
  • Application Map - visualize dependencies between services
  • Cost-effective - included with Azure App Service, pay-per-GB data

Prometheus + Grafana - Open Source

  • Prometheus - time-series database, pull-based metrics, PromQL query language
  • Grafana - visualization, dashboards, alerting, multi-datasource
  • Exporters - pre-built for Node.js, PostgreSQL, Redis, Nginx
  • Self-hosted - full control, no vendor lock-in, infrastructure cost only
  • Community - thousands of ready-made dashboards (grafana.com/dashboards)

Monitoring Best Practices

Monitor Golden Signals: latency (response time percentiles P50/P95/P99), traffic (requests per second), errors (error rate %), saturation (resource utilization). Alerting: alert on symptoms not causes (users experiencing slow app, not high CPU). SLO-based alerts - Service Level Objectives (99.9% availability, P95 < 500ms). Dashboard hierarchy: executive dashboard (business metrics), team dashboard (service health), debugging dashboard (detailed metrics). Distributed tracing for microservices - understand request flow across services. See Azure AKS Production for production monitoring setup.
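The Golden Signals above can be captured with a tiny in-process recorder before a full APM is in place. A sketch under that assumption - in production you would use prom-client or your APM agent instead, and `createSignals` is an invented name:

```javascript
// Record request outcomes; report latency percentiles and error rate
function createSignals() {
  const latencies = [];
  let errors = 0;

  // Nearest-rank percentile over a pre-sorted array
  function pct(sorted, p) {
    return sorted[Math.max(0, Math.ceil((p / 100) * sorted.length) - 1)];
  }

  return {
    record(durationMs, ok) {
      latencies.push(durationMs);
      if (!ok) errors++;
    },
    report() {
      const sorted = [...latencies].sort((a, b) => a - b);
      return {
        traffic: latencies.length,                                // requests seen
        errorRate: latencies.length ? errors / latencies.length : 0,
        p50: pct(sorted, 50),
        p95: pct(sorted, 95),
        p99: pct(sorted, 99),
      };
    },
  };
}
```

Call `record` from request middleware and expose `report()` on a metrics endpoint; the P50/P95/P99 split maps directly onto the SLO-based alerts described above.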

Frequently Asked Questions

Which load testing tool to choose - k6, JMeter or Gatling?

k6 is the best choice for modern DevOps teams - JavaScript scripting, great CI/CD integration, minimal resource footprint. JMeter works well in enterprises with legacy systems - GUI, XML config, huge community. Gatling is ideal for Scala/Java teams - high performance, reactive architecture, Scala DSL. For most projects we recommend k6.

How to optimize database queries?

Key techniques: add indexes on columns used in WHERE/JOIN (EXPLAIN/EXPLAIN ANALYZE shows missing indexes), eliminate N+1 queries through eager loading, use connection pooling (PgBouncer for PostgreSQL), implement query caching for frequently repeated queries, denormalize data in read-heavy scenarios, partition large tables (partition by date/range).

How does Redis cache improve application performance?

Redis is an in-memory cache with sub-millisecond response time. Cache-aside pattern: application checks Redis before DB query, if miss then query DB and set to Redis with TTL. Typical use cases: session storage, API response caching, database query results, real-time analytics, rate limiting. Effect: 70-90% database load reduction, response time from 200-500ms to 5-10ms.

What are Core Web Vitals and how to improve them?

Core Web Vitals are Google metrics defining user experience: LCP (Largest Contentful Paint <2.5s) - main content loading speed, INP (Interaction to Next Paint <200ms) - interaction responsiveness (replaced FID in 2024), CLS (Cumulative Layout Shift <0.1) - visual stability. Optimization: image optimization (WebP, lazy loading), code splitting, preload critical resources, CDN for static assets, minimize JavaScript execution, font optimization.

Which APM tools for performance monitoring to choose?

New Relic is the best all-in-one APM - distributed tracing, real user monitoring, infrastructure monitoring, AI-powered insights. Datadog is great for cloud-native - Kubernetes integration, log aggregation, custom metrics. Application Insights is ideal for Azure/.NET - automatic instrumentation, Visual Studio integration, cost-effective. Prometheus+Grafana for open-source stack - flexible, community-driven, self-hosted.

Performance is a continuous process, not a one-time fix

Performance optimization requires a systematic approach.

Load testing for capacity planning. Profiling for bottleneck identification. Database optimization for fast data access. Caching strategies for reducing repeated queries. Frontend optimization for best user experience.

Measurement and monitoring are crucial - implement APM tools, set performance budgets, alert on regressions.

Highest ROI comes from database optimization (indexes, query tuning, connection pooling) and caching (Redis, CDN). Frontend optimization (Core Web Vitals) directly impacts conversion rates and SEO ranking. Load testing prevents outages during traffic spikes. Check out our articles Azure AKS Production and Cloud Solutions for infrastructure best practices.

Losing money due to a slow application?

Every second of delay means lost orders. Every performance error means disappointed users.

We specialize in optimizing production application performance. We conduct comprehensive load testing (k6, JMeter, Gatling), database tuning, cache architecture (Redis, CDN), APM implementation (New Relic, Datadog), Core Web Vitals optimization. We'll help you achieve sub-100ms response times and 99.9% availability.
