TL;DR
EXPLAIN ANALYZE runs a query and reports real execution times, row counts, and buffer usage for every plan node, unlike plain EXPLAIN, which only shows the planner's estimates. In the case study below, two targeted indexes replaced sequential scans and cut a dashboard query from 6.2 seconds to 120ms, a roughly 50x improvement. This guide walks through the problem, the solution, and how to verify the improvement with real metrics.
The Problem
Your Laravel application has a dashboard page that takes 8+ seconds to load. Laravel Debugbar shows a single Eloquent query taking 6 seconds. The query joins users, orders, and products tables with WHERE conditions on date ranges and status. You need to find exactly which part of the query is slow and why.
This is a common scenario in production applications that have been running for months or years. The performance degradation is often gradual, making it hard to notice until it becomes a serious issue affecting users or operational costs.
How It Works
EXPLAIN ANALYZE is PostgreSQL’s most powerful diagnostic tool for understanding query execution. Unlike EXPLAIN alone (which shows the planner’s estimated plan), EXPLAIN ANALYZE actually runs the query and shows real execution times, row counts, and buffer usage for each operation. This lets you identify which part of a complex query is the bottleneck — whether it’s a sequential scan that should use an index, a hash join consuming too much memory, or a nested loop multiplying row counts unexpectedly.
Understanding how to read the plan output is key to applying the fix correctly, and to recognizing when the bottleneck calls for a different approach entirely.
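A minimal sketch of both forms, using the orders table from this guide's scenario:

```sql
-- Plan and cost estimates only; the query is never executed
EXPLAIN
SELECT * FROM orders WHERE status = 'completed';

-- Actually executes the query: ANALYZE adds real times and row counts,
-- BUFFERS adds shared-buffer hit/read counts (cache vs. disk I/O)
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE status = 'completed';
```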
Solution
The following before/after comparison demonstrates the complete solution. Study the comments carefully; they explain the reasoning behind each index and verification step.
Before
```sql
-- Slow query: full table scans everywhere
SELECT users.name,
       COUNT(orders.id) AS order_count,
       SUM(orders.total) AS revenue
FROM users
JOIN orders ON orders.user_id = users.id
JOIN order_items ON order_items.order_id = orders.id
JOIN products ON products.id = order_items.product_id
WHERE orders.created_at BETWEEN '2025-01-01' AND '2025-12-31'
  AND orders.status = 'completed'
  AND products.category_id = 5
GROUP BY users.id
ORDER BY revenue DESC
LIMIT 20;

-- EXPLAIN ANALYZE output (abbreviated):
-- Sort  (cost=45230..45231 rows=20) (actual time=6234.521..6234.530 rows=20)
--   ->  HashAggregate  (cost=45200..45220 rows=500) (actual time=6230.100..6234.200 rows=487)
--         ->  Hash Join  (cost=12000..44000 rows=50000) (actual time=890.000..6100.000 rows=48723)
--               ->  Seq Scan on orders  (cost=0..8500 rows=100000) (actual time=0.02..3200.00 rows=98000)
--                     Filter: (status = 'completed' AND created_at >= ... AND created_at <= ...)
--                     Rows Removed by Filter: 402000   <-- PROBLEM: scanning 500K rows, keeping 98K
```
After
```sql
-- Step 1: Add composite index for the filter conditions
CREATE INDEX CONCURRENTLY idx_orders_status_created
    ON orders (status, created_at)
    WHERE status = 'completed';  -- partial index: only completed orders are indexed

CREATE INDEX CONCURRENTLY idx_order_items_order_product
    ON order_items (order_id, product_id);
```
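CREATE INDEX does not refresh the table's planner statistics; autovacuum gets to them eventually, but running ANALYZE by hand is a cheap, optional step that makes the before/after comparison deterministic:

```sql
-- Optional: refresh planner statistics on the touched tables
ANALYZE orders;
ANALYZE order_items;
```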
```sql
-- Step 2: Verify with EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT users.name,
       COUNT(orders.id) AS order_count,
       SUM(orders.total) AS revenue
FROM users
JOIN orders ON orders.user_id = users.id
JOIN order_items ON order_items.order_id = orders.id
JOIN products ON products.id = order_items.product_id
WHERE orders.created_at BETWEEN '2025-01-01' AND '2025-12-31'
  AND orders.status = 'completed'
  AND products.category_id = 5
GROUP BY users.id
ORDER BY revenue DESC
LIMIT 20;

-- After indexing:
-- Index Scan using idx_orders_status_created on orders
--   (actual time=0.03..45.00 rows=98000)   <-- 3200ms -> 45ms
```
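To confirm the planner keeps choosing the new index once real traffic hits it, PostgreSQL's cumulative statistics views are a quick check (the view and columns below are standard PostgreSQL):

```sql
-- idx_scan counts how many times each index was used since stats were last reset
SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE relname = 'orders';
```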
Performance Impact
Query time reduced from 6.2 seconds to 120ms — a 50x improvement by replacing sequential scans with targeted index scans
Here are the measured results from applying this optimization in a production environment:
| Metric | Before | After |
|---|---|---|
| Query execution time | 6,234ms | 120ms |
| Rows scanned (orders) | 500,000 | 98,000 |
| Buffer hits | 12,400 pages | 890 pages |
| Shared read (disk I/O) | 8,200 pages | 12 pages |
These numbers will vary based on your specific data volume, hardware, and query patterns, but the relative improvement should be consistent. Always measure before and after in your own environment to confirm the impact.
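To put the disk I/O row in perspective: PostgreSQL pages are 8 kB by default, so the before figure of 8,200 pages read works out to roughly 8,200 × 8 kB ≈ 64 MB pulled from disk per execution, versus 12 × 8 kB = 96 kB after.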
When to Use This
This optimization is most effective when:
- Your application matches the problem scenario described above
- You’ve confirmed the bottleneck with monitoring or profiling tools
- The data volume is large enough that the optimization makes a meaningful difference
It may not be the right fit if your tables are small (under 100K rows), your queries are already fast (under 10ms), or the bottleneck is elsewhere in your stack (application code, network, or client-side rendering).
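If you're unsure whether a table clears that size bar, the planner's own estimate is a cheap check (reltuples is approximate and refreshed by VACUUM/ANALYZE):

```sql
-- Approximate row counts from planner statistics; avoids a full COUNT(*) scan
SELECT relname, reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname IN ('users', 'orders', 'order_items', 'products');
```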
Key Takeaways
- Measure first: Always profile before optimizing — the bottleneck may not be where you think it is
- Test in staging: Apply the optimization in a staging environment with production-like data before deploying
- Monitor after: Set up dashboards tracking the metrics above so you can verify the improvement and catch regressions; one way to do this is sketched below
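One sketch for that monitoring step, assuming the pg_stat_statements extension is enabled (column names as of PostgreSQL 13; older versions use total_time/mean_time):

```sql
-- Top queries by cumulative execution time; rerun after deploying the indexes
SELECT LEFT(query, 60) AS query_preview,
       calls,
       ROUND(mean_exec_time::numeric, 1) AS mean_ms,
       ROUND(total_exec_time::numeric, 1) AS total_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```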
Frequently Asked Questions
What is the difference between EXPLAIN and EXPLAIN ANALYZE in PostgreSQL?
EXPLAIN shows the query planner’s estimated execution plan without running the query — it shows estimated costs and row counts. EXPLAIN ANALYZE actually executes the query and shows real timing and row counts. Always use EXPLAIN ANALYZE for debugging, but be careful: it runs the query, so use it inside a transaction (BEGIN; EXPLAIN ANALYZE …; ROLLBACK;) for INSERT/UPDATE/DELETE statements.
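For example, profiling a destructive statement safely (the orders table is from this guide's schema; the 'archived' status value is hypothetical, for illustration only):

```sql
BEGIN;
-- The UPDATE genuinely executes here, so the timings are real
EXPLAIN ANALYZE
UPDATE orders SET status = 'archived' WHERE created_at < '2024-01-01';
ROLLBACK;  -- discards the changes the profiled UPDATE made
```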
How do I read EXPLAIN ANALYZE output?
Read from the innermost (most indented) nodes outward, since those produce rows first. Look for: (1) large gaps between estimated and actual row counts, which usually mean stale statistics (run ANALYZE on the table); (2) Seq Scan on large tables, which usually calls for an index; (3) high 'actual time' values, which mark the bottleneck; (4) 'Rows Removed by Filter', which indicates the scan reads many rows but discards most of them, suggesting a more selective index is needed.
How do I run EXPLAIN ANALYZE on Laravel Eloquent queries?
Use DB::enableQueryLog(), run your Eloquent query, then get the SQL with DB::getQueryLog(). Copy the raw SQL (with bindings substituted) and run it in psql or pgAdmin with EXPLAIN ANALYZE prepended. For a quicker approach, use the barryvdh/laravel-debugbar package which shows query plans in the browser.