{"id":148096,"date":"2026-05-14T11:13:46","date_gmt":"2026-05-14T11:13:46","guid":{"rendered":"\/tutorials\/?p=148096"},"modified":"2026-05-14T11:13:49","modified_gmt":"2026-05-14T11:13:49","slug":"node-js-performance-optimization","status":"publish","type":"post","link":"\/tutorials\/node-js-performance-optimization","title":{"rendered":"Node.js performance optimization: 12 ways to speed up apps"},"content":{"rendered":"<p>Node.js performance optimization means finding and fixing the parts of your app that slow it down, so it can respond faster, handle more traffic, use resources more efficiently, and stay stable under load.<\/p><p>The best approach is to measure first, then fix what the data shows you. Without measurement, you won&rsquo;t know whether the problem is a blocked event loop, heavy middleware, large API responses, repeated database queries, missing indexes, memory leaks, or CPU-heavy work on the main thread.<\/p><p>Tools like profilers, load testers, and APM software show you what your app is actually struggling with.<\/p><p>Once you have a baseline, focus on the fixes that match your bottlenecks:<\/p><ol class=\"wp-block-list\">\n<li>Use async code to keep the event loop free<\/li>\n\n\n\n<li>Clean up Express middleware and shrink API payloads<\/li>\n\n\n\n<li>Cache repeated queries and expensive calculations<\/li>\n\n\n\n<li>Speed up database calls with indexing and connection pooling<\/li>\n\n\n\n<li>Use streams instead of loading large files into memory<\/li>\n\n\n\n<li>Move CPU-heavy tasks off the main thread<\/li>\n\n\n\n<li>Scale across CPU cores with clustering<\/li>\n\n\n\n<li>Tune memory settings and find leaks<\/li>\n\n\n\n<li>Remove unnecessary dependencies<\/li>\n\n\n\n<li>Serve static files through a CDN with compression<\/li>\n\n\n\n<li>Choose the right hosting environment<\/li>\n\n\n\n<li>Monitor production performance continuously<\/li>\n<\/ol><p>Most apps don&rsquo;t need every optimization at once. 
Fix what profiling points to first, measure again, and move to the next bottleneck.<\/p><h2 class=\"wp-block-heading\" id=\"h-how-to-measure-node-js-performance-before-optimizing\">How to measure Node.js performance before optimizing<\/h2><p>Measuring Node.js performance means profiling your app under realistic load, tracking key metrics, and comparing results before and after each change.<\/p><p>Start with the Node.js built-in profiler and Chrome DevTools. They can show whether your app is slowing down due to CPU-intensive code, memory pressure, or event loop blocking.<\/p><p>Run the built-in profiler with:<\/p><p><code data-enlighter-language=\"shell\" class=\"EnlighterJSRAW\">node --prof app.js<\/code><\/p><p><code data-enlighter-language=\"shell\" class=\"EnlighterJSRAW\">node --prof-process isolate-*.log &gt; processed.txt<\/code><\/p><p>For a visual approach, start your app with:<\/p><p><code data-enlighter-language=\"shell\" class=\"EnlighterJSRAW\">node --inspect app.js<\/code><\/p><p>Then open <code>chrome:\/\/inspect<\/code> in Chrome.<\/p><p>If you need a clearer diagnosis, use Clinic.js. 
It&rsquo;s a free set of tools for finding Node.js performance issues:<\/p><ul class=\"wp-block-list\">\n<li><strong>Clinic Doctor<\/strong> shows whether the problem is CPU, I\/O, memory, or event loop related<\/li>\n\n\n\n<li><strong>Clinic Flame<\/strong> creates flame graphs so you can see which functions take the most time<\/li>\n\n\n\n<li><strong>Clinic Bubbleprof<\/strong> helps you find slow async chains<\/li>\n<\/ul><div class=\"wp-block-image wp-block-image aligncenter size-large\">\n<figure data-wp-context='{\"imageId\":\"6a05e1dd7f843\"}' data-wp-interactive=\"core\/image\" class=\"wp-lightbox-container\"><img decoding=\"async\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/www.hostinger.com\/tutorials\/wp-content\/uploads\/sites\/2\/2026\/05\/1778753978115-0.png\" alt=\"Clinic.js landing page\"><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p>After profiling, you&rsquo;ll know which routes, functions, or operations are slow. To see how they hold up under heavy load, run a load test with a tool like Autocannon. 
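<\/p><p>For example, a basic Autocannon run against a local server looks like this (the URL, connection count, and duration are placeholders to adjust for your setup):<\/p><p><code data-enlighter-language=\"shell\" class=\"EnlighterJSRAW\">npx autocannon -c 100 -d 30 http:\/\/localhost:3000<\/code><\/p><p>Here, <code>-c 100<\/code> keeps 100 connections open and <code>-d 30<\/code> runs the test for 30 seconds.<\/p><p>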
It sends a high volume of requests to your app and reports response times, request counts, and errors.<\/p><p>To get useful results from load testing:<\/p><ul class=\"wp-block-list\">\n<li>Run tests on hardware close to production<\/li>\n\n\n\n<li>Warm up the app before measuring<\/li>\n\n\n\n<li>Change one thing at a time<\/li>\n\n\n\n<li>Watch database and external API limits so they don&rsquo;t distort results<\/li>\n<\/ul><p>Once your app is live, keep measuring. APM tools like Datadog, New Relic, or AppSignal track route latency, database timing, errors, and resource usage under real traffic. They help you spot production issues, but they don&rsquo;t replace deeper profiling when you need to inspect code-level bottlenecks.<\/p><p>For memory issues, use heap snapshots through Chrome DevTools or <code>v8.writeHeapSnapshot()<\/code>. Full snapshots can be expensive on live traffic, so take them in staging or during low-traffic periods.<\/p><p>Track these metrics before and after each optimization:<\/p><figure tabindex=\"0\" class=\"wp-block-table\"><table><tbody><tr><td colspan=\"1\" rowspan=\"1\"><p><strong>Metric<\/strong><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><strong>What it measures<\/strong><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><strong>What it helps you find<\/strong><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>Response time<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Request duration from start to finish<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Slow routes, heavy middleware<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>Requests per second<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Requests handled per second<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Throughput limits<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>CPU usage<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Processor time used by the 
app<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>CPU-heavy code, blocking loops<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>Memory usage (RSS)<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Total memory the process holds, including native buffers<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Memory leaks, large caches, native allocations<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>Event loop delay<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Delay between scheduled and actual callback runs<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Blocking synchronous code<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>Database query time<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Query response time<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Missing indexes, slow queries<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>Error rate<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Percentage of failed requests<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Timeouts, unstable code paths<\/span><\/p><\/td><\/tr><\/tbody><\/table><\/figure><h3 class=\"wp-block-heading\">What are the most important Node.js performance metrics?<\/h3><p>The most important metrics are p95 and p99 latency, memory growth, CPU usage, event loop delay, database query duration, cache hit rate, and error rate.<\/p><p>Average response time is useful, but it can mask problems. A route might average 200ms while p99 requests take three seconds. 
p95 and p99 show the delays your slowest users actually experience during traffic spikes, which averages can miss.<\/p><p>If p95 or p99 latency is high, the other metrics help you trace slow requests back to likely causes:<\/p><ul class=\"wp-block-list\">\n<li>Memory that keeps rising points to a leak<\/li>\n\n\n\n<li>CPU spikes point to heavy computation on the main thread<\/li>\n\n\n\n<li>Event loop delay shows blocking code<\/li>\n\n\n\n<li>Database query duration and cache hit rate show repeated expensive work<\/li>\n\n\n\n<li>Error rate rising alongside traffic means your app is likely running out of resources<\/li>\n<\/ul><p>Check these regularly. When one changes, use it to decide which optimization to apply next. Start with code-level fixes, then move on to infrastructure and monitoring as needed.<\/p><h2 class=\"wp-block-heading\" id=\"h-1-use-asynchronous-code-to-avoid-blocking-the-event-loop\">1. Use asynchronous code to avoid blocking the event loop<\/h2><p>Async code keeps the event loop available while your app waits for file operations, database queries, network requests, or external APIs.<\/p><p><a href=\"\/tutorials\/what-is-node-js\" data-wpel-link=\"internal\" rel=\"follow\">Node.js<\/a> runs JavaScript on a single main thread. When that thread is busy with synchronous work, other requests wait. Blocking code inside request handlers slows down the entire app.<\/p><p>Blocking typically comes from synchronous calls or heavy computation inside request handlers. 
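<\/p><p>You can confirm blocking by sampling event loop delay with Node&rsquo;s built-in <code>perf_hooks<\/code> module. A minimal sketch that logs the p99 delay every ten seconds:<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const { monitorEventLoopDelay } = require('node:perf_hooks');<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const histogram = monitorEventLoopDelay({ resolution: 20 });<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">histogram.enable();<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">\/\/ Values are in nanoseconds &ndash; convert to milliseconds for logging<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">setInterval(() =&gt; {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">console.log('p99 loop delay (ms):', histogram.percentile(99) \/ 1e6);<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">histogram.reset();<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">}, 10000);<\/code><\/p><p>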
If your event loop delay is high, check for these first:<\/p><ul class=\"wp-block-list\">\n<li>Synchronous file reads<\/li>\n\n\n\n<li>Expensive loops<\/li>\n\n\n\n<li>Large JSON parsing<\/li>\n\n\n\n<li>Synchronous encryption<\/li>\n\n\n\n<li>CPU-heavy processing inside request handlers<\/li>\n<\/ul><p>Compare a blocking file read with an async version:<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">\/\/ Blocking &ndash; other requests wait<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const data = fs.readFileSync('\/path\/to\/large-file.txt', 'utf8');<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">res.send(data);<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">\/\/ Non-blocking &ndash; the event loop stays available<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const data = await fs.promises.readFile('\/path\/to\/large-file.txt', 'utf8');<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">res.send(data);<\/code><\/p><p>The async version lets Node handle the file read outside the main JavaScript thread. While the file is being read, the event loop keeps handling other requests.<\/p><p>Use <code>async\/await<\/code> and promise-based Node.js APIs (like <code>fs.promises<\/code>) as your default.<\/p><p>Modern database drivers like <code>pg<\/code>, <code>mysql2<\/code>, and Mongoose work the same way, handling I\/O asynchronously, so waiting for the database usually won&rsquo;t block the event loop. Large result sets and heavy response processing can still cause blocking, though, so keep queries and payloads small.<\/p><p>Async code solves I\/O waiting, but not CPU-bound work. If a function spends 500ms crunching numbers, it still blocks the main thread even with <code>await<\/code> in front of it. 
Worker threads handle that by running the computation on a separate thread.<\/p><p>If your app uses synchronous calls inside request handlers, switching them to async versions can be one of the fastest wins.<\/p><h2 class=\"wp-block-heading\" id=\"h-2-optimize-express-middleware-and-api-responses\">2. Optimize Express middleware and API responses<\/h2><p>Trimming your Express setup and API payloads cuts response time by removing work the server shouldn&rsquo;t do on every request.<\/p><p>Middleware applied with <code>app.use()<\/code> runs on every matching request. If authentication, logging, body parsing, CORS, rate limiting, and compression all run globally, some routes may do work they don&rsquo;t need. Move middleware to specific routes where possible:<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">\/\/ Instead of applying auth to every route<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">app.use(authMiddleware);<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">\/\/ Apply it only where needed<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">app.get('\/dashboard', authMiddleware, dashboardHandler);<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">app.get('\/public-page', publicHandler);<\/code><\/p><p>Now public routes skip authentication work entirely.<\/p><p>Next, look at what your API accepts and returns. Set a limit on incoming JSON payload size:<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">app.use(express.json({ limit: '100kb' }));<\/code><\/p><p>Adjust the limit based on what your API actually accepts. On the response side, return only the fields the client needs. 
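<\/p><p>For instance, a handler can map query rows down to an explicit shape before responding (<code>rows<\/code> and the field names here are illustrative):<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">\/\/ Keep the response to the fields the client actually renders<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const users = rows.map(({ id, name, avatarUrl }) =&gt; ({ id, name, avatarUrl }));<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">res.json(users);<\/code><\/p><p>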
Sending full database records when the frontend uses three fields wastes processing time and bandwidth.<\/p><p>Paginate large responses so endpoints can&rsquo;t return thousands of records without a limit.<\/p><p>If you serve static files through Express, move them to a CDN or reverse proxy like Nginx. If a CDN or reverse proxy already handles compression, skip <code>compression()<\/code> in Express. Compress once, at one layer.<\/p><p>Even with all of this trimmed, response time depends on the full request chain: middleware, validation, database queries, external APIs, data formatting, and network transfer. Profile slow routes to find which part adds the most delay.<\/p><h3 class=\"wp-block-heading\">How can you reduce Node.js API response time?<\/h3><p>You reduce API response time by fixing the slowest part of each route. The highest-impact fixes are:<\/p><ul class=\"wp-block-list\">\n<li>Remove unnecessary middleware from the route<\/li>\n\n\n\n<li>Return only required fields<\/li>\n\n\n\n<li>Paginate large results<\/li>\n\n\n\n<li>Cache repeated responses<\/li>\n\n\n\n<li>Optimize database queries<\/li>\n\n\n\n<li>Set timeouts for slow external requests<\/li>\n<\/ul><p>If a route waits five seconds for an external API, set a timeout and return a fallback when possible. Users get a controlled response instead of waiting until the request fails.<\/p><h2 class=\"wp-block-heading\" id=\"h-3-cache-frequently-requested-data\">3. Cache frequently requested data<\/h2><p>Caching reduces repeated database queries, external API calls, and expensive calculations. If a query takes 50ms and returns the same result many times, caching lets you run it once and serve later requests from cache.<\/p><p>For small, short-lived values on a single server, a simple in-memory cache is often enough. 
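<\/p><p>A minimal sketch of that idea, using a plain <code>Map<\/code> with per-entry expiry (the helper names are illustrative):<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const cache = new Map();<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">function setCached(key, value, ttlMs) {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">cache.set(key, { value, expires: Date.now() + ttlMs });<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">}<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">function getCached(key) {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const entry = cache.get(key);<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">if (!entry) return undefined;<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">if (Date.now() &gt; entry.expires) {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">cache.delete(key); \/\/ evict expired entries lazily<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">return undefined;<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">}<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">return entry.value;<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">}<\/code><\/p><p>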
Keep it limited with a TTL or maximum size so it doesn&rsquo;t grow into a memory problem.<\/p><p>Use Redis or Memcached when you run multiple processes, need shared cache across servers, or store session data outside the app process.<\/p><div class=\"wp-block-image wp-block-image aligncenter size-large\">\n<figure data-wp-context='{\"imageId\":\"6a05e1dd80ae7\"}' data-wp-interactive=\"core\/image\" class=\"wp-lightbox-container\"><img decoding=\"async\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/www.hostinger.com\/tutorials\/wp-content\/uploads\/sites\/2\/2026\/05\/1778753985684-0.png\" alt=\"Redis landing page\"><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><figure tabindex=\"0\" class=\"wp-block-table\"><table><tbody><tr><td colspan=\"1\" rowspan=\"1\"><p><strong>Cache type<\/strong><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><strong>Best for<\/strong><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><strong>Main benefit<\/strong><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><strong>Watch out for<\/strong><\/p><\/td><\/tr><tr><td colspan=\"1\" 
rowspan=\"1\"><p><span>In-memory<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Small, temporary values<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Fast reads, no network call<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Not shared across processes<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>Redis\/Memcached<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Shared data, sessions<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Works across instances<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Adds a network hop<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>CDN<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Static files, public assets<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Lower origin load and latency<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Wrong headers serve stale files<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>HTTP headers<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Repeat visits, static assets<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Controls how responses are reused<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Misconfigured headers cache too much<\/span><\/p><\/td><\/tr><\/tbody><\/table><\/figure><p>Whichever type you use, caching needs two settings: TTL and invalidation.<\/p><p>TTL (time to live) decides how long a cached value stays valid. A short TTL keeps data fresh but means more repeated work. A long TTL improves speed but risks serving stale data. Invalidation is how you remove or update cached values when the source changes.<\/p><p>Track cache hit rate. If 90% of requests are served from cache, your database handles only 10% of that work. 
If the hit rate stays low, the TTL may be too short, the cache keys too specific, or the data may change too often to cache well.<\/p><h2 class=\"wp-block-heading\" id=\"h-4-optimize-database-queries-and-connection-handling\">4. Optimize database queries and connection handling<\/h2><p>Database calls are often one of the biggest sources of slow API responses because your app waits for the database before it can return a response.<\/p><p>Start with the highest-impact fixes:<\/p><ul class=\"wp-block-list\">\n<li>Select only the columns you need<\/li>\n\n\n\n<li>Avoid <code>SELECT *<\/code><\/li>\n\n\n\n<li>Add indexes to frequently filtered, sorted, or joined columns<\/li>\n\n\n\n<li>Avoid N+1 queries<\/li>\n\n\n\n<li>Paginate large result sets<\/li>\n\n\n\n<li>Use connection pooling<\/li>\n\n\n\n<li>Check execution plans for slow queries<\/li>\n<\/ul><p>Compare an unoptimized query with a cleaner one:<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">\/\/ Slow &ndash; fetches every column and every row<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const users = await db.query('SELECT * FROM users');<br><br>\/\/ Faster &ndash; only the fields and rows needed<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const users = await db.query(<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">'SELECT id, name, email FROM users ORDER BY created_at DESC LIMIT 20'<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">);<\/code><\/p><p>On a table with, say, 100,000 rows and 30 columns, the first query makes your app fetch and send far more data than needed. The second returns 20 rows with three columns. With a useful index on <code>created_at<\/code>, the database can pull the most recent records without scanning the full table.<\/p><p>The query itself is only half the story. The other is how your app connects to the database. 
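<\/p><p>With the <code>pg<\/code> driver, for example, you create one pool at startup and reuse it for every query (the pool size and connection string are placeholders to tune for your environment):<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const { Pool } = require('pg');<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const pool = new Pool({<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">connectionString: process.env.DATABASE_URL,<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">max: 10, \/\/ upper bound on open connections<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">});<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">\/\/ Each query borrows a connection and returns it to the pool when done<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">async function getRecentUsers() {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const { rows } = await pool.query(<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">'SELECT id, name, email FROM users ORDER BY created_at DESC LIMIT 20'<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">);<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">return rows;<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">}<\/code><\/p><p>Create the pool once per process, not per request.<\/p><p>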
Without pooling, the app opens a new connection for each request, which adds delay and can overwhelm the database. A connection pool keeps connections ready and reuses them.<\/p><p>Most Node.js drivers, including <code>pg<\/code> and <code>mysql2<\/code>, support pooling. Tune the pool size based on traffic, database capacity, and the number of app instances. Too few connections cause requests to queue. Too many overload the database.<\/p><h3 class=\"wp-block-heading\">How do indexes improve Node.js application performance?<\/h3><p>Indexes help the database find matching rows faster, which reduces query time and speeds up API responses.<\/p><p>If your login route looks up users by email, for example, an index on <code>email<\/code> helps the database find the matching row without scanning every record:<\/p><p><code data-enlighter-language=\"sql\" class=\"EnlighterJSRAW\">SELECT id, name, email<\/code><\/p><p><code data-enlighter-language=\"sql\" class=\"EnlighterJSRAW\">FROM users<\/code><\/p><p><code data-enlighter-language=\"sql\" class=\"EnlighterJSRAW\">WHERE email = $1;<\/code><\/p><p>Email values are typically unique or close to unique, so this type of index is often useful.<\/p><p>For paginated queries that filter and sort, a composite index like <code>(active, created_at)<\/code> may help, depending on the database and data distribution.<\/p><p>Don&rsquo;t index every column. Indexes speed up reads but add write time and storage, because the database updates the index whenever rows change.<\/p><h2 class=\"wp-block-heading\" id=\"h-5-use-streams-for-large-files-and-large-responses\">5. Use streams for large files and large responses<\/h2><p>Streams process data in chunks instead of loading full files into memory. If you load a 500MB CSV file the normal way, for example, it takes 500MB of RAM. 
Streamed, it uses only a small buffer at any given time.<\/p><p>Avoid loading large files like this:<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">app.get('\/download', async (req, res) =&gt; {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const data = await fs.promises.readFile('large-file.csv');<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">res.send(data);<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">});<\/code><\/p><p>Use a stream instead:<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const fs = require('node:fs');<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const { pipeline } = require('node:stream\/promises');<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">app.get('\/download', async (req, res, next) =&gt; {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">try {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">await pipeline(<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">fs.createReadStream('large-file.csv'),<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">res<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">);<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">} catch (err) {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">next(err);<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">}<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">});<\/code><\/p><p>The streamed version starts sending data immediately and keeps memory lower because Node handles the file in chunks.<\/p><p>Streams also handle speed mismatches between sender and receiver, which is called backpressure. 
If a user has a slow connection, Node pauses reading until the response stream catches up. Use <code>pipeline()<\/code> for production code because it handles stream errors and cleanup better than <code>.pipe()<\/code>.<\/p><p>In production, also check that the file exists, set the right headers, and handle client disconnects.<\/p><h2 class=\"wp-block-heading\" id=\"h-6-reduce-cpu-heavy-work-in-the-main-thread\">6. Reduce CPU-heavy work in the main thread<\/h2><p>CPU-heavy tasks block the main thread, preventing the event loop from handling other requests. Async code does not fix this because the JavaScript engine still runs the computation on one thread.<\/p><p>Common CPU-heavy tasks include:<\/p><ul class=\"wp-block-list\">\n<li>Image processing<\/li>\n\n\n\n<li>PDF generation<\/li>\n\n\n\n<li>Encryption<\/li>\n\n\n\n<li>Compression<\/li>\n\n\n\n<li>Large JSON transformation<\/li>\n\n\n\n<li>Report generation<\/li>\n\n\n\n<li>Data-heavy calculations<\/li>\n<\/ul><p>Move heavy work off the main request path based on when and how the result is needed:<\/p><ul class=\"wp-block-list\">\n<li>Use worker threads when the user needs the result during the same request.<\/li>\n\n\n\n<li>Use background job queues when the task can be completed later, like report generation or email processing.<\/li>\n\n\n\n<li>Use external services when the workload is specialized or resource-heavy, such as video processing or machine learning.<\/li>\n<\/ul><p>For background jobs, use a queue like BullMQ with Redis so your API can respond quickly while another worker processes the task.<\/p><p>For CPU-heavy web tasks such as PDF generation, image resizing, or data transformation, worker threads and background queues usually let you stay in Node.js.<\/p><p>For workloads built around data science, machine learning, or numerical computing, it may be worth comparing <a href=\"\/tutorials\/node-js-vs-python\" data-wpel-link=\"internal\" rel=\"follow\">Node.js vs. 
Python<\/a> before deciding where that part of the system should run.<\/p><h3 class=\"wp-block-heading\">When should you use worker threads in Node.js?<\/h3><p>Use worker threads when your app needs to run CPU-heavy JavaScript without blocking the main event loop, and the user needs the result in the same request.<\/p><p>Worker threads and clustering solve different problems. Worker threads handle CPU-heavy tasks inside a single process. Clustering runs multiple Node.js processes to handle more concurrent requests across CPU cores.<\/p><p>Say your app generates PDF invoices. The main thread can stay available while a worker handles the generation. This example creates one worker for clarity. In production, use a worker pool for repeated jobs:<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const { Worker } = require('node:worker_threads');<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">function generatePDF(data) {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">return new Promise((resolve, reject) =&gt; {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const worker = new Worker('.\/pdf-worker.js', {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">workerData: data,<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">});<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">worker.on('message', resolve);<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">worker.on('error', reject);<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">worker.on('exit', (code) =&gt; {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">if (code !== 0) {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">reject(new Error(`Worker stopped with exit code ${code}`));<\/code><\/p><p><code 
data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">}<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">});<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">});<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">}<\/code><\/p><p>Creating a worker for every small task can cost more than the task itself. For short operations (a few milliseconds or less), keep them on the main thread.<\/p><h2 class=\"wp-block-heading\" id=\"h-7-scale-node-js-with-clustering-and-load-balancing\">7. Scale Node.js with clustering and load balancing<\/h2><p>Clustering runs multiple Node.js worker processes across CPU cores, so your app can handle more concurrent requests.<\/p><p>How much you gain depends on the workload. I\/O-heavy apps (like API servers waiting on databases) benefit more than CPU-heavy apps, where each worker still competes for processing power.<\/p><p>Either way, you rarely get a perfect 4x improvement on four cores because shared resources like the database, network, and OS scheduling still create limits.<\/p><p>The Node.js cluster module forks your app into multiple processes that share the same port. On most platforms, Node&rsquo;s primary process distributes connections across workers using round-robin scheduling. PM2 makes this easier by adding process management, automatic restarts, monitoring, and zero-downtime reloads.<\/p><p>Beyond one server, a reverse proxy like Nginx or a load balancer can distribute traffic across multiple servers or containers. 
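<\/p><p>The cluster module approach described above can be sketched like this (<code>.\/app<\/code> stands in for your server entry point; in production, <code>pm2 start app.js -i max<\/code> gives you the same fan-out plus process management):<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const cluster = require('node:cluster');<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const os = require('node:os');<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">if (cluster.isPrimary) {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">\/\/ Fork one worker per CPU core; they share the same port<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">for (let i = 0; i &lt; os.cpus().length; i++) {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">cluster.fork();<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">}<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">cluster.on('exit', () =&gt; cluster.fork()); \/\/ replace crashed workers<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">} else {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">require('.\/app'); \/\/ each worker runs the same HTTP server<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">}<\/code><\/p><p>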
<a href=\"\/tutorials\/how-to-use-node-js-with-docker\" data-wpel-link=\"internal\" rel=\"follow\">Docker<\/a> and Kubernetes handle this at a larger scale.<\/p><div class=\"wp-block-image wp-block-image aligncenter size-large\">\n<figure data-wp-context='{\"imageId\":\"6a05e1dd81dcb\"}' data-wp-interactive=\"core\/image\" class=\"wp-lightbox-container\"><img decoding=\"async\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/www.hostinger.com\/tutorials\/wp-content\/uploads\/sites\/2\/2026\/05\/1778753993383-0.png\" alt=\"nginx landing page\"><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p>Clustering helps with throughput, but it won&rsquo;t fix slow queries, memory leaks, blocking CPU work inside each process, or missing caches. Four copies of a slow app are still slow, but they can handle more users at once.<\/p><p>\n\n\n\n    <p class=\"warning\">\n        <strong>Warning!<\/strong> If session data is stored only in process memory, clustering breaks sessions. 
Each worker has its own memory, so a user might hit a different worker on the next request and lose their session. Store sessions in Redis or another shared store instead.    <\/p>\n    \n\n\n\n<\/p><h3 class=\"wp-block-heading\">What is the difference between clustering and load balancing?<\/h3><p>Clustering runs multiple Node.js processes on one server to use more CPU cores. Load balancing distributes traffic across processes, servers, or containers.<\/p><p>Production apps often use both. Clustering fills the CPU cores on each server. Load balancing spreads traffic across servers for better availability and capacity.<\/p><h2 class=\"wp-block-heading\" id=\"h-8-tune-node-js-memory-and-garbage-collection\">8. Tune Node.js memory and garbage collection<\/h2><p>Memory tuning reduces garbage collection pauses, prevents crashes, and keeps your app stable under sustained traffic. Most apps don&rsquo;t need manual memory tuning unless memory becomes a problem.<\/p><p>V8, the JavaScript engine used by Node.js, manages memory automatically. Its heap includes two main areas:<\/p><ul class=\"wp-block-list\">\n<li><strong>New Space &ndash;<\/strong> stores short-lived objects and is collected often.<\/li>\n\n\n\n<li><strong>Old Space &ndash;<\/strong> stores objects that survive multiple garbage collection cycles and is collected less frequently.<\/li>\n<\/ul><p>High memory usage isn&rsquo;t always a leak. It may come from normal growth under load, from large caches, from native buffers, or from fragmentation. A true leak means memory keeps growing because the app holds references to objects it no longer needs.<\/p><p>You can adjust V8 memory limits with:<\/p><p><code data-enlighter-language=\"shell\" class=\"EnlighterJSRAW\">node --max-old-space-size=4096 app.js<\/code><\/p><p>This raises the Old Space limit. Use it when your app genuinely needs more heap, not to hide a leak. The default depends on your Node.js version and available system memory. 
In containers, Node.js adjusts the limit based on container memory.<\/p><p>The <code>--max-semi-space-size<\/code> flag influences New Space. A larger value reduces how often short-lived objects get promoted to Old Space, which means fewer slow garbage collection runs. But changing it without profiling can make performance worse. Test with your actual workload first.<\/p><h3 class=\"wp-block-heading\">How can you find memory leaks in Node.js?<\/h3><p>Memory leaks happen when your app keeps references to objects it no longer needs.<\/p><p>Common causes include:<\/p><ul class=\"wp-block-list\">\n<li>Global arrays or maps that keep growing<\/li>\n\n\n\n<li>In-memory caches without TTL or size limits<\/li>\n\n\n\n<li>Event listeners that are never removed<\/li>\n\n\n\n<li>Large objects stored in closures<\/li>\n\n\n\n<li>Timers that never clear<\/li>\n<\/ul><p>Use heap snapshots to compare memory over time:<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const v8 = require('node:v8');<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">v8.writeHeapSnapshot();<\/code><\/p><p>Take one snapshot, run the app under load, then take another. Objects that keep growing between snapshots are candidates for leaks.<\/p><p>Monitor memory in production, too. If memory rises for hours or days and doesn&rsquo;t drop after garbage collection, investigate. Load testing with Autocannon can make leaks appear faster in staging.<\/p><h2 class=\"wp-block-heading\" id=\"h-9-reduce-dependency-and-code-overhead\">9. Reduce dependency and code overhead<\/h2><p>Removing unnecessary dependencies and repeated operations can make your app start faster, use less memory, and reduce security risk. Many of these fixes are small cleanup tasks, not rewrites.<\/p><p>Check <code>package.json<\/code> first. Remove packages your app no longer uses. Then look for large packages doing simple tasks. 
If you use Lodash only for <code>_.get()<\/code>, optional chaining may be enough:<\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const city = user?.address?.city;<\/code><\/p><p>If you use Moment.js only for basic date formatting, <code>Intl.DateTimeFormat<\/code> or a smaller library like <code>date-fns<\/code> may cover what you need.<\/p><p>Dependencies aren&rsquo;t the only source of wasted work. Your own code can repeat expensive operations too:<\/p><ul class=\"wp-block-list\">\n<li>Creating a new HTTP client for every outgoing request<\/li>\n\n\n\n<li>Reading and parsing the same config file on every request<\/li>\n\n\n\n<li>Compiling a regex inside a loop<\/li>\n<\/ul><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">\/\/ Wasteful &ndash; re-reads config on every request<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">app.get('\/settings', async (req, res) =&gt; {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const config = JSON.parse(await fs.promises.readFile('config.json', 'utf8'));<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">res.json({ theme: config.theme });<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">});<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">\/\/ Better &ndash; read once at startup<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">const config = JSON.parse(fs.readFileSync('config.json', 'utf8'));<\/code><\/p><p><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">app.get('\/settings', (req, res) =&gt; {<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">res.json({ theme: config.theme });<\/code><\/p><p><code data-enlighter-language=\"js\" class=\"EnlighterJSRAW\">});<\/code><\/p><p>Synchronous reads are fine during startup because no requests are being handled yet. 
The problem is running them inside request handlers, where they can block other requests.<\/p><p>Keep the dependencies you do use updated. Newer versions often include performance fixes and security patches.<\/p><h2 class=\"wp-block-heading\" id=\"h-10-use-a-cdn-and-compression-for-static-assets\">10. Use a CDN and compression for static assets<\/h2><p>A CDN serves static files from locations closer to users, which reduces latency and lowers load on your origin server. Use it for JavaScript, CSS, images, fonts, and static downloads. Your Node.js app shouldn&rsquo;t spend CPU time serving these when a CDN or reverse proxy can handle them.<\/p><p>Beyond location, file size also affects how fast assets load. Compression reduces file size before assets reach the browser. Gzip and Brotli are the main options. <\/p><p>Brotli compresses static files better than Gzip when you pre-compress them at build time. For dynamic responses compressed on the fly, Gzip is often faster because Brotli&rsquo;s encoding takes more CPU.<\/p><p>Compress text-based assets like HTML, CSS, JavaScript, JSON, and SVG. Skip files that are already compressed, like JPEG, PNG, MP4, and ZIP, where compression adds CPU cost without much size reduction.<\/p><p>For images, use modern formats like WebP or AVIF when possible, and resize images to the dimensions users actually see. If you upload a 4000&times;3000 image but display it at 400&times;300, the browser still downloads the full-size file unless you serve a smaller version. Resizing at build time or through an image CDN saves bandwidth.<\/p><p>Once your files are compressed and properly sized, set long cache headers so returning visitors don&rsquo;t download them again:<\/p><p><code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">Cache-Control: public, max-age=31536000, immutable<\/code><\/p><p>Use cache-busting filenames like <code>app.a3f2b1.js<\/code>. 
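<\/p><p>One way to apply that caching policy in code, sketched as a small helper (the hashed-filename pattern is an assumption about your build output):<\/p>

```javascript
// Hashed filenames (e.g. app.a3f2b1.js) get a new name whenever their content
// changes, so they can be cached "forever"; entry points like index.html must
// revalidate so browsers pick up the new filenames.
function cacheControlFor(filename) {
  const hashed = /\.[0-9a-f]{6,}\.(js|css|woff2?)$/.test(filename);
  return hashed ? 'public, max-age=31536000, immutable' : 'no-cache';
}
```

<p>A CDN or reverse proxy can apply the same rule with a path pattern, so your Node.js app never has to serve these files itself.<\/p><p>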
This lets browsers cache files for a long time while still receiving updates when filenames change.<\/p><h2 class=\"wp-block-heading\" id=\"h-11-choose-the-right-hosting-environment-for-node-js-performance\">11. Choose the right hosting environment for Node.js performance<\/h2><p>Hosting affects performance because CPU, RAM, storage speed, bandwidth, CDN access, and server region all determine how well your optimized app runs in production.<\/p><p>Optimizing your code makes the app faster. The right host makes sure that speed reaches your users. You need both.<\/p><p>The main decision is how much infrastructure you want to manage.<\/p><figure tabindex=\"0\" class=\"wp-block-table\"><table><tbody><tr><td colspan=\"1\" rowspan=\"1\"><p><strong>Factor<\/strong><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><strong>Managed Node.js hosting<\/strong><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><strong>VPS<\/strong><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>Server control<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>The platform handles most of the setup<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Full root access<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>Deployment<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Git or file-based, lower complexity<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>SSH, manual setup<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>Maintenance<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Managed by the provider<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>You handle updates and patches<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" rowspan=\"1\"><p><span>Flexibility<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Works within platform limits<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Full control over runtime and server<\/span><\/p><\/td><\/tr><tr><td colspan=\"1\" 
rowspan=\"1\"><p><span>Best for<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Standard apps, faster deployment<\/span><\/p><\/td><td colspan=\"1\" rowspan=\"1\"><p><span>Custom stacks, Docker, PM2, Nginx<\/span><\/p><\/td><\/tr><\/tbody><\/table><\/figure><p><a href=\"\/web-apps-hosting\" data-wpel-link=\"internal\" rel=\"follow\">Node.js hosting<\/a> from Hostinger is one managed option. It supports deployment from GitHub and includes a CDN, SSL, and DDoS protection, so you can focus on shipping code instead of maintaining the server. Node.js hosting is available on Business and Cloud plans.<\/p><div class=\"wp-block-image wp-block-image aligncenter size-large\">\n<figure data-wp-context='{\"imageId\":\"6a05e1dd832e7\"}' data-wp-interactive=\"core\/image\" class=\"wp-lightbox-container\"><img decoding=\"async\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-on-async--load=\"callbacks.setButtonStyles\" data-wp-on-async-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/www.hostinger.com\/tutorials\/wp-content\/uploads\/sites\/2\/2026\/05\/1778754001399-0.png\" alt=\"Hostinger Node.js web app landing page\"><button class=\"lightbox-trigger\" type=\"button\" aria-haspopup=\"dialog\" aria-label=\"Enlarge\" data-wp-init=\"callbacks.initTriggerButton\" data-wp-on-async--click=\"actions.showLightbox\" data-wp-style--right=\"state.imageButtonRight\" data-wp-style--top=\"state.imageButtonTop\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewbox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\"><\/path>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure><\/div><p>VPS hosting is 
better when you need root access, Docker workflows, PM2, Nginx, specific Node.js versions, or low-level performance tuning. The trade-off is that updates, patches, and security are on you.<\/p><p>You can <a href=\"\/tutorials\/deploy-node-js-application\" data-wpel-link=\"internal\" rel=\"follow\">deploy a Node.js application<\/a> with either approach. Choose based on your app&rsquo;s needs, not just price. Region, CPU limits, memory, scaling, and monitoring all affect the experience users get.<\/p><figure class=\"wp-block-image size-large\"><a class=\"hgr-tutorials-cta hgr-tutorials-cta-vps-hosting\" href=\"\/vps-hosting\" target=\"_blank\" rel=\"noreferrer noopener\"><img decoding=\"async\" width=\"1024\" height=\"300\" src=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2023\/02\/VPS-hosting-banner.png\/public\" alt=\"\" class=\"wp-image-77934\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2023\/02\/VPS-hosting-banner.png\/w=1024,fit=scale-down 1024w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2023\/02\/VPS-hosting-banner.png\/w=300,fit=scale-down 300w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2023\/02\/VPS-hosting-banner.png\/w=150,fit=scale-down 150w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/2\/2023\/02\/VPS-hosting-banner.png\/w=768,fit=scale-down 768w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure><h2 class=\"wp-block-heading\" id=\"h-12-monitor-node-js-performance-in-production\">12. 
Monitor Node.js performance in production<\/h2><p>Production monitoring catches problems that development testing misses: slow requests under real traffic, memory leaks, error spikes, and database queries that slow down as data grows.<\/p><p>APM tools like Datadog, New Relic, or AppSignal track route-level performance under real traffic. They won&rsquo;t catch everything on their own, so pair them with structured logs, traces, error monitoring, and uptime checks.<\/p><p>For lighter setups, structured logging with Pino or Winston, along with uptime monitoring, covers the basics.<\/p><p>Set alerts for:<\/p><ul class=\"wp-block-list\">\n<li>Rising p95 or p99 latency<\/li>\n\n\n\n<li>Continuous memory growth<\/li>\n\n\n\n<li>Sustained high CPU usage<\/li>\n\n\n\n<li>Rising error rates<\/li>\n\n\n\n<li>Increasing database query times<\/li>\n\n\n\n<li>Falling cache hit rate<\/li>\n<\/ul><p>Start with conservative thresholds and adjust as you learn your app&rsquo;s normal patterns. Noisy alerts get ignored. Targeted alerts catch problems early.<\/p><p>When metrics show something is slow, apply the relevant fix and measure again.<\/p><h2 class=\"wp-block-heading\" id=\"h-node-js-performance-optimization-checklist\">Node.js performance optimization checklist<\/h2><p>Use this checklist after profiling your app. 
Start with the items connected to the bottlenecks you found, then revisit the rest as traffic, features, and infrastructure change:<\/p><ul class=\"wp-block-list\">\n<li>Measure baseline performance before changing code<\/li>\n\n\n\n<li>Profile slow routes to find what&rsquo;s actually slow<\/li>\n\n\n\n<li>Check p95 and p99 latency, not only averages<\/li>\n\n\n\n<li>Replace synchronous operations inside request handlers<\/li>\n\n\n\n<li>Move route-specific middleware off the global stack<\/li>\n\n\n\n<li>Limit incoming JSON payload size<\/li>\n\n\n\n<li>Return only the fields each endpoint needs<\/li>\n\n\n\n<li>Paginate large list responses<\/li>\n\n\n\n<li>Compress text-based responses at one layer only<\/li>\n\n\n\n<li>Cache repeated queries and expensive calculations<\/li>\n\n\n\n<li>Track cache hit rate<\/li>\n\n\n\n<li>Add indexes for frequent filters, joins, and sorting<\/li>\n\n\n\n<li>Avoid N+1 queries<\/li>\n\n\n\n<li>Use database connection pooling<\/li>\n\n\n\n<li>Use streams for uploads, downloads, and large exports<\/li>\n\n\n\n<li>Move CPU-heavy work to worker threads or background queues<\/li>\n\n\n\n<li>Scale across CPU cores with clustering<\/li>\n\n\n\n<li>Store sessions in a shared store when clustering<\/li>\n\n\n\n<li>Check memory growth before tuning heap flags<\/li>\n\n\n\n<li>Remove unused dependencies<\/li>\n\n\n\n<li>Serve static files through a CDN or reverse proxy<\/li>\n\n\n\n<li>Monitor latency, errors, CPU, memory, database timing, and cache performance<\/li>\n<\/ul><h2 class=\"wp-block-heading\" id=\"h-what-to-do-after-optimizing-node-js\">What to do after optimizing Node.js<\/h2><p>Node.js performance optimization doesn&rsquo;t end after one round of fixes. 
New features, traffic growth, dependency updates, and database growth can all create new slowdowns over time.<\/p><p>Once you&rsquo;ve fixed the main performance issues, review the rest of your production setup: error handling, security, dependency management, environment configuration, logging, testing, and deployment workflows.<\/p><p>These areas may not always affect speed directly, but they play a big role in how reliably your app performs in production. A fast app still needs clear logs, safe configuration, stable dependencies, and predictable deployment processes to stay healthy as traffic and complexity grow.<\/p><p>Following <a href=\"\/tutorials\/node-js-best-practices\" data-wpel-link=\"internal\" rel=\"follow\">best practices for Node.js<\/a> development helps keep your app maintainable, secure, and production-ready as it grows. Keep measuring, fix the slowest bottleneck first, and repeat.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Node.js performance optimization means finding and fixing the parts of your app that slow it down, so it can respond [&#8230;]<\/p>\n<p><a class=\"btn btn-secondary understrap-read-more-link\" href=\"\/tutorials\/node-js-performance-optimization\">Read More&#8230;<\/a><\/p>\n","protected":false},"author":624,"featured_media":148102,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"rank_math_title":"Node.js performance optimization: 12 ways to speed up apps","rank_math_description":"Learn how to optimize Node.js performance with profiling, caching, database tuning, clustering, memory management, monitoring, and better hosting.","rank_math_focus_keyword":"Node.js performance 
optimization","footnotes":""},"categories":[22646,22644],"tags":[],"class_list":["post-148096","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-pre-installed-applications","category-vps"],"hreflangs":[{"locale":"en-US","link":"https:\/\/www.hostinger.com\/tutorials\/node-js-performance-optimization","default":1},{"locale":"en-PH","link":"https:\/\/www.hostinger.com\/ph\/tutorials\/node-js-performance-optimization","default":0},{"locale":"en-MY","link":"https:\/\/www.hostinger.com\/my\/tutorials\/node-js-performance-optimization","default":0},{"locale":"en-UK","link":"https:\/\/www.hostinger.com\/uk\/tutorials\/node-js-performance-optimization","default":0},{"locale":"en-IN","link":"https:\/\/www.hostinger.com\/in\/tutorials\/node-js-performance-optimization","default":0},{"locale":"en-CA","link":"https:\/\/www.hostinger.com\/ca\/tutorials\/node-js-performance-optimization","default":0},{"locale":"en-AU","link":"https:\/\/www.hostinger.com\/au\/tutorials\/node-js-performance-optimization","default":0},{"locale":"en-NG","link":"https:\/\/www.hostinger.com\/ng\/tutorials\/node-js-performance-optimization","default":0}],"_links":{"self":[{"href":"https:\/\/www.hostinger.com\/tutorials\/wp-json\/wp\/v2\/posts\/148096","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.hostinger.com\/tutorials\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.hostinger.com\/tutorials\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.hostinger.com\/tutorials\/wp-json\/wp\/v2\/users\/624"}],"replies":[{"embeddable":true,"href":"https:\/\/www.hostinger.com\/tutorials\/wp-json\/wp\/v2\/comments?post=148096"}],"version-history":[{"count":5,"href":"https:\/\/www.hostinger.com\/tutorials\/wp-json\/wp\/v2\/posts\/148096\/revisions"}],"predecessor-version":[{"id":148111,"href":"https:\/\/www.hostinger.com\/tutorials\/wp-json\/wp\/v2\/posts\/148096\/revisions\/148111"}],"wp:featuredmedia":[{"
embeddable":true,"href":"https:\/\/www.hostinger.com\/tutorials\/wp-json\/wp\/v2\/media\/148102"}],"wp:attachment":[{"href":"https:\/\/www.hostinger.com\/tutorials\/wp-json\/wp\/v2\/media?parent=148096"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.hostinger.com\/tutorials\/wp-json\/wp\/v2\/categories?post=148096"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.hostinger.com\/tutorials\/wp-json\/wp\/v2\/tags?post=148096"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}