Powerful ways to improve Node.js performance at scale

Aryan Raj
~ 11 min read | Published on Mar 05, 2024






Node.js is a popular and powerful runtime environment that has revolutionized the way web applications are developed. However, as your application grows in size and complexity, it can start to experience performance issues that affect its scalability and reliability.

In this blog post, we will explore some of the most effective ways to improve Node.js performance at scale. From optimizing your code and managing resources to using caching and load balancing, we'll cover a range of powerful techniques that can help you ensure your Node.js application runs smoothly and efficiently, no matter how large or complex it becomes. So let's dive in and learn how to take your Node.js performance to the next level.

1. Use TypeScript

TypeScript is a powerful tool for improving code quality and overall development experience. It extends JavaScript by adding type definitions, allowing for early detection of potential issues. One major advantage of TypeScript is improved error detection, making it easier to catch typos and other errors, saving time and effort in the long run.

Another key benefit of TypeScript is enhanced code editor functionality. Type definitions provide a list of eligible properties, methods, and enumeration values, making the development process more efficient, especially when working with complex data structures. Additionally, TypeScript helps to organize code in a structured and maintainable manner, with type enforcement defining strict rules for the code.

In short, TypeScript offers numerous benefits for developers, including improved error detection, enhanced code editor functionality, and organized code structure. Whether working on a personal project or a large-scale enterprise application, TypeScript is a valuable tool for creating high-quality code.

2. Optimize performance with server-side caching

Caching is a widely-used strategy for boosting the performance of a web application by reducing latency. The main goal of server-side caching is to speed up data retrieval, either by cutting down on computation time or I/O operations such as network or database access.

What is the use of a cache? A cache serves as high-speed storage for frequently accessed data, eliminating the need for constant retrieval from the slower primary source. Caching is most effective for data that does not change frequently, and it can significantly improve the responsiveness of repeated requests for such data. Additionally, the results of computationally intensive tasks can be stored in the cache, freeing up server resources from repeating the work.

For API requests to external systems, caching can also be a useful technique, especially if the responses can be reliably reused for subsequent requests. Storing API requests in a cache avoids the added network request and associated costs.

You can add caching to your Node.js application with an in-process solution such as node-cache, where actively used data is stored in memory for quick retrieval. However, in-process caches are tied to a single application process and may not be suitable for distributed workflows. In those cases, a distributed cache such as Redis or Memcached is a better fit, as it runs independently of the application and scales across multiple servers. The sketch below shows the in-process variant.
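
As a quick illustration, here is a minimal sketch of in-process caching with the node-cache package. The cache key scheme and the getUserFromDb helper are hypothetical stand-ins for whatever slow lookup your application performs.

const NodeCache = require('node-cache');

// Entries expire after 60 seconds by default (stdTTL is given in seconds).
const cache = new NodeCache({ stdTTL: 60 });

async function getUser(id) {
  const cached = cache.get(`user:${id}`);
  if (cached !== undefined) {
    return cached; // served from memory, no database round trip
  }

  const user = await getUserFromDb(id); // hypothetical slow database lookup
  cache.set(`user:${id}`, user);
  return user;
}

The same pattern applies to a distributed cache; only the get and set calls change to those of the Redis or Memcached client you choose.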

3. Improve your Node.js development with asynchronous methods

Synchronous code refers to code that is executed in sequential order. When a piece of synchronous code is executed, the program will wait for it to complete before moving on to the next line of code. This can lead to blocking, where the program must wait for a task to finish before it can move on to the next task.

In contrast, asynchronous code allows the program to execute multiple tasks simultaneously. When an asynchronous task is executed, the program does not wait for it to complete before moving on to the next task. Instead, it continues to execute other tasks while the asynchronous task runs in the background. This can result in faster and more efficient execution of code, particularly when dealing with long-running operations such as network requests or file I/O.

In Node.js, asynchronous programming is particularly useful for handling I/O operations, such as reading and writing data to a database or file system, as it allows the program to handle multiple requests simultaneously without blocking the thread.

While synchronous code may be appropriate for simple tasks, such as one-off scripts or reading configuration at startup, experienced Node.js developers generally prefer asynchronous methods for I/O and other potentially long-running operations. This is particularly true for larger, more complex projects. Here’s an example.

const fs = require('fs');

// Asynchronous method for reading a file
fs.readFile('example.txt', 'utf8', (error, data) => {
  if (error) {
    console.error('Error reading file:', error);
  } else {
    console.log('File contents:', data);
  }
});

// Other operations can be performed while the file is being read
console.log('Performing other operations...');

In this example, the fs.readFile method is used to read the contents of a file. This is an asynchronous method that accepts a callback function as an argument. When the file is read, the callback function is executed, and the contents of the file are passed as a parameter. While the file is being read in the background, other operations can be performed simultaneously, such as logging a message to the console.

For optimal performance, it is recommended to always use asynchronous code in your Node.js development. So, if you want to enhance the user experience and improve the efficiency of your web application, consider embracing asynchronous methods in your development process.
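
If you are on a reasonably recent Node.js version, the same read can be written with the promise-based fs API and async/await, which many teams find easier to reason about than nested callbacks. This is a minimal sketch of the equivalent code:

const fs = require('fs/promises');

async function readExample() {
  try {
    const data = await fs.readFile('example.txt', 'utf8');
    console.log('File contents:', data);
  } catch (error) {
    console.error('Error reading file:', error);
  }
}

// readExample() returns immediately with a promise, so other work
// can still run while the file is being read in the background.
readExample();
console.log('Performing other operations...');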

4. Monitor & measure your application performance

Before attempting to optimize your application's performance, it is crucial to assess its current performance status through monitoring and measuring. By doing so, you can identify any inefficiencies, flaws, or problematic areas that need improvement.

To evaluate your application's performance, you can conduct various types of tests, such as stress testing, load testing, spike testing, scalability testing, and volume testing. These tests help to measure the system's potential, tolerance, and recovery capabilities under different loads, workloads, and conditions.

Here are some tools that can be used for different types of performance testing:

Test type            Apache JMeter   LoadRunner   Gatling   Siege   Tsung
Stress testing       Yes             Yes          Yes       Yes     Yes
Load testing         Yes             Yes          Yes       No      Yes
Spike testing        Yes             Yes          Yes       No      No
Scalability testing  Yes             Yes          Yes       No      Yes
Volume testing       Yes             Yes          Yes       No      Yes

Note that these are just a few examples, and there are many other performance testing tools available in the market. The choice of tool depends on various factors such as the application technology stack, the testing objectives, and the testing budget.

The metrics obtained from these tests, such as response times, error rates, requests per second, CPU and memory usage, concurrent users, and throughput, provide insight into the application's performance status.

Once you have identified the areas that require optimization, you can implement specific optimizations and then re-run the tests to validate their effectiveness. This iterative process enables you to fine-tune the system and improve its overall performance.

5. Efficiently serving static content with Nginx or CDN

In order to optimize the performance of your Node.js servers, it is advisable to avoid serving static assets such as JavaScript, CSS, and image files directly from your application. This is because Node.js was not primarily designed for this purpose, and doing so may drain essential resources and hinder critical business computations.

Instead, it is recommended to delegate the task of serving static files to a dedicated web server like Nginx, which can apply specialized optimizations that are not feasible for Node.js to perform.

Alternatively, you could consider leveraging a CDN proxy such as Amazon CloudFront to cache your static content and serve it closer to end-users. By doing this, you can relieve the burden on your Node.js servers, allowing them to focus solely on processing dynamic requests.

6. Maximize throughput with clustering

Clustering is an effective technique that allows you to expand the capability of a Node.js server by dividing it into multiple child processes (workers) that run simultaneously and share the same port. It's a crucial approach to minimize downtime, delays, and interruptions by distributing the incoming connections among the worker processes and fully utilizing all available CPU cores.

By using the cluster module in the standard library, it's straightforward to cluster your Node.js server. For example, as shown in the official documentation, the following code demonstrates how to implement clustering:

const cluster = require("cluster");
const http = require("http");
const process = require("process");
const os = require("os");

const numCPUs = os.cpus().length;

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on("exit", (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  // Workers can share any TCP connection.
  // In this case it is an HTTP server.
  http
    .createServer((req, res) => {
      res.writeHead(200);
      res.end("hello world\n");
    })
    .listen(8000);

  console.log(`Worker ${process.pid} started`);
}

When this program is executed, connections made to port 8000 are distributed among the worker processes, resulting in more efficient handling of requests in the application.

$ node server.js
Primary 15990 is running
Worker 15997 started
Worker 15998 started
Worker 16010 started
Worker 16004 started

However, one limitation of the built-in cluster module is that you must write a significant amount of code to create and manage the worker processes, and it does not let you adjust the number of workers dynamically.

But there are alternative solutions that can help overcome this limitation. One approach is to use a process manager like PM2 or Forever to manage the worker processes dynamically. These tools allow you to start, stop, and restart worker processes automatically based on various criteria, such as CPU usage, memory usage, and other metrics.

PM2, for example, provides a simple command-line interface to manage the worker processes and provides various features such as process clustering, load balancing, and automatic restarts. With PM2, you can also monitor the worker processes and collect metrics such as CPU usage, memory usage, and request rate.
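
For example, assuming your entry point is server.js (so PM2 names the process "server"), cluster mode can be enabled and resized roughly as follows; consult the PM2 documentation for the exact options in your version:

$ pm2 start server.js -i max   # -i max forks one worker per CPU core
$ pm2 scale server 4           # adjust the number of workers at runtime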

Another approach is to use a container orchestration platform like Kubernetes, Docker Swarm, or Amazon ECS. These platforms provide a more comprehensive solution for managing and scaling applications in a distributed environment. They allow you to define and deploy containers as well as manage the scaling and orchestration of the containers based on various metrics and policies.

Overall, while the Node.js built-in cluster module can be useful in distributing the workload across multiple processes, using a process manager or container orchestration platform can help you manage the worker processes more dynamically and efficiently.

7. Maximize capacity through load balancing across multiple machines

Scaling a Node.js application horizontally across multiple machines is an effective way to improve its performance and increase its availability. By distributing the application's processes to run on several machines, it can handle a larger volume of traffic. The key to successful horizontal scaling is utilizing a load balancer to manage incoming traffic to the servers.

What does a load balancer do? A load balancer is responsible for directing incoming traffic to the appropriate servers, which helps ensure that the application can handle the increased traffic without experiencing downtime or slowdowns. It is also a good practice to have multiple load balancers in place, as this creates redundancy and helps avoid a single point of failure in the system.
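
To make the idea concrete, here is a minimal round-robin load balancer sketched with nothing but the core http module. The upstream addresses are placeholders, and in production you would normally reach for Nginx, HAProxy, or a managed cloud load balancer instead.

const http = require('http');

const upstreams = [
  { host: '127.0.0.1', port: 8001 },
  { host: '127.0.0.1', port: 8002 },
];
let current = 0;

http.createServer((req, res) => {
  // Pick the next upstream server in round-robin order.
  const target = upstreams[current];
  current = (current + 1) % upstreams.length;

  const proxyReq = http.request(
    { host: target.host, port: target.port, path: req.url, method: req.method, headers: req.headers },
    (proxyRes) => {
      res.writeHead(proxyRes.statusCode, proxyRes.headers);
      proxyRes.pipe(res);
    }
  );

  proxyReq.on('error', () => {
    res.writeHead(502);
    res.end('Bad gateway');
  });

  req.pipe(proxyReq);
}).listen(8000);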

8. HTTP/2 and SSL

Let’s look at the two main features of the HTTP/2 protocol that play a major role in improving performance:

  • HTTP/2 uses header compression: this feature eliminates redundant headers and compresses the remaining HTTP headers, reducing the overall size of the data being transmitted.
  • HTTP/2 uses multiplexing: it allows multiple requests and responses to be exchanged simultaneously over a single TCP connection. By letting many requests share one connection, multiplexing reduces the time and resources spent establishing new HTTP connections.

To use HTTP/2 in practice, you need to serve it over Transport Layer Security (TLS), the successor to the Secure Sockets Layer (SSL) protocol, since browsers only support HTTP/2 over encrypted connections. The good news is that the Node.js core http2 module makes it simple and straightforward to set up a secure HTTP/2 server, as sketched below.
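
As a minimal sketch, the built-in http2 module can create a secure server in a few lines. The certificate and key file names below are placeholders for your own TLS material (for local testing you can generate a self-signed pair with openssl):

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('localhost-privkey.pem'),
  cert: fs.readFileSync('localhost-cert.pem'),
});

server.on('stream', (stream, headers) => {
  // Each request arrives as a stream multiplexed over a single TCP connection.
  stream.respond({ ':status': 200, 'content-type': 'text/plain; charset=utf-8' });
  stream.end('Hello over HTTP/2\n');
});

server.listen(8443);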

Overall, by utilizing HTTP/2 with Node.js, you can improve the performance of your web applications, make them more efficient, and provide a better user experience for your visitors.

9. Increase the effectiveness of server communication with WebSockets

In a typical HTTP request/response model, a new connection is established between the client and server for each request. This connection setup involves several steps, including establishing a TCP connection, sending an HTTP request, and receiving an HTTP response. Each of these steps incurs overhead, including network latency, data serialization/deserialization, and CPU processing.

When a web application requires frequent data exchange between the client and server, such as in a chat application or real-time game, this overhead of establishing and tearing down connections can become a bottleneck. This is because the cost of setting up and tearing down connections becomes significant relative to the time required for actual data exchange.

WebSockets, on the other hand, offer a persistent connection between the client and server, which eliminates the need for frequent connection setup/teardown. This not only reduces the overhead of creating and managing many short-lived connections but also allows for faster and more efficient communication between the client and server.

In a Node.js application, the overhead of managing many short-lived connections can hurt performance, especially when dealing with a large number of concurrent clients. By using WebSockets, the server can handle far more connections at a lower performance cost, making them a good fit for high-concurrency, real-time applications. A sketch of a simple WebSocket server follows.
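
Here is a minimal sketch of a WebSocket server using the popular ws package (one option among several, such as Socket.IO). The echo behaviour is purely illustrative:

const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  // The connection stays open, so there is no per-message HTTP handshake.
  socket.on('message', (message) => {
    socket.send(`echo: ${message}`);
  });
});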

10. Scalable client-side authentication with JSON Web Tokens (JWT) and stateless approach

To create a personalized experience for users, many web applications need to maintain their state. For instance, when implementing stateful authentication, a random session identifier is usually generated to store session information on the server. Centralized storage solutions like Redis can be used to store session data, or the IP hash method can be employed to ensure that users always reach the same web server when scaling a stateful solution to a load-balanced application over multiple servers.

However, adopting a stateful approach has some drawbacks. For example, restricting users to a single server can be problematic if that server requires maintenance.

A more scalable approach is to use stateless authentication with JSON Web Tokens (JWT). This has the advantage of making the data always accessible, regardless of which server handles a user's request. With JWT, a typical implementation generates a signed token when a user logs in. The token is a base64url-encoded JSON object that contains the necessary user information, and it is signed so the server can detect tampering. The token is then sent to the client, which includes it with every API request.

By utilizing JWT, there is no need to store session information on the server, thereby avoiding the issues that come with maintaining state. The token's content can be decoded and its signature verified to obtain the user's information, allowing for efficient, stateless communication between the client and server. A minimal example follows.
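
Below is a minimal sketch using the widely used jsonwebtoken package. The JWT_SECRET environment variable and the Express-style request handlers are assumptions made for illustration:

const jwt = require('jsonwebtoken');

const JWT_SECRET = process.env.JWT_SECRET; // keep the signing secret out of source code

// On login: issue a signed token instead of creating a server-side session.
function issueToken(user) {
  return jwt.sign({ sub: user.id, name: user.name }, JWT_SECRET, { expiresIn: '1h' });
}

// On each API request: verify the token carried in the Authorization header.
function authenticate(req, res, next) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  try {
    req.user = jwt.verify(token, JWT_SECRET); // throws if invalid or expired
    next();
  } catch (err) {
    res.status(401).json({ error: 'Invalid or expired token' });
  }
}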

11. Utilize worker threads for CPU-intensive tasks

Node.js applications can execute CPU-intensive tasks without blocking the main event loop using worker threads. These threads were first introduced in Node.js version 10.5.0 and were stabilized in version 12.0.0.

Worker threads operate independently from other workers and are created by the main thread. They can share memory by transferring ArrayBuffer instances or sharing SharedArrayBuffer instances. In addition, worker threads can communicate with their parent thread in both directions through a message channel.
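
A minimal sketch of offloading a CPU-heavy computation to a worker thread is shown below; the naive fibonacci function simply stands in for any expensive task:

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

function fibonacci(n) {
  return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2);
}

if (isMainThread) {
  // The main thread stays free to serve requests while the worker computes.
  const worker = new Worker(__filename, { workerData: 40 });
  worker.on('message', (result) => console.log(`fib(40) = ${result}`));
  worker.on('error', (err) => console.error('Worker failed:', err));
} else {
  // The worker receives its input via workerData and replies over the message channel.
  parentPort.postMessage(fibonacci(workerData));
}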

Read the official documentation of the worker_threads module to learn more about using Node.js workers to your advantage.

Additional tips to better your Node.js app’s performance

Here are some more micro-optimizations that can help ramp up your Node.js app’s performance:

  • To improve performance in your Node.js application, always use the latest release of Node.js.
  • Choose performant dependencies when possible, and consider writing custom code instead of adding unnecessary dependencies.
  • Focus on optimizing the hotspots of your application, rather than trying to optimize everything.
  • Monitor your application over time to track changes in performance hotspots.
  • Use Node.js streams for efficient handling of large data sets.
  • Minimize memory allocations in hotspots to reduce the load on the garbage collector and lower latency.
  • Optimize your database queries and scale appropriately to avoid becoming a bottleneck.
  • Strive for a balance between performance, development costs, and continued maintenance to ensure reliability.

If you want some more info, read our blog on 7 Ways to make HTTP requests in Node.js.

Time to scale up your Node.js application performance

In conclusion, this article has presented several practical strategies to enhance the scalability of your Node.js application and handle more traffic effectively. Before applying any optimization, it's crucial to conduct thorough performance tests on your system and use the results to make informed decisions.

To ensure great performance, it is not enough to simply apply optimization strategies once; you also need to make sure your application maintains optimal performance over time. You can use tools to monitor performance and debug your Node.js errors so that you deliver a glitch-free user experience.

After all, faster app performance translates directly into happy customers and more repeat business. Upgrade your Node.js applications for performance and have fun coding!

Call to Action

Feel free to comment or write to us at support@zipy.ai in case you have any further questions. We would be happy to help you. If you want to explore Zipy for your app, you can sign up or book a demo.










