
Powerful ways to improve Node.js performance at scale

Aryan Raj
~ 11 min read | Published on Jun 05, 2024






Node.js, a widely used and powerful runtime environment, has revolutionized the development of web applications. However, as your application grows in size and complexity, it may run into performance problems that compromise its scalability and reliability.

In this blog post, we'll look at some of the best strategies for improving Node.js performance at scale. Whatever the size or complexity of your Node.js application, we'll cover a range of effective techniques, such as caching, load balancing, and resource management, to keep it running smoothly and efficiently. Let's get started.

1. Use TypeScript

TypeScript is an effective tool for raising code quality and improving the overall development experience. It extends JavaScript with type definitions, which makes it possible to catch potential problems early. One of TypeScript's main benefits is improved error detection: typos and type mismatches are flagged before the code runs, which ultimately saves time and effort.

Another advantage is richer code editor support. Type definitions give the editor a list of valid properties, methods, and enumeration values, which makes development more efficient, especially when working with complicated data structures. TypeScript also encourages better code organization, since type enforcement establishes stricter guidelines for how the code is structured.

All things considered, TypeScript offers developers better error detection, more capable editor tooling, and well-organized code. Whether you are working on a personal project or a complex enterprise application, it is a valuable tool for writing high-quality code.

2. Optimize performance with server-side caching

Caching is a common tactic for improving a web application's performance by cutting latency. The goal of server-side caching is to speed up data retrieval by reducing computation time and I/O activity, such as database queries or network calls.

Why use a cache? A cache acts as high-speed storage for frequently accessed data, removing the need to fetch it from the slower primary source on every request. Data that rarely changes is best suited for caching, which can dramatically speed up repeated requests for it. The results of computationally expensive tasks can also be cached so the server does not have to redo the work.

Caching is also useful for API calls to external systems, particularly when the responses can be reliably reused for subsequent requests. Caching those responses avoids the extra network request and its associated cost.

You can add caching to your Node.js application with an in-process solution such as node-cache, which keeps actively used data in memory for quick retrieval. However, an in-process cache is tied to a single application process and may not suit distributed workloads. In those cases, a distributed caching solution such as Redis or Memcached is a better fit, since it runs independently of the application and scales across multiple servers.
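
As a minimal sketch, here is how an in-process cache with node-cache might wrap a slow lookup; fetchUserFromDb is a hypothetical stand-in for your real database query:

const NodeCache = require('node-cache');

// Entries expire after 60 seconds (stdTTL is in seconds)
const cache = new NodeCache({ stdTTL: 60 });

// Hypothetical stand-in for a real database query
async function fetchUserFromDb(id) {
  return { id, name: 'Ada Lovelace' };
}

async function getUser(id) {
  const cached = cache.get(`user:${id}`);
  if (cached !== undefined) {
    return cached; // served from memory, no database round trip
  }

  const user = await fetchUserFromDb(id); // slow path: hit the database
  cache.set(`user:${id}`, user);          // store for subsequent requests
  return user;
}

The same get/set pattern carries over if you later swap the in-process cache for a distributed one such as Redis.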

3. Improve your Node.js development with asynchronous methods

Synchronous code executes sequentially: the program waits for a synchronous operation to finish before moving on to the next line. This can lead to blocking, where the application cannot proceed to the next task until the previous one has completed.

Asynchronous code, on the other hand, lets the program make progress on several activities at once. Instead of waiting for an asynchronous task to complete, the program moves on to the next job while the task runs in the background. This results in faster, more efficient code, especially when handling lengthy operations such as file I/O or network requests.

Asynchronous programming in Node.js is particularly helpful for input/output work, such as reading from and writing to a file system or database, because it lets the program handle many requests at once without stalling the main thread.

It's worth remembering that while synchronous code can be acceptable for simple, one-off tasks, experienced Node.js developers typically favor asynchronous techniques for anything long-running or I/O-bound. Here's an example.

const fs = require('fs');

// Asynchronous method for reading a file
fs.readFile('example.txt', 'utf8', (error, data) => {
  if (error) {
    console.error('Error reading file:', error);
  } else {
    console.log('File contents:', data);
  }
});

// Other operations can be performed while the file is being read
console.log('Performing other operations...');

In this example, the fs.readFile function reads a file's contents asynchronously, taking a callback function as its final parameter. Once the file has been read, the callback is invoked with the file's contents. Meanwhile, other work, such as logging a message to the console, can run while the file is being read in the background.

For best performance, prefer asynchronous code when developing Node.js applications. Adopting asynchronous methods improves both the responsiveness of your application and the experience of your users.
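
For instance, the same file read can be written with the promise-based fs API and async/await, a style many modern Node.js codebases prefer; this is a minimal sketch equivalent to the callback example above:

const fs = require('fs/promises');

async function printFile() {
  try {
    // Non-blocking: the event loop stays free while the file is read
    const data = await fs.readFile('example.txt', 'utf8');
    console.log('File contents:', data);
  } catch (error) {
    console.error('Error reading file:', error);
  }
}

printFile();

// Other operations can still run while the file is being read
console.log('Performing other operations...');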

4. Monitor & measure your application performance

Before trying to optimize your application, it's important to monitor and measure its performance to establish its current state. You can then identify any shortcomings, inefficiencies, or trouble spots that need fixing.

You can perform a variety of tests, including stress, load, spike, scalability, and volume testing, to assess the performance of your application. These tests help determine the system's capacity, tolerance, and ability to recover under different workloads and conditions.

The following tools can be used for various forms of performance testing:

Test type             Apache JMeter   LoadRunner   Gatling   Siege   Tsung
Stress testing        Yes             Yes          Yes       Yes     Yes
Load testing          Yes             Yes          Yes       Yes     No
Spike testing         Yes             Yes          Yes       Yes     No
Scalability testing   Yes             Yes          Yes       Yes     No
Volume testing        Yes             Yes          Yes       Yes     No

Keep in mind that these are just a few examples; many other performance testing tools are available on the market. The right tool depends on factors such as your application's technology stack, your testing goals, and your testing budget.

These tests yield metrics about the application's performance, including throughput, concurrent users, CPU and memory usage, response times, error rates, and requests per second.

After determining which areas need optimization, you can apply specific optimizations and then rerun the tests to confirm their effectiveness. Through this iterative approach, you can steadily improve the system's overall performance.

5. Efficiently serving static content with Nginx or CDN

In order to optimize the performance of your Node.js servers, it is advisable to avoid serving static assets such as JavaScript, CSS, and image files directly from your application. This is because Node.js was not primarily designed for this purpose, and doing so may drain essential resources and hinder critical business computations.

Instead, it is recommended to delegate the task of serving static files to a dedicated web server like Nginx, which can apply specialized optimizations that are not feasible for Node.js to perform.

Alternatively, you could consider leveraging a CDN proxy such as Amazon CloudFront to cache your static content and serve it closer to end-users. By doing this, you can relieve the burden on your Node.js servers, allowing them to focus solely on processing dynamic requests.

6. Maximize throughput with clustering

Clustering is an effective technique for expanding the capacity of a Node.js server: the server is forked into multiple child processes (workers) that run simultaneously and share the same port. It is a crucial approach for minimizing downtime, delays, and interruptions, since incoming connections are distributed among the worker processes and all available CPU cores are fully utilized.

By using the cluster module in the standard library, it's straightforward to cluster your Node.js server. For example, as shown in the official documentation, the following code demonstrates how to implement clustering:

const cluster = require("cluster");
const http = require("http");
const process = require("process");
const os = require("os");
 
const numCPUs = os.cpus().length;
 
if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);
 
  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
 
  cluster.on("exit", (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  // Workers can share any TCP connection
  // In this case it is an HTTP server
  http
    .createServer((req, res) => {
      res.writeHead(200);
      res.end("hello world\n");
    })
    .listen(8000);
 
  console.log(`Worker ${process.pid} started`);
}

When this program runs, connections to port 8000 are distributed among the worker processes, resulting in more efficient handling of requests in the application.

$ node server.js
Primary 15990 is running
Worker 15997 started
Worker 15998 started
Worker 16010 started
Worker 16004 started

However, the built-in cluster module has its limitations: it requires a significant amount of code to create and control the worker processes, and it doesn't allow you to adjust the number of workers dynamically.

But there are alternative solutions that can help overcome this limitation. One approach is to use a process manager like PM2 or Forever to manage the worker processes dynamically. These tools allow you to start, stop, and restart worker processes automatically based on various criteria, such as CPU usage, memory usage, and other metrics.

PM2, for example, provides a simple command-line interface to manage the worker processes and provides various features such as process clustering, load balancing, and automatic restarts. With PM2, you can also monitor the worker processes and collect metrics such as CPU usage, memory usage, and request rate.
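
As a minimal sketch, a PM2 ecosystem file along these lines runs your server in cluster mode with one worker per CPU core; the app name and server.js entry point are assumptions about your project:

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'my-api',              // assumed app name
      script: './server.js',       // assumed entry point
      instances: 'max',            // one worker per available CPU core
      exec_mode: 'cluster',        // use Node's cluster module under the hood
      max_memory_restart: '300M',  // restart a worker if it exceeds this memory
    },
  ],
};

You can then start the app with pm2 start ecosystem.config.js and adjust the number of workers on the fly with pm2 scale my-api <n>.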

Another approach is to use a container orchestration platform like Kubernetes, Docker Swarm, or Amazon ECS. These platforms provide a more comprehensive solution for managing and scaling applications in a distributed environment. They allow you to define and deploy containers as well as manage the scaling and orchestration of the containers based on various metrics and policies.

Overall, while the Node.js built-in cluster module can be useful in distributing the workload across multiple processes, using a process manager or container orchestration platform can help you manage the worker processes more dynamically and efficiently.

7. Maximize capacity through load balancing across multiple machines

Scaling a Node.js application horizontally across multiple machines is an effective way to improve its performance and increase its availability. By distributing the application's processes to run on several machines, it can handle a larger volume of traffic. The key to successful horizontal scaling is utilizing a load balancer to manage incoming traffic to the servers.

What does a load balancer do? A load balancer is responsible for directing incoming traffic to the appropriate servers, which helps ensure that the application can handle the increased traffic without experiencing downtime or slowdowns. It is also a good practice to have multiple load balancers in place, as this creates redundancy and helps avoid a single point of failure in the system.

8. HTTP/2 and SSL

Let’s look at the two main features of the HTTP/2 protocol that play a major role in improving performance:

  • HTTP/2 uses header compression: redundant headers are eliminated and all HTTP headers are compressed, which reduces the overall size of the data being transmitted.
  • HTTP/2 supports multiplexing: multiple requests and responses can be sent simultaneously over a single TCP connection. By letting requests share one connection instead of opening a new one each time, multiplexing reduces the time and resources spent establishing HTTP connections.

To use HTTP/2 with browsers, you need to serve it over Transport Layer Security (TLS), the successor to the Secure Sockets Layer (SSL) protocol. The good news is that the Node.js core http2 module makes it simple and straightforward to set up an HTTP/2 server.
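
As a rough sketch, the core http2 module can serve requests over TLS like this; the certificate file names are assumptions, and you would generate the key and certificate yourself (for local testing, for example with openssl):

const http2 = require('http2');
const fs = require('fs');

// Assumed file names for a locally generated key and certificate
const server = http2.createSecureServer({
  key: fs.readFileSync('localhost-privkey.pem'),
  cert: fs.readFileSync('localhost-cert.pem'),
});

server.on('stream', (stream, headers) => {
  // Each request arrives as a stream multiplexed over a single TCP connection
  stream.respond({ ':status': 200, 'content-type': 'text/plain; charset=utf-8' });
  stream.end('Hello over HTTP/2\n');
});

server.listen(8443);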

Overall, by utilizing HTTP/2 with Node.js, you can improve the performance of your web applications, make them more efficient, and provide a better user experience for your visitors.

9. Increase the effectiveness of server communication with WebSockets

In a typical HTTP request/response model, a new connection is established between the client and server for each request. This connection setup involves several steps, including establishing a TCP connection, sending an HTTP request, and receiving an HTTP response. Each of these steps incurs overhead, including network latency, data serialization/deserialization, and CPU processing.

When a web application requires frequent data exchange between the client and server, such as in a chat application or real-time game, the cost of repeatedly establishing and tearing down connections becomes significant relative to the time spent on the actual data exchange and can turn into a bottleneck.

WebSockets, on the other hand, offer a persistent connection between the client and server, which eliminates the need for frequent connection setup/teardown. This not only reduces the overhead of creating and managing many short-lived connections but also allows for faster and more efficient communication between the client and server.

In a Node.js application, the overhead of managing short-lived connections can affect performance, especially when dealing with a large number of concurrent connections. By using WebSockets, the server can handle a much larger number of connections with a lower cost of performance, making it a suitable solution for high-concurrency real-time applications.
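
As a minimal sketch using the widely used ws package (an assumption about your WebSocket library of choice), a server that keeps one persistent connection per client might look like this:

const { WebSocketServer } = require('ws');

// One long-lived connection per client instead of repeated HTTP round trips
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  socket.send('Connected to the WebSocket server');

  socket.on('message', (message) => {
    // Echo the message back; a chat app would broadcast to other clients instead
    socket.send(`Received: ${message}`);
  });
});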

10. Scalable client-side authentication with JSON Web Tokens (JWT) and stateless approach

To create a personalized experience for users, many web applications need to maintain their state. For instance, when implementing stateful authentication, a random session identifier is usually generated to store session information on the server. Centralized storage solutions like Redis can be used to store session data, or the IP hash method can be employed to ensure that users always reach the same web server when scaling a stateful solution to a load-balanced application over multiple servers.

However, adopting a stateful approach has some drawbacks. For example, restricting users to a single server can be problematic if that server requires maintenance.

A more scalable approach is stateless authentication with JSON Web Tokens (JWT), which keeps the necessary data available regardless of which server handles a user's request. With JWT, a token is generated when the user logs in. In a typical implementation, the token is made up of base64url-encoded JSON segments, a header and a payload carrying the necessary user information, plus a cryptographic signature. The token is sent to the client, which includes it with every API request to authenticate itself.

By utilizing JWT, there is no need to store session information on the server, which avoids the problems of maintaining state. Any server that can verify the token's signature can decode it to obtain the user's information, allowing efficient communication between the client and server.
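
A minimal sketch with the jsonwebtoken package (one common choice, not the only one) might look like the following; the payload fields and expiry are assumptions:

const jwt = require('jsonwebtoken');

// In production, load the secret from configuration; never hard-code it
const SECRET = process.env.JWT_SECRET || 'change-me';

// Issued at login: the signed payload travels with the client,
// so no session needs to be stored on the server
function issueToken(user) {
  return jwt.sign({ sub: user.id, email: user.email }, SECRET, { expiresIn: '1h' });
}

// Run on every API request: any server that knows the secret can verify the token
function verifyToken(token) {
  try {
    return jwt.verify(token, SECRET); // returns the decoded payload
  } catch (err) {
    return null; // expired or tampered token
  }
}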

11. Utilize worker threads for CPU-intensive tasks

Worker threads let Node.js applications run CPU-intensive tasks without blocking the main event loop. They were first introduced in Node.js 10.5.0 and stabilized in Node.js 12.0.0.

Worker threads operate independently from other workers and are created by the main thread. They can share memory by transferring ArrayBuffer instances or sharing SharedArrayBuffer instances. In addition, worker threads can communicate with their parent thread in both directions through a message channel.
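
Here is a minimal single-file sketch: the main thread offloads a CPU-heavy loop to a worker and receives the result over the message channel (the workload itself is just a placeholder):

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // Main thread: offload the heavy calculation so the event loop stays responsive
  const worker = new Worker(__filename, { workerData: { n: 1e9 } });
  worker.on('message', (result) => console.log('Sum:', result));
  worker.on('error', (err) => console.error('Worker failed:', err));
} else {
  // Worker thread: runs the CPU-bound loop off the main thread
  let sum = 0;
  for (let i = 0; i < workerData.n; i++) sum += i;
  parentPort.postMessage(sum);
}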

Read the official documentation of the worker_threads module to learn more about using Node.js workers to your advantage.

Additional tips to improve your Node.js app’s performance

Here are some more micro-optimizations that can help ramp up your Node.js app’s performance:

  • To improve performance in your Node.js application, always use the latest release of Node.js.
  • Choose performant dependencies when possible, and consider writing custom code instead of adding unnecessary dependencies.
  • Focus on optimizing the hotspots of your application, rather than trying to optimize everything.
  • Monitor your application over time to track changes in performance hotspots.
  • Use Node.js streams for efficient handling of large data sets (see the sketch after this list).
  • Minimize memory allocations in hotspots to reduce the load on the garbage collector and lower latency.
  • Optimize your database queries and scale appropriately to avoid becoming a bottleneck.
  • Strive for a balance between performance, development costs, and continued maintenance to ensure reliability. For more tips, read our blog on 7 Ways to make HTTP requests in Node.js.
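
On the streams point above, here is a minimal sketch that compresses a large file chunk by chunk with stream.pipeline instead of reading the whole file into memory; the file names are placeholders:

const fs = require('fs');
const zlib = require('zlib');
const { pipeline } = require('stream');

// Process the file chunk by chunk instead of loading it all into memory
pipeline(
  fs.createReadStream('large-input.log'),    // placeholder input file
  zlib.createGzip(),
  fs.createWriteStream('large-input.log.gz'),
  (err) => {
    if (err) {
      console.error('Pipeline failed:', err);
    } else {
      console.log('Pipeline succeeded');
    }
  }
);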

Time to scale up your Node.js application performance

In conclusion, this article has presented several practical strategies to enhance the scalability of your Node.js application and handle more traffic effectively. Before applying any optimization, it's crucial to conduct thorough performance tests on your system and use the results to make informed decisions.

Incorporating performance optimization strategies is not enough on its own; you also need to make sure your application maintains optimal performance over time. You can use tools that monitor performance and help you debug your Node.js errors to deliver a glitch-free user experience.

After all, faster app performance translates directly into happy customers and more repeat business. Upgrade your Node.js applications for performance and have fun coding!

Call to Action

Feel free to comment or write to us at support@zipy.ai if you have any further questions. We would be happy to help you. If you want to explore Zipy for your app, you can sign up or book a demo.
