Integrating Rust into Next.js: How-To Developer Guide


Rust is one of the newer languages to get a lot of spotlight in the developer community. In terms of performance it rivals C/C++, yet it is accessible enough to be used for a wide range of tasks and runs on many different platforms. One of those platforms is, of course, the web. In this post we will dive into how you can start using Rust in your web projects, on platforms you are already familiar with.


Web development was caught in the Rust storm, and so far it looks like a great fit. Rust entered on a high note, with standard JavaScript tooling like bundlers, compilers, test runners, and even runtimes being (re)written in Rust with massive performance gains. As a long-time programmer who has worked on production projects in a bunch of different languages like PHP, Python, Objective-C, Swift, and JavaScript, my goal when learning a new language is always to start writing production code as soon as possible. Don't get me wrong, fundamentals are important, but solving real problems is where you see and learn the nuances of a language.

Recently my focus has been on the backend and APIs, so my goal with Rust was to start writing new API endpoints for my projects in Rust instead of the usual JavaScript/Node.js. If you have read any of my articles you might already know that I use Vercel to host most of my projects. I like the versatility it provides: it lets me deploy a bunch of different frameworks, mostly forget about the hurdle of managing infrastructure for the apps I am working on, and just focus on delivering features. Naturally, I was curious whether it is possible to build, deploy, and run Rust code on the Vercel platform, and whether I could integrate it into my current (mostly) JavaScript-based projects.

To my delight, there are multiple ways to deploy Rust code on Vercel. We will go through the different options and then dive into details with the one I chose.

  • Using WebAssembly (Wasm) at the Edge. This allows us to write Rust code that we compile into a .wasm binary, which we can then import like any other package/file in our JavaScript code.

  • Using a custom Rust runtime. Aside from deploying Node.js and frameworks such as Next.js, Vercel supports a range of custom runtimes that allow the deployment of native code compiled from other languages like Go, Python, and also Rust. Some of those runtimes are Vercel-maintained and some, like the Rust runtime, are community-maintained. Both work pretty well (I have used the PHP and Python runtimes before).

  • Using the Rust runtime in a Next.js project. While using the Rust runtime with Vercel is great, a lot of my projects are actually built on frameworks like Next.js. To increase the number of projects in which I can use Rust, I decided to build a template that integrates the Rust runtime with Next.js. That way I can gradually implement some features for my projects in Rust, and using it every day will also help me learn the language.

Setup

We are going to start with a fresh create-next-app project to show the modifications needed to make this work. This will also give you the steps to apply in your own project. Of course, my template is also linked on GitHub below.

When you have your Next.js project up and running, we can start to put Rust in the mix. In case you were wondering, this should all work on Next.js versions 12.x, 13.x, and 14.x, with either the pages or the app directory. To make use of custom runtimes we need to add a vercel.json configuration file to our project.

{
  "functions": {
    "api/**/*.rs": {
      "runtime": "vercel-rust@4.0.6"
    }
  }
}

What this configuration does is tell Vercel to deploy any of our .rs files inside api/ as serverless API functions using the Rust runtime. We are also going to run npm install vercel -D to add the Vercel CLI to our dev dependencies; we will use it to run a local server for our Rust API endpoints. The next thing is to create a top-level api/ directory inside our project. This is different from the pages/api/ directory that you might already have in your Next.js project. If you have any code inside pages/api/, keep it there as is and create the top-level api/ directory for our Rust runtime functions alongside it.
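After these steps, the relevant parts of the project could look roughly like this (a sketch; the project name and surrounding files are just your existing Next.js setup):

```text
my-next-app/
├── api/            # top-level directory for Rust runtime functions
│   └── crab.rs     # our first Rust endpoint, created below
├── pages/
│   └── api/        # existing JavaScript API routes stay here
├── vercel.json     # tells Vercel to use the Rust runtime for api/**/*.rs
└── package.json
```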

You will also need Rust installed on your local machine. The quickest way is with rustup, so go ahead and install it if you don't have it already. Rust uses cargo as its package manager and build system, which gives us everything we need to build and run our Rust code locally. To enable cargo we need to create a Cargo.toml configuration file in the root of our project.
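On macOS or Linux, installing rustup looks like this (the install command is the one published on rustup.rs; Windows users can grab the installer from the same site):

```shell
# install rustup, which also installs cargo and the Rust toolchain
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# verify everything is in place
cargo --version
```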

[package]
name = "next-rust"
version = "0.1.0"
edition = "2021"

[dependencies]
tokio = { version = "1", features = ["macros"] }
serde_json = { version = "1", features = ["raw_value"] }
# Documentation: https://docs.rs/vercel_runtime/latest/vercel_runtime
vercel_runtime = { version = "1.1.0" }

# Each handler has to be specified as [[bin]]
[[bin]]
name = "crab"
path = "api/crab.rs"

If you are familiar with package.json, this will look similar. We add some general metadata like name and version, and then the dependencies we need to build our API endpoints. In Rust, dependencies are called "crates" and are hosted on crates.io. We use tokio as the async runtime, serde_json for JSON parsing/transformation, and vercel_runtime, which is the Rust equivalent of the Vercel or Next.js APIs for manipulating requests and responses. Also, if you are using VSCode you can install the rust-analyzer extension, which enables all the Rust language features and gives you more help and context when writing code.
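Each additional endpoint gets its own [[bin]] entry in Cargo.toml. For example, a hypothetical second endpoint at api/fish.rs would be registered like this:

```toml
# hypothetical second endpoint; the name just has to be unique
[[bin]]
name = "fish"
path = "api/fish.rs"
```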

Next, we are going to create our first Rust API endpoint. We will name it api/crab.rs as we wrote in our cargo config file.

use serde_json::json;
use vercel_runtime::{run, Body, Error, Request, Response, StatusCode};

#[tokio::main]
async fn main() -> Result<(), Error> {
    run(handler).await
}

pub async fn handler(_req: Request) -> Result<Response<Body>, Error> {
    Ok(Response::builder()
        .status(StatusCode::OK)
        .header("Content-Type", "application/json")
        .body(
            json!({ "message": "crab is the best!" }).to_string()
            .into(),
        )?)
}

Now, if you have never written a line of Rust this might not be 100% clear, but if you have used Vercel or an HTTP server in any language it should be mostly readable. We create a handler function that will be called when an API request is made to GET /api/crab. It responds with a 200 OK status code and returns a message in JSON format.

We are going to quickly test it by running npx vercel dev. If you created a new project, this will prompt you to set up your Vercel project; just follow the prompts. When it finishes it will say something like "Ready! Available at localhost:3000". If you then open http://localhost:3000/api/crab in your browser, after a few seconds you should see the response.

Browser window with JSON response from our Rust API endpoint.

If you refresh, you will see the response comes back almost instantly. That is because vercel dev used cargo under the hood to build and run our Rust file on the fly. It also automatically detects changes, so it will only recompile when you change something in the code. So just write code and enjoy. You might have noticed there is now a target/ folder in your root directory; this is the standard output folder for Rust binaries, so you want to add it to your .gitignore file. You can also add it to your .vercelignore (check the template at the end for reference).
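You can also hit the endpoint from the terminal while vercel dev is running:

```shell
# -i prints the response status and headers along with the JSON body
curl -i http://localhost:3000/api/crab
```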

Now that we have everything up and running, let's clean it up a bit and run it alongside the rest of our Next.js codebase. The Rust runtime will take care of running our Rust code when deployed to Vercel, but for local dev we will modify our npm run dev command a bit.

{
  "name": "next-api-rust",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev & npm run dev:rust",
    "dev:rust": "vercel dev --listen 3001"
  }
}

What we do here is run the Rust API on a different port with the new dev:rust script, and then run it alongside next dev in our standard dev script. Now we can run everything with the single command we are used to in Next.js projects.

Another thing: we don't want to call localhost:3001 for our Rust endpoints, which run on a different port than the default localhost:3000 for Next.js. We will fix this by adding a rewrite inside our next.config.js.

module.exports = {
    rewrites: async () => {
        const rewrites = {
            afterFiles: [
                // apply any of your existing rewrites here
            ],
            fallback: []
        }

        // dev only, this allows for local api calls to be proxied to
        // api routes that use rust runtime
        if (process.env.NODE_ENV === 'development') {
            rewrites.fallback.push({
                source: '/api/:path*',
                destination: 'http://0.0.0.0:3001/api/:path*'
            })
        }

        return rewrites
    }
}

We apply the rewrite only in development. On Vercel everything ends up in the same deployment, so no rewrite is needed. Now you can go to http://localhost:3000/api/crab again and it will work the same way, even though it is running on a different dev server. Adding the rewrite to fallback also makes sure that any of your existing pages/api endpoints are not overwritten by Rust API endpoints in development.

Now that we have everything up and running let's take a look at some common patterns you might need (or would use) in day-to-day API development.


Caching

Any production-ready API uses some kind of caching. When deploying on Vercel we can use their Edge Network (CDN) to cache API responses. It works with the standard Cache-Control header and the s-maxage directive in the response. To cache our API endpoint for 1 hour we can do the following.

pub async fn handler(_req: Request) -> Result<Response<Body>, Error> {
    Ok(Response::builder()
        .header(
            "Cache-Control",
            format!(
                "public, max-age=0, must-revalidate, s-maxage={s_maxage}",
                s_maxage = 1 * 60 * 60
            ),
        )
        .body(
            json!({ "message": "crab is the best!" })
            .to_string()
            .into(),
        )?)
}

Reading request parameters

vercel_runtime allows us to read data from incoming requests. Here are some common examples.


// reading incoming request headers
let headers = req.headers();
let auth_header = headers.get("authorization");

// parsing the request URL and path (Url comes from the url crate)
let url = Url::parse(&req.uri().to_string())?;

// reading URL query parameters into a HashMap (std::collections)
let query_params = url
    .query_pairs()
    .into_owned()
    .collect::<HashMap<String, String>>();
let id = query_params.get("id");

Common utils/libs

With bigger projects, you will probably have a set of utils used across different API endpoints. With Rust, we can also register a local crate in our Cargo config.

[lib]
path = "src/rs/utils.rs"

I decided to put all of my shared Rust code inside the src/rs directory to separate it from the JavaScript files. In this crate I can define functions and macros to be reused in my Rust API endpoints. To use them, just import them as you would from any other Rust crate. If you have rust-analyzer or a similar language integration in your IDE, starting to type the function name will let you auto-import it.

Environment variables

You can access all of your env variables from a local .env file or from your Vercel deployment. I recommend the dotenv crate, which is nice and easy to use.
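As a minimal sketch: the dotenv crate only loads your .env file into the process environment, after which the standard library's std::env::var does the actual reading (MY_API_KEY is a made-up variable name for illustration):

```rust
use std::env;

fn main() {
    // with the dotenv crate you would first load .env with:
    // dotenv::dotenv().ok();

    // then read variables through the standard library as usual
    let api_key = env::var("MY_API_KEY").unwrap_or_else(|_| "not set".to_string());
    println!("MY_API_KEY = {api_key}");
}
```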

Error handling

You don't want your JSON APIs to crash with ugly unhandled exceptions. Rust has its own way of handling errors, but to catch or throw structured errors in my API endpoints I added a helper to my utils which I use all around the project.

pub fn throw_error(
    message: &str,
    error: Option<Error>,
    status_code: StatusCode,
) -> Result<Response<Body>, Error> {
    if let Some(error) = error {
        eprintln!("error: {error}");
    }

    Ok(Response::builder()
        .status(status_code)
        .header("Content-Type", "application/json")
        .body(
            json!({ "message": message })
            .to_string()
            .into(),
        )?)
}

It accepts a message, an optional error, and an HTTP status code; the error gets logged, and a response is returned with the provided message and status code. In addition, a few macros are available for common cases like throwing generic 500 errors.
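To illustrate the idea with a self-contained sketch (this is not the template's actual macro): a macro_rules! macro can hide the repeated arguments for a common case like a generic 500. Here the macro just expands to a (status, message) pair, where the real macro would call the throw_error helper above instead:

```rust
// sketch: expands to a (status, message) pair for a generic 500,
// with an optional custom message
macro_rules! internal_error {
    () => {
        internal_error!("internal server error")
    };
    ($msg:expr) => {
        (500u16, $msg)
    };
}

fn main() {
    let (status, message) = internal_error!();
    println!("{status}: {message}");

    let (status, message) = internal_error!("database unavailable");
    println!("{status}: {message}");
}
```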

Different request methods

You can support different request methods like POST, PUT, and DELETE. Each API function can handle multiple request methods, so let's see how to validate and limit which methods an API endpoint accepts. First, we define a separate function for each request method we support, for example for POST.

fn route_post(_req: Request) -> Result<Response<Body>, Error> {
    // the standard Response::builder() flow, as in the GET handler
    Ok(Response::builder()
        .status(StatusCode::OK)
        .header("Content-Type", "application/json")
        .body(json!({ "message": "created" }).to_string().into())?)
}

Then in our handler, we validate and map correct functions to request methods.

pub async fn handler(_req: Request) -> Result<Response<Body>, Error> {
    let response = match _req.method().to_owned() {
        Method::POST => route_post(_req),
        _ => {
            // you can also see the throw_error macro in action
            return throw_error!(
                "method not allowed",
                None,
                StatusCode::METHOD_NOT_ALLOWED
            );
        }
    };

    // then return the response as usual
    response
}

Communicating with other services

API endpoints often need to communicate with other services over different protocols. Most of the time that is HTTP, and for that I recommend the reqwest crate, which works well with the other packages we already use. You can find some examples in my linked template project.

Cold boot and startup

Deploying serverless comes with a cost: sometimes your users will hit an endpoint without a warm instance ready and will experience a cold-boot delay. One of the factors affecting cold boots is function size. From my testing, compiled serverless functions on Vercel's Rust runtime are smaller than Node.js functions with a similar amount of code, but larger than Edge functions.

Breakdown of serverless function size for Node.js, Rust and Edge runtimes. From worse to best in that order.

Cold boots vary between Node.js and Edge runtimes on Vercel.

As for cold-boot performance, I measured it ranging from 500–1000 ms. When hitting a warm/ready instance performance is great, and if your project gets a lot of requests (together with caching) you will not have any issues with your Rust-implemented API endpoints.

I have been using this setup in some of my projects successfully for the past few months. One of those, https://shareimdb.com/, runs fully on the Rust runtime for its API, with JavaScript/React used only for pages that need HTML rendering. The project benefited a lot from doing the underlying scraping and parsing in Rust instead of Node.js. For example, I would have trouble fitting a headless browser runtime into the 50 MB serverless function limit, but with Rust it was not an issue since the size was smaller. The most important thing is to pick the right tool for the job. Anyone who has worked on any kind of long-term production project knows that the "rewrite everything" mindset is not viable. With that in mind, this kind of partial integration, using different technologies/languages on a single project, is perfect: they can co-exist and, if picked correctly, make the project better.

Issues

I want to end by pointing out some issues I encountered while using Rust in production on Vercel. As I said, I have been running this successfully for the last few months, but I want to share some pain points and bugs I ran into, to hopefully save you some time in decisions and debugging.

One issue I noticed is that sometimes vercel dev running Rust runtime functions would just crash for no obvious reason. For example, adding certain dependencies through cargo might require a restart of vercel dev. Sometimes when adding a new file to the cargo config or as a lib I would also need to re-run my dev command or kill the node process running the server. Since this is a community runtime (acknowledged by Vercel), I hope they add Rust as an official runtime in the future.

Another, bigger issue is a problem with Vercel when deploying dynamic routes with any custom runtime (not just Rust). You can read the details in the bug report I opened on the Vercel GitHub, along with some workarounds inside the ticket.


All in all, I loved being able to add Rust to my existing development stack and projects. You can take a look at my Next.js + Rust template created from this blog post for more code and examples, and use it as a starting point for your new projects.

If you enjoyed this post I would appreciate the like or share and if you are interested in learning more about this subject here are some links to continue reading.
