What is server middleware

2022-08-04T15:00:00

20 minute read

In this post we'll take a general look at what middleware is and the benefits of the pattern, and then at how to use middleware in a Rust server application.

What is middleware?

A web server generally provides responses to requests. Very often, the protocol of choice is HTTP. A handler (sometimes called a response callback) is a function which takes a request's data and returns a response.
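
To make that concrete, here is a framework-free sketch. The `Request` and `Response` types are invented for illustration; real frameworks provide much richer versions.

```rust
// Invented types for illustration; real frameworks provide richer versions.
struct Request {
    path: String,
    method: String,
}

struct Response {
    status: u16,
    body: String,
}

// A handler takes the request's data and returns a response.
fn hello_handler(req: &Request) -> Response {
    Response {
        status: 200,
        body: format!("Hello from {}", req.path),
    }
}
```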

Most server frameworks have a system called a 'router' which routes requests based on various parameters - usually the URL path. In HTTP, routing is typically based on a combination of the path and the request method (GET, POST, PUT, etc.). The benefit of a router is that it splits up per-path logic, which makes building large systems with lots of endpoints easier to manage.
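
A router can be pictured as a dispatch over method and path. This is a toy sketch, not any real framework's API:

```rust
// A toy router: dispatch on (method, path). Real routers also handle
// path parameters, wildcards and fallback handlers.
fn route(method: &str, path: &str) -> String {
    match (method, path) {
        ("GET", "/") => "index page".to_string(),
        ("GET", "/about") => "about page".to_string(),
        ("POST", "/search") => "search results".to_string(),
        _ => "404 not found".to_string(),
    }
}
```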

Individual path handlers are great, but sometimes you want logic which applies to a group of paths, or indeed all paths. This is where middleware comes in. Unlike a handler, middleware runs on every request to the paths it's registered for. Like handlers, middleware are functions.

Middleware is very much implementation dependent. We will have a look at some concrete examples, but different frameworks have opted for different tradeoffs in their middleware implementation. Some middleware implementations work on immutable state and act as transformers on requests and responses. Other frameworks treat the inputs as mutable and can freely modify / mutate the request objects. Some frameworks implement middleware that can fail or short circuit.
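
The transformer style can be sketched without any framework at all: middleware is a function that takes the inner handler and returns a new handler. Here the request is modelled as a path string and the response as a body string, purely for illustration.

```rust
// The request is a path string and the response a body string,
// purely for illustration.
type Handler = Box<dyn Fn(&str) -> String>;

// Middleware takes the inner handler and returns a new handler that
// runs extra logic around it.
fn logging_middleware(inner: Handler) -> Handler {
    Box::new(move |path| {
        println!("request for {path}");
        let response = inner(path);
        println!("responding with {} bytes", response.len());
        response
    })
}

fn make_app() -> Handler {
    let handler: Handler = Box::new(|path| format!("page at {path}"));
    // Wrapping means every request now passes through the middleware.
    logging_middleware(handler)
}
```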

Middleware as a stack

Middleware tends to be well-ordered. That is, a request or response passes through middleware in a well-defined order, as each layer processes the request or response and passes it onto the next layer:

```
        requests
           |
           v
+----- layer_three -----+
| +---- layer_two ----+ |
| | +-- layer_one --+ | |
| | |               | | |
| | |    handler    | | |
| | |               | | |
| | +-- layer_one --+ | |
| +---- layer_two ----+ |
+----- layer_three -----+
           |
           v
        responses
```

Applications of middleware

Authentication

Many routes may want user information. The incoming request contains user information via cookies or HTTP authentication. Rather than every path handler having to extract this information itself, we can abstract the logic into request middleware and pass the result down to subsequent handlers.

Logging

Information about which paths users are going to and when can be very useful. With logging middleware we can log and store request information for later analysis.

Similar to logging are server response timings. Server-Timing is a standardized HTTP header for holding timing information about a request. Here our middleware can record the start time of an incoming request and the end time on the response, then modify the outgoing response to include the timing. This header is often highlighted in browser developer tools, which can be useful while debugging. Using trailers, it can even be attached to chunked / streamed responses where the headers may already have been sent.
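
As a sketch, the timing part only needs a `std::time::Instant` recorded when the request arrives and the elapsed time formatted into the header value. The metric name `app` below is arbitrary:

```rust
use std::time::{Duration, Instant};

// Format an elapsed duration as a `Server-Timing` header value,
// e.g. "app;dur=12.0". Per the spec, durations are in milliseconds.
fn server_timing_value(elapsed: Duration) -> String {
    format!("app;dur={:.1}", elapsed.as_secs_f64() * 1000.0)
}

// Run some work and produce its result plus a ready-to-attach header value.
fn timed<T>(work: impl FnOnce() -> T) -> (T, String) {
    let start = Instant::now();
    let result = work();
    (result, server_timing_value(start.elapsed()))
}
```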

Compression and other response optimizations

Middleware can also amend outgoing responses and compress the output via algorithms like gzip and brotli. This moves the responsibility out of handlers and provides a convenient default for all responses.

And it doesn't have to be just compression of responses; another use case is image resizing. By identifying mobile viewports using information on the request, outgoing responses can return smaller images rather than huge 4K ones, reducing bandwidth.

Structuring applications

As mentioned above, the benefit of the middleware system is that while it's possible to do this work individually in each handler, abstracting it moves the responsibility away from the handlers. This can make management simpler and mean fewer lines of code!

```rust
fn index() {
    let index_page = "...";
    return compress(index_page);
}

fn about() {
    let about_page = "...";
    return compress(about_page);
}

fn search() {
    let search_page = "...";
    return compress(search_page);
}

Application::build()
    .routes([index, about, search])
```

vs

```rust
fn index() { return "..."; }
fn about() { return "..."; }
fn search() { return "..."; }

Application::build()
    .routes([index, about, search])
    .add_middleware(CompressionMiddleware::new())
```

Separating out code

The benefit of middleware just being functions is that they can be separated out into different modules or even crates. Many third-party services may choose to expose their functionality as middleware rather than as a system of complicated functions that users have to pass the correct state into.

```rust
.add_middleware(hot_new_server_logging_framework_start_up::Middleware::new())
```

Comparing middleware implementations in libraries

Rocket

Rocket is a server framework. Rocket's middleware implementation is known as fairings (yes, there are many rocket-related puns in the crate).

From Rocket's fairing documentation:

Rocket’s fairings are a lot like middleware from other frameworks, but they bear a few key distinctions:

  • Fairings cannot terminate or respond to an incoming request directly.
  • Fairings cannot inject arbitrary, non-request data into a request.
  • Fairings can prevent an application from launching.
  • Fairings can inspect and modify the application's configuration.

To make a fairing in Rocket you have to implement the Fairing trait:

```rust
struct MyCounterFairing {
    get_requests: AtomicUsize,
}

#[rocket::async_trait]
impl Fairing for MyCounterFairing {
    fn info(&self) -> Info {
        Info {
            name: "GET Counter",
            kind: Kind::Request,
        }
    }

    async fn on_request(&self, request: &mut Request<'_>, _: &mut Data<'_>) {
        if let Method::Get = request.method() {
            self.get_requests.fetch_add(1, Ordering::Relaxed);
        }
    }
}
```

Using the .attach method it's really simple to add a fairing to an application.

```rust
#[launch]
fn rocket() -> _ {
    rocket::build()
        .attach(MyCounterFairing {
            get_requests: AtomicUsize::new(0),
        })
        .attach(other_fairing)
}
```

Rocket's fairings have several hooks. Each has a default implementation, so hooks can be left out (you don't have to explicitly write a method for each one).

Requests using on_request

This fires when a request is received. The hook has a mutable reference to the request and so can modify it. Per the docs: "It cannot abort or respond directly to the request; these issues are better handled via request guards or via response callbacks."

As an aside, Rocket has a different, non-middleware mechanism that can be better suited for logic that might short-circuit with an error rather than running a handler afterwards. We won't go into it here, but if your middleware is fallible, request guards might be a better option.

Responses using on_response

Similar to on_request, this has mutable access to the response object (it also has immutable access to the request). Using this hook you can inject headers or amend partial responses (such as a 404).

General server hooks

Rocket's fairings go beyond request and responses and can act as hooks into application startup and closing:

  • Ignite (on_ignite). Runs before starting the server. Can validate config values, set initial state or abort the launch.
  • Liftoff (on_liftoff). Runs after the server has launched (started). "A liftoff callback can be a convenient hook for launching services related to the Rocket application being launched."
  • Shutdown (on_shutdown). This hook can be used to wind down services and save state before the application closes. Shutdown fairings run concurrently with one another, once graceful shutdown has resolved and outstanding requests have been completed.

All Rocket fairings have an info method. The kind property decides which hooks the fairing will receive.

Ad hoc fairings

Simpler middleware can be written as plain functions using ad hoc fairings. If the fairing doesn't carry state / data, you can skip creating a structure and writing a trait implementation for it, and instead write a function.

Using AdHoc and any of the hook names mentioned above, we can instead create one from a closure (plus a string name):

```rust
.attach(AdHoc::on_liftoff("Liftoff Printer", |_| Box::pin(async move {
    println!("...annnddd we have liftoff!");
})))
```

Axum

Similar to Rocket, Axum is an HTTP framework for web applications. Axum's middleware is based on tower, a separate crate which provides lower-level building blocks for networking in Rust. Axum and tower middleware are referred to as 'layers'.

There are several ways to write middleware for Axum. Similar to Rocket's trait-based fairings, you can create a type that implements the Layer trait. The Layer trait decorates / acts upon the Service trait.

This demo is taken from the Tower docs; before you get scared off, know that we will see a much simpler way to implement middleware shortly.

```rust
pub struct LogLayer {
    target: &'static str,
}

impl<S> Layer<S> for LogLayer {
    type Service = LogService<S>;

    fn layer(&self, service: S) -> Self::Service {
        LogService {
            target: self.target,
            service
        }
    }
}

// This service implements the Log behavior
pub struct LogService<S> {
    target: &'static str,
    service: S,
}

impl<S, Request> Service<Request> for LogService<S>
where
    S: Service<Request>,
    Request: fmt::Debug,
{
    type Response = S::Response;
    type Error = S::Error;
    type Future = S::Future;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        self.service.poll_ready(cx)
    }

    fn call(&mut self, request: Request) -> Self::Future {
        // Insert log statement here or other functionality
        println!("request = {:?}, target = {:?}", request, self.target);
        self.service.call(request)
    }
}
```

We can register our new layer (middleware) on an Axum application using .layer (similar to .attach in Rocket).

```rust
use axum::{routing::get, Router};

async fn handler() {}

let app = Router::new()
    .route("/", get(handler))
    .layer(LogLayer { target: "our site" })
    // `.route_layer` will only run the middleware if a route is matched
    .route_layer(TimeOutLayer);
```

There is also ServiceBuilder, which is the recommended way to chain layers. Layers added through a ServiceBuilder run top to bottom, so in the example below layer_three processes requests first (matching the diagram earlier). This is the opposite of calling .layer repeatedly on the Router, where the last layer added runs first.

```rust
Router::new()
    .route("/", get(handler))
    .layer(
        ServiceBuilder::new()
            .layer(layer_three)
            .layer(layer_two)
            .layer(layer_one)
    )
```

A simpler way

Just as Rocket has both trait fairings and ad hoc fairings, Axum has a second, simpler way to write middleware: middleware::from_fn.

Using a demo from the Axum docs.

```rust
async fn auth<B>(req: Request<B>, next: Next<B>) -> Result<Response, StatusCode> {
    let auth_header = req.headers()
        .get(http::header::AUTHORIZATION)
        .and_then(|header| header.to_str().ok());

    match auth_header {
        Some(auth_header) if token_is_valid(auth_header) => {
            Ok(next.run(req).await)
        }
        _ => Err(StatusCode::UNAUTHORIZED),
    }
}
```

```rust
let app = Router::new()
    .route("/", get(|| async { /* ... */ }))
    .route_layer(middleware::from_fn(auth));
```

Existing ready-to-use layers

As Axum is built on tower, there is some great, readily importable middleware that can be added as layers.

One of those is the TraceLayer, which logs requests coming in and responses going out:

```
Mar 05 20:50:28.523 DEBUG request{method=GET path="/foo"}: tower_http::trace::on_request: started processing request
Mar 05 20:50:28.524 DEBUG request{method=GET path="/foo"}: tower_http::trace::on_response: finished processing request latency=1 ms status=200
```

There are a bunch of layers in the tower_http crate that can be used instead of writing your own.

Building authentication using our own middleware

Let's play around with a realistic example and build a middleware layer for our own application that manages authentication. In our route handlers we might want to know detailed information about the user that made the request. Rather than having to deal with passing around request information we can encapsulate this logic in middleware.

We'll be using Axum for this demo. The demo is not public at the moment; look out for a future post about authentication, when the full demo will be made public!

Cookies as user state

Cookies can be used for maintaining user state. When a user cookie is set on the frontend it's sent with every request. We'll skip over how the cookie got there 😅 and leave it for a future tutorial.

Either way, we want to add middleware which injects the following struct into the current request.

```rust
#[derive(Clone)]
struct AuthState(Option<(SessionId, Arc<OnceCell<User>>, Database)>);
```

We have got a bit fancy here. Rather than making a database request on every request, we store the session id together with the database pool and a lazily initialised cell (OnceCell). With all this information, fetching the user state can be done lazily or not at all.

We will build an auth function which builds up this lazy AuthState struct with the required information by parsing the headers of a request.

```rust
async fn auth<B>(
    mut req: Request<B>,
    next: Next<B>,
    database: Database,
) -> axum::response::Response {
    // Assuming we only have one cookie
    let key_pair_opt = req
        .headers()
        .get("Cookie")
        .and_then(|value| value.to_str().ok())
        .map(|value|
            value
                .split_once(';')
                .map(|(left, _)| left)
                .unwrap_or(value)
        )
        .and_then(|kv| kv.split_once('='));

    let auth_state = if let Some((key, value)) = key_pair_opt {
        if key != USER_COOKIE_NAME {
            None
        } else if let Ok(value) = value.parse::<u128>() {
            Some(value)
        } else {
            None
        }
    } else {
        None
    };

    req.extensions_mut().insert(AuthState(
        auth_state
            .map(|v| (
                v,
                Arc::new(OnceCell::new()),
                database
            )),
    ));
    next.run(req).await
}
```

This is a bit of ad hoc parsing; proper parsing should account for multiple cookies etc. and could be neater 😆.

At the end we do two important things. First we extend the request with this lazy auth state: req.extensions_mut().insert(...). Secondly we run the rest of the request stack: next.run(req).await.

Unlike Rocket fairings, in Axum we could return our own Response from the middleware and skip running the handler by not calling next.run(req).await.

Attaching the middleware

We attach it to our Axum application using:

```rust
let middleware_database = database_pool.clone();

Router::new()
    .layer(middleware::from_fn(move |req, next| {
        auth(req, next, middleware_database.clone())
    }))
```

Because our middleware also needs application state (in this case the database pool), we create an intermediate closure which pulls that in.

Using the middleware

We can now access the state injected by the middleware using an Extension parameter.

```rust
async fn me(
    Extension(current_user): Extension<AuthState>,
) -> Result<impl IntoResponse, impl IntoResponse> {
    if let Some(user) = current_user.get_user().await {
        Ok(show_user(user))
    } else {
        Err(error_page("Not logged in"))
    }
}
```

I was actually surprised when this worked; Axum's handler parameter system is quite magic.
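
The get_user method used above isn't shown in this post. As a rough idea of how the lazy lookup works, here is a simplified, synchronous sketch: the real version is async and OnceCell-based, and the Database type with its fetch_user method is an invented stand-in here.

```rust
use std::sync::{Arc, OnceLock};

type SessionId = u128;

#[derive(Clone, Debug, PartialEq)]
struct User {
    name: String,
}

// Stand-in for the real connection pool.
#[derive(Clone)]
struct Database;

impl Database {
    fn fetch_user(&self, _session: SessionId) -> Option<User> {
        // A real implementation would query the sessions table.
        Some(User { name: "ferris".to_string() })
    }
}

#[derive(Clone)]
struct AuthState(Option<(SessionId, Arc<OnceLock<User>>, Database)>);

impl AuthState {
    // Fetch the user at most once, caching the result in the cell.
    fn get_user(&self) -> Option<&User> {
        let (session, cell, database) = self.0.as_ref()?;
        if cell.get().is_none() {
            if let Some(user) = database.fetch_user(*session) {
                // Ignore the race where another caller set it first.
                let _ = cell.set(user);
            }
        }
        cell.get()
    }
}
```

Subsequent calls hit the cached cell rather than the database, which is what makes the state safe to pass to every handler.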

Conclusion

In summary, middleware helps you abstract common logic for paths into reusable stateful and stateless components. Middleware might not be applicable to every scenario, but when you need it, it is super useful!

Shuttle: Stateful Serverless for Rust

Deploying and managing your Rust web apps can be an expensive, anxious and time-consuming process.

If you want a batteries included and ops-free experience, try out Shuttle.
