Middleware for caching successful GET responses with Redis automatically #630

Open · wants to merge 7 commits into master
Changes from 4 commits
12 changes: 12 additions & 0 deletions cache/redis-cache-middleware/Cargo.toml
@@ -0,0 +1,12 @@
[package]
authors = ["Jatutep Bunupuradah <[email protected]>"]
name = "main"
version = "0.1.0"
edition = "2021"

[dependencies]
actix-web = "*"
actix-rt = "*"
env_logger = "*"
actix-web-lab = "*"
redis = { version = "*", features = ["tls-native-tls"] }
34 changes: 34 additions & 0 deletions cache/redis-cache-middleware/README.md
@@ -0,0 +1,34 @@
# Redis Cache Middleware

This project demonstrates how to implement a Redis cache middleware that handles cached responses synchronously.
The application should still function properly even if Redis is not running; in that case, caching is simply disabled.


## Setting up
Configure the environment variable `REDIS_HOST`; if it is not set, the default host `redis://localhost:6379` is used. TLS is supported via the `rediss://` scheme.
Run the server using `cargo run`.
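
The URL resolution mirrors what `main.rs` does; as a small sketch (the `rediss://` host in the comment is a placeholder, not part of this example):

```rust
// Sketch mirroring main.rs: REDIS_HOST wins, otherwise fall back to the local
// default. A TLS endpoint would look like "rediss://my-redis.example.com:6380"
// (placeholder hostname).
fn redis_url() -> String {
    std::env::var("REDIS_HOST").unwrap_or_else(|_| "redis://localhost:6379".to_owned())
}
```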

## Endpoints

### `GET /fibonacci/{number}`

To test the demo, send a GET request to `/fibonacci/{number}`, where `{number}` is a positive integer of type `u64`.

## Request Directives

- `Cache-Control: no-cache` will return the most up-to-date response while still caching it. This will always yield a `miss` cache status.
- `Cache-Control: no-store` will prevent caching, ensuring you always receive the latest response.
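
To exercise the `/fibonacci/{number}` endpoint and these directives from Rust, a minimal sketch using the `awc` client could look like the following (`awc` is not one of this example's dependencies, and the port assumes the default `LISTEN_PORT` of 8080):

```rust
// Sketch only: requires adding `awc` to Cargo.toml and a running server.
#[actix_web::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = awc::Client::default();

    // Plain request: the first call is slow, repeats should be served from Redis.
    let mut res = client.get("http://localhost:8080/fibonacci/30").send().await?;
    println!("cache-status: {:?}", res.headers().get("cache-status"));
    println!("body: {:?}", res.body().await?);

    // `no-store` bypasses the cache entirely, so the result is always recomputed.
    let res = client
        .get("http://localhost:8080/fibonacci/30")
        .insert_header(("cache-control", "no-store"))
        .send()
        .await?;
    println!("cache-status: {:?}", res.headers().get("cache-status"));

    Ok(())
}
```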

## Verify Redis Contents

The first GET request to `/fibonacci/47` may take around 8 seconds to respond.
If Redis is running and a connection is established, subsequent requests should return the cached result immediately with a `hit` cache status, although the content type of the cached response is `application/json`.
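
To inspect what actually landed in Redis, a small standalone sketch using the same `redis` crate could read the entry back (the key format `{path}?{query_string}` comes from `cache_middleware` in `src/main.rs`):

```rust
// Sketch: read back the entry that cache_middleware stores. A query-less
// GET /fibonacci/47 is cached under the key "/fibonacci/47?".
use redis::Commands;

fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://localhost:6379")?;
    let mut conn = client.get_connection()?;

    let cached: Option<Vec<u8>> = conn.get("/fibonacci/47?")?;
    match cached {
        Some(bytes) => println!("cached: {}", String::from_utf8_lossy(&bytes)),
        None => println!("no cache entry yet"),
    }
    Ok(())
}
```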

## Known issues

- Connecting to a remote Redis server might introduce significant overhead and could lead to prolonged connection times or even failure to reach the remote server.

## Further implementations

- Implement asynchronous insertion of cache responses (a sketch follows this list).
- Explore using an in-memory datastore within the application process to reduce reliance on Redis.
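
For the first point, a fire-and-forget sketch could hand the write off to a spawned task so the response is not delayed by the Redis round trip. This assumes the `redis` crate's async API is enabled (for example via the `tokio-comp` feature), which this example's `Cargo.toml` does not currently turn on:

```rust
// Sketch only, not part of this example: spawn the cache write so the
// response is returned before Redis acknowledges the SET.
use redis::AsyncCommands;

fn spawn_cache_insert(client: redis::Client, key: String, body: Vec<u8>) {
    actix_web::rt::spawn(async move {
        match client.get_multiplexed_async_connection().await {
            Ok(mut conn) => {
                if let Err(e) = conn.set::<_, _, ()>(key, body).await {
                    println!("cache insert error: {e}");
                }
            }
            Err(e) => println!("RedisError: {e}"),
        }
    });
}
```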
189 changes: 189 additions & 0 deletions cache/redis-cache-middleware/src/main.rs
Reviewer comment (member): Some tests would be good if you feel like the challenge. We run Redis in CI for other examples, so it should be easy enough to copy those.
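
A minimal sketch of what such a test could look like for the handler defined in the diff below, using `actix_web::test` (the test name and module are assumptions, and exercising the cache path would additionally need a running Redis):

```rust
// Sketch only: an in-process test of the /fibonacci/{n} route, without the
// caching middleware so it runs even when Redis is absent.
#[cfg(test)]
mod tests {
    use super::*;
    use actix_web::{test, web, App};

    #[actix_web::test]
    async fn fibonacci_returns_expected_value() {
        let app = test::init_service(App::new().service(
            web::resource("/fibonacci/{n}").route(web::get().to(an_expensive_function)),
        ))
        .await;

        let req = test::TestRequest::get().uri("/fibonacci/10").to_request();
        let body = test::call_and_read_body(&app, req).await;
        assert_eq!(body, web::Bytes::from_static(b"55"));
    }
}
```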

@@ -0,0 +1,189 @@
use actix_web::{
body::{self, MessageBody},
dev::{ServiceRequest, ServiceResponse},
http::{
header::{CacheDirective, HeaderValue, CACHE_CONTROL, CACHE_STATUS, CONTENT_TYPE},
Method, StatusCode,
},
middleware, web, App, Error, HttpResponse, HttpServer, Responder,
};
use actix_web_lab::middleware::{from_fn, Next};
use redis::{Client as RedisClient, Commands, RedisError};
use std::env;

fn fib_recursive(n: u64) -> u64 {
if n <= 1 {
return n;
}
let a = n - 1;
let b = n - 2;
println!("{a} + {b}");
fib_recursive(a) + fib_recursive(b)
}

/* fn fib_iterative(n: u64) -> u64 {
let mut a = 0;
let mut b = 1;
let mut c;

if n == 0 {
return a;
}

for i in 2..=n {
print!("{i} ");
c = a + b;
a = b;
b = c;
}
b
} */
bangbaew marked this conversation as resolved.
Show resolved Hide resolved

async fn an_expensive_function(n: web::Path<u64>) -> impl Responder {
let result = fib_recursive(n.to_owned());
HttpResponse::Ok().body(result.to_string())
}

#[actix_rt::main]
async fn main() -> std::result::Result<(), std::io::Error> {
env_logger::init();

let redis_client =
redis::Client::open(env::var("REDIS_HOST").unwrap_or("redis://localhost:6379".to_owned()))
.unwrap();

let listen_port: String = env::var("LISTEN_PORT").unwrap_or(8080.to_string());
let listen_address: String = ["0.0.0.0", &listen_port].join(":");

println!("Server is listening at {}...", listen_address);
HttpServer::new(move || {
App::new()
.wrap(from_fn(cache_middleware))
.app_data(redis_client.to_owned())
.service(web::resource("/fibonacci/{n}").route(web::get().to(an_expensive_function)))
.wrap(middleware::Logger::default())
})
.bind(listen_address)?
.run()
.await?;

Ok(())
}

async fn cache_middleware(
req: ServiceRequest,
next: Next<impl MessageBody>,
) -> Result<ServiceResponse<impl MessageBody>, Error> {
// Define the cache key based on the request path and query string
let key = format!("{}?{}", req.path(), req.query_string());
println!("cache key: {key:?}");

// Get "Cache-Control" request header and get cache directive
let headers = req.headers().to_owned();
let cache_directive = match headers.get(CACHE_CONTROL) {
Some(cache_control_header) => cache_control_header.to_str().unwrap_or(""),
None => "",
};

// If cache directive is not "no-cache" and not "no-store"
if cache_directive != CacheDirective::NoCache.to_string()
&& cache_directive != CacheDirective::NoStore.to_string()
{
// Initialize Redis Client from App Data
let redis_client = req.app_data::<RedisClient>();
// This should always be Some, so let's unwrap
let mut redis_conn = redis_client.unwrap().get_connection();
let redis_ok = redis_conn.is_ok();

// If Redis connection succeeded and request method is GET
if redis_ok && req.method() == Method::GET {
// Unwrap the connection
let redis_conn = redis_conn.as_mut().unwrap();

// Try to get the cached response by defined key
let cached_response: Result<Vec<u8>, RedisError> = redis_conn.get(key.to_owned());
if let Err(e) = cached_response {
// If cache cannot be deserialized
println!("cache get error: {}", e);
} else if cached_response.as_ref().unwrap().is_empty() {
// If cache body is empty
println!("cache not found");
} else {
// If cache is found
println!("cache found");

// Prepare response body
let res = HttpResponse::new(StatusCode::OK).set_body(cached_response.unwrap());
let mut res = ServiceResponse::new(req.request().to_owned(), res);

// Define content-type and headers here
res.headers_mut()
.append(CONTENT_TYPE, HeaderValue::from_static("application/json"));
res.headers_mut()
.append(CACHE_CONTROL, HeaderValue::from_static("max-age=86400"));
res.headers_mut()
.append(CACHE_STATUS, HeaderValue::from_static("hit"));

return Ok(res);
}
}
}

// If Redis connection fails or cache could not be found
// Call the next service
let res = next.call(req).await?;

// deconstruct response into parts
let (req, res) = res.into_parts();
let (res, body) = res.into_parts();

// Convert body to Bytes
let body = body::to_bytes(body).await.ok().unwrap();
// Use bytes directly for caching instead of converting to a String
let res_body_enc = body.to_vec();

// Prepare response body
let res = res.set_body(res_body_enc.to_owned());
let mut res = ServiceResponse::new(req.to_owned(), res);

// If a GET request succeeded and cache directive is not "no-store"
if req.method() == Method::GET
&& StatusCode::is_success(&res.status())
&& cache_directive != CacheDirective::NoStore.to_string()
{
// Define response headers here
res.headers_mut()
.append(CACHE_CONTROL, HeaderValue::from_static("max-age=86400"));
res.headers_mut()
.append(CACHE_STATUS, HeaderValue::from_static("miss"));

// Initialize Redis Client from App Data
let redis_client = req.app_data::<RedisClient>();
// This should always be Some, so let's unwrap
let redis_conn = redis_client.unwrap().get_connection();
let redis_ok = redis_conn.is_ok();

// If Redis connection succeeded
if redis_ok {
// Try to insert the response body into Redis
let mut redis_conn = redis_conn.unwrap();
let insert = redis::Cmd::set(key, res_body_enc);
let insert = insert.query::<String>(&mut redis_conn);

if let Err(e) = insert {
// If cache insertion failed
println!("cache insert error: {}", e);
} else {
// This should print "cache insert success: OK"
println!("cache insert success: {}", insert.unwrap());
}
} else if let Err(e) = redis_conn {
// If Redis connection failed
println!("RedisError: {}", e);
}
} else {
// If the request method is not "GET" or the operation failed or cache directive is "no-store"
println!("not inserting cache");
}
Ok(res)
}