Introduction
Modrpc is an RPC (remote procedure call) framework in the same family as gRPC, Thrift, and Cap'n Proto. It aims to make building a complex mesh of event-driven apps a breeze: you describe your applications' interfaces in modrpc's interface definition language (IDL), and glue code is generated that provides the framework for implementing each interface's participants.
Currently modrpc is a tech demo, not something that can be used in a professional setting. Rust is the only supported language, but there are tentative plans to support Python and TypeScript.
Check out chat-modrpc-example to see a full working example application.
Check out the Getting Started tutorial to build your first modrpc server and client.
Modrpc focuses on the following areas:
- modularity
- portability
- application meshes
- RPC over multicast
- performance
Here's a sample of what a modrpc schema looks like:
import std ".modrpc/std.modrpc"
// Define a Chat interface with Client and Server roles
interface Chat @(Client, Server) {
  objects {
    // Define a client -> server request for clients to register their alias.
    register: std.Request<
      RegisterRequest,
      result<void, RegisterError>,
    > @(Client, Server),

    // Define a client -> server request for clients to send a message.
    // All clients will see each others' requests and the corresponding responses,
    // so no separate mechanism for the server to broadcast sent messages is required.
    send_message: std.Request<
      SendMessageRequest,
      result<void, SendMessageError>,
    > @(Client, Server),
  }

  state {
    // Server will provide to clients the full list of currently online users when
    // they connect.
    users: [RegisteredUser],
  }
}
struct RegisteredUser {
  endpoint: u64,
  alias: string,
}

struct RegisterRequest {
  alias: string,
}

enum RegisterError {
  Internal { token: string },
  UserAlreadyExists,
}

struct SendMessageRequest {
  content: string,
}

enum SendMessageError {
  Internal { token: string },
  InsufficientRizz,
}
Modularity
Modrpc allows users to define interfaces that can be implemented once in Rust, then imported and reused as components of larger interfaces. In the earlier example, we encountered the std.Request interface from modrpc's standard library. As you might have guessed, this interface provides the request-response pattern that comes baked into traditional RPC frameworks.
The definition of std.Request:
interface Request<Req, Resp> @(Client, Server) {
  events @(Client) -> @(Client, Server) {
    private request: Request<Req>,
  }

  events @(Server) -> @(Client) {
    private response: Response<Resp>,
  }

  impl @(Server) {
    // Downstream interface roles acting as a Request Server must supply an async
    // `handler` function to respond to requests.
    handler: async Req -> Resp,
  }

  methods @(Client) {
    // Downstream interface roles acting as a Request Client can invoke the `call`
    // method to send a request event and asynchronously wait for the server's
    // response.
    call: async Req -> Resp,
  }
}
All interfaces boil down to events that can be sent from some set of roles to some other set of roles. In the case of std.Request, only clients can send requests, and both clients and servers receive them. This allows clients to observe requests made by other clients when doing RPC-over-multicast.
Common logic for a reusable interface is hand-written once in Rust, and downstream interfaces and applications invoke that logic via the methods the reusable interface exposes. An example of this is the call method on std.Request.
If a reusable interface requires application-specific logic, it is specified via impl blocks. In the case of std.Request, implementers of a request server must provide an async function that receives a request and produces a response.
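To make that concrete, here is a minimal sketch of how these two blocks surface in the generated Rust glue, using names borrowed from the Getting Started example later in this document. The MyAppClientHandle type name is hypothetical; the real type of the client handle is whatever the generated crate exposes.
// Sketch only - names borrowed from the Getting Started example below.

// Server side: the `impl @(Server)` block is satisfied by supplying the
// async request handler when the role worker is wired up.
fn start_my_app_server(
    cx: modrpc::RoleWorkerContext<my_app_modrpc::MyAppServerRole>,
) {
    cx.stubs.compute_fizzbuzz.build(cx.setup, async move |_source, _request| {
        // Application-specific logic goes here.
        Ok(my_app_modrpc::ComputeFizzbuzzSuccess { message: "Fizz".to_string() })
    });
}

// Client side: the `methods @(Client)` block becomes a `call` method on the
// generated client handle. `MyAppClientHandle` is a hypothetical name here.
async fn make_request(client: &MyAppClientHandle) {
    let _response = client.compute_fizzbuzz.call(
        my_app_modrpc::ComputeFizzbuzzRequest { i: 3 }
    ).await;
}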
Under the hood, the reusable Rust implementation of the Request interface handles request payload encapsulation and response tracking / notification at the client, and calls the user-supplied request handler at the server.
Portability
Portability here means making modrpc usable in as many situations as possible. A few design goals are aimed at that (some are currently aspirational):
- Don't allocate after startup (important for embedded use-cases)
- Be transport-agnostic - support communication over shared-memory, TCP, WebSockets, or even an embedded radio.
- Try to be as lightweight as possible.
Application meshes
Modrpc's runtime is multitenant in the sense that a single runtime can drive many (potentially short-lived) transports and instances of interface roles, and multiple interfaces can be multiplexed over a single transport.
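As a rough illustration, here is a minimal sketch of a single runtime hosting two independent connections at once. It is assembled from the Getting Started code later in this document, so treat it as a sketch under those assumptions rather than canonical usage; the second address and service are placeholders.
use modrpc_executor::ModrpcExecutor;

fn main() {
    let mut ex = modrpc_executor::TokioExecutor::new();
    let _guard = ex.tokio_runtime().enter();

    // One buffer pool and one runtime shared by every transport below.
    let buffer_pool = modrpc::HeapBufferPool::new(256, 4, 4);
    let rt = modrpc::RuntimeBuilder::new_with_local(ex.spawner());
    let (rt, _rt_shutdown) = rt.start::<modrpc_executor::TokioExecutor>();

    ex.run_until(async move {
        // Two client connections, both driven by the same runtime.
        let stream_a = tokio::net::TcpStream::connect("127.0.0.1:9090").await.unwrap();
        let conn_a = modrpc::tcp_connect::<my_app_modrpc::MyAppClientRole>(
            &rt, buffer_pool.clone(), modrpc::WorkerId::local(),
            my_app_modrpc::MyAppClientConfig { }, stream_a,
        ).await.unwrap();

        let stream_b = tokio::net::TcpStream::connect("127.0.0.1:9191").await.unwrap();
        let conn_b = modrpc::tcp_connect::<my_app_modrpc::MyAppClientRole>(
            &rt, buffer_pool.clone(), modrpc::WorkerId::local(),
            my_app_modrpc::MyAppClientConfig { }, stream_b,
        ).await.unwrap();

        // ... use conn_a.role_handle and conn_b.role_handle concurrently.
        let _ = (conn_a.role_handle, conn_b.role_handle);
    });
}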
RPC over multicast
Here "multicast" means events for a modrpc interface being broadcasted to all interested peers. This allows you to, for example, have all clients see each others' requests and the server's responses to those requests. This is useful when developing collaborative apps and in many cases allows you to avoid building additional mechanisms to sync state among peers.
Performance
Modrpc aims to be lightweight and easy to set up in simple single-threaded scenarios, but also tries to scale easily to high-throughput, high-concurrency scenarios - think tens to hundreds of thousands of tasks interacting with the RPC system. The runtime is written in Rust and is async and thread-per-core. Some examples of performance-oriented design decisions:
- Make heap allocation after startup optional
  - Fixed-size, async buffer pools are used to allocate messages.
  - Messages can be serialized and deserialized without allocations.
- Batching
  - Threads grab buffers from the shared pool in batches to allocate messages on.
  - Multiple messages can be backed by a single buffer.
  - Written buffers are flushed for sending out on a transport in batches.
  - Inter-thread message queues automatically batch under high load.
- Thread-local waker registrations
  - Use non-Sync async datastructures where possible.
  - In some datastructures (for example modrpc::HeapBufferPool), when there are many tasks on multiple threads that need to be notified, only one task per thread will register itself in a thread-safe queue - the others will wait in a thread-local queue.
- Scaling across cores / load balancing
  - Message handlers on multiple threads can subscribe to a load-balancing mpmc queue to receive work.
  - A message will be processed directly on the receiving thread when possible, circumventing the mpmc queue and some buffer reference counting.
Check out the local-benchmark example application to get a sense for what configuring a multithreaded modrpc runtime with multiple transports looks like.
To get an idea of current performance, try out the p2p-benchmark example application - it has a single-threaded server being spammed with very cheap requests by a multi-threaded client. On my laptop the server is able to serve 3.8M+ requests/second.
Getting Started
This guide will walk you through starting your first modrpc project. It assumes you have already installed Rust - see rustup.rs.
1. Install the modrpc tooling
cargo install modrpcc
2. Create the structure of your modrpc project
mkdir my-modrpc-app
cd my-modrpc-app
cargo new --bin --name my-server server
cargo new --bin --name my-client client
3. Download the modrpc standard library
Currently the modrpc standard library is unversioned. Download the latest proof-of-concept schema straight from the GitHub repo - it will be kept in sync with the latest std-modrpc on crates.io.
mkdir .modrpc
curl https://raw.githubusercontent.com/modrpc-org/modrpc/refs/heads/main/proto/std.modrpc -o .modrpc/std.modrpc
4. Define your first interface
Populate my-app.modrpc:
import std ".modrpc/std.modrpc"
interface MyApp @(Client, Server) {
  objects {
    compute_fizzbuzz: std.Request<
      ComputeFizzbuzzRequest,
      result<ComputeFizzbuzzSuccess, ComputeFizzbuzzError>,
    > @(Client, Server),
  }
}
struct ComputeFizzbuzzRequest {
  i: u64,
}

struct ComputeFizzbuzzSuccess {
  message: string,
}

enum ComputeFizzbuzzError {
  InvalidRequest,
}
5. Generate the modrpc glue library for your interface
This will generate a Rust crate at my-app-modrpc/rust:
modrpcc --language rust --output-dir . --name my-app my-app.modrpc
6. Implement your server
Move to the server's directory:
cd server
Populate Cargo.toml:
[package]
name = "my-server"
version = "0.1.0"
edition = "2024"
[dependencies]
modrpc = { version = "0.0", features = ["tcp-transport"] }
modrpc-executor = { version = "0.0", features = ["tokio"] }
my-app-modrpc = { path = "../my-app-modrpc/rust" }
std-modrpc = "0.0"
tokio = "1"
Populate src/main.rs:
use modrpc_executor::ModrpcExecutor;

fn main() {
    let mut ex = modrpc_executor::TokioExecutor::new();
    let _guard = ex.tokio_runtime().enter();

    let buffer_pool = modrpc::HeapBufferPool::new(256, 4, 4);
    let rt = modrpc::RuntimeBuilder::new_with_local(ex.spawner());
    let (rt, _rt_shutdown) = rt.start::<modrpc_executor::TokioExecutor>();

    ex.run_until(async move {
        let tcp_server = modrpc::TcpServer::new();
        let listener = tokio::net::TcpListener::bind("0.0.0.0:9090").await
            .expect("tcp listener");

        loop {
            println!("Waiting for client...");
            let (stream, client_addr) = match listener.accept().await {
                Ok(s) => s,
                Err(e) => {
                    println!("Failed to accept client: {}", e);
                    continue;
                }
            };
            stream.set_nodelay(true).unwrap();

            let _ = tcp_server.accept_local::<my_app_modrpc::MyAppServerRole>(
                &rt,
                buffer_pool.clone(),
                stream,
                start_my_app_server,
                my_app_modrpc::MyAppServerConfig { },
                my_app_modrpc::MyAppInitState { },
            )
            .await
            .unwrap();

            println!("Accepted client {}", client_addr);
        }
    });
}

fn start_my_app_server(
    cx: modrpc::RoleWorkerContext<my_app_modrpc::MyAppServerRole>,
) {
    cx.stubs.compute_fizzbuzz.build(cx.setup, async move |_source, request| {
        use my_app_modrpc::{ComputeFizzbuzzError, ComputeFizzbuzzSuccess};

        let Ok(i) = request.i() else {
            return Err(ComputeFizzbuzzError::InvalidRequest);
        };
        println!("Received request: {i}");

        let response = if i % 3 == 0 && i % 5 == 0 {
            ComputeFizzbuzzSuccess { message: "FizzBuzz".to_string() }
        } else if i % 3 == 0 {
            ComputeFizzbuzzSuccess { message: "Fizz".to_string() }
        } else if i % 5 == 0 {
            ComputeFizzbuzzSuccess { message: "Buzz".to_string() }
        } else {
            ComputeFizzbuzzSuccess { message: format!("{i}") }
        };
        println!("  response: {response:?}");

        Ok(response)
    });
}
Build and run the server:
cargo run --release
7. Implement your client
In another terminal session, move to the client's directory:
cd /path/to/my-modrpc-app/client
Populate Cargo.toml:
[package]
name = "my-client"
version = "0.1.0"
edition = "2024"
[dependencies]
modrpc = { version = "0.0", features = ["tcp-transport"] }
modrpc-executor = { version = "0.0", features = ["tokio"] }
my-app-modrpc = { path = "../my-app-modrpc/rust" }
std-modrpc = "0.0"
tokio = "1"
Populate src/main.rs:
use modrpc_executor::ModrpcExecutor;

fn main() {
    let mut ex = modrpc_executor::TokioExecutor::new();
    let _guard = ex.tokio_runtime().enter();

    let buffer_pool = modrpc::HeapBufferPool::new(256, 4, 4);
    let rt = modrpc::RuntimeBuilder::new_with_local(ex.spawner());
    let (rt, _rt_shutdown) = rt.start::<modrpc_executor::TokioExecutor>();

    ex.run_until(async move {
        let stream = tokio::net::TcpStream::connect("127.0.0.1:9090").await
            .expect("tcp stream connect");
        stream.set_nodelay(true).unwrap();
        println!("Connected to server");

        let connection = modrpc::tcp_connect::<my_app_modrpc::MyAppClientRole>(
            &rt,
            buffer_pool,
            modrpc::WorkerId::local(),
            my_app_modrpc::MyAppClientConfig { },
            stream,
        )
        .await
        .unwrap();
        let my_app_client = connection.role_handle;

        for i in 1..=15 {
            let response = my_app_client.compute_fizzbuzz.call(
                my_app_modrpc::ComputeFizzbuzzRequest { i }
            )
            .await
            .expect("fizzbuzz failed");
            println!("{}", response.message);
        }
    });
}
Build and run the client:
cargo run --release
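If the server from the previous step is still running, the client should print the FizzBuzz sequence for 1 through 15, along the lines of:
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz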
The end!
Motivating example - task manager
Imagine you have an app (or several) that manages many different kinds of long-running tasks. You're also a stickler for avoiding unnecessary communication, so you want the client to keep track of all pending tasks and only receive status updates, never poll. So you write a reusable TaskManager interface to encapsulate the common logic of spawning, tracking, and optionally canceling tasks.
interface TaskManager<SpawnPayload, State, Error> @(Client, Server) {
  objects {
    spawn: std.Request<
      SpawnRequest<SpawnPayload>,
      result<void, SpawnError<Error>>,
    > @(Client, Server),

    cancel: std.Request<
      TaskId,
      result<void, CancelError>,
    > @(Client, Server),

    // server -> client ordered streams (per-task) of status updates
    task_status_update: std.MultiStream<TaskStatus<State>> @(Client, Server),
  }

  impl @(Server) {
    // Each TaskManager Server must specify how to spawn its tasks.
    new_task: NewTask<SpawnPayload> -> void,
  }

  methods @(Server) {
    // The server's tasks can invoke this method to send status updates to clients.
    post_status: PostTaskStatus<State> -> void,
  }

  // Clients can request to spawn and cancel tasks.
  methods @(Client) {
    spawn: async SpawnPayload -> result<SpawnSuccess, SpawnError<Error>>,
    cancel: async TaskId -> result<void, CancelError>,
  }

  state {
    // Clients will be provided a list of all pending tasks when they first connect.
    pending_tasks: [TaskId],
  }
}
struct TaskId { id: u64 }
struct SpawnSuccess {
  task_id: TaskId,
  status_stream: std.MultiStreamId,
}
// ... some definitions omitted
One of your apps is a worker for a pipeline that downloads and transcodes cat videos:
interface CatVideoTranscoder @(Client, Server) {
  objects {
    cat_video_downloads: TaskManager<
      CatUrl, CatVideoDownloadState, CatDownloadSpawnError,
    > @(Client, Server),

    h264_to_h265_transcoding: TaskManager<
      CatVideoId, VideoTranscodingState, VideoTranscodingSpawnError,
    > @(Client, Server),
  }
}
// ... some definitions omitted