Overview
Crux is a framework for building cross-platform applications with better testability, higher code and behavior reuse, better safety, security, and more joy from better tools.
It splits the application into two distinct parts: a Core, built in Rust, which drives as much of the business logic as possible, and a Shell, built in the platform-native language (Swift, Kotlin, TypeScript), which provides all interfaces with the external world, including the human user, and acts as a platform on which the Core runs.

The aim is to separate three kinds of code in a typical app, which have different goals:
- the presentation layer in the user interface,
- the pure logic driving behaviour and state updates in response to the user's actions, and
- the effects (or I/O) layer where network communication, storage, interactions with real-world time, and other similar things are handled
The Core handles the behaviour logic; the Shell handles the presentation layer and effect execution (but not effect orchestration, which is part of the behaviour and therefore lives in the Core). This strict separation makes the behaviour logic much easier to test, without any of the other layers getting involved.
The interface between the Core and the Shell is a native FFI (Foreign Function Interface) with message passing semantics, where simple data structures are passed across the boundary, supported by cross-language code generation and type checking.
To get playing with Crux quickly, follow Part I of the book, from the Getting Started chapter onward. It will take you from zero to a basic working app on your preferred platform quickly. From there, continue on to Part II – building the Weather App, which builds on the basics and covers the more advanced features and patterns needed in a real world app.
If you just want to understand why we set out to build Crux in the first place and what problems it tries to solve, before you spend any time trying it (no hard feelings, we would too), read our original Motivation.
API docs
There are two places to find API documentation: the latest published version on docs.rs, or the very latest master docs if you too like to live dangerously.
- crux_core - the main Crux crate: latest release | latest master
- crux_http - HTTP client capability: latest release | latest master
- crux_kv - Key-value store capability: latest release | latest master
- crux_time - Time capability: latest release | latest master
You can see the latest version of this book (generated from the master branch) on GitHub Pages.
Crux is open source on GitHub. A good way to learn Crux is to explore the code, play with the examples, and raise issues or pull requests. We'd love you to get involved.
You can also join the friendly conversation on our Zulip channel.
Design overview

The architecture is event-driven, with state management based on event sourcing, similar to Elm or Redux. The Core holds the majority of state, which is updated in response to events happening in the Shell. The interface between the Core and the Shell is message-based.
Native UI
The user interface layer is built natively, with modern declarative UI frameworks such as SwiftUI, Jetpack Compose, and React/Svelte or a WASM-based framework on the web. The UI layer is as thin as it can be, and all behaviour logic is implemented by the shared Core. The one restriction is that the Core is side-effect free. This is both a technical requirement (to be able to target WebAssembly), and an intentional design goal, to separate behaviour from effects and make both easier to test in isolation.
Managed effects
Crux uses managed side-effects: the Core requests side-effects from the Shell, which executes them. The key difference from most frameworks is that instead of performing the asynchronous work itself, the Core describes its intent for the work as data (which also serves as the input for the effect), and passes this to the Shell to be performed. The Shell performs the work and returns the outcomes to the Core. This deferred-execution approach is inspired by Elm, and is similar to how other purely functional languages deal with effects and I/O (e.g. the IO monad in Haskell). It is also similar in its laziness to how iterators work in Rust.
Type generation
The Core exports types for the messages it can understand. The Shell can call the Core and pass one of the messages. In return, it receives a set of side-effect requests to perform. When the work is completed, the Shell sends the result back into the Core, which responds with further requests if necessary.
Updating the user interface is considered one of the side-effects the Core can request. The entire interface is strongly typed and breaking changes in the core will result in build failures in the Shell.
Goals
We set out to find a better way of building apps across platforms. You can read more about our motivation. The overall goals of Crux are to:
- Build the majority of the application code once, in Rust
- Encapsulate the behavior of the app in the Core for reuse
- Follow the Ports and Adapters pattern, also known as Hexagonal Architecture, to facilitate pushing side-effects to the edge, making behavior easy to test
- Strictly separate the behavior from the look and feel and interaction design
- Use the native UI tool kits to create a user experience that is the best fit for a given platform
- Use the native I/O libraries to be good citizens of the ecosystem and get the benefit of any OS-provided services
Path to 1.0
Crux is used in production apps today, and we consider it production ready. However, we still have a number of things to work on before we can call it 1.0, with the stable API and excellent DX expected from a mature framework.
Below is a list of some of the things we know we want to do before 1.0:
- Better code generation with additional features, and support for more languages (e.g. C#, Dart, even C++...) and in turn more Shells (e.g. .NET, Flutter) which will also enable Desktop apps for Windows
- Improved documentation, code examples, and example apps for newcomers
- Improved onboarding experience, with less boilerplate code that end users have to write or copy from an example
Until then, we hope you will work with us on the rough edges, and adapt to the necessary API updates as we evolve. We strive to minimise the impact of changes as much as we can, but before 1.0, some breaking changes will be unavoidable.
Motivation
We set out to prove this approach to building apps largely because we've seen the drawbacks of all the other approaches in real life, and thought "there must be a better way". The two major available approaches to building the same application for iOS and Android are:
- Build a native app for each platform, effectively doing the work twice.
- Use React Native or Flutter to build the application once¹ and produce native looking and feeling apps which behave nearly identically.
The drawback of the first approach is doing the work twice. In order to build every feature for iOS and Android at the same time, you need twice the number of people, either people who happily do Swift and Kotlin (and they are very rare), or more likely a set of iOS engineers and another set of Android engineers. This typically leads to forming two separate, platform-focused teams. We have witnessed situations first-hand, where those teams struggle with the same design problems, and despite one encountering and solving the problem first, the other one can learn nothing from their experience (and that's despite long design discussions).
We think such experiences with the platform native approach are common, and the reason why people look to React Native and Flutter.
The issues with the second approach are two-fold:
- Only mostly native user interface
- In the case of React Native, the JavaScript ecosystem tooling disaster
React Native (we'll focus the discussion on it, but most of the below applies to Flutter too) effectively takes over, and works hard to insulate the engineer from the native platform underneath and pretend it doesn't really exist. But of course, inevitably, it does exist, and the user interface ends up being built in a combination of roughly 90% JavaScript/TypeScript and 10% Kotlin/Swift.
This was a major win when React Native was first introduced, because the platform-native UI toolkits were imperative, following a version of the MVC architecture, and generally made it quite difficult to get UI state management right. React, on the other hand, is declarative, leaving much less space for errors stemming from the UI getting into an undefined state (although as apps got more complex and codebases grew, React's state management model got more complex with them).
The benefit of declarative UI was clearly recognised by iOS and Android, and both introduced their own declarative UI toolkits: SwiftUI and Jetpack Compose. Both are quite good, matching that particular advantage of React Native and leaving only one, building things once (in theory). In exchange, the apps have to be written in JavaScript (and its adjacent tools and languages).
Why not build all apps in JavaScript?
The main issue with the JavaScript ecosystem is that it's built on sand. The underlying language is quite loose and has a lot of inconsistencies. It came with no package manager originally; now it has three. To serve code to the browser, it gets bundled, and the list of bundlers is too long to include here. Even 10 years after the introduction of ES modules, the ecosystem is still split, and the competing module standards make all tooling more complex and difficult to configure.
JavaScript was built as a dynamic language. This means a lot of basic human errors
made while writing the code are only discovered when running it.
Static type systems aim to solve that problem, and TypeScript
adds one onto JavaScript, but the types only go so far (until they hit an any type,
or dependencies with no type definitions), and they disappear at runtime, so you don't
get type-based conditionals (well, kind of).
In short, upgrading JavaScript to something modern, capable of handling a large app codebase with multiple people or even teams working on it, is possible, but takes a lot of tooling. Getting all this tooling set up and ready to build things is an all-day job, and so more tooling, like Vite, has popped up to provide this configuration in a box, batteries included. Perhaps the final admission of this problem is the Biome toolchain (formerly the Rome project), which attempts to bring all the various tools under one roof (and Biome itself is built in Rust...).
It's no wonder that even a working setup of all the tooling has sharp edges, and cannot afford to be nearly as strict as tooling designed with strictness in mind, such as Rust's. The heart of the problem is that computers are strict and precise instruments, and humans are sloppy creatures. With enough humans (more than 10, being generous) and no additional help, the resulting code will be sloppy, full of unhandled edge cases, undefined behaviour being relied on, circular dependencies preventing testing in isolation, etc. (and yes, these are not hypotheticals).
Contrast that with Rust, which is as strict as it gets, and generally backs up
the claim that if it compiles it will work (and if you struggle to get it past
the compiler, it's probably a bad idea). The tooling and package management is
built in with cargo. There are fewer decisions to make when setting up a Rust
project.
In short, we think the JS ecosystem has jumped the shark, the "complexity toothpaste" is out of the tube, and it's time to stop. But there's no real viable alternative.
Crux is our attempt to provide one.
¹ In reality it's more like 1.4x the effort to build the same app for two platforms.
Getting started
We generally recommend building Crux apps from inside out, starting with the Core.
This part will first take you through setting up the tools and building the Core, and writing tests to make sure everything works as expected. Finally, once we're confident we have a working core, we'll set up the necessary bindings for the shell and build the UI for your chosen platform.
But first, we need to make sure we have all the necessary tools.
Install the tools
This is an example of a
rust-toolchain.toml
file, which you can add at the root of your repo. It should ensure that the
correct Rust channel and compile targets are installed automatically for you
when you use any Rust tooling within the repo.
You may not need all the targets if you're not planning to build a fully cross-platform app.
[toolchain]
channel = "stable"
components = ["rustfmt", "rustc-dev"]
targets = [
  "aarch64-apple-darwin",
  "aarch64-apple-ios",
  "aarch64-apple-ios-sim",
  "aarch64-linux-android",
  "wasm32-unknown-unknown",
  "x86_64-apple-ios",
]
profile = "minimal"
For testing, we also recommend installing cargo-nextest, the test runner we'll be using
in the examples.
cargo install cargo-nextest
Create the core crate
We need a crate to hold our application's core. Since one of our shell options later will be Rust-based, we'll set up a Cargo workspace to provide some isolation between the core and the other Rust-based modules.
The workspace and library manifests
First, create a workspace and start with a /Cargo.toml file, at the monorepo
root, to add the new library to our workspace.
It should look something like this:
# /Cargo.toml
[workspace]
resolver = "3"
members = ["shared"]
[workspace.package]
edition = "2024"
rust-version = "1.88"
[workspace.dependencies]
anyhow = "1.0.100"
crux_core = "0.17.0"
serde = "1.0.228"
The shared library
The first library to create is the one that will be shared across all platforms,
containing the behavior of the app. You can call it whatever you like, but we
have chosen the name shared here. You can create the shared Rust library like
this:
cargo new --lib shared
The library's manifest, at /shared/Cargo.toml, should look something like the
following:
# /shared/Cargo.toml
[package]
name = "shared"
version = "0.1.0"
edition.workspace = true
rust-version.workspace = true
[lib]
crate-type = ["cdylib", "lib", "staticlib"]
name = "shared"
[dependencies]
crux_core.workspace = true
serde = { workspace = true, features = ["derive"] }
Note the crate-type in the [lib] section. This is in preparation for linking with the
shells:
- lib is the default Rust library when linking into a Rust binary
- staticlib is a static library (libshared.a) for use with iOS apps
- cdylib is a C-ABI dynamic library (libshared.so) for use with JNA in an Android app
The basic files
The only missing part now is your src/lib.rs file. This will eventually
contain a fair bit of configuration for the shell interface, so we tend to
recommend reserving it for that job and creating a src/app.rs module
for your app code.
For now, the lib.rs file looks as follows:
// src/lib.rs
pub mod app;
and app.rs can be empty, but let's put our app's main type in it,
call it Counter:
// src/app.rs
#[derive(Default)]
pub struct Counter;
Running
cargo build
should build your Core. Let's make it do something now.
A very basic app
The basic app we'll build as an example to demonstrate the interaction between the Shell and the Core and the state management will be the well known and loved counter app. A simple counter we can increment, decrement and reset.
Code of the app
You can find the full code for this part of the guide here
In the last chapter, we started with the main type:
#[derive(Default)]
pub struct Counter;
We need to implement Default so that Crux can construct the app for us.
To turn it into a Crux app, we need to implement the App trait from the
crux_core crate.
use crux_core::App;
impl App for Counter {
}
If you're following along, the compiler is now screaming at you that you're
missing four associated types for the trait — Event, Model, ViewModel,
and Effect.
Let's add them and talk about them one by one.
Event
Event defines all the possible events the app can respond to. It is essentially the Core's public API.
In our case it will look as follows:
#[derive(Serialize, Deserialize, Clone, Debug)]
pub enum Event {
    Increment,
    Decrement,
    Reset,
}
Those are the three things we can do with the counter. None of them need any additional
information, so this simple enum will do. It is serializable, because it will
eventually be crossing the FFI boundary. We will get to that soon.
Model
Model holds our application's internal state. You can probably guess what this will look like:
#[derive(Default)]
pub struct Model {
    count: isize,
}
It is a simple counter, after all. The Model stays in the Core, so it doesn't need to be serializable.
You can derive (or implement) Default and have Crux create an instance of your app and your model for you, or you can explicitly create a core with specified App and Model instances (this may be useful if you need to set up some initial state).
ViewModel
ViewModel represents the user interface at any one point in time. This is our indirection between the internal state and the UI on screen. In the case of the counter this is pretty academic: there is no practical reason for making them different. But for the sake of the example, let's add some formatting into the mix and make it a string.
#[derive(Serialize, Deserialize, Clone, Default)]
pub struct ViewModel {
    pub count: String,
}
The difference between Model and ViewModel will get a lot more pronounced once we introduce
some navigation into the mix in Part II.
Effect
For now, the counter has no side effects. Except it wants to update the user interface, and that is also a side effect. We'll go with this:
use crux_core::macros::effect;
use crux_core::render::RenderOperation;

#[effect(typegen)]
#[derive(Debug)]
pub enum Effect {
    Render(RenderOperation),
}
We're saying "the only side effect of our behaviour is rendering the user interface".
The Effect type is worth understanding further, but in order to do that we need to
talk about what makes Crux different from most UI frameworks.
Managed side-effects
One of the key design choices in Crux is that the Core is free of side-effects (besides its internal state). Your application can never perform anything that directly interacts with the environment around it: no network calls, no reading or writing files, not even updating the screen. Actually doing all those things is the job of the Shell; the Core can only ask for them to be done.
This makes the core portable between platforms, and, importantly, very easy to test. It also separates the intent – the "functional" requirements – from the implementation of the side-effects and the "non-functional" requirements (NFRs).
For example, your application knows it wants to store data in a SQL database, but it doesn't need to know or care whether that database is local or remote. That decision can even change as the application evolves, and be different on each platform. We won't go into the detail at this point, because we don't need the full extent of side effects just yet. If you want to know more now, you can jump ahead to the chapter on Managed Effects, but it's probably a bit much at this point. Up to you.
All you need to know for now is that for us to ask the Shell for side effects, it will need to know what side effects it needs to handle, so we will need to list the possible kinds of effects (as an enum). Effects are simply messages describing what should happen. In our case the only option is asking for a UI update (or, more precisely, telling the shell a new view model is available).
That's enough about effects for now, we will spend a lot more time with them later on.
Implementing the App trait
We now have all the building blocks to implement the App trait. Here is
where we end up (straight from the actual example code):
impl App for Counter {
    type Event = Event;
    type Model = Model;
    type ViewModel = ViewModel;
    type Effect = Effect;

    fn update(&self, event: Event, model: &mut Model) -> Command<Effect, Event> {
        match event {
            Event::Increment => model.count += 1,
            Event::Decrement => model.count -= 1,
            Event::Reset => model.count = 0,
        }

        render()
    }

    fn view(&self, model: &Model) -> ViewModel {
        ViewModel {
            count: format!("Count is: {}", model.count),
        }
    }
}
The update function is the heart of the app; it manages the app's state transitions.
It responds to events by (optionally) updating the state. You
may have noticed the strange return type: Command<Effect, Event>.
This is the request for some side-effects. We seem to be accumulating terminology, so let's do a quick recap:
- Effect - a request for a type of side-effect (e.g. a HTTP request)
- Operation - carried by the Effect, specifies the data for the effect (e.g. the URL, method, headers, body...)
- Command - a bundle of effect requests which execute together, sequentially, in parallel or in a more complex coordination
In real apps, we typically use a few kinds of effects over and over,
and so it's necessary to allow reuse. That's what the Effect enum does: it
bundles together effects of the same type, defined by the same module or crate (we
call those modules Capabilities, but let's not worry about those yet).
The other thing that happens in real apps is mixing different kinds of effects in workflows, chaining them, running them concurrently, even racing them. That's what commands allow you to do.
Our update function looks at the event it got, updates the model.count, and
since the count has changed, the UI needs to update, so it calls render(). The
render() call returns a Command, which update just passes on to the caller.
The view function's job is to return the representation of what we want the Shell to show
on screen. It's up to the Shell to call it when ready. Our view does a bit of string
formatting and wraps it in a ViewModel.
That's a working counter done. It's obviously really basic, but it's enough for us to test it.
Testing the Counter app
In this chapter we'll write some basic tests for our counter app. It is tempting to skip reading this, but please don't. Testing and testability is one of the most important benefits of Crux, and even in this simple case, subtle things are going on, which we'll build on later.
The first test
Technically, we've already broken the rules and written code without having a failing test for it. We're going to let that slip in the name of education, but let's fix that before someone alerts the TDD authorities.
The first test we're going to write will check that resetting the count renders the UI.
#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn renders() {
        let app = Counter;
        let mut model = Model::default();

        let mut cmd = app.update(Event::Reset, &mut model);

        // Check update asked us to `Render`
        cmd.expect_one_effect().expect_render();
    }
}
We create an instance of the app, and an instance of the model. Then we call update with the Event::Reset event.
As you may remember we get back a Command, which we expect to carry a request for a render operation. Using the
expectation helper API of the Command type, we check we got one effect, and that the effect is a render. Both methods will panic if they don't succeed (they are also #[cfg(test)] only, don't use them outside of tests).
That test should pass (check with cargo nextest run). Next up, we can check that the view model is rendered
correctly:
#[test]
fn shows_initial_count() {
    let app = Counter;
    let model = Model::default();

    let actual_view = app.view(&model).count;
    let expected_view = "Count is: 0";
    assert_eq!(actual_view, expected_view);
}
This is a lot more basic, just a simple equality assertion. Let's try something a bit more interesting:
#[test]
fn increments_count() {
    let app = Counter;
    let mut model = Model::default();

    let mut cmd = app.update(Event::Increment, &mut model);

    // Check update asked us to `Render`
    cmd.expect_one_effect().expect_render();

    let actual_view = app.view(&model).count;
    let expected_view = "Count is: 1";
    assert_eq!(actual_view, expected_view);
}
When we send the increment event, we expect to be told to render, and we expect the view to show "Count is: 1".
You could just as well test only the model state; it's really up to you which is more convenient, and whether (and to what extent) you prefer your tests to know about how your state works.
By now you get the gist, so here's all the tests to satisfy ourselves that the app does in fact work:
#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn renders() {
        let app = Counter;
        let mut model = Model::default();

        let mut cmd = app.update(Event::Reset, &mut model);

        // Check update asked us to `Render`
        cmd.expect_one_effect().expect_render();
    }

    #[test]
    fn shows_initial_count() {
        let app = Counter;
        let model = Model::default();

        let actual_view = app.view(&model).count;
        let expected_view = "Count is: 0";
        assert_eq!(actual_view, expected_view);
    }

    #[test]
    fn increments_count() {
        let app = Counter;
        let mut model = Model::default();

        let mut cmd = app.update(Event::Increment, &mut model);

        // Check update asked us to `Render`
        cmd.expect_one_effect().expect_render();

        let actual_view = app.view(&model).count;
        let expected_view = "Count is: 1";
        assert_eq!(actual_view, expected_view);
    }

    #[test]
    fn decrements_count() {
        let app = Counter;
        let mut model = Model::default();

        let mut cmd = app.update(Event::Decrement, &mut model);

        // Check update asked us to `Render`
        cmd.expect_one_effect().expect_render();

        let actual_view = app.view(&model).count;
        let expected_view = "Count is: -1";
        assert_eq!(actual_view, expected_view);
    }

    #[test]
    fn resets_count() {
        let app = Counter;
        let mut model = Model::default();

        let _ = app.update(Event::Increment, &mut model);
        let _ = app.update(Event::Reset, &mut model);

        // Was the view updated correctly?
        let actual = app.view(&model).count;
        let expected = "Count is: 0";
        assert_eq!(actual, expected);
    }

    #[test]
    fn counts_up_and_down() {
        let app = Counter;
        let mut model = Model::default();

        let _ = app.update(Event::Increment, &mut model);
        let _ = app.update(Event::Reset, &mut model);
        let _ = app.update(Event::Decrement, &mut model);
        let _ = app.update(Event::Increment, &mut model);
        let _ = app.update(Event::Increment, &mut model);

        // Was the view updated correctly?
        let actual = app.view(&model).count;
        let expected = "Count is: 1";
        assert_eq!(actual, expected);
    }
}
You can see that occasionally, we test for the render to be requested. This will be important later, because we'll be able to not only check for the effects, but also resolve them – provide the value they requested, for example the response to a HTTP request.
That will let us test entire user flows calling web APIs, working with local storage and timers, and anything else, all at the speed of unit tests and without ever touching the external world or writing a single fake (and maintaining it later).
For now though, let's actually give this thing some user interface. Time to build a Shell.
Preparing to add the Shell
So far, we've built a basic app in relatively basic Rust. If we now want to expose it to a Shell written in a different language, we'll have to set up the necessary plumbing, starting with the foreign function interface.
The core FFI bindings
From the work so far, you may have noticed the app has a pretty limited API,
basically the update and view methods. There's one more for resolving
effects (called resolve), but that really is it. We need to make those three methods available
to the Shell, but once that's done, we don't have to touch it again.
Let's briefly talk about what we want from this interface. Ideally, in our language of choice we would:
- have a native equivalent of the update, view and resolve functions
- have equivalents for our Event, Effect and ViewModel types
- not have to worry about what black magic is happening behind the scenes to make that work
Crux provides code generation support for all of the above.
It isn't in any way actual black magic. What happens is that Crux exposes FFI calls taking and returning
values serialized with bincode (by default), and generates "foreign" (Swift, Kotlin, ...)
types that handle the foreign side of the serialization.
Yes, this introduces some extra work at the FFI boundary, but generally, for each user interaction we make a relatively small number of round-trips (almost certainly fewer than ten), and our benchmarks say we can make thousands of them per second. The real throughput depends on how much data gets serialized, but it only becomes a problem with really large messages, and advanced workarounds exist. You most likely don't need to worry about it, at least not for now.
Preparing the core
We will prepare the core for both kinds of supported shells - native ones and WebAssembly ones.
To help with the native setup, Crux uses Mozilla's UniFFI to generate the bindings. For WebAssembly, it uses wasm-bindgen.
First, let's update our Cargo.toml:
# shared/Cargo.toml
[package]
name = "shared"
version = "0.1.0"
authors.workspace = true
edition.workspace = true
rust-version.workspace = true
repository.workspace = true
license.workspace = true
keywords.workspace = true
[lints]
workspace = true
[lib]
crate-type = ["cdylib", "lib", "staticlib"]
[[bin]]
name = "codegen"
required-features = ["codegen"]
[features]
facet_typegen = ["crux_core/facet_typegen"]
uniffi = ["dep:uniffi"]
wasm_bindgen = ["dep:wasm-bindgen"]
codegen = [
"crux_core/cli",
"dep:clap",
"dep:log",
"dep:pretty_env_logger",
"uniffi",
]
[dependencies]
facet = "=0.31"
crux_core.workspace = true
serde = { workspace = true, features = ["derive"] }
# optional dependencies
clap = { version = "4.6.0", optional = true, features = ["derive"] }
log = { version = "0.4.29", optional = true }
pretty_env_logger = { version = "0.5.0", optional = true }
uniffi = { version = "=0.29.4", optional = true }
wasm-bindgen = { version = "0.2.114", optional = true }
A lot has changed! The key things we added are:
- a bin target called codegen, which is how we're going to run all the code generation
- feature flags to optionally enable uniffi and wasm_bindgen, grouped under codegen alongside the dependencies that are only needed when those features are enabled
- the dependencies we need for the code generation
And since we've declared the codegen target, we need to add the code for it.
// shared/src/bin/codegen.rs
use std::path::PathBuf;

use clap::{Parser, ValueEnum};
use crux_core::{
    cli::{BindgenArgsBuilder, bindgen},
    type_generation::facet::{Config, TypeRegistry},
};
use log::info;
use uniffi::deps::anyhow::Result;

use shared::Counter;

#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, ValueEnum)]
enum Language {
    Swift,
    Kotlin,
    Typescript,
}

#[derive(Parser)]
#[command(version, about, long_about = None)]
struct Args {
    #[arg(short, long, value_enum)]
    language: Language,

    #[arg(short, long)]
    output_dir: PathBuf,
}

fn main() -> Result<()> {
    pretty_env_logger::init();

    let args = Args::parse();

    let typegen_app = TypeRegistry::new().register_app::<Counter>()?.build()?;

    let name = match args.language {
        Language::Swift => "App",
        Language::Kotlin => "com.crux.examples.counter",
        Language::Typescript => "app",
    };

    let config = Config::builder(name, &args.output_dir)
        .add_extensions()
        .add_runtimes()
        .build();

    match args.language {
        Language::Swift => {
            info!("Typegen for Swift");
            typegen_app.swift(&config)?;
        }
        Language::Kotlin => {
            info!("Typegen for Kotlin");
            typegen_app.kotlin(&config)?;

            info!("Bindgen for Kotlin");
            let bindgen_args = BindgenArgsBuilder::default()
                .crate_name(env!("CARGO_PKG_NAME").to_string())
                .kotlin(&args.output_dir)
                .build()?;
            bindgen(&bindgen_args)?;
        }
        Language::Typescript => {
            info!("Typegen for TypeScript");
            typegen_app.typescript(&config)?;
        }
    }

    Ok(())
}
This is essentially boilerplate for a CLI we can use to run the binding generation and type generation. But it's also a place where you can customize how they work if you have some more advanced needs.
It uses the facet-based type generation from crux_core to scan the App for types which will cross
the FFI boundary, collects them, and then builds the code for whichever language was requested,
placing it in the specified output_dir directory.
We will call this CLI from the shell projects shortly.
Codegen, typegen, bindgen, which is it?
You'll hear these terms thrown around here and there in the docs, so it's worth clarifying what we mean:
bindgen – "bindings generation" – provides APIs in the foreign language to call the core's Rust FFI APIs.
For most platforms we use UniFFI, except for WebAssembly, where we use wasm_bindgen
typegen – "type generation" – The core's FFI interface operates on bytes, but both Rust and the languages we're targeting are generally strongly typed. To facilitate the serialization / deserialization, we generate type definition reflecting the Rust types from the core in the foreign language (Swift, Kotlin, TypeScript, ...), which all serialize consistently.
codegen – you guessed it, "code generation" – is the two things above combined.
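To make the typegen point concrete, here is a hand-rolled sketch of the byte-level agreement the generated types provide. The Event variants are the counter app's; the little-endian u32 variant index mimics a bincode-style enum layout and is for illustration only; the real layout comes from the generated serializers.

```rust
// Hand-rolled sketch: the FFI only moves bytes, so both sides must agree
// on the exact layout. We mimic the kind of layout a bincode-style
// serializer uses for an enum (a little-endian u32 variant index); in a
// real app the generated types take care of this.
#[derive(Debug, PartialEq)]
enum Event {
    Increment,
    Decrement,
    Reset,
}

fn encode(event: &Event) -> Vec<u8> {
    let index: u32 = match event {
        Event::Increment => 0,
        Event::Decrement => 1,
        Event::Reset => 2,
    };
    index.to_le_bytes().to_vec()
}

fn decode(bytes: &[u8]) -> Option<Event> {
    match u32::from_le_bytes(bytes.try_into().ok()?) {
        0 => Some(Event::Increment),
        1 => Some(Event::Decrement),
        2 => Some(Event::Reset),
        _ => None, // unknown variant index
    }
}

fn main() {
    // Round-trip: the Shell encodes an event, the Core decodes it
    let bytes = encode(&Event::Reset);
    assert_eq!(decode(&bytes), Some(Event::Reset));
}
```

If the two sides disagreed on even this small detail (index width, endianness, variant order), deserialization would silently produce the wrong event; that is exactly the class of bug typegen removes.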
Bindings code
Now we need to add the Rust side of the bindings into our code. Update your lib.rs to look like this:
// shared/src/lib.rs
mod app;
pub mod ffi;
pub use app::*;
pub use crux_core::Core;
#[cfg(feature = "uniffi")]
const _: () = assert!(
uniffi::check_compatible_version("0.29.4"),
"please use uniffi v0.29.4"
);
#[cfg(feature = "uniffi")]
uniffi::setup_scaffolding!();
This code uses our feature flags to conditionally initialize the UniFFI bindings and check the version in use.
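The version check line is worth a second look: a const item forces its assert to run during constant evaluation, so an incompatible UniFFI version fails the build rather than surfacing at runtime. Here is the same pattern in isolation; the version numbers and helper function are illustrative, not part of Crux.

```rust
// The compile-time assertion pattern in isolation. A `const _: () = ...`
// item is evaluated at compile time, so a violated invariant becomes a
// build error, not a runtime panic. Numbers and names here are illustrative.
const SUPPORTED: (u32, u32) = (0, 29);

const fn is_compatible(major: u32, minor: u32) -> bool {
    major == SUPPORTED.0 && minor >= SUPPORTED.1
}

// If this predicate were false, compilation would stop right here
const _: () = assert!(is_compatible(0, 29), "unsupported library version");

fn main() {
    assert!(is_compatible(0, 30));
    assert!(!is_compatible(1, 0));
}
```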
More importantly, it introduces a new ffi.rs module. Let's look at it more closely:
// shared/src/ffi.rs
use crux_core::{
    Core,
    bridge::{Bridge, EffectId},
};

use crate::Counter;

/// The main interface used by the shell
#[cfg_attr(feature = "uniffi", derive(uniffi::Object))]
#[cfg_attr(feature = "wasm_bindgen", wasm_bindgen::prelude::wasm_bindgen)]
pub struct CoreFFI {
    core: Bridge<Counter>,
}

impl Default for CoreFFI {
    fn default() -> Self {
        Self::new()
    }
}

#[cfg_attr(feature = "uniffi", uniffi::export)]
#[cfg_attr(feature = "wasm_bindgen", wasm_bindgen::prelude::wasm_bindgen)]
impl CoreFFI {
    #[cfg_attr(feature = "uniffi", uniffi::constructor)]
    #[cfg_attr(
        feature = "wasm_bindgen",
        wasm_bindgen::prelude::wasm_bindgen(constructor)
    )]
    #[must_use]
    pub fn new() -> Self {
        Self {
            core: Bridge::new(Core::new()),
        }
    }

    /// Send an event to the app and return the effects.
    /// # Panics
    /// If the event cannot be deserialized.
    /// In production you should handle the error properly.
    #[must_use]
    pub fn update(&self, data: &[u8]) -> Vec<u8> {
        let mut effects = Vec::new();
        match self.core.update(data, &mut effects) {
            Ok(()) => effects,
            Err(e) => panic!("{e}"),
        }
    }

    /// Resolve an effect and return the effects.
    /// # Panics
    /// If the `data` cannot be deserialized into an effect or the `effect_id` is invalid.
    /// In production you should handle the error properly.
    #[must_use]
    pub fn resolve(&self, id: u32, data: &[u8]) -> Vec<u8> {
        let mut effects = Vec::new();
        match self.core.resolve(EffectId(id), data, &mut effects) {
            Ok(()) => effects,
            Err(e) => panic!("{e}"),
        }
    }

    /// Get the current `ViewModel`.
    /// # Panics
    /// If the view cannot be serialized.
    /// In production you should handle the error properly.
    #[must_use]
    pub fn view(&self) -> Vec<u8> {
        let mut view_model = Vec::new();
        match self.core.view(&mut view_model) {
            Ok(()) => view_model,
            Err(e) => panic!("{e}"),
        }
    }
}
Broad strokes: we define a CoreFFI type, which holds a Bridge wrapping our Counter, and
provide implementations of the three API methods, taking and returning byte buffers.
The translation between Rust types and the byte buffers is the job of the bridge. (It also holds the effect requests inside the core under an id, which can be sent out to the Shell and used to resolve the effect, but more on that later.)
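To illustrate the resolve flow, here is a sketch of the idea of parking effect requests under numeric ids. This is not the real Bridge internals (which store typed resolve handles); a String stands in for the request.

```rust
use std::collections::HashMap;

// Illustrative sketch only, not the real Bridge internals: pending effect
// requests are parked under a numeric id, the id travels to the Shell with
// the request, and the request is removed when the Shell resolves it.
struct PendingEffects {
    next_id: u32,
    // The real bridge stores typed resolve handles; a String stands in here.
    pending: HashMap<u32, String>,
}

impl PendingEffects {
    fn new() -> Self {
        Self { next_id: 0, pending: HashMap::new() }
    }

    // Park an outgoing effect request; the Shell quotes this id back later
    fn register(&mut self, effect: &str) -> u32 {
        let id = self.next_id;
        self.next_id += 1;
        self.pending.insert(id, effect.to_string());
        id
    }

    // Resolving consumes the request; an unknown or already-resolved id fails
    fn resolve(&mut self, id: u32) -> Result<String, String> {
        self.pending
            .remove(&id)
            .ok_or_else(|| format!("invalid effect id {id}"))
    }
}

fn main() {
    let mut bridge = PendingEffects::new();
    let id = bridge.register("http get");
    assert!(bridge.resolve(id).is_ok());
    assert!(bridge.resolve(id).is_err()); // can't resolve the same id twice
}
```

This is why the Shell-side resolve call takes an id alongside the payload: the id tells the core which suspended piece of app logic the response belongs to.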
Notice the Shell is in charge of creating the instance of this type, so in theory your Shell can have several instances of the app if it wants to.
There are many attribute macros annotating the FFI type for uniffi and wasm_bindgen, which generate
the actual code making them available as FFIs. We recommend the respective documentation if you're
interested in the detail of how this works. The notable part is that both libraries have some level of support for
passing basic and structured data types, which we don't use; instead we serialize the data with Serde
and generate matching types with facet_generate to keep the support consistent across languages.
It's not essential for you to understand the detail of the above code now. You won't need to change it, unless you're doing something fairly advanced, by which time you'll understand it.
Platform native part
Okay, with that plumbing in place, the Core part of adding a shell is complete. It's not a one-liner, but you will only set this up once and most likely won't touch it again; still, having the ability to, should you need it, is important.
Now we can proceed to the actual shell for your platform of choice:
- iOS with Swift and SwiftUI
- Android with Kotlin and Jetpack Compose
- Web with TypeScript, React and Next.js
- Rust in WebAssembly with Leptos
iOS/macOS with SwiftUI
In this section, we'll set up Xcode to build and run the simple counter app we built so far, targeting both iOS and macOS from a single project.
We think that using XcodeGen may be the simplest way to create an Xcode project to build and run a simple Apple app that calls into a shared core.
If you'd rather set up Xcode manually, you can do that, but most of this section will still apply. You just need to add the Swift package dependencies into your project by hand.
When we use Crux to build Apple apps, the Core API bindings are generated in Swift (with C headers) using Mozilla's UniFFI.
The shared core, which we built in previous chapters, is compiled to a static library and linked into the app binary.
The shared types are generated by Crux as a Swift package, which we can add to our project as a dependency. The Swift code to serialize and deserialize these types across the boundary is also generated by Crux as Swift packages.
Compile our Rust shared library
When we build our app, we also want to build the Rust core as a static library so that it can be linked into the binary that we're going to ship.
Other than Xcode and the Apple developer tools, we will use
cargo-swift to generate a
Swift package for our shared library, which we can add in Xcode.
To match our current version of UniFFI, we need to install version 0.9 of cargo-swift. You can install it with
cargo install cargo-swift --version '=0.9'
To run the various steps, we'll also use the Just task runner.
cargo install just
Let's write the Justfile and we can look at what happens. Here are the key tasks (the full Justfile also includes linting, CI and cleanup targets):
# /apple/Justfile
# generates Swift types via codegen binary
typegen:
cargo run --package shared --bin codegen \
--features codegen,facet_typegen \
-- --language swift --output-dir generated
# builds the shared library as a Swift package using cargo-swift
package:
cargo swift package \
--name Shared \
--platforms ios macos \
--lib-type static \
--features uniffi
# rebuilds the Xcode project from project.yml
generate-project:
xcodegen
# generates types, builds shared package, and regenerates Xcode project
generate: typegen package generate-project
# builds the project (generates first)
build: generate
xcodebuild \
-project CounterApp.xcodeproj \
-scheme CounterApp-macOS \
-configuration Debug \
build
# local development workflow
dev: build
The main task is dev, which we'll use shortly. It runs build,
which in turn runs typegen, package and generate-project.
typegen will use the codegen CLI we
prepared earlier, and package will use
cargo swift to create a Shared package with our app binary and
the bindgen code. That package will be our Swift interface to the
core.
Finally, generate-project will run xcodegen to give us an Xcode
project file. Xcode project files are famously fragile and difficult to
version control, so generating them from a less arcane source of truth
seems like a good idea (yes, even if that source of truth is YAML).
Here's the project file:
# /apple/project.yml
name: CounterApp
packages:
Shared:
path: ./generated/Shared
App:
path: ./generated/App
options:
bundleIdPrefix: com.crux.examples.counter
attributes:
BuildIndependentTargetsInParallel: true
targetTemplates:
app:
type: application
sources:
- path: CounterApp
excludes:
- "Info-*.plist"
scheme:
management:
shared: true
dependencies:
- package: Shared
- package: App
targets:
CounterApp-iOS:
templates: [app]
platform: iOS
deploymentTarget: 18.0
info:
path: CounterApp/Info-iOS.plist
properties:
UISupportedInterfaceOrientations:
- UIInterfaceOrientationPortrait
- UIInterfaceOrientationLandscapeLeft
- UIInterfaceOrientationLandscapeRight
UILaunchScreen: {}
CounterApp-macOS:
templates: [app]
platform: macOS
deploymentTarget: "15.0"
info:
path: CounterApp/Info-macOS.plist
properties:
NSSupportsAutomaticGraphicsSwitching: true
settings:
OTHER_LDFLAGS: [-w]
ENABLE_USER_SCRIPT_SANDBOXING: NO
Nothing too special, other than linking a couple of packages and using them as dependencies.
With that, you can run
just dev
Simple - just dev! So what exactly happened?
The core was built, including the FFI and the extra CLI binary, which was then called
to generate Swift code, and that code was in turn packaged as a Swift package. If you
look in the generated directory, you'll see two Swift packages - Shared and App,
just like we asked for in project.yml. The Shared package has our app as a static lib plus all the
generated code for our FFI bindings, and the App package has the key types we will need.
No need to spend much time in here, but this is all the low-level glue code sorted out. Now we need to actually build some UI and we can run our app.
Building the UI
To add some UI, we need to do three things: wrap the core with a simple Swift interface, build a basic View to give us something to put on screen, and use that view as our main app view.
Wrap the core
The generated code still works with byte buffers, so let's give ourselves a nicer interface for it:
// apple/CounterApp/core.swift
import App
import Foundation
import Shared
@MainActor
class Core: ObservableObject {
@Published var view: ViewModel
private var core: CoreFfi
init() {
self.core = CoreFfi()
// swiftlint:disable:next force_try
self.view = try! .bincodeDeserialize(input: [UInt8](core.view()))
}
func update(_ event: Event) {
// swiftlint:disable:next force_try
let effects = [UInt8](core.update(data: Data(try! event.bincodeSerialize())))
// swiftlint:disable:next force_try
let requests: [Request] = try! .bincodeDeserialize(input: effects)
for request in requests {
processEffect(request)
}
}
func processEffect(_ request: Request) {
switch request.effect {
case .render:
DispatchQueue.main.async {
// swiftlint:disable:next force_try
self.view = try! .bincodeDeserialize(input: [UInt8](self.core.view()))
}
}
}
}
This is mostly just serialization code. But the processEffect method is interesting.
That is where effect execution goes. At the moment the switch statement has a single
lonely case updating the view model whenever the .render variant is requested,
but you can add more in here later, as you expand your Effect type.
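On the core side, that switch mirrors the app's Effect enum, one arm per variant. As a purely hypothetical sketch (the Http variant and its shape are illustrative, not the real crux_http API), the enum the shell tracks grows like this:

```rust
// Hypothetical sketch of a core-side Effect enum the shell's switch mirrors.
// Render is the only effect used in this chapter; the Http variant and its
// shape are illustrative only, not the real crux_http API.
#[derive(Debug, PartialEq)]
enum Effect {
    Render,
    Http { url: String },
}

// One arm per effect: exactly the structure the Swift switch grows into
fn describe(effect: &Effect) -> &'static str {
    match effect {
        Effect::Render => "re-read the view model and update the UI",
        Effect::Http { .. } => "perform the request, then resolve with the response",
    }
}

fn main() {
    assert_eq!(
        describe(&Effect::Render),
        "re-read the view model and update the UI"
    );
}
```

Because the match (in Rust) and the switch (in Swift) are both exhaustive over the same generated type, adding a variant to the core's Effect makes the compiler point at every shell-side handler that needs a new arm.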
Build a basic view
Xcode should've generated a ContentView file for you in apple/CounterApp/ContentView.swift.
Change it to look like this:
import App
import SwiftUI
struct ContentView: View {
@ObservedObject var core: Core
var body: some View {
VStack {
Image(systemName: "globe")
.imageScale(.large)
.foregroundColor(.accentColor)
Text(core.view.count)
HStack {
ActionButton(label: "Reset", color: .red) {
core.update(.reset)
}
ActionButton(label: "Inc", color: .green) {
core.update(.increment)
}
ActionButton(label: "Dec", color: .yellow) {
core.update(.decrement)
}
}
}
}
}
struct ActionButton: View {
var label: String
var color: Color
var action: () -> Void
init(label: String, color: Color, action: @escaping () -> Void) {
self.label = label
self.color = color
self.action = action
}
var body: some View {
Button(action: action) {
Text(label)
.fontWeight(.bold)
.font(.body)
.padding(EdgeInsets(top: 10, leading: 15, bottom: 10, trailing: 15))
.background(color)
.cornerRadius(10)
.foregroundColor(.white)
.padding()
}
}
}
#Preview {
ContentView(core: Core())
}
And finally, make sure apple/CounterApp/CounterApp.swift looks like this to use
the ContentView:
import SwiftUI
@main
struct CounterApp: App {
var body: some Scene {
WindowGroup {
ContentView(core: Core())
}
}
}
The one interesting part of this is the @ObservedObject var core: Core. Since the Core is
an ObservableObject, we can subscribe to it to refresh our view. And since we've marked the view
property as @Published, the View will redraw whenever we set it.
The view then simply shows the core.view.count in a Text and whenever we press a button, we directly
call core.update() with the appropriate action.
You should then be able to run the app in the simulator, on an iPhone, or as a macOS app.
Android — Kotlin and Jetpack Compose
This section has not been fully updated to match the rest of the documentation and some parts may not match how Crux works any more.
Bear with us while we update — use the iOS/macOS section as the most up-to-date template to follow.
When we use Crux to build Android apps, the Core API bindings are generated in Kotlin using Mozilla's UniFFI.
The shared core (that contains our app's behaviour) is compiled to a dynamic library, using Mozilla's Rust gradle plugin for Android and the Android NDK. The library is loaded at runtime using Java Native Access.
The shared types are generated by Crux as Kotlin packages, which we
can add to our Android project using sourceSets. The Kotlin code
to serialize and deserialize these types across the boundary is also
generated by Crux.
These are the steps to set up Android Studio to build and run a simple Android app that calls into a shared core.
We want to make setting up Android Studio to work with Crux really easy. As time progresses we will try to simplify and automate as much as possible, but at the moment there is some manual configuration to do. This only needs doing once, so we hope it's not too much trouble. If you know of any better ways than those we describe below, please either raise an issue (or a PR) at https://github.com/redbadger/crux.
This walkthrough uses Mozilla's excellent Rust gradle plugin
for Android, which uses Python. However, the pipes module has been removed from Python (as of Python 3.13),
so you may encounter an error linking your shared library.
If you hit this problem, you can either:
- use an older Python (<3.13)
- wait for a fix (see this issue)
- or use a different plugin; there is a PR in the Crux repo that
explores the use of
cargo-ndk and the cargo-ndk-android plugin, which may be useful.
Create an Android App
The first thing we need to do is create a new Android app in Android Studio.
Open Android Studio and create a new project, for "Phone and Tablet", of type "Empty Activity". In this walk-through, we'll call it "SimpleCounter"
- "Name":
SimpleCounter - "Package name":
com.example.counter - "Save Location": a directory called
Androidat the root of our monorepo - "Minimum SDK"
API 34 - "Build configuration language":
Kotlin DSL (build.gradle.kts)
Your repo's directory structure might now look something like this (some files elided):
.
├── Android
│ ├── app
│ │ ├── build.gradle.kts
│ │ └── src
│ │ └── main
│ │ ├── AndroidManifest.xml
│ │ └── java/com/crux/examples/counter
│ │ └── MainActivity.kt
│ ├── build.gradle.kts
│ ├── gradle.properties
│ ├── settings.gradle.kts
│ └── shared
│ └── build.gradle.kts
├── Cargo.lock
├── Cargo.toml
└── shared
├── Cargo.toml
├── uniffi.toml
└── src
├── app.rs
├── bin
│ └── codegen.rs
├── ffi.rs
└── lib.rs
Add a Kotlin Android Library
This shared Android library (aar) is going to wrap our shared Rust library.
Under File -> New -> New Module, choose "Android Library" and give it the "Module name"
shared. Set the "Package name" to match the one from your
/shared/uniffi.toml, which in this example is com.example.counter.shared.
Again, set the "Build configuration language" to Kotlin DSL (build.gradle.kts).
For more information on how to add an Android library see https://developer.android.com/studio/projects/android-library.
We can now add this library as a dependency of our app.
Edit the app's build.gradle.kts (/Android/app/build.gradle.kts) to look like
this:
import org.jetbrains.kotlin.gradle.dsl.JvmTarget
plugins {
alias(libs.plugins.android.application)
alias(libs.plugins.kotlin.android)
alias(libs.plugins.kotlin.compose)
}
android {
namespace = "com.crux.examples.counter"
compileSdk {
version = release(36)
}
defaultConfig {
applicationId = "com.crux.examples.counter"
minSdk = 34
targetSdk = 36
versionCode = 1
versionName = "1.0"
testInstrumentationRunner = "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
isMinifyEnabled = false
proguardFiles(
getDefaultProguardFile("proguard-android-optimize.txt"),
"proguard-rules.pro"
)
}
}
compileOptions {
sourceCompatibility = JavaVersion.VERSION_11
targetCompatibility = JavaVersion.VERSION_11
}
kotlin {
compilerOptions {
jvmTarget = JvmTarget.JVM_11
}
}
buildFeatures {
compose = true
}
}
dependencies {
// our shared library
implementation(project(":shared"))
// added dependencies
implementation(libs.lifecycle.viewmodel.compose)
// original dependencies
implementation(libs.androidx.core.ktx)
implementation(libs.androidx.lifecycle.runtime.ktx)
implementation(libs.androidx.activity.compose)
implementation(platform(libs.androidx.compose.bom))
implementation(libs.androidx.compose.ui)
implementation(libs.androidx.compose.ui.graphics)
implementation(libs.androidx.compose.ui.tooling.preview)
implementation(libs.androidx.compose.material3)
testImplementation(libs.junit)
androidTestImplementation(libs.androidx.junit)
androidTestImplementation(libs.androidx.espresso.core)
androidTestImplementation(platform(libs.androidx.compose.bom))
androidTestImplementation(libs.androidx.compose.ui.test.junit4)
debugImplementation(libs.androidx.compose.ui.tooling)
debugImplementation(libs.androidx.compose.ui.test.manifest)
}
In our Gradle files, we are referencing a "Version Catalog" to manage our dependency versions, so you will need to ensure this is kept up to date.
Our catalog (Android/gradle/libs.versions.toml) will end up looking like this:
[versions]
agp = "8.13.2"
kotlin = "2.3.0"
coreKtx = "1.17.0"
junit = "4.13.2"
junitVersion = "1.3.0"
espressoCore = "3.7.0"
lifecycleRuntimeKtx = "2.10.0"
activityCompose = "1.12.3"
composeBom = "2026.01.01"
jna = "5.18.1"
lifecycle = "2.10.0"
rustAndroid = "0.9.6"
[libraries]
androidx-core-ktx = { group = "androidx.core", name = "core-ktx", version.ref = "coreKtx" }
junit = { group = "junit", name = "junit", version.ref = "junit" }
androidx-junit = { group = "androidx.test.ext", name = "junit", version.ref = "junitVersion" }
androidx-espresso-core = { group = "androidx.test.espresso", name = "espresso-core", version.ref = "espressoCore" }
androidx-lifecycle-runtime-ktx = { group = "androidx.lifecycle", name = "lifecycle-runtime-ktx", version.ref = "lifecycleRuntimeKtx" }
androidx-activity-compose = { group = "androidx.activity", name = "activity-compose", version.ref = "activityCompose" }
androidx-compose-bom = { group = "androidx.compose", name = "compose-bom", version.ref = "composeBom" }
androidx-compose-ui = { group = "androidx.compose.ui", name = "ui" }
androidx-compose-ui-graphics = { group = "androidx.compose.ui", name = "ui-graphics" }
androidx-compose-ui-tooling = { group = "androidx.compose.ui", name = "ui-tooling" }
androidx-compose-ui-tooling-preview = { group = "androidx.compose.ui", name = "ui-tooling-preview" }
androidx-compose-ui-test-manifest = { group = "androidx.compose.ui", name = "ui-test-manifest" }
androidx-compose-ui-test-junit4 = { group = "androidx.compose.ui", name = "ui-test-junit4" }
androidx-compose-material3 = { group = "androidx.compose.material3", name = "material3" }
jna = { module = "net.java.dev.jna:jna", version.ref = "jna" }
lifecycle-viewmodel-compose = { module = "androidx.lifecycle:lifecycle-viewmodel-compose", version.ref = "lifecycle" }
[plugins]
android-application = { id = "com.android.application", version.ref = "agp" }
kotlin-android = { id = "org.jetbrains.kotlin.android", version.ref = "kotlin" }
kotlin-compose = { id = "org.jetbrains.kotlin.plugin.compose", version.ref = "kotlin" }
android-library = { id = "com.android.library", version.ref = "agp" }
rust-android = { id = "org.mozilla.rust-android-gradle.rust-android", version.ref = "rustAndroid" }
The Rust shared library
We'll use the following tools to incorporate our Rust shared library into the Android library added above. This includes compiling and linking the Rust dynamic library and generating the runtime bindings and the shared types.
- The Android NDK
- Mozilla's Rust gradle plugin
for Android
- This plugin depends on Python 3; make sure you have a version installed
- Java Native Access
- UniFFI to generate the Kotlin bindings
The NDK can be installed from "Tools, SDK Manager, SDK Tools" in Android Studio.
Let's get started.
Add the four Rust Android targets to your toolchain:
$ rustup target add aarch64-linux-android armv7-linux-androideabi i686-linux-android x86_64-linux-android
Edit the project's build.gradle.kts (/Android/build.gradle.kts) to look like
this:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
plugins {
alias(libs.plugins.android.application) apply false
alias(libs.plugins.kotlin.android) apply false
alias(libs.plugins.kotlin.compose) apply false
alias(libs.plugins.android.library) apply false
alias(libs.plugins.rust.android) apply false
}
Edit the library's build.gradle.kts (/Android/shared/build.gradle.kts) to look
like this:
import com.android.build.gradle.tasks.MergeSourceSetFolders
import com.nishtahir.CargoBuildTask
import com.nishtahir.CargoExtension
import org.jetbrains.kotlin.gradle.dsl.JvmTarget
plugins {
alias(libs.plugins.android.library)
alias(libs.plugins.kotlin.android)
alias(libs.plugins.rust.android)
}
android {
namespace = "com.crux.examples.counter"
compileSdk {
version = release(36)
}
ndkVersion = "29.0.14206865"
defaultConfig {
minSdk = 34
}
compileOptions {
sourceCompatibility = JavaVersion.VERSION_11
targetCompatibility = JavaVersion.VERSION_11
}
kotlin {
compilerOptions {
jvmTarget = JvmTarget.JVM_11
}
}
sourceSets {
getByName("main") {
// types are now generated in kotlin
kotlin.srcDirs("${projectDir}/../generated")
}
}
}
dependencies {
implementation(libs.jna) {
artifact {
type = "aar"
}
}
}
extensions.configure<CargoExtension>("cargo") {
// workspace, so build at root, with `--package shared`
module = "../.."
libname = "shared"
profile = "debug"
// these are the four recommended targets for Android that will ensure your library works on all mainline android devices
// make sure you have included the rust toolchain for each of these targets:
// `rustup target add aarch64-linux-android armv7-linux-androideabi i686-linux-android x86_64-linux-android`
targets = listOf("arm", "arm64", "x86", "x86_64")
extraCargoBuildArguments = listOf("--package", "shared", "--features", "uniffi")
cargoCommand = System.getProperty("user.home") + "/.cargo/bin/cargo"
rustcCommand = System.getProperty("user.home") + "/.cargo/bin/rustc"
pythonCommand = "python3"
}
afterEvaluate {
// The `cargoBuild` task isn't available until after evaluation.
android.libraryVariants.configureEach {
var productFlavor = ""
productFlavors.forEach { flavor ->
productFlavor += flavor.name.replaceFirstChar { char -> char.uppercaseChar() }
}
val buildType = buildType.name.replaceFirstChar { char -> char.uppercaseChar() }
tasks.named("generate${productFlavor}${buildType}Assets") {
dependsOn(tasks.named("cargoBuild"))
}
// The dependsOn below is needed until https://github.com/mozilla/rust-android-gradle/issues/85 is resolved; the fix came from issue #118
tasks.withType<CargoBuildTask>().forEach { buildTask ->
tasks.withType<MergeSourceSetFolders>().configureEach {
inputs.dir(
File(
File(layout.buildDirectory.asFile.get(), "rustJniLibs"),
buildTask.toolchain?.folder!!
)
)
dependsOn(buildTask)
}
}
}
}
// The dependsOn below is needed until https://github.com/mozilla/rust-android-gradle/issues/85 is resolved; the fix came from issue #118
tasks.matching { it.name.matches(Regex("merge.*JniLibFolders")) }.configureEach {
inputs.dir(File(layout.buildDirectory.asFile.get(), "rustJniLibs/android"))
dependsOn("cargoBuild")
}
You will need to set the ndkVersion to one you have installed. Go to "Tools, SDK Manager, SDK Tools" and check "Show Package Details" to find your installed version, or to install the version matching the build.gradle.kts above.
If you now build your project you should see the newly built shared library object file.
$ ls --tree Android/shared/build/rustJniLibs
Android/shared/build/rustJniLibs
└── android
    ├── arm64-v8a
    │   └── libshared.so
    ├── armeabi-v7a
    │   └── libshared.so
    ├── x86
    │   └── libshared.so
    └── x86_64
        └── libshared.so
You should also see the generated types in the Android/generated
folder — note that the sourceSets directive in the shared library
gradle file (above) allows us to build our shared library against
these generated types.
$ ls --tree Android/generated
Android/generated
└── com
├── crux
│ └── examples
│ └── simplecounter
│ ├── Requests.kt
│ ├── shared.kt
│ └── Simplecounter.kt
└── novi
├── bincode
│ ├── BincodeDeserializer.kt
│ └── BincodeSerializer.kt
└── serde
├── BinaryDeserializer.kt
├── BinarySerializer.kt
├── ...
└── Unsigned.kt
Create some UI and run in the emulator
Wrap the core to support capabilities
First, let's add some boilerplate code to wrap our core and handle the
capabilities that we are using. For this example, we only need to support the
Render capability, which triggers a render of the UI.
Let's create a file called Core, via "File, New, Kotlin Class/File", choosing "File".
This code that wraps the core only needs to be written once — it only grows when we need to support additional capabilities.
Edit Android/app/src/main/java/com/crux/examples/counter/Core.kt to look like
the following. This code sends our (UI-generated) events to the core, and
handles any effects that the core asks for. In this simple example, we aren't
calling any HTTP APIs or handling any side effects other than rendering the UI,
so we just handle this render effect by updating the published view model from
the core.
package com.crux.examples.counter
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.setValue
open class Core : androidx.lifecycle.ViewModel() {
private var core: CoreFfi = CoreFfi()
var view: ViewModel by mutableStateOf(
ViewModel.bincodeDeserialize(core.view())
)
private set
fun update(event: Event) {
val effects = core.update(event.bincodeSerialize())
val requests = Requests.bincodeDeserialize(effects)
for (request in requests) {
processEffect(request)
}
}
private fun processEffect(request: Request) {
when (val effect = request.effect) {
is Effect.Render -> {
this.view = ViewModel.bincodeDeserialize(core.view())
}
}
}
}
That when statement, above, is where you would handle any other
effects that your core might ask for. For example, if your core needs
to make an HTTP request, you would handle that here.
Edit /Android/app/src/main/java/com/crux/examples/counter/MainActivity.kt to
look like the following:
package com.crux.examples.counter
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.foundation.layout.Arrangement
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.Row
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Button
import androidx.compose.material3.ButtonDefaults
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.Surface
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.rememberCoroutineScope
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.tooling.preview.Preview
import androidx.compose.ui.unit.dp
import androidx.lifecycle.viewmodel.compose.viewModel
import com.crux.examples.counter.ui.theme.CounterTheme
import kotlinx.coroutines.launch
class MainActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
CounterTheme {
Surface(
modifier = Modifier.fillMaxSize(),
color = MaterialTheme.colorScheme.background
) { View() }
}
}
}
}
@Composable
fun View(core: Core = viewModel()) {
val scope = rememberCoroutineScope()
Column(
horizontalAlignment = Alignment.CenterHorizontally,
verticalArrangement = Arrangement.Center,
modifier = Modifier.fillMaxSize().padding(10.dp),
) {
Text(text = core.view.count, modifier = Modifier.padding(10.dp))
Row(horizontalArrangement = Arrangement.spacedBy(10.dp)) {
Button(
onClick = { scope.launch { core.update(Event.RESET) } },
colors =
ButtonDefaults.buttonColors(
containerColor = MaterialTheme.colorScheme.error
)
) { Text(text = "Reset", color = Color.White) }
Button(
onClick = { scope.launch { core.update(Event.INCREMENT) } },
colors =
ButtonDefaults.buttonColors(
containerColor = MaterialTheme.colorScheme.primary
)
) { Text(text = "Increment", color = Color.White) }
Button(
onClick = { scope.launch { core.update(Event.DECREMENT) } },
colors =
ButtonDefaults.buttonColors(
containerColor = MaterialTheme.colorScheme.secondary
)
) { Text(text = "Decrement", color = Color.White) }
}
}
}
@Preview(showBackground = true)
@Composable
fun DefaultPreview() {
CounterTheme { View() }
}
Web — TypeScript and React (Next.js)
These are the steps to set up and run a simple TypeScript Web app that calls into a shared core.
This walk-through assumes you have already set up the
shared library and codegen as described in
Shared core and types.
Create a Next.js App
For this walk-through, we'll use the
pnpm package manager for no
reason other than we like it the most!
Let's create a simple Next.js app for TypeScript,
using pnpx (from pnpm). You can probably accept
the defaults.
pnpx create-next-app@latest
Compile our Rust shared library
When we build our app, we also want to compile the Rust core to WebAssembly so that it can be referenced from our code.
To do this, we'll use
wasm-pack,
which you can install like this:
# with homebrew
brew install wasm-pack
# or directly
curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
Now that we have wasm-pack installed, we can build
our shared library to WebAssembly for the browser.
wasm-pack build \
--target web \
--out-dir ../web-nextjs/generated/pkg \
../shared \
--features wasm_bindgen
Generate the Shared Types
To generate the shared types for TypeScript, we use the codegen CLI we prepared earlier:
cargo run --package shared --bin codegen \
--features codegen,facet_typegen \
-- --language typescript \
--output-dir generated/types
Both the Wasm package and the generated types are
referenced as local dependencies in package.json:
{
"dependencies": {
"shared": "file:generated/pkg",
"shared_types": "file:generated/types"
}
}
Install the dependencies:
pnpm install
Create some UI
Counter example
A simple app that increments, decrements and resets a counter.
Wrap the core to handle effects
First, let's add some boilerplate code to wrap our core
and handle the effects that it produces. For this
example, we only need to support the Render effect,
which triggers a render of the UI.
This code that wraps the core only needs to be written once — it only grows when we need to support additional effects.
Edit src/app/core.ts to look like the following.
This code sends our (UI-generated) events to the core,
and handles any effects that the core asks for. In this
example, we aren't calling any HTTP APIs or handling
any side effects other than rendering the UI, so we
just handle this render effect by updating the
component's view hook with the core's ViewModel.
Notice that we have to serialize and deserialize the data that we pass between the core and the shell. This is because the core is running in a separate WebAssembly instance, and so we can't just pass the data directly.
import type { Dispatch, SetStateAction } from "react";
import { CoreFFI } from "shared";
import type { Effect, Event } from "shared_types/app";
import { EffectVariantRender, Request, ViewModel } from "shared_types/app";
import { BincodeDeserializer, BincodeSerializer } from "shared_types/bincode";
import init_core from "shared/shared";
export class Core {
core: CoreFFI | null = null;
initializing: Promise<void> | null = null;
setState: Dispatch<SetStateAction<ViewModel>>;
constructor(setState: Dispatch<SetStateAction<ViewModel>>) {
// Don't initialize CoreFFI here - wait for WASM to be loaded
this.setState = setState;
}
initialize(shouldLoad: boolean): Promise<void> {
if (this.core) {
return Promise.resolve();
}
if (!this.initializing) {
const load = shouldLoad ? init_core() : Promise.resolve();
this.initializing = load
.then(() => {
this.core = new CoreFFI();
this.setState(this.view());
})
.catch((error) => {
this.initializing = null;
console.error("Failed to initialize wasm core:", error);
});
}
return this.initializing;
}
view(): ViewModel {
if (!this.core) {
throw new Error("Core not initialized. Call initialize() first.");
}
return deserializeView(this.core.view());
}
update(event: Event) {
if (!this.core) {
throw new Error("Core not initialized. Call initialize() first.");
}
const serializer = new BincodeSerializer();
event.serialize(serializer);
const effects = this.core.update(serializer.getBytes());
const requests = deserializeRequests(effects);
for (const { effect } of requests) {
this.processEffect(effect);
}
}
private processEffect(effect: Effect) {
switch (effect.constructor) {
case EffectVariantRender: {
this.setState(this.view());
break;
}
}
}
}
function deserializeRequests(bytes: Uint8Array): Request[] {
const deserializer = new BincodeDeserializer(bytes);
const len = deserializer.deserializeLen();
const requests: Request[] = [];
for (let i = 0; i < len; i++) {
const request = Request.deserialize(deserializer);
requests.push(request);
}
return requests;
}
function deserializeView(bytes: Uint8Array): ViewModel {
return ViewModel.deserialize(new BincodeDeserializer(bytes));
}
That switch statement, above, is where you would
handle any other effects that your core might ask for.
For example, if your core needs to make an HTTP
request, you would handle that here. To see an example
of this, take a look at the
counter example
in the Crux repository.
Create a component to render the UI
Edit src/app/page.tsx to look like the following.
This code loads the WebAssembly core and sends it an
initial event. Notice that we pass the setState hook
to the update function so that we can update the state
in response to a render effect from the core.
"use client";
import type { NextPage } from "next";
import { useEffect, useRef, useState } from "react";
import {
ViewModel,
EventVariantReset,
EventVariantIncrement,
EventVariantDecrement,
} from "shared_types/app";
import { Core } from "./core";
const Home: NextPage = () => {
const [view, setView] = useState(new ViewModel(""));
const core = useRef(new Core(setView));
useEffect(() => {
void core.current.initialize(true);
}, []);
return (
<main>
<section className="box container has-text-centered m-5">
<p className="is-size-5">{view.count}</p>
<div className="buttons section is-centered">
<button
className="button is-primary is-danger"
onClick={() => core.current.update(new EventVariantReset())}
>
{"Reset"}
</button>
<button
className="button is-primary is-success"
onClick={() => core.current.update(new EventVariantIncrement())}
>
{"Increment"}
</button>
<button
className="button is-primary is-warning"
onClick={() => core.current.update(new EventVariantDecrement())}
>
{"Decrement"}
</button>
</div>
</section>
</main>
);
};
export default Home;
Now all we need is some CSS. First add the Bulma
package, and then import it in layout.tsx.
pnpm add bulma
import "bulma/css/bulma.min.css";
import type { Metadata } from "next";
import { Inter } from "next/font/google";
const inter = Inter({ subsets: ["latin"] });
export const metadata: Metadata = {
title: "Crux Simple Counter Example",
description: "Rust Core, TypeScript Shell (NextJS)",
};
export default function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
return (
<html lang="en">
<body className={inter.className}>{children}</body>
</html>
);
}
Build and serve our app
We can build our app, and serve it for the browser, in one simple step.
pnpm dev
Web — Rust and Leptos
These are the steps to set up and run a simple Rust Web app that calls into a shared core.
This walk-through assumes you have already set up the
shared library and codegen as described in
Shared core and types.
There are many frameworks available for writing Web applications in Rust. Here we're choosing Leptos for this walk-through as a way to demonstrate how Crux can work with web frameworks that use fine-grained reactivity rather than the conceptual full re-rendering of React. However, a similar setup would work for other frameworks that compile to WebAssembly.
Create a Leptos App
Our Leptos app is just a new Rust project, which we
can create with Cargo. For this example we'll call it
web-leptos.
cargo new web-leptos
We'll also want to add this new project to our Cargo
workspace, by editing the root Cargo.toml file.
[workspace]
members = ["shared", "web-leptos"]
Now we can cd into the web-leptos directory and
start fleshing out our project. Let's add some
dependencies to web-leptos/Cargo.toml.
[package]
name = "web-leptos"
version = "0.1.0"
authors.workspace = true
repository.workspace = true
edition.workspace = true
license.workspace = true
keywords.workspace = true
rust-version.workspace = true
[dependencies]
shared = { path = "../shared" }
leptos = { version = "0.8.17", features = ["csr"] }
[lints]
workspace = true
If using nightly Rust, you can enable the "nightly" feature for Leptos. When you do this, the signals become functions that can be called directly.
However in our examples we are using the stable
channel and so have to use the get() and update()
functions explicitly.
We'll also need a file called index.html, to serve
our app.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Leptos Counter</title>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bulma@0.9.4/css/bulma.min.css">
</head>
<body></body>
</html>
Create some UI
We will use the
counter example,
which has a shared library that will work with the
following example code.
Counter example
A simple app that increments, decrements and resets a counter.
Wrap the core to handle effects
First, let's add some boilerplate code to wrap our core
and handle the effects that it produces. For this
example, we only need to support the Render effect,
which triggers a render of the UI.
This code that wraps the core only needs to be written once — it only grows when we need to support additional effects.
Edit src/core.rs to look like the following. This
code sends our (UI-generated) events to the core, and
handles any effects that the core asks for. In this
example, we aren't calling any HTTP APIs or handling
any side effects other than rendering the UI, so we
just handle this render effect by sending the new
ViewModel to the relevant Leptos signal.
Also note that because both our core and our shell are written in Rust (and run in the same memory space), we do not need to serialize and deserialize the data that we pass between them. We can just pass the data directly.
use std::rc::Rc;
use leptos::prelude::{Update as _, WriteSignal};
use shared::{Counter, Effect, Event, ViewModel};
pub type Core = Rc<shared::Core<Counter>>;
pub fn new() -> Core {
Rc::new(shared::Core::new())
}
pub fn update(core: &Core, event: Event, render: WriteSignal<ViewModel>) {
for effect in &core.process_event(event) {
process_effect(core, effect, render);
}
}
pub fn process_effect(core: &Core, effect: &Effect, render: WriteSignal<ViewModel>) {
match effect {
Effect::Render(_) => {
render.update(|view| *view = core.view());
}
}
}
That match statement, above, is where you would
handle any other effects that your core might ask for.
For example, if your core needs to make an HTTP
request, you would handle that here. To see an example
of this, take a look at the
counter example
in the Crux repository.
Edit src/main.rs to look like the following. This
code creates two signals — one to update the view
(which starts off with the core's current view), and
the other to capture events from the UI (which starts
off by sending the reset event). We also create an
effect that sends these events into the core whenever
they are raised.
mod core;
use leptos::prelude::*;
use shared::Event;
#[component]
fn RootComponent() -> impl IntoView {
let core = core::new();
let (view, render) = signal(core.view());
let (event, set_event) = signal(Event::Reset);
Effect::new(move |_| {
core::update(&core, event.get(), render);
});
view! {
<section class="box container has-text-centered m-5">
<p class="is-size-5">{move || view.get().count}</p>
<div class="buttons section is-centered">
<button class="button is-primary is-danger"
on:click=move |_| set_event.set(Event::Reset)
>
{"Reset"}
</button>
<button class="button is-primary is-success"
on:click=move |_| set_event.set(Event::Increment)
>
{"Increment"}
</button>
<button class="button is-primary is-warning"
on:click=move |_| set_event.set(Event::Decrement)
>
{"Decrement"}
</button>
</div>
</section>
}
}
fn main() {
leptos::mount::mount_to_body(|| {
view! { <RootComponent /> }
});
}
Build and serve our app
The easiest way to compile the app to WebAssembly and
serve it in our web page is to use
trunk, which we can install
with Homebrew
(brew install trunk) or Cargo
(cargo install trunk).
We can build our app, serve it and open it in our browser, in one simple step.
trunk serve --open
The Weather App
So far, we've explained the basics using a very simple counter app. So simple, in fact, that it barely demonstrated any of the key features of Crux.
Time to ditch the training wheels and dive into something real. We'll need to demonstrate a few key concepts: how the Elm architecture works at a larger scale, how we manage navigation in a multi-screen app, and, as the main focus, managed effects and capabilities. To that end, we'll need an app that does enough interesting things while staying reasonably small.
So we're going to build a Weather app. It needs to call an API, store data locally, and use location APIs to show local weather. That's plenty of effects for us to play with and see how Crux supports this.
Here's the same app — one shared core — running on iOS, Android, macOS, and the web:
The app works like a system weather utility: you get your local weather, search for locations, and save favourites — all backed by a real API.
You can look at the full example code in the Crux Github repo, but we'll walk through the key parts. As before, we're going to start with the core and once we have it, look at the shells.
Unlike in Part I, we will not build the app step by step; that would be very long and repetitive. Instead, we will do more of a code review of the key parts.
Before we dive in, though, let's quickly establish some foundations about the app architecture Crux follows, known most widely as the Elm architecture, after the language which popularised it.
Elm Architecture
Now that we've had a bit of a feel for what writing Crux apps is like, we'll add more context to the different components and the overall architecture of Crux apps. The architecture is heavily inspired by Elm, and if you'd like to compare, the Architecture page of their guide is an excellent starting point.
Event Sourcing as a model for UI
User Interface is fundamentally event-driven. Unlike batch or stream processing, all changes in apps with UI are driven by events happening in the outside world, most commonly the user interface itself – the user touching the screen, typing on a keyboard, executing a CLI command, etc. In response, the app updates its internal state, changes what's shown on the screen, starts an interaction with the outside world, or all of the above.
The Elm architecture is a very direct translation of this pattern into code. User interactions (along with other changes in the outside world, such as time passing) are represented by events, and in response to them, the app updates its internal state, represented by a model. The link between them is a simple, pure function which takes the model and the event, and updates the model based on the event. The actual UI on screen is a projection of (i.e. "is built only from") the model. Because there is virtually no other state in the app, the model must contain enough information to decide what should be on screen. As a more direct representation of that information, we can use a view model as a step between the model and the UI.
That gives us two functions:
fn update(event: Event, model: &mut Model);
fn view(model: &Model) -> ViewModel;
That's enough for a Counter app, but not for our Weather app. What we're missing is for the app to be able to interact with the outside world and respond to events in it. We can't perform side-effects yet. Conceptually, we need to extend the update function to not only mutate the model, but also to emit some side-effects (or just "effects" for short).
fn update(event: Event, model: &mut Model) -> Vec<Effect>;
fn view(model: &Model) -> ViewModel;
This more complete model is a function which takes an event and a model, mutates the model, and optionally produces some effects. This is still quite a simple and pure (well, there is an &mut... call it pure enough) function, and it is completely predictable: for the same inputs, it will always yield the same outputs (and the same changes to the model, guaranteed by Rust's borrow checker). That is a very important design choice: it enables very easy testability, which is what we need to build quality apps.
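To make the testability claim concrete, here is a minimal, self-contained sketch of this shape of update function. It deliberately does not use the crux_core crate; all names are illustrative, modelled on the counter example from Part I.

```rust
// A minimal, self-contained sketch of the update/view shape described
// above. Illustrative names only - not the actual Crux API.

#[derive(Debug, PartialEq)]
enum Event {
    Increment,
    Reset,
}

#[derive(Debug, PartialEq)]
enum Effect {
    Render,
}

#[derive(Default)]
struct Model {
    count: isize,
}

struct ViewModel {
    count: String,
}

// Pure and predictable: the same event against the same model state
// always produces the same model change and the same effects.
fn update(event: Event, model: &mut Model) -> Vec<Effect> {
    match event {
        Event::Increment => model.count += 1,
        Event::Reset => model.count = 0,
    }
    vec![Effect::Render]
}

// The view model is a projection of the model.
fn view(model: &Model) -> ViewModel {
    ViewModel {
        count: model.count.to_string(),
    }
}

fn main() {
    // Testing the behaviour needs no UI, no mocks, and no I/O:
    let mut model = Model::default();
    let effects = update(Event::Increment, &mut model);
    assert_eq!(model.count, 1);
    assert_eq!(effects, vec![Effect::Render]);
    assert_eq!(view(&model).count, "1");
}
```

The test exercises the entire behaviour of this tiny app without a single pixel being drawn, which is exactly the property the next section builds on.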
UI, effects and testability
User interface and effects are normally where testing gets very difficult.
If the application logic can directly cause changes in the outside world (input/output, or I/O, in computer parlance), the only way to verify the logic completely is to look at the result of those changes. The results, however, are pixels on screen, elements in the DOM, packets going over the network, and other complex, difficult-to-inspect and often short-lived things. The only viable testing strategy in this direct scenario is to take on the role of the particular device the app is working with and pretend to be that device – a practice known as mocking (or stubbing, or faking, depending on who you talk to). The APIs used to interact with these things are really complicated, though, and rarely built with testing in mind. Even if you emulate them well, tests based on this approach won't be stable against changes in that API: when the API changes, your code and your tests will both have to change, taking any confidence they gave you in the first place with them. What's more, these APIs also differ across platforms, so now we have the same problem several times over.
The problem is in how apps are normally written (when written in a direct, imperative style). When it comes time to perform an effect, the most straightforward code just performs it straight away. The solution, as usual, is to add indirection. What Crux does (inspired by Elm, Haskell and others) is separate the intent from the execution, with a managed effects system.
Crux's effect approach focuses on capturing the intent of the effect, not the specific implementation of executing it. The intent is captured as data to benefit from type checking and from all the tools the language already provides for working with data. The business logic can stay pure, but express all the behaviour: state changes and effects. The intent is also the thing that needs to be tested. We can reasonably afford to trust that the authors of an HTTP client library, for example, have tested it and it does what it promises to do — all we need to check is that we're sending the right requests.¹
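As an illustration of "the intent is the thing that needs to be tested", here is a self-contained sketch in plain Rust. The types and the URL are made up (this is not the crux_http API): the point is that once an HTTP request is plain data, a test can assert on it directly, with no network involved.

```rust
// Illustrative sketch: an effect captured as data. Made-up types,
// not the actual crux_http API - just the idea of "intent as data".

#[derive(Debug, PartialEq)]
struct HttpRequest {
    method: String,
    url: String,
}

#[derive(Debug, PartialEq)]
enum Effect {
    Http(HttpRequest),
}

enum Event {
    FetchWeather { lat: f64, lon: f64 },
}

// The update function only *describes* the request; it never opens
// a socket, so it stays pure and predictable.
fn update(event: Event) -> Vec<Effect> {
    match event {
        Event::FetchWeather { lat, lon } => vec![Effect::Http(HttpRequest {
            method: "GET".to_string(),
            url: format!("https://api.example.com/weather?lat={lat}&lon={lon}"),
        })],
    }
}

fn main() {
    // The test checks the intent, not the execution:
    let effects = update(Event::FetchWeather { lat: 51.5, lon: -0.1 });
    let Effect::Http(request) = &effects[0];
    assert_eq!(request.method, "GET");
    assert!(request.url.contains("lat=51.5"));
}
```

Executing the request, and everything that can go wrong while doing so, is the shell's problem, which is the subject of the next section.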
Executing the effects: the runtime Shell
In Elm, the responsibility to execute the requested effects falls on the Elm runtime. Crux is very similar, except both the app and (some of) the runtime are your responsibility. This means some more work, but it also means you only bring what you need and nothing more, both in terms of supported platforms and the necessary APIs.
In Crux, business logic written in Rust is captured in the update function mentioned above and the other pieces that the function needs: events, model and effects, each represented by a type. This code forms a Core, which is portable and very easily testable.
The execution of effects, including drawing the user interface, is done in a native Shell. Its job is to draw the appropriate UI on screen, translate user interactions into events to send to the Core, and when requested, perform effects and return their outcomes back to the Core.

The Shell thus has two sides: the driving side – the interactions causing events which push the Core to action – and the driven side, which services the Core's requests for side effects. The Core itself is also driven: without being prompted by the Shell, the Core does nothing. It can't – with no other I/O, there are no other triggers which could cause the Core code to run. To the Shell, the Core is a simple library providing some computation. From the perspective of the Core, the Shell is the platform the Core runs on.
Note that this driven nature impacts how effects execute in Crux. In the next few chapters, you'll see that you can write effect orchestration with async Rust, but because the entirety of the core is driven, this async code only executes when the core APIs are called by the shell.
Don't worry if this means nothing to you for now, it'll make sense later.
Managed effects: the complex interactions between the core and the shell
While the basic effects are quite simple (e.g. "fetch a response over HTTP"), real world apps tend to compose them in quite complicated patterns with data dependencies between them, and we need to support this use well. In the next chapter, we'll introduce the Command API used to compose the basic effects into more complex interactions, and later we'll build on this with Capabilities, which provide an abstraction on top of these basic building blocks with a more ergonomic API.
Capabilities not only provide a nicer API for creating effects and effect orchestrations; in the future, they will likely also provide implementations of the effect execution for the various supported platforms.
With commands, our API evolves one final time, to the signature in the App trait:
fn update(&self, event: Event, model: &mut Model) -> Command<Effect, Event>;
fn view(&self, model: &Model) -> ViewModel;
The Commands are generic over two types: an Effect, describing the interactions with the outside world we want to perform, and our Event, which acts as a callback when those interactions complete and return a value of some kind.
We will look at how effects are created and passed to the shell two chapters from now; in the next chapter, we'll first have a look at how larger apps fit together in Crux.
¹ In reality, we do need to check that at least one of our HTTP requests executes successfully, but once one does, it is very likely that, so long as they are described correctly, all of them will.
Structuring larger apps
Now that we have a better handle on what Crux apps are made of, let's have a think about how we might build our Weather app. It is certainly small enough to be built by just blindly following the simple counter example. There are only about 20 different events in total, but you'll probably agree that some more structure would be good.
Composition
Fortunately, all the key components of the architecture compose. We can have Event variants which carry other event types,
Model fields containing other models, and update functions calling other modules' update functions. Looking at the main app.rs module of the Weather app, this is exactly what's going on.
Here's the Event type:
#[derive(Facet, Serialize, Deserialize, Clone, Debug, PartialEq)]
#[repr(C)]
pub enum Event {
    Navigate(Box<Workflow>),
    Home(Box<WeatherEvent>),
    Favorites(Box<FavoritesEvent>),
}
There are only three options - navigate somewhere, an event on the home screen, or an event in the Favourites section.
The update function reflects this too:
fn update(&self, event: Self::Event, model: &mut Self::Model) -> Command<Effect, Event> {
    match event {
        Event::Navigate(next) => {
            model.workflow = *next;
            render()
        }
        Event::Home(home_event) => {
            let mut commands = Vec::new();
            if let WeatherEvent::Show = *home_event {
                commands.push(
                    favorites::events::update(FavoritesEvent::Restore, model)
                        .map_event(|fe| Event::Favorites(Box::new(fe))),
                );
            }
            commands.push(
                weather::events::update(*home_event, model)
                    .map_event(|we| Event::Home(Box::new(we))),
            );
            Command::all(commands)
        }
        Event::Favorites(fav_event) => favorites::events::update(*fav_event, model)
            .map_event(|e| Event::Favorites(Box::new(e))),
    }
}
We'll look closer at the navigation in the next section, but the other two events simply forward to a different module's update function. In one special case, we actually call two different update functions from two different modules in response to the same event.
In this example, we pass down the whole model as is, but we could also just pass down a single field of it.
You can also see another kind of composition – a composition of commands. Both favorites::events::update and weather::events::update return a Command, and the Event::Home branch uses Command::all to run those commands in parallel. You might be wondering what's going on with the .map_event. The Command returned by favorites::events::update can emit the FavoritesEvent type, but we need our commands to emit those events wrapped in Event::Favorites (and boxed, because they are a larger type), so that when they arrive back at this update function, they get recognised as favorites events and sent down the third branch of the match.
The main thing to remember about this is that the events always come in from the top, and they get routed by the layers to the right function which can process them (or they can be processed directly, if the parent module knows better and wants to do something special).
Model can compose in a similar way, but in our case it's more of a mix:
#[derive(Default, Debug)]
pub struct Model {
    pub weather_data: CurrentWeatherResponse,
    pub workflow: Workflow,
    pub favorites: Favorites,
    pub search_results: Option<Vec<GeocodingResponse>>,
    pub location_enabled: bool,
    pub last_location: Option<Location>,
}
The favorites field is a type from the favorites module, but weather_data looks useful globally, as do search_results and the location-related fields.
The most interesting of these is the Workflow type, which manages our navigation state – which page of the app the user is currently on.
The main takeaway is that Crux is designed such that whole apps can be composed – an existing type implementing App can be used, unchanged, from a "parent" app, by
- adding an event variant which carries the child's event
- storing the child's model in the parent's model
- calling the child's update where appropriate
- mapping the returned commands to the parent's event type (using .map_event), and if necessary, effect type (using .map_effect)
That doesn't mean you should always subdivide apps in the same way, it is often a lot more convenient to share a model, or even an event type across two or more modules. Just know that should you need to reuse a whole Crux app later on, you can.
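The routing described above – events coming in at the top and being forwarded to the module that can process them – can be sketched in a self-contained way. This is plain Rust with made-up names, ignoring models and commands; real Crux apps return a Command and use .map_event, as in the app.rs code shown earlier.

```rust
// Illustrative sketch of event routing in a composed app.
// Made-up names; real Crux apps route Commands with .map_event.

#[derive(Debug, PartialEq)]
enum ChildOutcome {
    Refreshed,
}

enum ChildEvent {
    Refresh,
}

// The parent's event type carries the child's events, boxed.
enum ParentEvent {
    Child(Box<ChildEvent>),
}

mod child {
    use super::{ChildEvent, ChildOutcome};

    // The child module owns the handling of its own events.
    pub fn update(event: ChildEvent) -> ChildOutcome {
        match event {
            ChildEvent::Refresh => ChildOutcome::Refreshed,
        }
    }
}

// Events always come in from the top; the parent unwraps them and
// forwards them to the module which knows how to process them.
fn update(event: ParentEvent) -> ChildOutcome {
    match event {
        ParentEvent::Child(child_event) => child::update(*child_event),
    }
}

fn main() {
    let outcome = update(ParentEvent::Child(Box::new(ChildEvent::Refresh)));
    assert_eq!(outcome, ChildOutcome::Refreshed);
}
```

The same shape extends to models (a child model stored in a parent field) and to commands (mapped back to the parent's event type), as the bullet list above summarises.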
Navigation
Typical apps involve some kind of geography. The smaller the screen, the more moving between sections the user needs to do. But in principle, this is just more state, typically of an exclusive nature – the user can't be in two places at once. To
avoid thinking too much about screens or windows (what if we need to build a CLI or a VR version?), let's generalise this idea
with the concept of a Workflow. This is in no way a special type; we're simply modelling our domain in Rust.
In our Weather app, the Workflow is an enum:
#[derive(Facet, Default, Serialize, Deserialize, Clone, Debug, PartialEq)]
#[repr(C)]
pub enum Workflow {
    #[default]
    Home,
    Favorites(FavoritesState),
    AddFavorite,
}
In other words, the user can either be on the Home page, in the Favorites section (which has some additional state), or adding a favorite. No other options currently exist, and the user can only be doing one of those things at a time.
At this point, it might be helpful to look at how this is reflected in the view model:
#[derive(Facet, Serialize, Deserialize, Clone, Debug, PartialEq)]
pub struct ViewModel {
    pub workflow: WorkflowViewModel,
}

#[derive(Facet, Serialize, Deserialize, Clone, Debug, PartialEq)]
#[repr(C)]
pub enum WorkflowViewModel {
    Home {
        weather_data: Box<CurrentWeatherResponse>,
        favorites: Vec<FavoriteView>,
    },
    Favorites {
        favorites: Vec<FavoriteView>,
        delete_confirmation: Option<Location>,
    },
    AddFavorite {
        search_results: Option<Vec<GeocodingResponse>>,
    },
}

#[derive(Facet, Serialize, Deserialize, Clone, Debug, PartialEq)]
pub struct FavoriteView {
    pub name: String,
    pub location: Location,
    pub current: Box<Option<CurrentWeatherResponse>>,
}
The view model is built around a WorkflowViewModel enum, because we're currently thinking about the app as separate workflows. If we had a two-panel kind of UX
with a list and detail, we might model this differently. It's worth spending some time thinking about this when building the app, and this is part of why we encourage building Crux apps from inside out.
The WorkflowViewModel's variants are a fair bit richer than the Workflow - while the workflow in the model is only concerned with where the user is, the ViewModel also carries the information they see. It is entirely enough for us to draw a user interface from.
To bring it home, let's look at the view function:
fn view(&self, model: &Model) -> ViewModel {
    let favorites = model.favorites.iter().map(From::from).collect();

    let workflow = match &model.workflow {
        Workflow::Home => WorkflowViewModel::Home {
            weather_data: Box::new(model.weather_data.clone()),
            favorites,
        },
        Workflow::Favorites(favorites_state) => match favorites_state {
            FavoritesState::Idle => WorkflowViewModel::Favorites {
                favorites,
                delete_confirmation: None,
            },
            FavoritesState::ConfirmDelete(location) => WorkflowViewModel::Favorites {
                favorites,
                delete_confirmation: Some(*location),
            },
        },
        Workflow::AddFavorite => WorkflowViewModel::AddFavorite {
            search_results: model.search_results.clone(),
        },
    };

    ViewModel { workflow }
}
As you may have guessed, it maps the workflow to a view model, inserting some data from the model along the way.
That's enough to express the idea of navigation, and what workflow the user is meant to be in. How it specifically works on each platform is up to each Shell.
Managed Effects
It's time to get the Weather app to actually fetch some weather information and let us store some favourites. And for that, we will need to interact with the outside world - we will need to perform side-effects.
As we mentioned before, the approach to side-effects Crux uses is sometimes called managed side-effects. Your app's core is not allowed to perform side-effects directly. Instead, whenever it wants to interact with the outside world, it needs to request the interaction from the shell.
It's not quite enough to do one side-effect at a time, however. In our weather app example we may want to load the list of favourite locations in parallel with checking the current location. We may also want to run a sequence, such as checking whether location services are enabled, then fetching a location if they are.
The abstraction Crux uses to capture the potentially complex orchestration of effects
in response to an event is a type called Command.
Think of your whole app as a robot, where the Core is the brain of the robot and the Shell is the body of the robot. The brain instructs the body through commands and the body passes information about the outside world back to it with Events.
In this chapter we will explore how commands are created and used, before the next chapter, where we dive into capabilities, which provide a convenient way to create common commands.
Note on intent and execution
Managed effects are the key to Crux being portable across as many platforms as is sensible. Crux apps are, in a sense, built in the abstract: they describe what should happen in response to events, but not how it should happen. We think this is important both for portability, and for testing and general separation of concerns. What should happen is inherent to the product, and should behave the same way on any platform – it's part of what your app is. How it should be executed (and exactly what it looks like) often depends on the platform.
Different platforms may support different ways of doing things; for example, biometric authentication may work very differently on various devices, and some may not support it at all. Different platforms may also have different practical constraints: on one platform it may be perfectly appropriate to write things to disk, but internet access can't be guaranteed (e.g. on a smart watch); on another, writing to disk may not be possible, but an internet connection is virtually guaranteed (e.g. in an API service, or on an embedded device in a factory). The specific storage solution for persistent caching would be implemented differently on different platforms, but could potentially share the key format and eviction strategy across them.
The hard part of designing effects is working out exactly where to draw the line between the intent and the implementation detail – what's common across platforms and what may differ on each – and then implementing the former in Rust as a set of types, and the latter on the native side in the Shell, however is appropriate.
Because Effects define the "language" used to express intent, your Crux application code is portable onto any platform capable of executing that intent in some way. Clearly, the number of different effects we could think of, and platforms we could target, is enormous, and Crux doesn't want to force you to implement the entire portfolio of them on every platform.
Instead, your app is expected to define an Effect type which covers the kinds of
effects which your app needs in order to work, and every time it responds to an Event,
it is expected to return a Command.
Here is the Weather app's Effect type:
#[effect(facet_typegen)]
pub enum Effect {
    Render(RenderOperation),
    KeyValue(KeyValueOperation),
    Http(HttpRequest),
    Location(LocationOperation),
}
This tells us the app performs four kinds of side effects: rendering the UI, storing things in a key-value store, using an HTTP client, and using location services. That's all it does, and that's also all it can possibly do, until we expand this type further.
What is a Command
A Command is a recipe for a side-effect workflow, which may perform several effects and also send events back to the app.

Crux expects a Command to be returned by the update function. A basic Command will result in an effect request to the Shell, and when the request is resolved by the Shell, the Command will pass the output to the app in an Event. The interaction can be more complicated than this, however. You can imagine a command running a set of Effects concurrently (say a few HTTP requests and a timer), then following some of them with additional effects based on their outputs, and finally sending an event with the result of some of the outputs combined. So in principle, a Command is a state machine which emits Effects (for the Shell) and Events (for the app) according to the internal logic of what needs to be accomplished.
Command provides APIs to iterate over the effects and events emitted so far. This API can be used both in tests and in Rust-based shells, and for some advanced use cases when composing applications.
Effects and Events
Let's look closer at Effects. Each effect carries a request for an Operation (e.g. an HTTP request), which can be inspected and resolved with an operation output (e.g. an HTTP response). After effect requests are resolved, the command may have further effect requests or events, depending on the recipe it's executing.
Types acting as an Operation must implement the crux_core::capability::Operation trait, which ties them to the type of output. These two types are the protocol between the core and the shell when requesting and resolving the effects. The other types involved in the exchange are various wrappers to enable the operations to be defined in separate crates. The operation is first wrapped in a Request, which can be resolved, and then again with an Effect, like we saw above. This allows multiple Operation types from different crates to coexist, and also enables the Shells to "dispatch" to the right implementation to handle them.
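The shape of that pairing can be sketched with a self-contained analogue. Note this is a simplification for illustration only: the trait, `HttpRequest`, and `HttpResponse` here are stand-ins, not the real crux_core or crux_http definitions, which carry additional bounds and detail.

```rust
// A simplified analogue of the Operation idea: an operation type
// declares the output type the Shell must resolve it with.
trait Operation {
    type Output;
}

#[derive(Debug, PartialEq)]
struct HttpRequest {
    url: String,
}

#[derive(Debug, PartialEq)]
struct HttpResponse {
    status: u16,
}

impl Operation for HttpRequest {
    type Output = HttpResponse;
}

// A generic resolve function can now insist, at compile time, that
// each request is resolved with its matching output type.
fn resolve<Op: Operation>(_request: &Op, output: Op::Output) -> Op::Output {
    output
}
```

The compiler will reject any attempt to resolve an `HttpRequest` with anything other than an `HttpResponse`, which is the type safety the real trait provides across the FFI boundary.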
The Effect type is typically defined with the help of the #[effect] macro. Here is the Weather app's effect again:
```rust
#[effect(facet_typegen)]
pub enum Effect {
    Render(RenderOperation),
    KeyValue(KeyValueOperation),
    Http(HttpRequest),
    Location(LocationOperation),
}
```
The four operations it carries are actually defined by four different Capabilities, so let's talk about those.
Capabilities
Capabilities are developer-friendly, ergonomic APIs to construct commands, from very basic ones all the way to complex stateful orchestrations. Capabilities are an abstraction layer that bundles related operations together with code to create them, and cover one kind of a side-effect (e.g. HTTP, or timers).
We will look at writing capabilities in the next chapter, but for now, it's useful to know that their API often doesn't return Commands straight away, but instead returns command builders, which can be converted into a Command, or converted into a future and used in an async context.
To help that make more sense, let's look at how Commands are typically used.
Working with Commands
The intent behind the command API is to cover 80% of effect orchestration without asking developers to use async Rust. We will look at the async use in a minute, but first let's look at what can be done without it.
A typical use of a Command in an update function will look something like this:
```rust
Http::get(API_URL)
    .expect_json()
    .build()
    .then_send(Event::ReceivedResponse)
```
This code uses the HTTP capability's API up to the .build() call, which returns a CommandBuilder. This is a lot like a Future: its type carries the output type, and it represents the eventual result of the effect. The difference is that it can be converted either into a Command or into a Future to be used in an async context. In this case, the .then_send part builds the command by binding it to an Event which carries the output of the request back to the app.
Here's an example of the same from the Weather app:
```rust
KeyValue::get(FAVORITES_KEY).then_send(FavoritesEvent::Load)
```
The get() call again returns a command builder, which is used to create a command with .then_send(). The Command is now fully baked and bound to the specific callback event, and can no longer be meaningfully chained into an "effect pipeline".
One special, but common case of creating a command is creating a Command which does nothing, because there are no more side-effects:
```rust
Command::done()
```
Soon enough, your app will get a little more complicated and you will need to run multiple commands concurrently, but your update function can only return a single value. To get around this, you can combine existing commands into one using either the all function or the .and method.
We've seen an example of this already, but here it is again:
```rust
let mut commands = Vec::new();

if let WeatherEvent::Show = *home_event {
    commands.push(
        favorites::events::update(FavoritesEvent::Restore, model)
            .map_event(|fe| Event::Favorites(Box::new(fe))),
    );
}

commands.push(
    weather::events::update(*home_event, model)
        .map_event(|we| Event::Home(Box::new(we))),
);

Command::all(commands)
```
The two update calls involved each return a command, and we want to run them concurrently. The result is another Command, which can be returned from update.
Commands (or more precisely command builders) can be created without capabilities. That's what capabilities do internally. You shouldn't really need this in your app code, so we will cover that side of Commands in the next chapter, when we look at building Capabilities.
You might also want to run effects in a sequence, passing output of one as the input of another. This is another thing the command builders can facilitate. Let's look at that.
Command builders
Command builders come in three flavours:
- RequestBuilder - the most common, builds a request expecting a single response from the shell (think HTTP client)
- StreamBuilder - builds a request expecting a (possibly infinite) sequence of responses from the shell (think WebSockets)
- NotificationBuilder - builds a shell notification, which does not expect a response. The best example is notifying the shell that a new view model is available
All builders share a common API. Request and stream builders can be converted into commands with .then_send.
Both also support .then_request and .then_stream calls, for chaining on a function which takes the output of the first builder and returns a new builder. This can be used to build things like automatic pagination through an API for example.
You can also .map the output of the request/stream to a new value.
Here's an example of a more complicated chaining from the Command test suite:
```rust
#[test]
fn complex_concurrency() {
    fn increment(output: AnOperationOutput) -> AnOperation {
        let AnOperationOutput::Other([a, b]) = output else {
            panic!("bad output");
        };

        AnOperation::More([a, b + 1])
    }

    let mut cmd = Command::all([
        Command::request_from_shell(AnOperation::More([1, 1]))
            .then_request(|out| Command::request_from_shell(increment(out)))
            .then_send(Event::Completed),
        Command::request_from_shell(AnOperation::More([2, 1]))
            .then_request(|out| Command::request_from_shell(increment(out)))
            .then_send(Event::Completed),
    ])
    .then(Command::request_from_shell(AnOperation::More([3, 1])).then_send(Event::Completed));

    // ... the assertions are omitted for brevity, see crux_core/src/command/tests/combinators.rs
}
```
Forgive the abstract nature of the operations involved; these constructions are relatively uncommon in real code, and have not been used anywhere in our example code yet.
For more details of this, we recommend the Command API docs.
Combining all these tools provides a fair bit of flexibility to create fairly complex orchestrations of effects. Sometimes you might want to go further than that, however, and for Crux to keep adding closure-based APIs to cover every conceivable orchestration would have diminishing returns. In such cases, you probably just want to write async code instead.
Notice that nowhere in the above examples have we mentioned working with the model during the execution of the command. This is very much by design: Once started, commands do not have access to the model, because they execute asynchronously, possibly in parallel, and model access would introduce data races, which are very difficult to debug.
In order to update state, you should pass the result of the effect orchestration back to your app using an Event (as a kind of callback). It's relatively typical for apps to have a number of "internal" events, which handle results of effects. Sometimes these are also useful in tests, if you want to start a particular journey "from the middle".
Commands with async
The real power of commands comes from the fact that they build on async Rust. Each Command is a little async executor, which runs a number of tasks. The tasks get access to the crux context (represented by CommandContext), which gives them the ability to communicate with the shell and with the app.
You can create a raw command like this:
```rust
Command::new(|ctx| async move {
    let output = ctx.request_from_shell(AnOperation::One).await;
    ctx.send_event(Event::Completed(output));

    let output = ctx.request_from_shell(AnOperation::Two).await;
    ctx.send_event(Event::Completed(output));
});
```
Command::new takes a closure, which receives the CommandContext and returns a future, which will become the Command's main task (it is not expected to return anything; its Output is ()). The provided context can be used to start shell requests, streams, and send events back to the app.
The Context can also be used to spawn more tasks in the command.
There is a very similar async API in command builders too, except the returned future/stream is expected to return a value.
Builders can be converted into a future/stream for use in the async blocks with .into_future(ctx) and .into_stream(ctx), so long as you hold an instance of a CommandContext (otherwise those futures/streams would have no ability to communicate with the shell or the app).
While commands do execute on an async runtime, the runtime does not run on its own: it's part of the Core and needs to be driven by the Shell calling the Core APIs. We use async Rust as a convenient way to build the cooperative multi-tasking state machines involved in managing side effects.
This is also why combining the Crux async runtime with something like Tokio will appear to somewhat work (because the futures involved are mostly compatible), but it will have odd stop-start behaviours, because the Crux runtime doesn't run all the time, and some futures won't work, because they require specific Tokio support.
That said, a lot of universal async code (async channels, for example) works just fine.
Cancelling commands
Commands can be cancelled using an AbortHandle. Call cmd.abort_handle() to get
a handle, store it in your model, and call handle.abort() later to cancel all
tasks in the command. This is useful for things like cancelling an in-flight search
when the user types a new query.
```rust
// In one event handler, start a command and store its abort handle
let mut cmd = Http::get(url).expect_json().build().then_send(Event::Response);
model.search_handle = Some(cmd.abort_handle());

return cmd;

// In a later event handler, cancel the previous command
if let Some(handle) = model.search_handle.take() {
    handle.abort();
}
```
There is more to the async effect API than we can or should cover here. Most of what you'd expect in async Rust is supported: join handles, aborting tasks (and even Commands), spawning tasks and communicating between them, etc. Again, we recommend the Command API docs for the full coverage.
Migrating from previous versions of Crux
If you're new to Crux, it's unlikely you need to read this section. The original API for side-effects was very different from Commands, and this section is kept to help migrate from that API.
The change to Command is a breaking one for all Crux apps. The previous API used Capabilities to perform side-effects via callbacks. The new API removes Capabilities and caps from the App trait entirely, replacing them with a Command return value from update.
There are three parts to the migration:
- Remove the `Capabilities` associated type and the `caps` parameter from `update`
- Declare the `Effect` associated type on your App
- Return `Command` from `update`
Here's what the end state looks like:
```rust
impl crux_core::App for App {
    type Event = Event;
    type Model = Model;
    type ViewModel = ViewModel;
    type Effect = Effect;

    fn update(
        &self,
        event: Event,
        model: &mut Model,
    ) -> crux_core::Command<Effect, Event> {
        crux_core::Command::done() // return a Command
    }
}
```
To begin with, you can return Command::done() (a no-op) from update and
incrementally migrate your effect handling to use Commands and capability APIs
that return command builders.
Testing with managed effects
We have seen how to use effects, and we've touched on testing, but let's now look at it more closely.
Crux was expressly designed to support easy, fast, comprehensive testing of your application. Everyone is generally on board with unit tests and TDD when it comes to basic pure logic. But as soon as any I/O or UI gets involved, the dread sets in. We're going to have to set up some fakes, introduce additional traits just to test things, or just bite the bullet and build tests around a fully integrated app and wait for them to run (and probably fail on a race condition sometimes). So most people give up.
Managed effects smooth over that big hump. You pay for it a little bit in how the code is written, but you reap the reward in testing it. This is because the core that uses managed effects is pure and therefore completely deterministic — all the side effects are pushed to the shell.
It's straightforward to write an exhaustive set of unit tests that give you complete confidence in the correctness of your application code — you can test the behavior of your application independently of platform-specific UI and API calls.
There is no need to mock/stub anything, and there is no need to write integration tests.
Not only are the unit tests easy to write, but they run extremely quickly, and can be run in parallel.
For example, here's a test checking that when the weather screen is shown, a location gets checked and the weather gets refreshed.
```rust
#[test]
fn test_show_triggers_set_weather() {
    let mut model = Model::default();

    // 1. Trigger the Show event
    let event = WeatherEvent::Show;
    let mut cmd = update(event, &mut model);

    let mut location = cmd.expect_one_effect().expect_location();
    assert_eq!(location.operation, LocationOperation::IsLocationEnabled);

    // 2. Simulate the Location::is_location_enabled effect (enabled = true)
    location
        .resolve(LocationResult::Enabled(true))
        .expect("to resolve");

    let event = cmd.expect_one_event();
    let mut cmd = update(event, &mut model);

    let mut location = cmd.expect_one_effect().expect_location();
    assert_eq!(location.operation, LocationOperation::GetLocation);

    // 3. Simulate the Location::get_location effect (with a test location)
    let test_location = Location {
        lat: 33.456_789,
        lon: -112.037_222,
    };
    location
        .resolve(LocationResult::Location(Some(test_location)))
        .expect("to resolve");

    let event = cmd.expect_one_event();
    let mut cmd = update(event, &mut model);

    // 4. Resolve the weather HTTP effect
    let mut request = cmd.expect_one_effect().expect_http();
    assert_eq!(&request.operation, &WeatherApi::build(test_location));

    // 5. Resolve the HTTP request with a simulated response from the web API
    request
        .resolve(HttpResult::Ok(
            HttpResponse::ok()
                .body(test_response_json().as_bytes())
                .build(),
        ))
        .unwrap();

    // 6. The next event should be SetWeather
    let actual = cmd.expect_one_event();
    assert!(matches!(actual, WeatherEvent::SetWeather(_)));

    // 7. Send the SetWeather event back to the app
    let _ = update(actual.clone(), &mut model);

    // Now check the model in detail
    assert_eq!(model.weather_data, test_response());
}
```
You can see it's a test of a whole interaction with multiple kinds of effects, and it runs in 11 ms and is entirely deterministic.
Here's the corresponding code it's testing:
```rust
pub fn update(event: WeatherEvent, model: &mut Model) -> Command<Effect, WeatherEvent> {
    match event {
        WeatherEvent::Show => is_location_enabled().then_send(WeatherEvent::LocationEnabled),
        WeatherEvent::LocationEnabled(enabled) => {
            model.location_enabled = enabled;
            if enabled {
                get_location().then_send(WeatherEvent::LocationFetched)
            } else {
                Command::done()
            }
        }
        WeatherEvent::LocationFetched(location) => {
            model.last_location.clone_from(&location);
            if let Some(loc) = location {
                update(WeatherEvent::Fetch(loc), model)
            } else {
                Command::done()
            }
        }
        // Internal events related to fetching weather data
        WeatherEvent::Fetch(location) => WeatherApi::fetch(location)
            .then_send(move |result| WeatherEvent::SetWeather(Box::new(result))),
        WeatherEvent::SetWeather(result) => {
            if let Ok(weather_data) = *result {
                model.weather_data = weather_data;
            }

            update(WeatherEvent::FetchFavorites, model).and(render())
        }
        WeatherEvent::FetchFavorites => {
            if model.favorites.is_empty() {
                return Command::done();
            }

            model
                .favorites
                .iter()
                .map(|f| {
                    let location = f.geo.location();

                    WeatherApi::fetch(location).then_send(move |result| {
                        WeatherEvent::SetFavoriteWeather(Box::new(result), location)
                    })
                })
                .collect()
        }
        WeatherEvent::SetFavoriteWeather(result, location) => {
            if let Ok(weather) = *result {
                // Update the weather data for the matching favorite
                model
                    .favorites
                    .update(&location, |favorite| favorite.current = Some(weather));
            }

            render()
        }
    }
}
```
Hopefully this illustrates that the managed effects let you test entire transactions involving effects, without ever executing any.
The full suite of 18 tests of the Weather app runs in 36 milliseconds on a Mac Mini M4 Pro. In practice, it's rare for a test suite of a Crux app to take longer than compiling it (even incrementally). Even apps with thousands of tests usually run them in seconds, and sadly they do not yet compile in seconds.
```
cargo nextest run
   Compiling shared v0.1.0
    Finished `test` profile [unoptimized + debuginfo] target(s) in 11.60s
────────────
 Nextest run ID 53981226-e01e-443a-a6a8-ded6fb5af6e8 with nextest profile: default
    Starting 18 tests across 1 binary
        PASS [   0.014s] ( 1/18) shared favorites::events::tests::test_delete_cancelled
        PASS [   0.014s] ( 2/18) shared favorites::events::tests::test_kv_load_empty
        PASS [   0.014s] ( 3/18) shared favorites::events::tests::test_delete_confirmed
        PASS [   0.014s] ( 4/18) shared favorites::events::tests::test_cancel_returns_to_favorites
        PASS [   0.015s] ( 5/18) shared favorites::events::tests::test_add_multiple_favorites
        PASS [   0.015s] ( 6/18) shared favorites::events::tests::test_kv_set_and_load
        PASS [   0.015s] ( 7/18) shared favorites::events::tests::test_delete_with_persistence
        PASS [   0.015s] ( 8/18) shared favorites::events::tests::test_kv_load_error
        PASS [   0.015s] ( 9/18) shared app::tests::test_navigation
        PASS [   0.015s] (10/18) shared favorites::events::tests::test_submit_adds_favorite
        PASS [   0.016s] (11/18) shared favorites::events::tests::test_delete_pressed
        PASS [   0.010s] (12/18) shared favorites::events::tests::test_submit_persists_favorite
        PASS [   0.010s] (13/18) shared favorites::events::tests::test_submit_duplicate_favorite
        PASS [   0.010s] (14/18) shared weather::events::tests::test_show_triggers_set_weather
        PASS [   0.010s] (15/18) shared weather::events::tests::test_fetch_favorites_triggers_fetch_for_all_favorites
        PASS [   0.010s] (16/18) shared weather::events::tests::test_fetch_triggers_favorites_fetch_when_favorites_exist
        PASS [   0.029s] (17/18) shared favorites::events::tests::test_search_triggers_api_call
        PASS [   0.021s] (18/18) shared weather::events::tests::test_current_weather_fetch
────────────
     Summary [   0.036s] 18 tests run: 18 passed, 0 skipped
```
The test steps
Crux provides test APIs to make the tests a bit more readable and nicer to write, but it's still up to the test to execute the app loop.
Let's have a look at a simpler test from the Weather app and go through it step by step:
```rust
#[test]
fn test_delete_with_persistence() {
    let mut model = Model::default();
    let favorite = test_favorite();
    model.favorites.insert(favorite.clone());

    // Set the state to ConfirmDelete with the favorite's coordinates
    model.workflow = Workflow::Favorites(FavoritesState::ConfirmDelete(Location {
        lat: favorite.geo.lat,
        lon: favorite.geo.lon,
    }));

    // Delete and verify KV is updated
    let mut cmd = update(FavoritesEvent::DeleteConfirmed, &mut model);

    let kv_request = cmd.expect_effect().expect_key_value();
    cmd.expect_one_effect().expect_render();

    assert!(matches!(
        kv_request.operation,
        KeyValueOperation::Set { .. }
    ));
    assert!(model.favorites.is_empty());

    cmd.expect_no_effects();
    cmd.expect_no_events();
}
```
First, we do some setup - create a model, create a favorite and insert it, and make sure the app is in the right Workflow state.
Then, we call update with FavoritesEvent::DeleteConfirmed and get back a command, which we
store in cmd.
The next line is our assertion on the command - we expect an effect, and we expect it to be a key value effect. The expectation either returns the KeyValueRequest or panics.
Then we inspect the request's operation to check it's a Set – for the purposes of this test
that's enough.
We can then check the favourites in the model are gone, and there is nothing else to do.
More integrated tests and deterministic simulation testing
We could test the key-value storage in a more integrated fashion too - instead of asserting
on the key value operation, we can provide a very basic implementation of a key value store
to use in tests, using a HashMap as storage for example. Then we could simply forward the
key-value effects to it and make sure the storage is managed correctly. Similarly, we could
build a predictable replica of an API service we need to test against, etc.
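A fake key-value store good enough for tests can be a thin wrapper over a HashMap. The sketch below uses hypothetical, simplified operations; the real crux_kv KeyValueOperation and KeyValueResult types carry more detail:

```rust
use std::collections::HashMap;

// A deliberately simple test double for the key-value side of the Shell.
// The method shapes here are simplifications for illustration, not
// crux_kv's actual operation types.
#[derive(Default)]
struct FakeKeyValueStore {
    data: HashMap<String, Vec<u8>>,
}

impl FakeKeyValueStore {
    // Handle a "set" operation, returning the previous value, if any
    fn set(&mut self, key: &str, value: Vec<u8>) -> Option<Vec<u8>> {
        self.data.insert(key.to_string(), value)
    }

    // Handle a "get" operation
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.data.get(key).cloned()
    }

    // Handle a "delete" operation, returning the removed value, if any
    fn delete(&mut self, key: &str) -> Option<Vec<u8>> {
        self.data.remove(key)
    }
}
```

A test can then drain the key-value effects from a command, feed each operation to the fake store, and resolve the request with the store's answer, with no real storage involved.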
While that's all starting to sound a lot like mocking, remember that we're not implementing Redis or building an actual HTTP server. It's all very simple code. And if we do that for all the different effects our app needs and provide a realistic enough implementation to mimic the real things, a very interesting thing happens - we get the entire app stack, with the nitty-gritty technical details taken out, running in a unit test.

With that, we can create an app instance and send it completely random (but deterministic) events, and make sure "nothing bad happens". The definition of what that means is specific to each app, but just to illustrate some options:
- Introduce randomised errors to your fake API and see they are handled correctly
- Randomly lose data in storage and make sure the app recovers
- Make sure timeouts work correctly by randomly firing them first
- Check that any other invariants hold, e.g. anything time-related only moves forward (counters count up), storage remains referentially consistent, logically impossible states do not happen (ideally they would be impossible to represent, but sometimes that's too hard)
When we do that, we can then run this pseudo random process, for hours if we like, and let it find any bugs for us. To reproduce them, all we need is the random seed used for the specific test run.
In practice, Crux apps will mostly be able to run at thousands of events a second, and these tests will explore more of the state space than we ever could with manual unit tests.
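The shape of such a simulation can be sketched in a few lines. Everything here is a toy stand-in: the LCG random generator, the Event enum, the step function, and the invariant are hypothetical placeholders for a real app's event type, update loop, and properties.

```rust
// A tiny deterministic PRNG (a linear congruential generator), so the
// whole run replays identically from one seed, with no external crates.
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

// Toy stand-ins for an app's event type and update loop
enum Event {
    Increment,
    Decrement,
}

fn step(counter: &mut i64, event: &Event) {
    match event {
        Event::Increment => *counter += 1,
        Event::Decrement => *counter -= 1,
    }
}

// Drive the "app" with pseudo-random events, checking an invariant at
// every step; any failure is reproducible from the seed alone.
fn simulate(seed: u64, steps: u32) -> i64 {
    let mut rng = Lcg(seed);
    let mut counter = 0;

    for _ in 0..steps {
        let event = if rng.next() % 2 == 0 {
            Event::Increment
        } else {
            Event::Decrement
        };

        let before = counter;
        step(&mut counter, &event);

        // Invariant: a single event moves the counter by exactly one
        assert_eq!((counter - before).abs(), 1);
    }

    counter
}
```

In a real harness, step would drive the app's update function and resolve its effects against the fake implementations, but the structure (seeded randomness, an event loop, invariant checks) stays the same.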
This type of testing is usually reserved for consensus algorithms and network protocols (where anything that can happen will happen, and they have to be rock solid), because setting up the test harness is just too much work. But with managed effects, it is a few hundred lines of additional code. For a modestly sized app, a testing harness like that will only take a few days to write. We may even ship building blocks of such a test harness with Crux in the future.
Building capabilities
The final piece of the puzzle we should look at in our exploration of the Weather app before we move to the Shell is Capabilities.
We looked at effects a fair bit and explored the Commands and CommandBuilders,
but in practice, it's quite rare that you'd interact with those directly from
your app.
Typically, you'll be working with effects using Capabilities - more developer-friendly APIs
which implement a specific kind of side-effect in a generic fashion. They define the core-shell
message protocol for the side-effect and provide an ergonomic API to create the right CommandBuilders.
Examples include: HTTP client, Timer operations, Key-Value storage, Secrets provider, Geolocation, etc.
In practice, we find there is a limited number of these effect packages; they should be very reusable, and an individual app will typically need around seven of them, almost certainly fewer than ten.
Included capabilities
The Weather app uses two of the three capabilities provided with Crux: the HTTP client (crux_http) and the Key-Value store (crux_kv); the third is the time capability, crux_time.
These are the most common things we think people will want to use in their apps. There are more, and we will probably build those over time as well, we just haven't worked on a motivating use-case ourselves yet. If you have, and built a capability which you'd like to donate, definitely get in touch!
Let's look at the use of crux_http quickly, as it's the most extensive of the three. The Weather
app makes a pretty typical move and centralises the weather API use in a client:
```rust
pub struct WeatherApi;

impl WeatherApi {
    /// Build an `HttpRequest` for testing purposes
    #[cfg(test)]
    pub fn build(location: Location) -> HttpRequest {
        use crate::weather::model::current_response::WEATHER_URL;

        HttpRequest::get(WEATHER_URL)
            .query(&CurrentWeatherQuery {
                lat: location.lat.to_string(),
                lon: location.lon.to_string(),
                units: "metric",
                appid: API_KEY.clone(),
            })
            .expect("could not serialize query string")
            .build()
    }

    /// Fetch current weather for a specific location
    pub fn fetch<Effect, Event>(
        location: Location,
    ) -> RequestBuilder<
        Effect,
        Event,
        impl std::future::Future<Output = Result<CurrentWeatherResponse, WeatherError>>,
    >
    where
        Effect: From<Request<HttpRequest>> + Send + 'static,
        Event: Send + 'static,
    {
        Http::get(WEATHER_URL)
            .expect_json::<CurrentWeatherResponse>()
            .query(&CurrentWeatherQuery {
                lat: location.lat.to_string(),
                lon: location.lon.to_string(),
                units: "metric",
                appid: API_KEY.clone(),
            })
            .expect("could not serialize query string")
            .build()
            .map(|result| match result {
                Ok(mut response) => match response.take_body() {
                    Some(weather_data) => Ok(weather_data),
                    None => Err(WeatherError::ParseError),
                },
                Err(_) => Err(WeatherError::NetworkError),
            })
    }
}
```
The main method there is fetch, which uses Http::get from crux_http to create a GET request expecting a JSON response that deserialises into a specific type, and provides a URL query to specify the search. At the end of that chained call is a .map unpicking the response and turning it into a more convenient Result type for the app code.
The interesting thing here is that the fetch method returns a RequestBuilder. In a way, this makes it a half-way step to a custom capability, but it also means the fetch call is convenient to use from both normal and async contexts.
This is one of the things capabilities do - they map the lower-level FFI protocols into a more convenient API for the app developer.
Let's look at the other thing they do.
Custom capabilities
The Weather app has one specialty - it works with location services. This is an example of a capability which we'd probably struggle to find a cross-platform crate for. It's also not so common or complex that we feel we should develop and maintain an official one. So a custom capability in the app is the way to go.
The capability defines two things:
- The protocol for communicating to the Shell
- The APIs used by the programmer of the Core
Here is Weather app's Location capability in full:
```rust
// This module defines the effect for accessing location information in a
// cross-platform way using Crux. The structure here is designed to be
// serializable, portable, and to fit into Crux's command/request architecture.
use std::future::Future;

use crux_core::{Command, Request, capability::Operation, command::RequestBuilder};
use facet::Facet;
use serde::{Deserialize, Serialize};

use super::Location;

// The operations that can be performed related to location.
// Using an enum allows us to easily add more operations in the future
// and ensures type safety.
#[derive(Facet, Clone, Serialize, Deserialize, Debug, PartialEq)]
#[repr(C)]
pub enum LocationOperation {
    IsLocationEnabled,
    GetLocation,
}

// The possible results from performing a location operation.
// This is serializable so it can be sent across the FFI boundary, and
// the enum allows us to handle different response types in a type-safe way.
#[derive(Facet, Clone, Serialize, Deserialize, Debug, PartialEq)]
#[repr(C)]
pub enum LocationResult {
    Enabled(bool),
    Location(Option<Location>),
}

#[must_use]
pub fn is_location_enabled<Effect, Event>()
-> RequestBuilder<Effect, Event, impl Future<Output = bool>>
where
    Effect: Send + From<Request<LocationOperation>> + 'static,
    Event: Send + 'static,
{
    Command::request_from_shell(LocationOperation::IsLocationEnabled).map(|result| match result {
        LocationResult::Enabled(val) => val,
        LocationResult::Location(_) => false,
    })
}

#[must_use]
pub fn get_location<Effect, Event>()
-> RequestBuilder<Effect, Event, impl Future<Output = Option<Location>>>
where
    Effect: Send + From<Request<LocationOperation>> + 'static,
    Event: Send + 'static,
{
    Command::request_from_shell(LocationOperation::GetLocation).map(|result| match result {
        LocationResult::Location(loc) => loc,
        LocationResult::Enabled(_) => None,
    })
}

// Implement the Operation trait so that Crux knows how to handle this effect.
// This ties the operation type to its output/result type.
impl Operation for LocationOperation {
    type Output = LocationResult;
}
```
There are two interesting types: LocationOperation and LocationResult - they are the
request and response pair for the capability. The capability tells Crux that LocationResult
is the expected output for the LocationOperation with the trait implementation at the very
bottom. It marks the LocationOperation as an Operation as defined by Crux and associates
the output type.
That's number 1 done - protocol defined. This is what the Shell will need to understand and return back in order to implement the location capability.
The rest of the code consists of the two APIs used by the Core developer: is_location_enabled and get_location.
Their type signatures are fairly complex, so let's pick them apart.
First, they are both generic over Effect and Event. This isn't strictly necessary for local
capabilities, but it makes the capability reusable for any Effect and Event, not just the
ones from the Weather app.
The other interesting thing is the trait bound Effect: From<Request<LocationOperation>>,
which says that the Effect type needs to be able to convert from a location Request, or
in other words - we need to be able to wrap a Request<LocationOperation> into the app's
Effect type. All Effect types generated with the #[effect] macro already do this.
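What that conversion amounts to can be sketched with a self-contained analogue. The Request and Effect types below are simplified stand-ins, not the real crux_core types or the actual macro output:

```rust
// A simplified stand-in for crux_core's Request wrapper
struct Request<Op> {
    operation: Op,
}

#[derive(Debug, PartialEq)]
enum LocationOperation {
    IsLocationEnabled,
    GetLocation,
}

// A single-capability Effect type, as the #[effect] macro might produce
enum Effect {
    Location(Request<LocationOperation>),
}

// This is the conversion the Effect: From<Request<LocationOperation>>
// bound asks for; the macro derives one of these per Effect variant.
impl From<Request<LocationOperation>> for Effect {
    fn from(request: Request<LocationOperation>) -> Self {
        Effect::Location(request)
    }
}
```

With this in place, generic capability code can emit a `Request<LocationOperation>` and have it wrapped into whichever concrete Effect type the app defines.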
Other than that, the APIs just create command builders and return them. Those types are also
somewhat gnarly, but it's mostly the impl Future<Output = [value]> that's interesting.
Notice that the Output types are not LocationResult, they are the specific convenient
type the Core developer wants.
And that's all Capabilities do - they provide a convenient API for creating CommandBuilders,
and converting between convenient Rust types and an FFI "wire protocol" used to communicate
with the Shell.
In the ports and adapters architecture, Capabilities are the ports, and the shell-side implementations are the adapters.
In fact, let's go build one in the next chapter.
The shell
We've looked at how the Weather app fits together, how it's tested, and if you were developing it along the way, you would now have a core with the important business logic, fully tested and rock solid. Time to build the UI.
(Okay sure, in practice, you would not build the whole core first, then the whole UI, you'd probably go feature by feature, but the point stands - we now know for a fact that the core does the right thing.)
The shell will have two responsibilities:
- Laying out the UI components, like we've already seen in Part I
- Supporting the app's capabilities. This will be new to us
Like in Part I, you can choose which Shell language you'd like to see this in, but first let's talk about what they all have in common.
Message interface between core and shell
In Part I, we learned to use the update and view APIs of the core. We also learned that
in their raw form, they take serialized values as byte buffers.
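As a toy illustration of what "serialized values as byte buffers" means, here is a hand-rolled stand-in (Crux actually uses bincode and generated types; this just shows the round trip across a byte boundary):

```rust
// A toy wire format: each event variant maps to one byte. The shell
// serializes on its side of the FFI boundary; the core deserializes.
#[derive(Debug, PartialEq)]
enum Event {
    Increment,
    Decrement,
}

fn to_bytes(event: &Event) -> Vec<u8> {
    match event {
        Event::Increment => vec![0],
        Event::Decrement => vec![1],
    }
}

fn from_bytes(data: &[u8]) -> Option<Event> {
    match *data.first()? {
        0 => Some(Event::Increment),
        1 => Some(Event::Decrement),
        _ => None, // unknown tag: not a valid event
    }
}

fn main() {
    let wire = to_bytes(&Event::Increment);
    assert_eq!(from_bytes(&wire), Some(Event::Increment));
}
```

The real protocol carries structured payloads, but the principle is the same: only plain bytes cross the boundary, and both sides agree on the encoding via generated code.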
We skimmed over the return value of update very quickly. In that case it only ever
returned a request for a RenderOperation - a signal that a new view model is available.
In Weather's case, more options are possible. Recall the effect type:
#[effect(facet_typegen)]
pub enum Effect {
    Render(RenderOperation),
    KeyValue(KeyValueOperation),
    Http(HttpRequest),
    Location(LocationOperation),
}
Those are the four possible variants we'll see in the return from update. It
is essentially telling us "I did the state update, and here are some side-effects
for you to perform".
Let's say that the effect is an HTTP request. We execute it, get a response, and
what do we do then? Well, that's what the third core API, resolve, is for:
pub fn update(data: &[u8]) -> Vec<u8>
pub fn resolve(id: u32, data: &[u8]) -> Vec<u8>
pub fn view() -> Vec<u8>
Each effect request comes with an identifier. We use resolve to return the
output of the effect back to the app, alongside the identifier, so that it can
be paired correctly.
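The shell-side loop this implies can be sketched in isolation. These are hypothetical simplified types, not the generated FFI bindings, but the shape - match on the effect, execute it, feed the output back with its id, recurse on any follow-up requests - is the one every shell below implements:

```rust
// Hypothetical simplified effect and request types.
#[derive(Debug)]
enum Effect {
    Render,
    Http(String), // url
}

struct Request {
    id: u32,
    effect: Effect,
}

// Stand-in for the core's `resolve`: pairs the output with the request
// id and may return follow-up requests.
fn resolve(id: u32, _output: &[u8]) -> Vec<Request> {
    println!("resolved request {id}");
    vec![] // a real core may return further requests here
}

fn process(request: Request) {
    match request.effect {
        Effect::Render => println!("re-render the UI"),
        Effect::Http(url) => {
            let body = format!("response from {url}").into_bytes();
            // Hand the output back with the id so the core can pair it
            // with the command that is awaiting it.
            for follow_up in resolve(request.id, &body) {
                process(follow_up); // recurse, like processEffect below
            }
        }
    }
}

fn main() {
    process(Request { id: 1, effect: Effect::Http("https://example.com".into()) });
    process(Request { id: 2, effect: Effect::Render });
}
```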
Let's look at how this works in practice.
Platforms
You can continue with your platform of choice:
iOS/macOS
Let's start with the new part, and also typically the shorter part – implementing the capabilities.
Capability implementation
This is what Weather's core.swift looks like:
@MainActor
class Core: ObservableObject {
    @Published var view: ViewModel

    private let logger = Logger(subsystem: "com.example.weather", category: "Core")
    private let keyValueStore: KeyValueStore
    private var isInitialized = false
    private var core: CoreFfi

    init() {
        logger.info("Initializing Core")
        self.core = CoreFfi()
        // swiftlint:disable:next force_try
        self.view = try! .bincodeDeserialize(input: [UInt8](core.view()))

        do {
            self.keyValueStore = try KeyValueStore()
            logger.debug("KeyValueStore initialized successfully")
        } catch {
            logger.error("Failed to initialize KeyValueStore: \(error.localizedDescription)")
            fatalError("KeyValueStore initialization failed: \(error)")
        }
    }

    func update(_ event: Event) {
        // swiftlint:disable:next force_try
        let effects = [UInt8](core.update(Data(try! event.bincodeSerialize())))
        // swiftlint:disable:next force_try
        let requests: [Request] = try! .bincodeDeserialize(input: effects)
        for request in requests {
            processEffect(request)
        }
    }

    func processEffect(_ request: Request) {
        // ...
    }
}
It's slightly more complicated, but broadly the same as the Counter's core.
We have an extra logger which is not really important for us, and we
also hold on to a KeyValueStore, which is the storage for the key-value
implementation.
We've truncated the processEffect method, because it's fairly long, but the basic
structure is this:
func processEffect(_ request: Request) {
    switch request.effect {
    case .render:
        DispatchQueue.main.async {
            self.view = try! .bincodeDeserialize(input: [UInt8](self.core.view()))
        }
    case .http(let req):
        // ...
    case .keyValue(let keyValue):
        // ...
    case .location(let locationOp):
        // ...
    }
}
We get a Request, and do an exhaustive match on what the requested effect is. In Swift we have tagged unions, so we can also destructure the operation requested.
We can have a look at what the HTTP branch does:
case .http(let req):
    handleHttp(request, req)
This delegates to handleHttp, which does the actual work:
private func handleHttp(_ request: Request, _ req: HttpRequest) {
    logger.info("Making HTTP request to: \(req.url)")

    Task {
        do {
            let response = try await requestHttp(req).get()
            logger.debug("Received HTTP response with status: \(response.status)")
            // swiftlint:disable:next force_try
            let data = Data(try! HttpResult.ok(response).bincodeSerialize())
            resolveEffects(request.id, data)
        } catch {
            logger.error("HTTP request failed: \(error.localizedDescription)")
        }
    }
}
We start a new Task to run this job off the main thread, then use the
async requestHttp() call to run the request.
Then it takes the response, serializes it and passes it to core.resolve via resolveEffects, which
returns more effect requests. This is perhaps unexpected, but it's the direct
consequence of the Command's async nature. There can easily be a command which
does something along the lines of:
Command::new(|ctx| async move {
    let http_req = Http::get(url).expect_json::<Counter>().build().into_future(ctx);
    let resp = http_req.await; // effect 1
    let counter = match resp {
        Ok(mut response) => match response.take_body() {
            Some(counter) => Ok(counter),
            None => Err(ApiError::ParseError),
        },
        Err(_) => Err(ApiError::NetworkError),
    };
    let _ = KeyValue::set(COUNTER, counter).into_future(ctx).await; // effect 2
    // ...
    ctx.send_event(Event::Done);
})
Once we resolve the http request at the .await point marked "effect 1", this future can
proceed and make a KeyValue request at the "effect 2" .await point. So on the
shell end, we need to be able to respond appropriately.
What we do is loop through those effect requests (there could easily be multiple
at once) and recurse - call processEffect again to handle each one.
Just for completeness, this is what requestHttp looks like:
import App
import SwiftUI
enum HttpError: Error {
    case generic(Error)
    case message(String)
}

func requestHttp(_ request: HttpRequest) async -> Result<HttpResponse, HttpError> {
    var req = URLRequest(url: URL(string: request.url)!)
    req.httpMethod = request.method
    for header in request.headers {
        req.addValue(header.value, forHTTPHeaderField: header.name)
    }

    do {
        let (data, response) = try await URLSession.shared.data(for: req)
        if let httpResponse = response as? HTTPURLResponse {
            let status = UInt16(httpResponse.statusCode)
            let body = [UInt8](data)
            return .success(HttpResponse(status: status, headers: [], body: body))
        } else {
            return .failure(.message("bad response"))
        }
    } catch {
        return .failure(.generic(error))
    }
}
Not that interesting, it's a wrapper around URLRequest and friends which takes and
returns the generated HttpRequest and HttpResponse, originally defined in Rust by
crux_http.
The pattern repeats similarly for key-value store and the location capability.
User interface and navigation
It's worth looking at how Weather handles the Workflow navigation in SwiftUI.
As in the Counter example, Weather's core has a @Published var view: ViewModel
which we can use in the Views.
Here's the root content view:
struct ContentView: View {
    @ObservedObject var core: Core

    init(core: Core) {
        self.core = core
    }

    var body: some View {
        NavigationStack {
            ZStack {
                // Base background that's always present
                Color(platformGroupedBackground)
                    .ignoresSafeArea()

                // Content views
                switch core.view.workflow {
                case .home:
                    HomeView(core: core)
                        .transition(
                            .opacity.combined(with: .offset(x: 0, y: 10))
                        )
                case .favorites:
                    FavoritesView(core: core)
                        .transition(
                            .opacity.combined(with: .offset(x: 0, y: 10))
                        )
                case .addFavorite:
                    AddFavoriteView(core: core)
                        .transition(
                            .opacity.combined(with: .offset(x: 0, y: 10))
                        )
                }
            }
            .animation(.easeOut(duration: 0.2), value: core.view.workflow)
        }
    }
}
Thanks to the declarative nature of SwiftUI, we can show the view we need to, depending on the workflow, and pass the core down.
We could do this differently - core could stay in the root view and we could pass
an update callback in an environment, and just the appropriate section of the
view model to each view. It's up to you how you want to go about it.
Let's look at the HomeView as well, just to complete the picture:
struct HomeView: View {
    @ObservedObject var core: Core
    @State private var hasLoadedInitialData = false
    @State private var selectedPage = 0

    var body: some View {
        Group {
            if case .home(let weatherData, let favorites) = core.view.workflow {
                VStack {
                    TabView(selection: $selectedPage) {
                        // Main weather card
                        Group {
                            if weatherData.cod == 200 && weatherData.main.temp.isFinite {
                                WeatherCard(weatherData: weatherData)
                                    .transition(.opacity)
                            } else {
                                LoadingCard()
                            }
                        }
                        .tag(0)
                        .tabItem { Label(weatherData.name.isEmpty ? "Current" : weatherData.name, systemImage: "location") }

                        // Favorite weather cards
                        ForEach(Array(favorites.enumerated()), id: \.element.name) { idx, favorite in
                            Group {
                                if let current = favorite.current {
                                    WeatherCard(weatherData: current)
                                        .transition(.opacity)
                                } else {
                                    LoadingCard()
                                }
                            }
                            .tag(idx + 1)
                            .tabItem { Label(favorite.name, systemImage: "star") }
                        }
                    }
                    #if os(iOS)
                    .tabViewStyle(PageTabViewStyle(indexDisplayMode: .automatic))
                    #endif
                }
                .padding(.vertical)
                .toolbar {
                    ToolbarItem(placement: .automatic) {
                        Button {
                            withAnimation(.easeOut(duration: 0.2)) {
                                core.update(.navigate(.favorites(FavoritesState.idle)))
                            }
                        } label: {
                            Image(systemName: "star")
                        }
                    }
                }
            } else {
                Color.clear // Placeholder for transition
            }
        }
        .onAppear {
            if !hasLoadedInitialData {
                core.update(.home(.show))
                hasLoadedInitialData = true
            }
        }
    }
}
It simply caters for the possible situations in the view model, draws the
weather cards for each favorite, and adds a toolbar with an item which,
when tapped, calls core.update with the Swift equivalent of the .navigate
event we saw earlier in the core.
This is quite a simple navigation setup in that it is a static set of screens
we're managing. Sometimes a more dynamic navigation is necessary, but
SwiftUI's NavigationStack in recent iOS supports quite complex scenarios in
a declarative fashion using NavigationPath,
so the general principle of naively projecting the view model into the user
interface broadly works even there.
There isn't much more to it, the rest of the app is rinse and repeat. It is relatively rare to implement a new capability, so most of the work is in finessing the user interface. Crux tends to work reasonably well with SwiftUI previews as well so you can typically avoid the Simulator or device for the inner development loop.
What's next
Congratulations! You now know all you will likely need to build Crux apps. The following parts of the book will cover advanced topics, other supported platforms, and the internals of Crux, should you be interested in how things work.
Happy building!
Android
Let's start with the new part, and also typically the shorter part – implementing the capabilities.
Capability implementation
This is what Weather's Core.kt looks like:
{{#include ../../../../examples/weather/android/app/src/main/java/com/crux/example/weather/core/Core.kt:core_base}}
// ...
}
}
It's slightly more complicated, but broadly the same as the Counter's core.
We have an extra logger which is not really important for us, and we
also hold on to a KeyValueStore, which is the storage for the key-value
implementation. The dependencies (HttpClient, LocationManager, KeyValueStore)
are injected via the constructor using Koin DI.
The processRequest method handles each effect type:
{{#include ../../../../examples/weather/android/app/src/main/java/com/crux/example/weather/core/Core.kt:process_request}}
We get a Request, and do an exhaustive match on what the requested effect is. In Kotlin
we have sealed classes, so we can use a when expression to also destructure the
operation requested.
We can have a look at what the HTTP branch does:
{{#include ../../../../examples/weather/android/app/src/main/java/com/crux/example/weather/core/Core.kt:http}}
This delegates to handleHttpEffect, which does the actual work:
{{#include ../../../../examples/weather/android/app/src/main/java/com/crux/example/weather/core/Core.kt:handle_http}}
We launch a coroutine to run this job off the main thread, then use the
httpClient.request() call to run the request.
Then it takes the response, serializes it and passes it to core.resolve via resolveAndHandleEffects, which
returns more effect requests. This is perhaps unexpected, but it's the direct
consequence of the Command's async nature. There can easily be a command which
does something along the lines of:
Command::new(|ctx| async move {
    let http_req = Http::get(url).expect_json::<Counter>().build().into_future(ctx);
    let resp = http_req.await; // effect 1
    let counter = match resp {
        Ok(mut response) => match response.take_body() {
            Some(counter) => Ok(counter),
            None => Err(ApiError::ParseError),
        },
        Err(_) => Err(ApiError::NetworkError),
    };
    let _ = KeyValue::set(COUNTER, counter).into_future(ctx).await; // effect 2
    // ...
    ctx.send_event(Event::Done);
})
Once we resolve the http request at the .await point marked "effect 1", this future can
proceed and make a KeyValue request at the "effect 2" .await point. So on the
shell end, we need to be able to respond appropriately.
What we do is loop through those effect requests (there could easily be multiple
at once) and recurse - call processRequest again to handle each one.
This is what resolveAndHandleEffects looks like:
{{#include ../../../../examples/weather/android/app/src/main/java/com/crux/example/weather/core/Core.kt:resolve}}
Just for completeness, this is what the request method on HttpClient looks like:
{{#include ../../../../examples/weather/android/app/src/main/java/com/crux/example/weather/core/HttpClient.kt:request}}
Not that interesting, it's a wrapper around Ktor's HttpClient which takes and
returns the generated HttpRequest and HttpResponse, originally defined in Rust by
crux_http.
The pattern repeats similarly for key-value store and the location capability.
User interface and navigation
It's worth looking at how Weather handles the Workflow navigation in Jetpack Compose.
As in the Counter example, Weather's core has a StateFlow<ViewModel>
which we can collect with collectAsState() in the composables.
Here's the root content view:
{{#include ../../../../examples/weather/android/app/src/main/java/com/crux/example/weather/MainActivity.kt:content_view}}
Thanks to the declarative nature of Jetpack Compose, we can show the view we need to,
depending on the workflow, and pass the core down. We use AnimatedContent with
a when block to switch between screens based on the current workflow state, and
a BackHandler to navigate back when the user presses the system back button.
We could do this differently - core could stay in the root view and we could pass
an update callback in a CompositionLocal, and just the appropriate section of the
view model to each view. It's up to you how you want to go about it.
Let's look at the HomeScreen as well, just to complete the picture:
{{#include ../../../../examples/weather/android/app/src/main/java/com/crux/example/weather/ui/home/HomeScreen.kt:home_screen}}
It uses koinViewModel to obtain its view model, collects the UI state,
draws the weather cards for each favorite using a HorizontalPager, and adds
a toolbar with an IconButton, which when tapped calls onShowFavorites
with the Kotlin equivalent of the .navigate event we saw earlier.
This is quite a simple navigation setup in that it is a static set of screens
we're managing. Sometimes a more dynamic navigation is necessary, but
Jetpack Compose Navigation with NavHost supports quite complex scenarios in
a declarative fashion, so the general principle of naively projecting the view model
into the user interface broadly works even there.
There isn't much more to it, the rest of the app is rinse and repeat. It is relatively rare to implement a new capability, so most of the work is in finessing the user interface. Crux tends to work reasonably well with Compose previews as well so you can typically avoid the Emulator or device for the inner development loop.
What's next
Congratulations! You now know all you will likely need to build Crux apps. The following parts of the book will cover advanced topics, other supported platforms, and the internals of Crux, should you be interested in how things work.
Happy building!
React
Let's start with the new part, and also typically the shorter part – implementing the capabilities.
Capability implementation
This is what Weather's core.ts looks like:
export class Core {
  core: CoreFFI;
  callback: Dispatch<SetStateAction<ViewModel>>;

  constructor(callback: Dispatch<SetStateAction<ViewModel>>) {
    this.callback = callback;
    this.core = new CoreFFI();
  }

  update(event: Event) {
    const serializer = new BincodeSerializer();
    event.serialize(serializer);

    const effects = this.core.update(serializer.getBytes());

    const requests = deserializeRequests(effects);
    for (const { id, effect } of requests) {
      this.resolve(id, effect);
    }
  }

  // ...
}
It's slightly more complicated, but broadly the same as the Counter's core.
We wrap the CoreFFI (loaded via WASM) and hold on to a React setState
callback, which we use to update the view model whenever the core asks us to
render.
We've truncated the resolve method, because it's fairly long, but the basic
structure is this:
async resolve(id: number, effect: Effect) {
  switch (effect.constructor) {
    case EffectVariantRender:
      // ...
    case EffectVariantHttp:
      // ...
    case EffectVariantKeyValue:
      // ...
    case EffectVariantLocation:
      // ...
  }
}
We get a Request, and do a switch on what the requested effect's constructor is
to determine the type. In TypeScript we use instanceof-style constructor
checks, so we can also cast and destructure the operation requested.
We can have a look at what the HTTP branch does:
case EffectVariantHttp: {
  const request = (effect as EffectVariantHttp).value;
  const response = await http.request(request);
  this.respond(id, response);
  break;
}
This delegates to http.request(), which does the actual work, and then calls
this.respond() with the result:
respond(id: number, response: Response) {
  const serializer = new BincodeSerializer();
  response.serialize(serializer);

  const effects = this.core.resolve(id, serializer.getBytes());

  const requests = deserializeRequests(effects);
  for (const { id, effect } of requests) {
    this.resolve(id, effect);
  }
}
We use async/await to run the HTTP request, then take the response,
serialize it and pass it to core.resolve via respond, which
returns more effect requests. This is perhaps unexpected, but it's the direct
consequence of the Command's async nature. There can easily be a command which
does something along the lines of:
Command::new(|ctx| async move {
    let http_req = Http::get(url).expect_json::<Counter>().build().into_future(ctx);
    let resp = http_req.await; // effect 1
    let counter = match resp {
        Ok(mut response) => match response.take_body() {
            Some(counter) => Ok(counter),
            None => Err(ApiError::ParseError),
        },
        Err(_) => Err(ApiError::NetworkError),
    };
    let _ = KeyValue::set(COUNTER, counter).into_future(ctx).await; // effect 2
    // ...
    ctx.send_event(Event::Done);
})
Once we resolve the http request at the .await point marked "effect 1", this future can
proceed and make a KeyValue request at the "effect 2" .await point. So on the
shell end, we need to be able to respond appropriately.
What we do is loop through those effect requests (there could easily be multiple
at once) and recurse—call resolve again to handle each one.
Just for completeness, this is what http.ts looks like:
import type { HttpRequest, HttpResult } from "shared_types/app";
import {
  HttpResponse,
  HttpHeader,
  HttpResultVariantOk,
} from "shared_types/app";

export async function request({
  url,
  method,
  headers,
}: HttpRequest): Promise<HttpResult> {
  const request = new Request(url, {
    method,
    headers: headers.map((header) => [header.name, header.value]),
  });

  const response = await fetch(request);

  const responseHeaders = Array.from(
    response.headers.entries(),
    ([name, value]) => new HttpHeader(name, value),
  );
  const body = await response.arrayBuffer();

  return new HttpResultVariantOk(
    new HttpResponse(response.status, responseHeaders, new Uint8Array(body)),
  );
}
Not that interesting, it's a wrapper around the browser's native fetch API which takes and
returns the generated HttpRequest and HttpResponse, originally defined in Rust by
crux_http.
The pattern repeats similarly for key-value store and the location capability.
User interface and navigation
It's worth looking at how Weather handles the Workflow navigation in React.
As in the Counter example, Weather's core holds a ViewModel which we
store in React state via useState, so the component re-renders whenever the
core asks us to.
Here's the root component:
const Home: NextPage = () => {
  const [view, setView] = useState(
    new ViewModel(new WorkflowViewModelVariantHome(null!, [])),
  );

  const core: React.RefObject<Core | null> = useRef(null);
  const initialized = useRef(false);

  useEffect(
    () => {
      if (!initialized.current) {
        initialized.current = true;

        init_core().then(() => {
          if (core.current === null) {
            core.current = new Core(setView);
          }
          core.current?.update(
            new EventVariantHome(new WeatherEventVariantShow()),
          );
        });
      }
    },
    /*once*/ [],
  );

  const workflow = view.workflow;

  return (
    <main>
      <section className="section has-text-centered">
        <p className="title">Crux Weather Example</p>
        <p className="is-size-5">Rust Core, TypeScript Shell (Next.js)</p>
      </section>
      <section className="container">
        {workflow instanceof WorkflowViewModelVariantHome && (
          <HomeView
            weatherData={workflow.weather_data}
            favorites={workflow.favorites}
            core={core}
          />
        )}
        {workflow instanceof WorkflowViewModelVariantFavorites && (
          <FavoritesView
            favorites={workflow.favorites}
            deleteConfirmation={workflow.delete_confirmation}
            core={core}
          />
        )}
        {workflow instanceof WorkflowViewModelVariantAddFavorite && (
          <AddFavoriteView searchResults={workflow.search_results} core={core} />
        )}
      </section>
    </main>
  );
};
We initialize the WASM core inside a useEffect that runs once, create a
Core instance with the setView callback, and immediately dispatch the
initial Show event to kick things off.
Thanks to the declarative nature of React, we can show the view we need to,
depending on the workflow, using instanceof checks on the workflow variant.
Each branch renders the appropriate component and passes the core ref down.
We could do this differently—core could stay in the root component and we
could pass an update callback via React context, and just the appropriate
section of the view model to each component. You could also use React Router
for navigation. It's up to you how you want to go about it.
Let's look at the HomeView as well, just to complete the picture:
function HomeView({
  weatherData,
  favorites,
  core,
}: {
  weatherData: unknown;
  favorites: unknown[];
  core: React.RefObject<Core | null>;
}) {
  const wd = weatherData as any;
  const hasData = wd && wd.cod == 200;
  const favs = favorites as any[];

  return (
    <>
      <div className="box">
        {hasData ? (
          <div className="has-text-centered">
            <h2 className="title is-4">{wd.name}</h2>
            <p className="is-size-1 has-text-weight-bold">
              {wd.main.temp.toFixed(1)}°
            </p>
            {wd.weather?.[0] && (
              <p className="is-size-5">{wd.weather[0].description}</p>
            )}
            <div className="columns is-multiline is-centered mt-4">
              <div className="column is-one-third">
                <p className="heading">Feels Like</p>
                <p>{wd.main.feels_like.toFixed(1)}°</p>
              </div>
              <div className="column is-one-third">
                <p className="heading">Humidity</p>
                <p>{Number(wd.main.humidity)}%</p>
              </div>
              <div className="column is-one-third">
                <p className="heading">Wind</p>
                <p>{wd.wind.speed.toFixed(1)} m/s</p>
              </div>
              <div className="column is-one-third">
                <p className="heading">Pressure</p>
                <p>{Number(wd.main.pressure)} hPa</p>
              </div>
              <div className="column is-one-third">
                <p className="heading">Clouds</p>
                <p>{Number(wd.clouds.all)}%</p>
              </div>
              <div className="column is-one-third">
                <p className="heading">Visibility</p>
                <p>{Math.floor(Number(wd.visibility) / 1000)} km</p>
              </div>
            </div>
          </div>
        ) : (
          <p className="has-text-centered">Loading weather data...</p>
        )}
      </div>
      {favs.length > 0 && (
        <div className="box">
          <h3 className="title is-5">Favorites</h3>
          {favs.map((fav, i) => {
            const w = fav.current;
            return (
              <div key={i} className="box">
                <strong>{fav.name}</strong>
                {w ? (
                  <div className="columns is-multiline mt-2">
                    <div className="column is-one-third">
                      <p className="is-size-3 has-text-weight-bold">
                        {w.main.temp.toFixed(1)}°
                      </p>
                    </div>
                    <div className="column is-one-third">
                      {w.weather?.[0] && <p>{w.weather[0].description}</p>}
                    </div>
                    <div className="column is-one-third">
                      <p>Humidity: {Number(w.main.humidity)}%</p>
                    </div>
                  </div>
                ) : (
                  <p className="has-text-grey">Loading...</p>
                )}
              </div>
            );
          })}
        </div>
      )}
      <div className="buttons is-centered mt-4">
        <button
          className="button is-info"
          onClick={() =>
            core.current?.update(
              new EventVariantNavigate(
                new WorkflowVariantFavorites(new FavoritesStateVariantIdle()),
              ),
            )
          }
        >
          Favorites
        </button>
      </div>
    </>
  );
}
It simply caters for the possible situations in the view model—checking
whether cod === 200 to decide if weather data has loaded—draws the
weather cards with a grid of details, and adds a "Favorites" button which
when clicked calls core.current?.update with the TypeScript equivalent of
the .navigate event we saw earlier in the core.
This is quite a simple navigation setup in that it is a static set of screens we're managing. Sometimes a more dynamic navigation is necessary, but React Router or similar libraries support quite complex scenarios in a declarative fashion, so the general principle of naively projecting the view model into the user interface broadly works even there.
There isn't much more to it, the rest of the app is rinse and repeat. It is relatively rare to implement a new capability, so most of the work is in finessing the user interface. Crux tends to work reasonably well with hot module reloading so you can typically avoid full page reloads for the inner development loop.
What's next
Congratulations! You now know all you will likely need to build Crux apps. The following parts of the book will cover advanced topics, other supported platforms, and the internals of Crux, should you be interested in how things work.
Happy building!
Leptos
Let's start with the new part, and also typically the shorter part – implementing the capabilities.
Capability implementation
This is what Weather's core.rs looks like:
pub type Core = Rc<shared::Core<Weather>>;

pub fn new() -> Core {
    Rc::new(shared::Core::new())
}

pub fn update(core: &Core, event: Event, render: WriteSignal<ViewModel>) {
    log::debug!("event: {event:?}");

    for effect in core.process_event(event) {
        process_effect(core, effect, render);
    }
}
Because both the shell and the core are Rust, the Leptos shell is simpler than
the iOS or Android equivalents. There is no need for serialization or foreign
function interfaces—the shared types are used directly. The Core is an
Rc<shared::Core<Weather>>, and new and update are free functions rather
than methods on a class.
We've truncated the process_effect function, but the basic structure is this:
pub fn process_effect(core: &Core, effect: Effect, render: WriteSignal<ViewModel>) {
    match effect {
        Effect::Render(_) => { /* ... */ }
        Effect::Http(mut request) => { /* ... */ }
        Effect::KeyValue(mut request) => { /* ... */ }
        Effect::Location(mut request) => { /* ... */ }
    }
}
In Rust we have enums, so we can pattern match and destructure the operation requested. This is the most readable version of effect dispatch across all the shells, since both the core and the shell speak the same language.
We can have a look at what the HTTP branch does:
Effect::Http(mut request) => {
    task::spawn_local({
        let core = core.clone();

        async move {
            let response = http::request(&request.operation).await;

            for effect in core
                .resolve(&mut request, response.into())
                .expect("should resolve")
            {
                process_effect(&core, effect, render);
            }
        }
    });
}
We spawn a local async task via task::spawn_local (WASM is single-threaded,
so we use a local future rather than a multi-threaded runtime), then call
http::request() to perform the actual HTTP call.
Then it takes the response and passes it to core.resolve, which returns
more effect requests. This is perhaps unexpected, but it's the direct
consequence of the Command's async nature. There can easily be a command which
does something along the lines of:
Command::new(|ctx| async move {
    let http_req = Http::get(url).expect_json::<Counter>().build().into_future(ctx);
    let resp = http_req.await; // effect 1
    let counter = match resp {
        Ok(mut response) => match response.take_body() {
            Some(counter) => Ok(counter),
            None => Err(ApiError::ParseError),
        },
        Err(_) => Err(ApiError::NetworkError),
    };
    let _ = KeyValue::set(COUNTER, counter).into_future(ctx).await; // effect 2
    // ...
    ctx.send_event(Event::Done);
})
Once we resolve the http request at the .await point marked "effect 1", this future can
proceed and make a KeyValue request at the "effect 2" .await point. So on the
shell end, we need to be able to respond appropriately.
What we do is loop through those effect requests (there could easily be multiple
at once) and recurse—call process_effect again to handle each one.
Note that unlike the iOS shell, where resolve returns bytes that need
deserialization, in Leptos we call core.resolve() directly and get Effect
values back—no serialization boundary to cross.
Just for completeness, this is what http.rs looks like:
use gloo_net::http;
use shared::http::{
    HttpError, Result,
    protocol::{HttpRequest, HttpResponse},
};

#[allow(clippy::future_not_send)] // WASM is single-threaded
pub async fn request(
    HttpRequest {
        method,
        url,
        headers,
        ..
    }: &HttpRequest,
) -> Result<HttpResponse> {
    let mut request = match method.as_str() {
        "GET" => http::Request::get(url),
        "POST" => http::Request::post(url),
        _ => panic!("not yet handling this method"),
    };

    for header in headers {
        request = request.header(&header.name, &header.value);
    }

    let response = request
        .send()
        .await
        .map_err(|error| HttpError::Io(error.to_string()))?;

    let body = response
        .binary()
        .await
        .map_err(|error| HttpError::Io(error.to_string()))?;

    Ok(HttpResponse::status(response.status()).body(body).build())
}
Not that interesting, it's a wrapper around gloo_net's HTTP client for WASM
which takes and returns the generated HttpRequest and HttpResponse,
originally defined in Rust by crux_http.
The pattern repeats similarly for key-value store and the location capability.
User interface and navigation
It's worth looking at how Weather handles the Workflow navigation in Leptos.
Here's the root component:
#[component]
fn root_component() -> impl IntoView {
    let core = core::new();
    let (view, render) = signal(core.view());
    let (event, set_event) = signal(Event::Home(Box::new(WeatherEvent::Show)));
    let (search_text, set_search_text) = signal(String::new());

    Effect::new(move |_| {
        core::update(&core, event.get(), render);
    });

    view! {
        <>
            <section class="section has-text-centered">
                <p class="title">{"Crux Weather Example"}</p>
                <p class="is-size-5">{"Rust Core, Rust Shell (Leptos)"}</p>
            </section>
            <section class="container">
                {move || {
                    let v = view.get();
                    match v.workflow {
                        WorkflowViewModel::Home { weather_data, favorites } => {
                            let set_event = set_event;
                            view! {
                                <HomeView
                                    weather_data=*weather_data
                                    favorites=favorites
                                    set_event=set_event
                                />
                            }.into_any()
                        }
                        WorkflowViewModel::Favorites { favorites, delete_confirmation } => {
                            let set_event = set_event;
                            view! {
                                <FavoritesView
                                    favorites=favorites
                                    delete_confirmation=delete_confirmation
                                    set_event=set_event
                                />
                            }.into_any()
                        }
                        WorkflowViewModel::AddFavorite { search_results } => {
                            let set_event = set_event;
                            view! {
                                <AddFavoriteView
                                    search_results=search_results
                                    set_event=set_event
                                    search_text=search_text
                                    set_search_text=set_search_text
                                />
                            }.into_any()
                        }
                    }
                }}
            </section>
        </>
    }
}
We create the core with core::new(), then set up two pairs of reactive
signals: (view, render) for the view model and (event, set_event) for
dispatching events. An Effect::new watches the event signal and calls
core::update whenever it changes. The view! macro—Leptos's JSX-like
syntax—matches on the WorkflowViewModel enum to decide which child
component to render, passing the relevant data and the set_event writer down.
We could do this differently: the core could stay in the root component, and we could pass an update callback through Leptos context, along with just the appropriate section of the view model for each component. It's up to you how you want to go about it.
Let's look at the HomeView as well, just to complete the picture:
#[component]
fn home_view(
weather_data: shared::weather::model::current_response::CurrentWeatherResponse,
favorites: Vec<shared::FavoriteView>,
set_event: WriteSignal<Event>,
) -> impl IntoView {
let wd = weather_data;
let has_data = wd.cod == 200;
view! {
<div class="box">
{if has_data {
let name = wd.name.clone();
let desc = wd.weather.first().map(|w| w.description.clone());
view! {
<div class="has-text-centered">
<h2 class="title is-4">{name}</h2>
<p class="is-size-1 has-text-weight-bold">
{format!("{:.1}°", wd.main.temp)}
</p>
{desc.map(|d| view! {
<p class="is-size-5">{d}</p>
})}
<div class="columns is-multiline is-centered mt-4">
<div class="column is-one-third">
<p class="heading">{"Feels Like"}</p>
<p>{format!("{:.1}°", wd.main.feels_like)}</p>
</div>
<div class="column is-one-third">
<p class="heading">{"Humidity"}</p>
<p>{format!("{}%", wd.main.humidity)}</p>
</div>
<div class="column is-one-third">
<p class="heading">{"Wind"}</p>
<p>{format!("{:.1} m/s", wd.wind.speed)}</p>
</div>
<div class="column is-one-third">
<p class="heading">{"Pressure"}</p>
<p>{format!("{} hPa", wd.main.pressure)}</p>
</div>
<div class="column is-one-third">
<p class="heading">{"Clouds"}</p>
<p>{format!("{}%", wd.clouds.all)}</p>
</div>
<div class="column is-one-third">
<p class="heading">{"Visibility"}</p>
<p>{format!("{} km", wd.visibility / 1000)}</p>
</div>
</div>
</div>
}.into_any()
} else {
view! {
<p class="has-text-centered">{"Loading weather data..."}</p>
}.into_any()
}}
</div>
{if !favorites.is_empty() {
view! {
<div class="box">
<h3 class="title is-5">{"Favorites"}</h3>
{favorites.into_iter().map(|fav| {
let name = fav.name.clone();
view! {
<div class="box">
<strong>{name.clone()}</strong>
{if let Some(w) = *fav.current {
view! {
<div class="columns is-multiline mt-2">
<div class="column is-one-third">
<p class="is-size-3 has-text-weight-bold">{format!("{:.1}°", w.main.temp)}</p>
</div>
<div class="column is-one-third">
{w.weather.first().map(|wd| view! {
<p>{wd.description.clone()}</p>
})}
</div>
<div class="column is-one-third">
<p>{format!("Humidity: {}%", w.main.humidity)}</p>
</div>
</div>
}.into_any()
} else {
view! { <p class="has-text-grey">{"Loading..."}</p> }.into_any()
}}
</div>
}
}).collect::<Vec<_>>()}
</div>
}.into_any()
} else {
view! { <div></div> }.into_any()
}}
<div class="buttons is-centered mt-4">
<button class="button is-info"
on:click=move |_| set_event.set(Event::Navigate(
Box::new(shared::Workflow::Favorites(FavoritesState::Idle))
))
>
{"Favorites"}
</button>
</div>
}
}
It checks whether the weather data has loaded (cod == 200), renders the weather
details in a grid using Bulma CSS classes, lists any favorites, and adds a
button which when clicked sets the event signal to navigate to the Favorites
screen.
This is quite a simple navigation setup in that it is a static set of screens we're managing. Sometimes a more dynamic navigation is necessary, but Leptos Router supports quite complex scenarios in a declarative fashion, so the general principle of naively projecting the view model into the user interface broadly works even there.
There isn't much more to it; the rest of the app is rinse and repeat. It is relatively rare to implement a new capability, so most of the work is in finessing the user interface.
What's next
Congratulations! You now know all you will likely need to build Crux apps. The following parts of the book cover advanced topics, other supported platforms, and the internals of Crux, should you be interested in how things work.
Happy building!
Middleware
Middleware is a relatively new and somewhat advanced feature for split effect handling, i.e. handling some effects in the shell and some still in the core, but outside the app's state loop.
Middleware can be useful when you have an existing third-party library written in Rust which you want to use, but which isn't written in a sans-I/O style with managed effects, or is otherwise incompatible with Crux. Sadly, that describes most libraries with side effects.
It is quite likely most apps will never need to use middleware. Before reaching for middleware, we encourage you to consider:
- Implementing the side-effect in each Shell using native, platform SDKs. Shared libraries give a productivity boost at first, but for the same reason Crux uses Capabilities, they can't always be the best platform citizens, and often rely on very low-level system APIs which compromise the experience, don't collaborate well with platform security measures, etc.
- Moving the coordination logic from the Rust implementation into a custom capability in the core, implemented on top of lower-level capabilities, e.g. HTTP. This would be the case for HTTP API SDK-type libraries, but may well not be practical at first.
Only if neither of these is a good option should you reach for middleware. The cost of using it is that effect handling becomes less straightforward, which may cause some headaches when debugging effect ordering, etc.
We are also still learning how middleware operates in the wild, and the API may change more than the rest of Crux tends to.
All that said, the feature is used in production with success today and should work well.
How it works
Middleware sits between the Core and the Shell in the effect processing pipeline. When the app requests effects, they pass through the middleware stack on their way to the shell. A middleware layer can intercept specific effect variants, handle them (performing the side-effect in Rust), and resolve the request — all without the shell ever seeing that effect. Effects the middleware doesn't handle pass through to the shell as normal.
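The routing decision at the heart of this can be pictured in plain Rust, independent of the crux_core API (all types below are hypothetical stand-ins, not the real crux types): a middleware layer either consumes an effect and produces a result in Rust, or forwards it unchanged.

```rust
#[derive(Debug, PartialEq)]
enum Effect {
    Render,
    Random { min: i32, max: i32 },
}

// What a middleware layer does conceptually: either the effect is
// consumed (and a result produced in Rust), or it is passed on.
#[derive(Debug, PartialEq)]
enum Routed {
    Handled(i32),    // resolved inside the middleware
    ToShell(Effect), // forwarded to the shell untouched
}

fn route(effect: Effect) -> Routed {
    match effect {
        // Intercept: handle Random in Rust. A fixed value stands in
        // for the real RNG here.
        Effect::Random { min, max: _ } => Routed::Handled(min),
        // Everything else passes through to the shell.
        other => Routed::ToShell(other),
    }
}
```

The real middleware stack does the same per effect variant, resolving the intercepted request back into the core rather than returning a value directly.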
We'll walk through the counter-middleware example to see how this works in practice. This example is a counter app that has a "random" button — when pressed, the counter changes by a random amount. The random number generation is handled by a middleware, rather than by the shell.
Defining the operation
First, we need an Operation type that describes the request and its output. This is the
same as defining a capability's protocol — a request type and a response type:
#[derive(Facet, Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct RandomNumberRequest(pub isize, pub isize); // request a random number in the given (min, max) range
#[derive(Facet, Debug, PartialEq, Eq, Deserialize)]
pub struct RandomNumber(pub isize);
impl Operation for RandomNumberRequest {
type Output = RandomNumber;
}
The RandomNumberRequest carries the range (min, max), and RandomNumber carries the result.
The Operation impl connects them so that Crux knows a RandomNumberRequest produces a
RandomNumber.
The app uses this operation as one variant of its Effect enum:
#[effect(facet_typegen)]
#[derive(Debug)]
pub enum Effect {
Render(RenderOperation),
Http(HttpRequest),
ServerSentEvents(SseRequest),
Random(RandomNumberRequest),
}
And the app can request a random number using Command::request_from_shell, just as it
would for any shell-handled effect:
Event::Random => Command::request_from_shell(RandomNumberRequest(-5, 5))
.map(|out| out.0)
.then_send(Event::UpdateBy),
The app doesn't know or care that this effect will be intercepted by middleware — it just requests the effect and handles the response.
Implementing EffectMiddleware
The EffectMiddleware trait is how you tell Crux what to do when it encounters a specific
effect. You implement try_process_effect, which receives the operation and an
EffectResolver that you use to send back the result.
Here's the RngMiddleware from the example:
use std::{
sync::mpsc::{Sender, channel},
thread::spawn,
};
use crux_core::middleware::{EffectMiddleware, EffectResolver};
use rand::rngs::SysRng;
use rand::{RngExt, SeedableRng, TryRng as _, rngs::StdRng};
use crate::capabilities::{RandomNumber, RandomNumberRequest};
pub struct RngMiddleware {
jobs_tx: Sender<(RandomNumberRequest, EffectResolver<RandomNumber>)>,
}
impl RngMiddleware {
pub fn new() -> Self {
let (jobs_tx, jobs_rx) = channel::<(RandomNumberRequest, EffectResolver<RandomNumber>)>();
// Persistent background worker
spawn(move || {
let mut sys_rng = SysRng;
let mut rng =
StdRng::seed_from_u64(sys_rng.try_next_u64().expect("could not seed RNG"));
while let Ok((RandomNumberRequest(from, to), mut resolver)) = jobs_rx.recv() {
#[allow(clippy::cast_sign_loss)]
let top = (to - from) as usize;
#[allow(clippy::cast_possible_wrap)]
let out = rng.random_range(0..top) as isize + from;
resolver.resolve(RandomNumber(out));
}
});
Self { jobs_tx }
}
}
impl EffectMiddleware for RngMiddleware {
type Op = RandomNumberRequest;
fn try_process_effect(
&self,
operation: RandomNumberRequest,
resolver: EffectResolver<RandomNumber>,
) {
self.jobs_tx
.send((operation, resolver))
.expect("Job failed to send to worker thread");
}
}
A few things to note:
- The type Op associated type tells Crux which operation this middleware handles (RandomNumberRequest in this case).
- try_process_effect receives the operation and an EffectResolver. You must call resolver.resolve(output) with the result when the work is done.
- The processing happens on a background thread. This is important — the middleware must not block the caller of process_event. On native targets this typically means spawning a thread; on WASM it means an async task (e.g. spawn_local).
- The background thread pattern shown here (a persistent worker with a channel) is a good approach when the middleware holds state (like the RNG seed). For stateless work, you could simply spawn a thread per request.
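The persistent-worker pattern itself is plain std::sync::mpsc. A stripped-down sketch, with a channel sender standing in for crux's EffectResolver and a deterministic midpoint standing in for the random draw:

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// A stand-in for crux's EffectResolver: anything that can carry the
// result back. Here it is just the sending half of a channel.
type Resolver = Sender<i64>;

struct Worker {
    jobs: Sender<((i64, i64), Resolver)>,
}

impl Worker {
    fn new() -> Self {
        let (jobs, rx) = channel::<((i64, i64), Resolver)>();
        // Persistent background worker, as in RngMiddleware: it owns any
        // state (the RNG seed in the real example) and loops over jobs.
        thread::spawn(move || {
            while let Ok(((from, to), resolver)) = rx.recv() {
                // Deterministic placeholder for the random draw: midpoint.
                let out = (from + to) / 2;
                let _ = resolver.send(out);
            }
        });
        Self { jobs }
    }

    // The equivalent of try_process_effect: hand the job to the worker
    // and return immediately, never blocking the caller.
    fn process(&self, range: (i64, i64), resolver: Resolver) {
        self.jobs.send((range, resolver)).expect("worker thread gone");
    }
}
```

The key property is that process returns immediately; the result arrives asynchronously through the resolver, exactly as the real middleware resolves its EffectResolver from the worker thread.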
Wiring it up
The middleware is composed with the Core in the FFI module, where you build the bridge between the core and the shell. Here's the key part from the uniffi (native) FFI setup:
pub fn new(shell: Arc<dyn CruxShell>) -> Self {
let core = Core::<Counter>::new()
.handle_effects_using(RngMiddleware::new())
.map_effect::<Effect>()
.bridge::<BincodeFfiFormat>(move |effect_bytes| match effect_bytes {
Ok(effect) => shell.process_effects(effect),
Err(e) => panic!("{e}"),
});
Self { core }
}
This reads top-to-bottom as a pipeline:
- Core::<Counter>::new() — creates the core, which produces the app's full Effect enum (including the Random variant).
- .handle_effects_using(RngMiddleware::new()) — wraps the core with the RNG middleware. Any Random effects are intercepted and handled here; all other effects pass through.
- .map_effect::<Effect>() — narrows the effect type. Since the middleware has consumed all Random effects, the shell will never see them. This step converts to a new Effect enum that doesn't include the Random variant, so your shell code doesn't need an unreachable branch.
- .bridge::<BincodeFfiFormat>(...) — creates the FFI bridge as usual.
The narrowed effect type
The FFI module defines its own Effect enum without the Random variant:
#[effect(facet_typegen)]
pub enum Effect {
Render(RenderOperation),
Http(HttpRequest),
ServerSentEvents(SseRequest),
}
And a From implementation to convert from the app's full effect type:
impl From<crate::app::Effect> for Effect {
fn from(effect: crate::app::Effect) -> Self {
match effect {
crate::Effect::Render(request) => Effect::Render(request),
crate::Effect::Http(request) => Effect::Http(request),
crate::Effect::ServerSentEvents(request) => Effect::ServerSentEvents(request),
crate::Effect::Random(_) => panic!("Encountered a Random effect"),
}
}
}
The Random arm panics because it should never be reached — the middleware handles all
Random effects before they get here.
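If you would rather surface a violated invariant as an error than panic, the same narrowing can be written as a fallible conversion instead. A sketch with hypothetical local enums, not the generated crux effect types:

```rust
#[derive(Debug, PartialEq)]
enum FullEffect {
    Render,
    Http(String),
    Random(i32),
}

#[derive(Debug, PartialEq)]
enum NarrowEffect {
    Render,
    Http(String),
}

impl TryFrom<FullEffect> for NarrowEffect {
    type Error = &'static str;

    fn try_from(effect: FullEffect) -> Result<Self, Self::Error> {
        match effect {
            FullEffect::Render => Ok(NarrowEffect::Render),
            FullEffect::Http(req) => Ok(NarrowEffect::Http(req)),
            // Surface the invariant violation to the caller instead
            // of panicking.
            FullEffect::Random(_) => Err("Random should have been handled by middleware"),
        }
    }
}
```

The panic in the example above is a reasonable choice too: reaching that arm means the middleware wiring is wrong, which is a programming error rather than a recoverable condition.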
Testing
The app can be tested exactly the same way as any other Crux app — the middleware is not
involved in unit tests. You test the app's update function directly, treating Random
as a normal effect:
#[test]
fn random_change() {
let app = Counter;
let mut model = Model::default();
let mut cmd = app.update(Event::Random, &mut model);
// the app should request a random number
let mut request = cmd.effects().next().unwrap().expect_random();
assert_eq!(request.operation, RandomNumberRequest(-5, 5));
request.resolve(RandomNumber(-2)).unwrap();
// And start an UpdateBy the number
let event = cmd.events().next().unwrap();
assert_eq!(event, Event::UpdateBy(-2));
}
This is one of the nice properties of middleware: the app logic remains pure and testable, and the middleware is a separate concern that's composed at the FFI boundary.
Summary
To add a middleware to your app:
- Define an Operation — a request type and output type, just like a capability protocol.
- Implement EffectMiddleware — handle the operation and resolve the result, typically on a background thread.
- Wire it up — use .handle_effects_using() in your FFI setup to intercept the effects, and optionally .map_effect() to narrow the effect type for the shell.
For the full API reference, see the middleware module docs.
Other platforms
This section is a collection of instructions for using Crux with platforms other than the ones we've chosen to write Part I and Part II for. The support is just as mature for all of them; we are simply more familiar with the four we've shown in detail.
You can read about using Crux with:
- Dioxus — Rust web framework (WebAssembly)
- React Router — TypeScript web framework (WebAssembly)
- Yew — Rust web framework (WebAssembly)
- Tauri — Desktop/mobile app with a web frontend and Rust backend
- Ratatui — Terminal UI (TUI) app in Rust
Web — TypeScript and React Router
These are the steps to set up and run a simple TypeScript Web app that calls into a shared core.
This walk-through assumes you have already added the shared and shared_types libraries to your repo, as described in Shared core and types.
There are many frameworks available for writing Web applications with JavaScript/TypeScript. We've chosen React with React Router for this walk-through. However, a similar setup would work for other frameworks.
Create a React Router App
For this walk-through, we'll use the pnpm package manager
for no reason other than we like it the most! You can use npm exactly the same
way, though.
Let's create a simple React Router app for TypeScript with pnpm. You can give
it a name and then probably accept the defaults.
pnpm create react-router@latest
Compile our Rust shared library
When we build our app, we also want to compile the Rust core to WebAssembly so that it can be referenced from our code.
To do this, we'll use
wasm-pack, which you can
install like this:
# with homebrew
brew install wasm-pack
# or directly
curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
Now that we have wasm-pack installed, we can build our shared library to
WebAssembly for the browser.
(cd shared && wasm-pack build --target web)
You might want to add a wasm:build script to your package.json
file, and call it when you build your React Router project.
{
"scripts": {
"build": "pnpm run wasm:build && react-router build",
"dev": "pnpm run wasm:build && react-router dev",
"wasm:build": "cd ../shared && wasm-pack build --target web"
}
}
Add the shared library as a Wasm package to your web-react-router project
cd web-react-router
pnpm add ../shared/pkg
We want Vite to bundle our shared Wasm package, so we register the wasm and
React Router plugins in vite.config.ts:
import { reactRouter } from "@react-router/dev/vite";
import wasm from "vite-plugin-wasm";
import { defineConfig } from "vite";
export default defineConfig({
plugins: [wasm(), reactRouter()],
});
Add the Shared Types
To generate the shared types for TypeScript, we can just run cargo build from
the root of our repository. You can check that they have been generated
correctly:
ls --tree shared_types/generated/typescript
shared_types/generated/typescript
├── bincode
│ ├── bincodeDeserializer.d.ts
│ ├── bincodeDeserializer.js
│ ├── bincodeDeserializer.ts
│ ├── bincodeSerializer.d.ts
│ ├── bincodeSerializer.js
│ ├── bincodeSerializer.ts
│ ├── mod.d.ts
│ ├── mod.js
│ └── mod.ts
├── node_modules
│ └── typescript -> .pnpm/typescript@4.8.4/node_modules/typescript
├── package.json
├── pnpm-lock.yaml
├── serde
│ ├── binaryDeserializer.d.ts
│ ├── binaryDeserializer.js
│ ├── binaryDeserializer.ts
│ ├── binarySerializer.d.ts
│ ├── binarySerializer.js
│ ├── binarySerializer.ts
│ ├── deserializer.d.ts
│ ├── deserializer.js
│ ├── deserializer.ts
│ ├── mod.d.ts
│ ├── mod.js
│ ├── mod.ts
│ ├── serializer.d.ts
│ ├── serializer.js
│ ├── serializer.ts
│ ├── types.d.ts
│ ├── types.js
│ └── types.ts
├── tsconfig.json
└── types
├── shared_types.d.ts
├── shared_types.js
└── shared_types.ts
You can see that it also generates an npm package that we can add directly to
our project.
pnpm add ../shared_types/generated/typescript
Load the Wasm binary when our React Router app starts
The app/entry.client.tsx file is where we can load our Wasm binary. We can
import the shared package and then call the init function to load the Wasm
binary.
Note that we import the wasm binary as well — Vite will automatically bundle
it for us, giving it a cache-friendly hash-based name.
import { startTransition, StrictMode } from "react";
import { hydrateRoot } from "react-dom/client";
import { HydratedRouter } from "react-router/dom";
import init from "shared/shared";
import wasmUrl from "shared/shared_bg.wasm?url";
init(wasmUrl).then(() => {
startTransition(() => {
hydrateRoot(
document,
<StrictMode>
<HydratedRouter />
</StrictMode>
);
});
});
Create some UI
We will use the simple counter example, which has shared and shared_types libraries that will work with the following example code.
Simple counter example
A simple app that increments, decrements and resets a counter.
Wrap the core to support capabilities
First, let's add some boilerplate code to wrap our core and handle the
capabilities that we are using. For this example, we only need to support the
Render capability, which triggers a render of the UI.
This code that wraps the core only needs to be written once — it only grows when we need to support additional capabilities.
Edit app/core.ts to look like the following. This code sends our
(UI-generated) events to the core, and handles any effects that the core asks
for. In this simple example, we aren't calling any HTTP APIs or handling any
side effects other than rendering the UI, so we just handle this render effect
by updating the component's view hook with the core's ViewModel.
Notice that we have to serialize and deserialize the data that we pass between the core and the shell. This is because the core is running in a separate WebAssembly instance, and so we can't just pass the data directly.
import type { Dispatch, SetStateAction } from "react";
import { CoreFFI } from "shared";
import type { Effect, Event } from "shared_types/app";
import { EffectVariantRender, Request, ViewModel } from "shared_types/app";
import { BincodeDeserializer, BincodeSerializer } from "shared_types/bincode";
import init_core from "shared/shared";
export class Core {
core: CoreFFI | null = null;
initializing: Promise<void> | null = null;
setState: Dispatch<SetStateAction<ViewModel>>;
constructor(setState: Dispatch<SetStateAction<ViewModel>>) {
// Don't initialize CoreFFI here - wait for WASM to be loaded
this.setState = setState;
}
initialize(shouldLoad: boolean): Promise<void> {
if (this.core) {
return Promise.resolve();
}
if (!this.initializing) {
const load = shouldLoad ? init_core() : Promise.resolve();
this.initializing = load
.then(() => {
this.core = new CoreFFI();
this.setState(this.view());
})
.catch((error) => {
this.initializing = null;
console.error("Failed to initialize wasm core:", error);
});
}
return this.initializing;
}
view(): ViewModel {
if (!this.core) {
throw new Error("Core not initialized. Call initialize() first.");
}
return deserializeView(this.core.view());
}
update(event: Event) {
if (!this.core) {
throw new Error("Core not initialized. Call initialize() first.");
}
const serializer = new BincodeSerializer();
event.serialize(serializer);
const effects = this.core.update(serializer.getBytes());
const requests = deserializeRequests(effects);
for (const { effect } of requests) {
this.processEffect(effect);
}
}
private processEffect(effect: Effect) {
switch (effect.constructor) {
case EffectVariantRender: {
this.setState(this.view());
break;
}
}
}
}
function deserializeRequests(bytes: Uint8Array): Request[] {
const deserializer = new BincodeDeserializer(bytes);
const len = deserializer.deserializeLen();
const requests: Request[] = [];
for (let i = 0; i < len; i++) {
const request = Request.deserialize(deserializer);
requests.push(request);
}
return requests;
}
function deserializeView(bytes: Uint8Array): ViewModel {
return ViewModel.deserialize(new BincodeDeserializer(bytes));
}
That switch statement, above, is where you would handle any other effects that
your core might ask for. For example, if your core needs to make an HTTP
request, you would handle that here. To see an example of this, take a look at
the
counter example
in the Crux repository.
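It can help to see how little the byte boundary itself involves. The sketch below hand-rolls a one-byte encoding for a tiny hypothetical Event enum, standing in for what bincode does for the real generated types: only bytes cross the Wasm boundary, and each side reconstructs typed values from them.

```rust
// The FFI boundary only passes bytes. bincode does this for real; this
// hand-rolled encoding of a tiny Event enum just illustrates the idea.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Event {
    Reset,
    Increment,
    Decrement,
}

fn serialize(event: Event) -> Vec<u8> {
    // One tag byte per variant, like an externally-tagged enum index.
    vec![match event {
        Event::Reset => 0,
        Event::Increment => 1,
        Event::Decrement => 2,
    }]
}

fn deserialize(bytes: &[u8]) -> Option<Event> {
    match bytes {
        [0] => Some(Event::Reset),
        [1] => Some(Event::Increment),
        [2] => Some(Event::Decrement),
        _ => None, // unknown tag: the two sides disagree on the schema
    }
}
```

This is also why the generated shared_types package matters: both sides must agree on the encoding, and generating it from the single Rust definition keeps them in sync.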
Create a component to render the UI
Edit app/routes/_index.tsx to look like the following. Notice that we pass the
setState hook to the update function so that we can update the state in
response to a render effect from the core (as seen above).
import { useEffect, useRef, useState } from "react";
import {
ViewModel,
EventVariantReset,
EventVariantIncrement,
EventVariantDecrement,
} from "shared_types/app";
import { Core } from "../core";
export const meta = () => {
return [
{ title: "Crux Counter — React Router" },
{ name: "description", content: "Crux Counter with React Router" },
];
};
export default function Index() {
const [view, setView] = useState(new ViewModel(""));
const core = useRef(new Core(setView));
useEffect(() => {
void core.current.initialize(false);
}, []);
return (
<main>
<section className="box container has-text-centered m-5">
<p className="is-size-5">{view.count}</p>
<div className="buttons section is-centered">
<button
className="button is-primary is-danger"
onClick={() => core.current.update(new EventVariantReset())}
>
{"Reset"}
</button>
<button
className="button is-primary is-success"
onClick={() => core.current.update(new EventVariantIncrement())}
>
{"Increment"}
</button>
<button
className="button is-primary is-warning"
onClick={() => core.current.update(new EventVariantDecrement())}
>
{"Decrement"}
</button>
</div>
</section>
</main>
);
}
Now all we need is some CSS.
To add a CSS stylesheet, we can add it to the Links export in the
app/root.tsx file.
export const links: LinksFunction = () => [
{
rel: "stylesheet",
href: "https://cdn.jsdelivr.net/npm/bulma@0.9.4/css/bulma.min.css",
},
];
Build and serve our app
We can build our app, and serve it for the browser, in one simple step.
pnpm dev
Web — Rust and Yew
These are the steps to set up and run a simple Rust Web app that calls into a shared core.
This walk-through assumes you have already added the shared and shared_types libraries to your repo, as described in Shared core and types.
There are many frameworks available for writing Web applications in Rust. We've chosen Yew for this walk-through because it is arguably the most mature. However, a similar setup would work for any framework that compiles to WebAssembly.
Create a Yew App
Our Yew app is just a new Rust project, which we can create with Cargo. For this
example we'll call it web-yew.
cargo new web-yew
We'll also want to add this new project to our Cargo workspace, by editing the
root Cargo.toml file.
[workspace]
members = ["shared", "web-yew"]
Now we can start fleshing out our project. Let's add some dependencies to
web-yew/Cargo.toml.
[package]
name = "web-yew"
version = "0.1.0"
authors.workspace = true
repository.workspace = true
edition.workspace = true
license.workspace = true
keywords.workspace = true
rust-version.workspace = true
[lints]
workspace = true
[dependencies]
shared = { path = "../shared" }
yew = { version = "0.23.0", features = ["csr"] }
We'll also need a file called index.html, to serve our app.
<!doctype html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Yew Counter</title>
<link
rel="stylesheet"
href="https://cdn.jsdelivr.net/npm/bulma@0.9.4/css/bulma.min.css"
/>
<link data-trunk rel="rust" />
</head>
</html>
Create some UI
There are several more advanced examples of Yew apps in the Crux repository.
However, we will use the
simple counter example,
which has shared and shared_types libraries that will work with the
following example code.
Simple counter example
A simple app that increments, decrements and resets a counter.
Wrap the core to support capabilities
First, let's add some boilerplate code to wrap our core and handle the
capabilities that we are using. For this example, we only need to support the
Render capability, which triggers a render of the UI.
This code that wraps the core only needs to be written once — it only grows when we need to support additional capabilities.
Edit src/core.rs to look like the following. This code sends our
(UI-generated) events to the core, and handles any effects that the core asks
for. In this simple example, we aren't calling any HTTP APIs or handling any
side effects other than rendering the UI, so we just handle this render effect
by sending it directly back to the Yew component. Note that we wrap the effect
in a Message enum because Yew components have a single associated type for
messages and we need that to include both the events that the UI raises (to send
to the core) and the effects that the core uses to request side effects from the
shell.
Also note that because both our core and our shell are written in Rust (and run in the same memory space), we do not need to serialize and deserialize the data that we pass between them. We can just pass the data directly.
use shared::{Counter, Effect, Event};
use std::rc::Rc;
use yew::Callback;
pub type Core = Rc<shared::Core<Counter>>;
pub enum Message {
Event(Event),
#[allow(dead_code)]
Effect(Effect),
}
pub fn new() -> Core {
Rc::new(shared::Core::new())
}
pub fn update(core: &Core, event: Event, callback: &Callback<Message>) {
for effect in core.process_event(event) {
process_effect(core, effect, callback);
}
}
pub fn process_effect(_core: &Core, effect: Effect, callback: &Callback<Message>) {
match effect {
render @ Effect::Render(_) => callback.emit(Message::Effect(render)),
}
}
That match statement, above, is where you would handle any other effects that
your core might ask for. For example, if your core needs to make an HTTP
request, you would handle that here. To see an example of this, take a look at
the
counter example
in the Crux repository.
Edit src/main.rs to look like the following. The update function is
interesting here. We set up a Callback to receive messages from the core and
feed them back into Yew's event loop. Then we test to see if the incoming
message is an Event (raised by UI interaction) and if so we use it to update
the core, returning false to indicate that the re-render will happen later. In
this app, we can assume that any other message is a render Effect and so we
return true indicating to Yew that we do want to re-render.
mod core;
use crate::core::{Core, Message};
use shared::Event;
use yew::prelude::*;
#[derive(Default)]
struct RootComponent {
core: Core,
}
impl Component for RootComponent {
type Message = Message;
type Properties = ();
fn create(_ctx: &Context<Self>) -> Self {
Self { core: core::new() }
}
fn update(&mut self, ctx: &Context<Self>, msg: Self::Message) -> bool {
let link = ctx.link().clone();
let callback = Callback::from(move |msg| {
link.send_message(msg);
});
if let Message::Event(event) = msg {
core::update(&self.core, event, &callback);
false
} else {
true
}
}
fn view(&self, ctx: &Context<Self>) -> Html {
let link = ctx.link();
let view = self.core.view();
html! {
<section class="box container has-text-centered m-5">
<p class="is-size-5">{&view.count}</p>
<div class="buttons section is-centered">
<button class="button is-primary is-danger"
onclick={link.callback(|_| Message::Event(Event::Reset))}>
{"Reset"}
</button>
<button class="button is-primary is-success"
onclick={link.callback(|_| Message::Event(Event::Increment))}>
{"Increment"}
</button>
<button class="button is-primary is-warning"
onclick={link.callback(|_| Message::Event(Event::Decrement))}>
{"Decrement"}
</button>
</div>
</section>
}
}
}
fn main() {
yew::Renderer::<RootComponent>::new().render();
}
Build and serve our app
The easiest way to compile the app to WebAssembly and serve it in our web page
is to use trunk, which we can install with
Homebrew (brew install trunk) or Cargo
(cargo install trunk).
We can build our app, serve it and open it in our browser, in one simple step.
trunk serve --open
Web — Rust and Dioxus
These are the steps to set up and run a simple Rust Web app that calls into a shared core.
This walk-through assumes you have already added the shared and shared_types libraries to your repo, as described in Shared core and types.
There are many frameworks available for writing Web applications in Rust. We've chosen Dioxus for this walk-through. However, a similar setup would work for other frameworks that compile to WebAssembly.
Create a Dioxus App
Dioxus has a CLI tool called dx, which can initialize, build and serve our app.
cargo install dioxus-cli
Test that the executable is available.
dx --help
Before we create a new app, let's add it to our Cargo workspace (so that the
dx tool won't complain), by editing the root Cargo.toml file.
For this example, we'll call the app web-dioxus.
[workspace]
members = ["shared", "web-dioxus"]
Now we can create a new Dioxus app. The tool asks for a project name, which
we'll provide as web-dioxus.
dx create
cd web-dioxus
Now we can start fleshing out our project. Let's add some dependencies to the
project's Cargo.toml.
[package]
name = "web-dioxus"
version = "0.1.0"
authors.workspace = true
repository.workspace = true
edition.workspace = true
license.workspace = true
keywords.workspace = true
rust-version.workspace = true
[lints]
workspace = true
[dependencies]
console_error_panic_hook = "0.1.7"
dioxus = { version = "0.7.3", features = ["web"] }
dioxus-logger = "0.7.3"
futures-util = "0.3.32"
shared = { path = "../shared" }
tracing = "0.1.44"
Create some UI
There is a slightly more advanced example of a Dioxus app in the Crux repository.
However, we will use the simple counter example, which has shared and shared_types libraries that will work with the following example code.
Simple counter example
A simple app that increments, decrements and resets a counter.
Wrap the core to support capabilities
First, let's add some boilerplate code to wrap our core and handle the
capabilities that we are using. For this example, we only need to support the
Render capability, which triggers a render of the UI.
This code that wraps the core only needs to be written once — it only grows when we need to support additional capabilities.
Edit src/core.rs to look like the following. This code sends our
(UI-generated) events to the core, and handles any effects that the core asks
for. In this simple example, we aren't calling any HTTP APIs or handling any
side effects other than rendering the UI, so we just handle this render effect
by updating the component's view hook with the core's ViewModel.
Because both our core and our shell are written in Rust (and run in the same memory space), we do not need to serialize and deserialize the data that we pass between them. We can just pass the data directly.
use std::rc::Rc;
use dioxus::{
prelude::{Signal, UnboundedReceiver},
signals::WritableExt as _,
};
use futures_util::StreamExt;
use shared::{Counter, Effect, Event, ViewModel};
use tracing::debug;
type Core = Rc<shared::Core<Counter>>;
pub struct CoreService {
core: Core,
view: Signal<ViewModel>,
}
impl CoreService {
pub fn new(view: Signal<ViewModel>) -> Self {
debug!("initializing core service");
Self {
core: Rc::new(shared::Core::new()),
view,
}
}
#[allow(clippy::future_not_send)] // WASM is single-threaded
pub async fn run(&self, rx: &mut UnboundedReceiver<Event>) {
let mut view = self.view;
view.set(self.core.view());
while let Some(event) = rx.next().await {
self.update(event, &mut view);
}
}
fn update(&self, event: Event, view: &mut Signal<ViewModel>) {
debug!("event: {:?}", event);
for effect in &self.core.process_event(event) {
process_effect(&self.core, effect, view);
}
}
}
fn process_effect(core: &Core, effect: &Effect, view: &mut Signal<ViewModel>) {
debug!("effect: {:?}", effect);
match effect {
Effect::Render(_) => {
view.set(core.view());
}
}
}
That match statement, above, is where you would handle any other effects that
your core might ask for. For example, if your core needs to make an HTTP
request, you would handle that here. To see an example of this, take a look at
the counter example in the Crux repository.
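To make the shape of that extension concrete, here is a self-contained sketch with hypothetical Render and Http variants standing in for the generated Effect enum (the names and the string-returning dispatcher are illustrative only — a real shell would perform the request and resolve it back into the core):

```rust
// Hypothetical stand-ins for the generated Effect type and its
// operations; these are not the real Crux types.
pub struct RenderOperation;

pub struct HttpOperation {
    pub url: String,
}

pub enum Effect {
    Render(RenderOperation),
    Http(HttpOperation),
}

// Each variant is routed to the shell-side code that can execute it,
// mirroring the match statement in process_effect above. Here we just
// return a description of what the shell would do.
pub fn describe_effect(effect: &Effect) -> String {
    match effect {
        Effect::Render(_) => "re-render the UI from core.view()".to_string(),
        Effect::Http(op) => format!("fetch {} and resolve the request", op.url),
    }
}
```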
Edit src/main.rs to look like the following. This code sets up the Dioxus app
and connects the core to the UI. We create a signal for the view state
and a coroutine that receives events from the UI and forwards them to the core.
mod core;
use dioxus::prelude::*;
use tracing::Level;
use shared::{Event, ViewModel};
use core::CoreService;
#[allow(clippy::volatile_composites)] // false positive from Dioxus asset! macro internals
#[component]
fn App() -> Element {
let view = use_signal(ViewModel::default);
let core = use_coroutine(move |mut rx| {
let svc = CoreService::new(view);
async move { svc.run(&mut rx).await }
});
rsx! {
document::Link {
rel: "stylesheet",
href: asset!("../public/css/bulma.min.css")
}
main {
section { class: "section has-text-centered",
p { class: "is-size-5", "{view().count}" }
div { class: "buttons section is-centered",
button { class:"button is-primary is-danger",
onclick: move |_| {
core.send(Event::Reset);
},
"Reset"
}
button { class:"button is-primary is-success",
onclick: move |_| {
core.send(Event::Increment);
},
"Increment"
}
button { class:"button is-primary is-warning",
onclick: move |_| {
core.send(Event::Decrement);
},
"Decrement"
}
}
}
}
}
}
fn main() {
dioxus_logger::init(Level::DEBUG).expect("failed to init logger");
console_error_panic_hook::set_once();
launch(App);
}
We also need a Dioxus.toml configuration file to set up the app title and
asset directory.
[application]
name = "web-dioxus"
default_platform = "web"
out_dir = "dist"
asset_dir = "public"
[web.app]
title = "Crux Simple Counter example"
[web.watcher]
reload_html = true
watch_path = ["src", "public"]
Build and serve our app
Now we can build our app and serve it in one simple step.
dx serve
Desktop/Mobile — Tauri
These are the steps to set up and run a Crux app as a desktop (and mobile) application using Tauri. Tauri uses a native webview to render the UI, with a Rust backend — making it a natural fit for Crux.
This walk-through assumes you have already added the shared library to your repo, as described in Shared core and types.
Tauri apps have a Rust backend (where the Crux core lives) and a web frontend (React, in this example). Because the core runs directly in the Rust backend process, there is no need for WebAssembly or FFI — the shell calls the core directly and communicates with the frontend via Tauri's event system.
Create a Tauri App
Install the Tauri CLI if you haven't already:
cargo install tauri-cli
Create a new Tauri app. Tauri's init command will scaffold the project
structure for you — choose React as the frontend framework.
cargo tauri init
Project structure
A Tauri project has two parts:
- src-tauri/ — the Rust backend, where the Crux core lives
- src/ — the web frontend (React + TypeScript in this example)
Backend dependencies
Add the shared library and Tauri to your src-tauri/Cargo.toml:
[package]
name = "counter_tauri"
version = "0.1.0"
authors.workspace = true
repository.workspace = true
edition.workspace = true
license.workspace = true
keywords.workspace = true
rust-version.workspace = true
[lib]
name = "tauri_lib"
crate-type = ["staticlib", "cdylib", "rlib"]
[build-dependencies]
tauri-build = { version = "2.5.6", features = [] }
[dependencies]
shared = { path = "../../shared" }
tauri = { version = "2.10.3", features = [] }
[features]
custom-protocol = ["tauri/custom-protocol"]
[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = [
'cfg(mobile)',
'cfg(desktop)',
] }
Frontend dependencies
Your package.json should include the Tauri API package for communicating
between the frontend and backend:
{
"name": "tauri",
"private": true,
"version": "0.0.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "tsc && vite build",
"preview": "vite preview",
"tauri": "tauri",
"postinstall": "mkdir -p dist"
},
"dependencies": {
"@tauri-apps/api": "^2.10.1",
"react": "^19.2.4",
"react-dom": "^19.2.4"
},
"devDependencies": {
"@tauri-apps/cli": "^2.10.1",
"@types/node": "^25.5.0",
"@types/react": "^19.2.14",
"@types/react-dom": "^19.2.3",
"@vitejs/plugin-react": "^6.0.1",
"typescript": "^5.9.3",
"vite": "^8.0.0"
},
"packageManager": "pnpm@9.6.0+sha512.38dc6fba8dba35b39340b9700112c2fe1e12f10b17134715a4aa98ccf7bb035e76fd981cf0bb384dfa98f8d6af5481c2bef2f4266a24bfa20c34eb7147ce0b5e"
}
The Rust backend
The Rust backend is where the Crux core runs. We create a static Core
instance and expose Tauri commands that forward events to the core. When the
core requests a Render effect, we emit a Tauri event to the frontend
with the updated view model.
use shared::{Core, Counter, Effect, Event};
use std::sync::{Arc, LazyLock};
use tauri::Emitter;
static CORE: LazyLock<Arc<Core<Counter>>> = LazyLock::new(|| Arc::new(Core::new()));
fn handle_event(event: Event, core: &Arc<Core<Counter>>, app: &tauri::AppHandle) {
for effect in core.process_event(event) {
process_effect(effect, core, app);
}
}
fn process_effect(effect: Effect, core: &Arc<Core<Counter>>, app: &tauri::AppHandle) {
match effect {
Effect::Render(_) => {
let view = core.view();
let _ = app.emit("render", view);
}
}
}
#[tauri::command]
async fn increment(app_handle: tauri::AppHandle) {
handle_event(Event::Increment, &CORE, &app_handle);
}
#[tauri::command]
async fn decrement(app_handle: tauri::AppHandle) {
handle_event(Event::Decrement, &CORE, &app_handle);
}
#[tauri::command]
async fn reset(app_handle: tauri::AppHandle) {
handle_event(Event::Reset, &CORE, &app_handle);
}
/// The main entry point for Tauri
/// # Panics
/// If the Tauri application fails to run.
#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
tauri::Builder::default()
.invoke_handler(tauri::generate_handler![increment, decrement, reset])
.run(tauri::generate_context!())
.expect("error while running tauri application");
}
A few things to note:
- The Core is stored in a LazyLock<Arc<...>> so it can be shared across Tauri command handlers.
- Each user action (increment, decrement, reset) is a separate Tauri command that sends the corresponding event to the core.
- The Render effect is handled by calling app.emit("render", view), which sends the serialized ViewModel to the frontend as a Tauri event.
- Because the core is running directly in Rust, there is no serialization boundary between the shell and the core — we call core.process_event() directly.
The React frontend
The frontend listens for render events from the backend and updates the UI.
User interactions invoke Tauri commands, which run in the Rust backend.
import { useEffect, useState } from "react";
import { invoke } from "@tauri-apps/api/core";
import { listen, UnlistenFn } from "@tauri-apps/api/event";
type ViewModel = {
count: string;
};
const initialState: ViewModel = {
count: "",
};
function App() {
const [view, setView] = useState(initialState);
useEffect(() => {
let unlistenToRender: UnlistenFn;
listen<ViewModel>("render", (event) => {
setView(event.payload);
}).then((unlisten) => {
unlistenToRender = unlisten;
});
// trigger initial render
invoke("reset");
return () => {
unlistenToRender?.();
};
}, []);
return (
<main>
<section className="section has-text-centered">
<p className="title">Crux Counter Example</p>
<p className="is-size-5">Rust Core, Rust Shell (Tauri + React)</p>
</section>
<section className="container has-text-centered">
<p className="is-size-5">{view.count}</p>
<div className="buttons section is-centered">
<button
className="button is-primary is-danger"
onClick={() => invoke("reset")}
>
{"Reset"}
</button>
<button
className="button is-primary is-success"
onClick={() => invoke("increment")}
>
{"Increment"}
</button>
<button
className="button is-primary is-warning"
onClick={() => invoke("decrement")}
>
{"Decrement"}
</button>
</div>
</section>
</main>
);
}
export default App;
The frontend is straightforward:
- On mount, we call listen("render", ...) to receive view model updates from the backend, and invoke reset to trigger an initial render.
- Button clicks call invoke("increment"), invoke("decrement"), etc. — these are the Tauri commands defined in our Rust backend.
- There is no serialization code in the frontend — Tauri handles the serialization of the ViewModel struct automatically.
Build and run
cargo tauri dev
Terminal — Rust and Ratatui
These are the steps to set up and run a Crux app as a terminal UI (TUI) application using Ratatui. This is a great way to build lightweight, keyboard-driven interfaces that share the same core logic as your web and mobile apps.
This walk-through assumes you have already added the shared library to your repo, as described in Shared core and types.
Because both the core and the shell are written in Rust and run in the same process, there is no FFI boundary — the shell calls the core directly with no serialization overhead.
Create the project
Our TUI app is just a new Rust project, which we can create with Cargo.
cargo new tui
Add it to your Cargo workspace by editing the root Cargo.toml:
[workspace]
members = ["shared", "tui"]
Add the dependencies to tui/Cargo.toml:
[package]
name = "tui"
version = "0.1.0"
authors.workspace = true
edition.workspace = true
repository.workspace = true
license.workspace = true
keywords.workspace = true
rust-version.workspace = true
[lints]
workspace = true
[dependencies]
shared = { path = "../shared" }
ratatui = "0.30.0"
crossterm = "0.29.0"
We depend on shared (our Crux core), ratatui (the TUI framework), and
crossterm (for terminal input handling).
The shell
The entire TUI shell lives in a single main.rs. Let's walk through the key
parts.
use std::io;
use crossterm::event::{self, Event, KeyCode, KeyEvent, KeyEventKind};
use ratatui::{
DefaultTerminal, Frame,
buffer::Buffer,
layout::{Constraint, Layout, Rect},
style::{Color, Style, Styled, Stylize},
symbols::border,
text::{Line, Text},
widgets::{Block, Paragraph, Widget},
};
use shared::{Core, Counter, Effect, Event as AppEvent};
const BUTTONS: [(&str, AppEvent); 3] = [
("Increment", AppEvent::Increment),
("Decrement", AppEvent::Decrement),
("Reset", AppEvent::Reset),
];
#[allow(clippy::cast_possible_truncation)]
const NUM_BUTTONS: u16 = BUTTONS.len() as u16;
struct App {
core: Core<Counter>,
selected: usize,
exit: bool,
}
impl App {
fn new() -> Self {
Self {
core: Core::new(),
selected: 0,
exit: false,
}
}
fn run(&mut self, terminal: &mut DefaultTerminal) -> io::Result<()> {
while !self.exit {
terminal.draw(|frame| self.draw(frame))?;
self.handle_events()?;
}
Ok(())
}
fn draw(&self, frame: &mut Frame) {
frame.render_widget(self, frame.area());
}
fn handle_events(&mut self) -> io::Result<()> {
match event::read()? {
Event::Key(key_event) if key_event.kind == KeyEventKind::Press => {
self.handle_key_event(key_event);
}
_ => {}
}
Ok(())
}
fn handle_key_event(&mut self, key_event: KeyEvent) {
match key_event.code {
KeyCode::Char('q') | KeyCode::Esc => self.exit = true,
KeyCode::Left | KeyCode::Char('h') => self.select_prev(),
KeyCode::Right | KeyCode::Char('l') => self.select_next(),
KeyCode::Enter | KeyCode::Char(' ') => self.press_selected(),
KeyCode::Char('+' | '=') => self.dispatch(AppEvent::Increment),
KeyCode::Char('-') => self.dispatch(AppEvent::Decrement),
KeyCode::Char('0') => self.dispatch(AppEvent::Reset),
_ => {}
}
}
const fn select_prev(&mut self) {
self.selected = self.selected.saturating_sub(1);
}
const fn select_next(&mut self) {
if self.selected < BUTTONS.len() - 1 {
self.selected += 1;
}
}
fn press_selected(&self) {
let (_, ref event) = BUTTONS[self.selected];
self.dispatch(event.clone());
}
fn dispatch(&self, event: AppEvent) {
for effect in self.core.process_event(event) {
match effect {
Effect::Render(_) => {
// The shell re-renders on the next loop iteration
}
}
}
}
}
impl Widget for &App {
fn render(self, area: Rect, buf: &mut Buffer) {
let view = self.core.view();
let title = Line::from(" Simple Counter ".bold());
let instructions = Line::from(vec![
" Select ".into(),
"<←→>".blue().bold(),
" Confirm ".into(),
"<Enter>".blue().bold(),
" Quit ".into(),
"<Q> ".blue().bold(),
]);
let block = Block::bordered()
.title(title.centered())
.title_bottom(instructions.centered())
.border_set(border::THICK);
let inner = block.inner(area);
block.render(area, buf);
// Split inner into: space for subtitle | main content (count+buttons) | bottom pad
// count(3) + gap(1) + buttons(3) = 7
let [top_space, main_content, _] = Layout::vertical([
Constraint::Fill(1),
Constraint::Length(7),
Constraint::Fill(1),
])
.areas(inner);
// -- Subtitle (vertically centered in the space above the counter) --
let [_, subtitle_area, _] = Layout::vertical([
Constraint::Fill(1),
Constraint::Length(1),
Constraint::Fill(1),
])
.areas(top_space);
let sub_title = Line::from("Rust Core, Rust Shell (Ratatui)".bold());
Paragraph::new(sub_title)
.centered()
.render(subtitle_area, buf);
// -- Main content areas --
let [count_area, _, buttons_area] = Layout::vertical([
Constraint::Length(3),
Constraint::Length(1),
Constraint::Length(3),
])
.areas(main_content);
// -- Count display --
let counter_text = Text::from(vec![Line::from(view.count.yellow().bold())]);
let count_block = Block::bordered().border_set(border::PLAIN);
Paragraph::new(counter_text)
.centered()
.block(count_block)
.render(count_area, buf);
// -- Buttons --
ButtonBar::new(self.selected).render(buttons_area, buf);
}
}
struct ButtonBar {
selected: usize,
}
impl ButtonBar {
const fn new(selected: usize) -> Self {
Self { selected }
}
}
impl Widget for ButtonBar {
fn render(self, area: Rect, buf: &mut Buffer) {
let button_width: u16 = 14;
let gap_width: u16 = 2;
let total_width = button_width * NUM_BUTTONS + gap_width * (NUM_BUTTONS - 1);
let [_, button_strip, _] = Layout::horizontal([
Constraint::Fill(1),
Constraint::Length(total_width),
Constraint::Fill(1),
])
.areas(area);
let constraints: Vec<Constraint> = BUTTONS
.iter()
.enumerate()
.flat_map(|(i, _)| {
if i < BUTTONS.len() - 1 {
vec![
Constraint::Length(button_width),
Constraint::Length(gap_width),
]
} else {
vec![Constraint::Length(button_width)]
}
})
.collect();
let cols = Layout::horizontal(constraints).split(button_strip);
let colors = [Color::Green, Color::Yellow, Color::Red];
for (i, (label, _)) in BUTTONS.iter().enumerate() {
let col = cols[i * 2]; // even indices are buttons, odd are gaps
let is_selected = i == self.selected;
let color = colors[i];
let (text_style, bdr_set) = if is_selected {
(
Style::new().fg(Color::Black).bg(color).bold(),
border::THICK,
)
} else {
(Style::new().fg(color), border::PLAIN)
};
let line = Line::from((*label).set_style(text_style));
let btn_block = Block::bordered()
.border_set(bdr_set)
.border_style(text_style);
Paragraph::new(line)
.centered()
.style(text_style)
.block(btn_block)
.render(col, buf);
}
}
}
fn main() -> io::Result<()> {
ratatui::run(|terminal| App::new().run(terminal))
}
How it works
The TUI shell follows the same pattern as any Crux shell, but with a terminal render loop instead of a UI framework:
- Event loop — Ratatui runs a loop that draws the UI and then waits for keyboard input. Each keypress is mapped to an app Event (e.g. pressing + sends Event::Increment).
- Dispatching events — The dispatch method sends events to the core via core.process_event() and processes the resulting effects. For this simple example, the only effect is Render, which is a no-op in the TUI — the shell re-renders on every loop iteration anyway.
- Rendering the view — On each frame, the shell calls core.view() to get the current ViewModel and renders it using Ratatui widgets. The counter value is displayed in a bordered box with a row of selectable buttons below it.
- No serialization — Because both the core and the shell are Rust running in the same process, we call Core::new(), core.process_event(), and core.view() directly with native Rust types.
Build and run
cargo run -p tui
Your app should look something like this in the terminal:
┏━━━━━━━━━━━━━━ Simple Counter ━━━━━━━━━━━━━━┓
┃ ┃
┃ Rust Core, Rust Shell (Ratatui) ┃
┃ ┃
┃ ┌───────────────────┐ ┃
┃ │ 0 │ ┃
┃ └───────────────────┘ ┃
┃ ┃
┃ ┃ Increment ┃ │ Decrement │ │ Reset │┃
┃ ┃
┗━━ Select <←→> Confirm <Enter> Quit <Q> ━━━━┛
Command Runtime
In the previous sections we focused on building applications in Crux and using its public APIs to do so. In this and the following chapters, we'll look at how the internals of Crux work, starting with the command runtime.
The command runtime is a set of components that process effects, presenting the two perspectives we previously mentioned:
- For the core, the shell appears to be a platform with a message based system interface
- For the shell, the core appears as a stateful library responding to events with requests for side-effects
There are a few challenges to solve in order to facilitate this interface.
First, each run of the update function returns a Command which may
contain several concurrent tasks, each requesting effects from the shell.
The requested effects are expected to be emitted together, and each batch
of effects will be processed concurrently, so the calls can't be blocking.
Second, each effect may require multiple round-trips between the core and
shell to conclude and we don't want to require a call to update per
round trip, so we need some ability to "suspend" execution while waiting
for an effect to be fulfilled. The ability to suspend effects introduces a
new challenge — effects which are suspended need, once resolved, to
continue execution in the same async task.
Given this concurrency and execution suspension, an async interface seems
like a good candidate. Commands request work from the shell, .await the
results, and continue their work when the result has arrived. The call to
request_from_shell or stream_from_shell translates into an effect
request returned from the current core "transaction" (one call to
process_event or resolve).
In this chapter, we will focus on the runtime and the core interface and ignore the serialisation, bridge and FFI, and return to them in the following sections. The examples will assume a Rust based shell.
Async runtime
One of the fairly unique aspects of Rust's async is the fact that it doesn't come with a bundled runtime. This is recognising that asynchronous execution is useful in various different scenarios, and no one runtime can serve all of them. Crux takes advantage of this and brings its own runtime, tailored to the execution of side-effects on top of a message based interface.
For a deeper background on Rust's async architecture, we recommend the Asynchronous Programming in Rust book, especially the chapter about executing futures and tasks. We will assume you are familiar with the basic ideas and mechanics of async here.
The job of an async runtime is to manage a number of tasks, each driving one
future to completion. This management is done by an executor, which is
responsible for scheduling the futures and polling them at the right time to
drive their execution forward. General-purpose runtimes like Tokio do
this on a number of threads in a thread pool, but in Crux, we run in
the context of a single function call (of the app's update function)
and potentially in a WebAssembly context which is single-threaded
anyway, so our runtime only needs to poll all the tasks sequentially,
to see if any of them need to continue.
Polling all the tasks would work, and in our case wouldn't even be that inefficient, but the async system is set up to avoid unnecessary polling of futures with one additional concept: wakers. A waker is a mechanism which can be used to signal to the executor that something that a given task is waiting on has changed, and the task's future should be polled, because it will be able to proceed. This is how "at the right time" from the above paragraph is decided.
In our case there's a single situation which causes such a change: a result has arrived from the shell, for a particular effect requested earlier.
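The waker idea can be shown with plain standard-library types. This is a simplified sketch (with assumed names) of the pattern Crux's CommandWaker, shown later in this chapter, is built on: waking a task just sends its id down a "ready queue" channel so the executor knows which future to poll next.

```rust
use std::sync::mpsc::{channel, Sender};
use std::sync::{Arc, Mutex};
use std::task::{Wake, Waker};

// A minimal queue-based waker (illustrative, not the real Crux type).
// The Mutex makes the Sender shareable across threads, which the
// Waker conversion requires.
pub struct QueueWaker {
    pub task_id: usize,
    pub ready_queue: Mutex<Sender<usize>>,
}

impl Wake for QueueWaker {
    fn wake(self: Arc<Self>) {
        // If the executor is gone, there is nothing left to poll the
        // task anyway, so a failed send can be ignored.
        let _ = self.ready_queue.lock().unwrap().send(self.task_id);
    }
}
```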
Always use the Command APIs provided by Crux for async work (see the capabilities chapter). Using other async APIs can lead to unexpected behaviour, because the resulting futures are not tied to Crux effects. Such futures will resolve, but only after the next shell request causes the Crux executor to execute.
If you want to depend on a crate that requires a standard runtime like Tokio, you can integrate it through an effect via middleware.
One effect's life cycle
So, step by step, our strategy for commands to handle effects is:
- A Command creates a task containing a future with some code to run (via Command::new or ctx.spawn)
- The new task is scheduled to be polled next time the executor runs
- The executor goes through the list of ready tasks until it gets to our task and polls it
- The future runs to the point where the first async call is awaited. In commands, this should only be a future returned from one of the calls to request something from the shell, or a future resulting from a composition of such futures (through async method calls or combinators like select or join).
- The shell request future's first step is to create the request and prepare it to be sent. We will look at the mechanics of the sending shortly, but for now it's only important that part of this request is a callback used to resolve it.
- The request future, as part of the first poll by the executor, sends the request to be handed to the shell. As there is no result from the shell yet, it returns a pending state and the task is suspended.
- The request is passed on to the shell to resolve (as a return value from process_event or resolve)
- Eventually, the shell has a result ready for the request and asks the core to resolve the request.
- The request's resolve callback is executed, sending the provided result through an internal channel. The channel wakes the future's waker, which enqueues the task for processing on the executor.
- The executor runs again (asked to do so by the core's resolve API after calling the callback), and polls the awoken future.
- The future sees there is now a result available and continues the execution of the original task until a further await or until completion.
The cycle may repeat a few times, depending on the command implementation, but eventually the original task completes and is removed.
This is probably a lot to take in, but the basic gist is that command
futures (the ones created by Command::new or ctx.spawn) always
pause on request futures (the ones returned from request_from_shell
et al.), which submit requests. Resolving requests updates the state
of the original future and wakes it up to continue execution.
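The pause-and-resume handshake at the centre of this cycle can be sketched with plain standard-library types. This is a simplified stand-in for Crux's internal channel, not the real implementation: a request future that suspends until a resolve callback supplies its result and wakes it.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// State shared between a suspended "request future" and the resolve
// callback handed to the shell (names assumed for illustration).
#[derive(Default)]
pub struct Shared<T> {
    result: Option<T>,
    waker: Option<Waker>,
}

pub struct RequestFuture<T> {
    pub shared: Arc<Mutex<Shared<T>>>,
}

impl<T> Future for RequestFuture<T> {
    type Output = T;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
        let mut shared = self.shared.lock().unwrap();
        match shared.result.take() {
            // The shell has resolved the request: the task continues
            Some(value) => Poll::Ready(value),
            // No result yet: remember the waker and suspend
            None => {
                shared.waker = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }
}

// "Resolving" stores the result and wakes the future, which in Crux
// would re-enqueue the task with the executor.
pub fn resolve<T>(shared: &Arc<Mutex<Shared<T>>>, value: T) {
    let mut guard = shared.lock().unwrap();
    guard.result = Some(value);
    if let Some(waker) = guard.waker.take() {
        waker.wake();
    }
}

// A no-op waker, standing in for the executor's ready queue.
struct Noop;
impl Wake for Noop {
    fn wake(self: Arc<Self>) {}
}

// Poll the future once, as an executor would.
pub fn poll_once<T>(future: &mut RequestFuture<T>) -> Poll<T> {
    let waker = Waker::from(Arc::new(Noop));
    let mut cx = Context::from_waker(&waker);
    Pin::new(future).poll(&mut cx)
}
```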
With that in mind we can look at the individual moving parts and how they communicate.
Spawning tasks on the executor
The first step for anything to happen is creating a Command with a
task. Each task runs within a CommandContext, which provides the
interface for communicating with the shell and the app:
pub struct CommandContext<Effect, Event> {
pub(crate) effects: Sender<Effect>,
pub(crate) events: Sender<Event>,
pub(crate) tasks: Sender<Task>,
pub(crate) rc: Arc<()>,
}
There are sending ends of channels for effects and events, and also
a sender for spawning new tasks. The rc field is a reference
counter used to track whether any contexts are still alive
(indicating the command may still produce more work).
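This liveness check comes down to Arc's strong count: every live clone of the context holds a strong reference to the same Arc<()>, so once the count drops back to one (only the command's own copy remains), no task can produce further work. A minimal sketch of the idea, assuming this is how the count is consulted:

```rust
use std::sync::Arc;

// Each cloned CommandContext would hold a clone of this Arc; the
// command checks the strong count to tell whether any clones (and
// therefore any tasks that might still emit effects) remain alive.
pub fn has_live_contexts(rc: &Arc<()>) -> bool {
    Arc::strong_count(rc) > 1
}
```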
A Command is itself an async executor, managing a set of tasks:
#[must_use = "Unused commands never execute. Return the command from your app's update function or combine it with other commands with Command::and or Command::all"]
pub struct Command<Effect, Event> {
effects: Receiver<Effect>,
events: Receiver<Event>,
context: CommandContext<Effect, Event>,
// Executor internals
// TODO: should this be a separate type?
ready_queue: Receiver<TaskId>,
spawn_queue: Receiver<Task>,
tasks: Slab<Task>,
ready_sender: Sender<TaskId>, // Used in creating wakers for tasks
waker: Arc<AtomicWaker>, // Shared with task wakers when polled in async context
// Signaling
aborted: Arc<AtomicBool>,
}
It holds the receiving ends of the effect and event channels, along
with the executor internals: a Slab of tasks, a ready queue of
task IDs, and a spawn queue for new tasks.
Each Task is a simple data structure holding a future and some
coordination state:
pub(crate) struct Task {
// Used to wake the join handle when the task concludes
pub(crate) join_handle_wakers: Receiver<Waker>,
// Set to true when the task finishes, used by the join handle
// RFC: is there a safe way to do this relying on the waker alone?
pub(crate) finished: Arc<AtomicBool>,
// Set to true when the task is aborted. Aborted tasks will poll Ready on the
// next poll
pub(crate) aborted: Arc<AtomicBool>,
// The future polled by this task
pub(crate) future: BoxFuture<'static, ()>,
}
Tasks are spawned by CommandContext::spawn:
pub fn spawn<F, Fut>(&self, make_future: F) -> JoinHandle
where
F: FnOnce(CommandContext<Effect, Event>) -> Fut,
Fut: Future<Output = ()> + Send + 'static,
{
let (sender, receiver) = crossbeam_channel::unbounded();
let ctx = self.clone();
let future = make_future(ctx);
let task = Task {
finished: Arc::default(),
aborted: Arc::default(),
future: future.boxed(),
join_handle_wakers: receiver,
};
let handle = JoinHandle {
finished: task.finished.clone(),
aborted: task.aborted.clone(),
register_waker: sender,
};
self.tasks
.send(task)
.expect("Command could not spawn task, tasks channel disconnected");
handle
}
After constructing a task with the future returned by the closure,
it is sent to the command's spawn queue. A JoinHandle is returned,
which can be used to await the task's completion or abort it.
The command runs all tasks to completion (or suspension) with
run_until_settled:
pub(crate) fn run_until_settled(&mut self) {
if self.was_aborted() {
// Spawn new tasks to clear the spawn_queue as well
self.spawn_new_tasks();
self.tasks.clear();
return;
}
loop {
self.spawn_new_tasks();
if self.ready_queue.is_empty() {
break;
}
while let Ok(task_id) = self.ready_queue.try_recv() {
match self.run_task(task_id) {
TaskState::Missing | TaskState::Suspended => {
// Missing:
// The task has been evicted because it completed. This can happen when
// a _running_ task schedules itself to wake, but then completes and gets
// removed
// Suspended:
// we pick it up again when it's woken up
}
TaskState::Completed | TaskState::Cancelled => {
// Remove and drop the task, it's finished
let task = self.tasks.remove(task_id.0);
task.finished.store(true, Ordering::Release);
task.wake_join_handles();
drop(task);
}
}
}
}
}
The method first checks if the command has been aborted. If not, it loops: spawning any new tasks from the spawn queue, then polling each ready task. Tasks that complete are removed. Tasks that are suspended wait to be woken.
The waking mechanism is provided by CommandWaker:
pub(crate) struct CommandWaker {
pub(crate) task_id: TaskId,
pub(crate) ready_queue: Sender<TaskId>,
// Waker for the executor running this command as a Stream.
// When the command is executed directly (e.g. in tests) this waker
// will not be registered.
pub(crate) parent_waker: Arc<AtomicWaker>,
woken: AtomicBool,
}
impl Wake for CommandWaker {
fn wake(self: Arc<Self>) {
self.wake_by_ref();
}
fn wake_by_ref(self: &Arc<Self>) {
// If we can't send the id to the ready queue, there is no Command to poll the task again anyway,
// nothing to do.
// TODO: Does that mean we should bail, since waking ourselves is
// now pointless?
let _ = self.ready_queue.send(self.task_id);
self.woken.store(true, Ordering::Release);
// Note: calling `wake` before `register` is a no-op
self.parent_waker.wake();
}
}
When a task's future needs to be woken (because a shell response has arrived), the waker sends the task's ID back to the ready queue and also wakes the parent waker (used when the command is running as a stream inside another command).
While there are a lot of moving pieces involved, the basic mechanics
are relatively straightforward — tasks are submitted either by
Command::new, ctx.spawn, or awoken by arriving responses to the
requests they submitted. The queue of tasks is processed whenever
run_until_settled is called. This happens in the Core API
implementation: both process_event and resolve trigger it as
part of their processing.
Now we know how the futures get executed, suspended and resumed, we can examine the flow of information between commands and the Core API calls layered on top.
Requests flow from commands to the shell
The key to understanding how the effects get processed and executed is to name all the various pieces of information, and discuss how they are wrapped in each other.
The basic inner piece of the effect request is an operation. This
is the intent which the command is submitting to the shell. Each
operation has an associated output value, with which the operation
request can be resolved. There are multiple capabilities in each
app, and in order for the shell to easily tell which capability's
effect it needs to handle, we wrap the operation in an effect. The
Effect type is a generated enum based on the app's set of
capabilities, with one variant per capability. It allows us to
multiplex (or type erase) the different typed operations into a
single type, which can be matched on to process the operations.
Finally, the effect is wrapped in a request which carries the effect, and an associated resolve callback to which the output will eventually be given. We discussed this callback in the previous section — its job is to send the result through an internal channel, waking up the paused future. The request is the value passed to the shell, and used as both the description of the effect intent, and the "token" used to resolve it.
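The three layers — operation, effect, request — can be sketched with self-contained stand-ins (all names here are illustrative, not the real generated types):

```rust
use std::sync::mpsc::{channel, Sender};

// The operation: a typed intent, with an associated output type
// (a String body, in this toy example).
pub struct HttpGet {
    pub url: String,
}

// The effect: an enum with one variant per capability, multiplexing
// typed operations into a single type the shell can match on.
pub enum Effect {
    Http(HttpGet),
    Render,
}

// The request: the effect plus a resolve callback, which feeds the
// output back through an internal channel (waking the paused future
// in the real runtime).
pub struct Request {
    pub effect: Effect,
    pub resolve: Box<dyn FnOnce(String) + Send>,
}

pub fn make_http_request(url: &str, results: Sender<String>) -> Request {
    Request {
        effect: Effect::Http(HttpGet { url: url.to_string() }),
        resolve: Box::new(move |body| {
            // In Crux, this send would also wake the task's waker
            let _ = results.send(body);
        }),
    }
}
```

The shell matches on `effect` to decide what to do, then calls `resolve` with the outcome — the request value serves as both the description of the intent and the token used to resolve it.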
Each task in a command has access to a CommandContext, which holds
the sending ends of channels for effects and events. When a task
calls request_from_shell, the context creates a Request
containing the operation and a resolve callback, wraps it in the
app's Effect type (via the From trait), and sends it through the
effects channel. The Command collects these effects and surfaces
them to the Core.
Looking at the core itself:
pub struct Core<A>
where
A: App,
{
// WARNING: The user controlled types _must_ be defined first
// so that they are dropped first, in case they contain coordination
// primitives which attempt to wake up a future when dropped. For that
// reason the executor _must_ outlive the user type instances
// user types
model: RwLock<A::Model>,
app: A,
// internals
root_command: Mutex<Command<A::Effect, A::Event>>,
}
The Core holds a root_command — a single long-lived Command
onto which all commands returned from update are spawned. This
root command acts as the top-level executor, collecting all effects
and events across all active commands.
A single update cycle
To piece all these things together, let's look at processing a
single call from the shell. Both process_event and resolve share
a common final step: advancing the command runtime.
Here is process_event:
pub fn process_event(&self, event: A::Event) -> Vec<A::Effect> {
let mut model = self.model.write().expect("Model RwLock was poisoned.");
let command = self.app.update(event, &mut model);
// drop the model here, we don't want to hold the lock for the process() call
drop(model);
let mut root_command = self
.root_command
.lock()
.expect("Capability runtime lock was poisoned");
root_command.spawn(|ctx| command.into_future(ctx));
drop(root_command);
self.process()
}
and here is resolve:
pub fn resolve<Output>(
&self,
request: &mut impl Resolvable<Output>,
result: Output,
) -> Result<Vec<A::Effect>, ResolveError>
{
let resolve_result = request.resolve(result);
debug_assert!(resolve_result.is_ok());
resolve_result?;
Ok(self.process())
}
The interesting things happen in the common process method:
pub(crate) fn process(&self) -> Vec<A::Effect> {
let mut root_command = self
.root_command
.lock()
.expect("Capability runtime lock was poisoned");
let mut events: VecDeque<_> = root_command.events().collect();
while let Some(event_from_commands) = events.pop_front() {
let mut model = self.model.write().expect("Model RwLock was poisoned.");
let command = self.app.update(event_from_commands, &mut model);
drop(model);
root_command.spawn(|ctx| command.into_future(ctx));
events.extend(root_command.events());
}
root_command.effects().collect()
}
First, we drain events from the root command (which internally runs
all ready tasks before collecting). There can be new events because
we just returned a command from update (which may have immediately
sent events) or resolved some effects (which woke up suspended
futures that then sent events).
For each event, we call update again, spawning the returned
command onto the root command, and drain any further events produced.
This continues until no more events remain.
Finally, we collect all of the effect requests submitted in the process and return them to the shell.
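The drain-until-quiescent shape of this loop can be modelled in miniature. This self-contained sketch is not the real runtime — the `Event` and `Effect` variants and the `update` signature are invented, and follow-up events are pushed onto a queue directly rather than sent by spawned commands:

```rust
use std::collections::VecDeque;

#[derive(Debug)]
enum Event {
    Increment,
    Log,
}

#[derive(Debug, PartialEq)]
enum Effect {
    Render,
    WriteLog(i32),
}

struct Model {
    count: i32,
}

// update may emit follow-up events (like a command sending events back
// to the core) as well as effects (requests submitted to the shell).
fn update(event: Event, model: &mut Model, events: &mut VecDeque<Event>, effects: &mut Vec<Effect>) {
    match event {
        Event::Increment => {
            model.count += 1;
            events.push_back(Event::Log); // an event local to the core
            effects.push(Effect::Render);
        }
        Event::Log => effects.push(Effect::WriteLog(model.count)),
    }
}

fn process_event(event: Event, model: &mut Model) -> Vec<Effect> {
    let mut events = VecDeque::from([event]);
    let mut effects = Vec::new();
    // Keep calling update until no events remain, accumulating effects,
    // exactly like the process() loop draining the root command.
    while let Some(event) = events.pop_front() {
        update(event, model, &mut events, &mut effects);
    }
    effects
}

fn main() {
    let mut model = Model { count: 0 };
    let effects = process_event(Event::Increment, &mut model);
    println!("{effects:?}"); // [Render, WriteLog(1)]
}
```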
Resolving requests
We've now seen everything other than the mechanics of resolving
requests. The resolve callback is carried by the request as a
RequestHandle, tagged by the expected number of resolutions:
type ResolveOnce<Out> = Box<dyn FnOnce(Out) + Send>;
type ResolveMany<Out> = Box<dyn Fn(Out) -> Result<(), ()> + Send>;
/// Resolve is a callback used to resolve an effect request and continue
/// one of the capability Tasks running on the executor.
pub enum RequestHandle<Out> {
Never,
Once(ResolveOnce<Out>),
Many(ResolveMany<Out>),
}
A RequestHandle can be Never (for notifications that don't
expect a response), Once (for one-shot requests), or Many (for
streaming requests). Resolving a Once handle consumes it, turning
it into Never to prevent double-resolution.
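The consume-on-resolve behaviour can be sketched as follows. This is a simplified model — the real crux_core resolves through a mutable reference on the request rather than by value — but the state transitions are the same:

```rust
type ResolveOnce<Out> = Box<dyn FnOnce(Out) + Send>;
type ResolveMany<Out> = Box<dyn Fn(Out) -> Result<(), ()> + Send>;

enum RequestHandle<Out> {
    Never,
    Once(ResolveOnce<Out>),
    Many(ResolveMany<Out>),
}

impl<Out> RequestHandle<Out> {
    // Resolving returns the handle's successor state: Once becomes Never,
    // so a second resolution is an error; Many can be resolved repeatedly.
    fn resolve(self, value: Out) -> Result<RequestHandle<Out>, ()> {
        match self {
            RequestHandle::Never => Err(()),
            RequestHandle::Once(callback) => {
                callback(value);
                Ok(RequestHandle::Never)
            }
            RequestHandle::Many(callback) => {
                callback(value)?;
                Ok(RequestHandle::Many(callback))
            }
        }
    }
}

fn main() {
    let handle = RequestHandle::Once(Box::new(|n: i32| println!("resolved with {n}")));
    let handle = handle.resolve(1).expect("first resolution succeeds");
    // The handle is now Never, so resolving again fails.
    assert!(handle.resolve(2).is_err());
    println!("double resolution rejected");
}
```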
Here's how the resolve callback is set up in request_from_shell:
pub fn request_from_shell<Op>(&self, operation: Op) -> ShellRequest<Op::Output>
where
Op: Operation,
Effect: From<Request<Op>> + Send + 'static,
{
let (output_sender, output_receiver) = mpsc::unbounded();
let request = Request::resolves_once(operation, move |output| {
// If the channel is closed, the associated task has been cancelled
let _ = output_sender.unbounded_send(output);
});
let send_request = {
let effect = request.into();
let effects = self.effects.clone();
move || {
effects
.send(effect)
.expect("Command could not send request effect, effect channel disconnected");
}
};
ShellRequest::new(Box::new(send_request), output_receiver)
}
The callback sends the output through an mpsc channel. On the
receiving end, the ShellRequest future is waiting — when the value
arrives, the channel wakes the future's waker, which schedules the
task on the executor to continue.
In the next chapter, we will look at how this process changes when Crux is used via an FFI interface where requests and responses need to be serialised in order to pass across the language boundary.
Type generation
Why type generation?
Declaring every type across an FFI boundary is painful. Complex types
like nested enums, generics, and rich view models are difficult or
impossible to represent directly in tools like UniFFI or
wasm-bindgen. And even when you can declare them, maintaining the
declarations by hand as your app evolves is tedious and error-prone.
Crux sidesteps this problem by keeping the FFI surface as small as
possible. The entire core-shell interface is just three methods —
update, resolve, and view — and all data crosses the boundary as
serialized byte arrays (using Bincode). The
shell doesn't need to know the Rust types at the FFI level at all.
But the shell does need to serialize events and deserialize effects and view models on its side of the boundary. For that, it needs equivalent type definitions in Swift, Kotlin, or TypeScript — along with the matching serialization code. This is what type generation provides: it inspects your Rust types and generates the corresponding foreign types and their Bincode serialization implementations automatically.
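As a concrete illustration of what actually crosses the boundary: assuming Bincode 1.x's default configuration, a fieldless enum variant is encoded simply as its index, a little-endian u32. The encoding is hand-rolled here to stay dependency-free — real shells use the generated serializers instead:

```rust
#[derive(Debug, PartialEq)]
enum Event {
    Increment,
    Decrement,
    Reset,
}

// Encode the way Bincode 1.x does by default: a fieldless enum variant
// is just its index as a little-endian u32.
fn serialize(event: &Event) -> Vec<u8> {
    let index: u32 = match event {
        Event::Increment => 0,
        Event::Decrement => 1,
        Event::Reset => 2,
    };
    index.to_le_bytes().to_vec()
}

fn deserialize(bytes: &[u8]) -> Option<Event> {
    match u32::from_le_bytes(bytes.try_into().ok()?) {
        0 => Some(Event::Increment),
        1 => Some(Event::Decrement),
        2 => Some(Event::Reset),
        _ => None,
    }
}

fn main() {
    // Everything crossing the FFI boundary is a plain byte array like this.
    let bytes = serialize(&Event::Decrement);
    println!("{bytes:?}"); // [1, 0, 0, 0]
    assert_eq!(deserialize(&bytes), Some(Event::Decrement));
}
```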
How it works
Type generation uses the Facet crate for
zero-cost reflection. Types that derive the Facet trait can be
introspected at build time to discover their shape — fields, variants,
generic parameters. The
facet-generate crate
uses that reflection data to generate equivalent types (and their
serialization code) in Swift, Kotlin, and TypeScript.
The process has three parts:
- Annotate your types — derive `Facet` on types that cross the FFI boundary, and use `#[effect(facet_typegen)]` on your `Effect` enum.
- Add a codegen binary to your shared crate — a short `main` that registers your app and generates the foreign code.
- Run it — typically via a `just typegen` recipe as part of your build workflow.
Annotating your types
Events, ViewModel, and other data types
Types that the shell needs to know about should derive Facet (along
with Serialize and Deserialize for the FFI serialization). Here's
the counter example:
#[derive(Facet, Serialize, Deserialize, Clone, Debug)]
#[repr(C)]
pub enum Event {
Increment,
Decrement,
Reset,
}
#[derive(Facet, Serialize, Deserialize, Clone, Default)]
pub struct ViewModel {
pub count: String,
}
Note the #[repr(C)] on the enum — this is required by Facet for
enums that cross the FFI boundary.
The Effect type
The Effect enum uses the #[effect(facet_typegen)] attribute, which
tells the #[effect] macro to generate the type registration code
that the codegen binary needs:
#[effect(facet_typegen)]
#[derive(Debug)]
pub enum Effect {
Render(RenderOperation),
}
The macro discovers the operation types carried by each variant (e.g.
RenderOperation) and registers them for type generation
automatically.
Skipping and opaque types
Not all event variants need to cross the FFI boundary. Internal
events (ones the shell never sends) can be excluded from the generated
output with #[facet(skip)]:
#[derive(Facet, Serialize, Deserialize, Clone, Debug, PartialEq)]
#[repr(C)]
pub enum Event {
// events from the shell
Get,
Increment,
Decrement,
Random,
StartWatch,
// events local to the core
#[serde(skip)]
#[facet(skip)]
Set(#[facet(opaque)] crux_http::Result<crux_http::Response<Count>>),
#[serde(skip)]
#[facet(skip)]
Update(Count),
#[serde(skip)]
#[facet(skip)]
UpdateBy(isize),
}
In this example, Set, Update, and UpdateBy are internal events
— the shell never creates them, so they're skipped.
However, Facet must still be derivable on the entire type,
including skipped variants. If a skipped variant contains a field
whose type doesn't implement Facet (like crux_http::Result<...>),
you need to mark that field with #[facet(opaque)] so the derive
succeeds. That's why Set has both #[facet(skip)] on the variant
and #[facet(opaque)] on its field.
The codegen binary
Each shared crate includes a small binary that drives the type generation. Here's the one from the counter example:
use std::path::PathBuf;
use clap::{Parser, ValueEnum};
use crux_core::{
cli::{BindgenArgsBuilder, bindgen},
type_generation::facet::{Config, TypeRegistry},
};
use log::info;
use uniffi::deps::anyhow::Result;
use shared::Counter;
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, ValueEnum)]
enum Language {
Swift,
Kotlin,
Typescript,
}
#[derive(Parser)]
#[command(version, about, long_about = None)]
struct Args {
#[arg(short, long, value_enum)]
language: Language,
#[arg(short, long)]
output_dir: PathBuf,
}
fn main() -> Result<()> {
pretty_env_logger::init();
let args = Args::parse();
let typegen_app = TypeRegistry::new().register_app::<Counter>()?.build()?;
let name = match args.language {
Language::Swift => "App",
Language::Kotlin => "com.crux.examples.counter",
Language::Typescript => "app",
};
let config = Config::builder(name, &args.output_dir)
.add_extensions()
.add_runtimes()
.build();
match args.language {
Language::Swift => {
info!("Typegen for Swift");
typegen_app.swift(&config)?;
}
Language::Kotlin => {
info!("Typegen for Kotlin");
typegen_app.kotlin(&config)?;
info!("Bindgen for Kotlin");
let bindgen_args = BindgenArgsBuilder::default()
.crate_name(env!("CARGO_PKG_NAME").to_string())
.kotlin(&args.output_dir)
.build()?;
bindgen(&bindgen_args)?;
}
Language::Typescript => {
info!("Typegen for TypeScript");
typegen_app.typescript(&config)?;
}
}
Ok(())
}
The key steps are:
- `TypeRegistry::new().register_app::<Counter>()?` — discovers all types reachable from your `App` implementation (events, effects, view model, and the operation types they reference).
- `.build()?` — produces a `CodeGenerator` with the full type graph.
- `Config::builder(name, &output_dir)` — configures the output. The `name` parameter is the package/module name (e.g. `"App"` for Swift, `"com.crux.examples.counter"` for Kotlin, `"app"` for TypeScript).
- `.add_extensions()` — includes helper code like `Requests.swift` that makes it easier to work with the generated types.
- `.add_runtimes()` — includes the serialization runtime (Serde and Bincode implementations in the target language).
- `.swift(&config)?` / `.kotlin(&config)?` / `.typescript(&config)?` — generates the code.
The binary also handles UniFFI binding generation for Kotlin (the
bindgen call), which produces the Kotlin bindings for the Rust FFI
layer.
Cargo.toml setup
The codegen binary needs a few additions to your shared/Cargo.toml.
Declare the binary, gated on a codegen feature:
[[bin]]
name = "codegen"
required-features = ["codegen"]
Enable facet_typegen in crux_core:
[features]
facet_typegen = ["crux_core/facet_typegen"]
And add facet as a dependency — all types that cross the FFI
boundary derive Facet:
[dependencies]
facet = "=0.31"
Running type generation
Type generation is typically run via Just
recipes. Each shell runs the codegen binary and writes the output into
a generated/ directory inside itself. In the counter example, the
layout looks like this:
examples/counter/
├── shared/ # the Crux core
├── apple/
│ └── generated/ # Swift package "App"
├── Android/
│ └── generated/ # Kotlin package "com.crux.examples.counter"
├── web-react-router/
│ └── generated/
│ └── types/ # TypeScript package "app"
└── ...
The package names are set in codegen.rs via the Config::builder
call — see the codegen binary above.
Each shell's Justfile has a typegen recipe. For example, the Apple
shell runs:
RUST_LOG=info cargo run \
--package shared \
--bin codegen \
--features codegen,facet_typegen \
-- \
--language swift \
--output-dir generated
The --output-dir is relative to the shell directory where the recipe
runs — so the generated code lands right where the shell project can
reference it. The TypeScript shells use generated/types to keep the
types separate from the wasm package (which lives in generated/pkg).
The generated/ directories are gitignored and regenerated as part of
the build process. Each shell's build recipe depends on typegen.
What gets generated
For each target language, the codegen produces:
- Type definitions — enums, structs, and their serialization code, matching the shape of your Rust types. For example, `Event`, `Effect`, `ViewModel`, and any operation types.
- Serialization runtime — Serde and Bincode implementations in the target language, so the shell can serialize events and deserialize effects and view models.
- Helper extensions — like `Requests.swift`, which provides convenience methods for working with effect requests.
For Swift, the output is a Swift Package. For Kotlin, it's a set of source files alongside UniFFI bindings. For TypeScript, it's an npm package.


