
Overview

Crux is a framework for building cross-platform applications with better testability, higher code and behavior reuse, better safety, security, and more joy from better tools.

It splits the application into two distinct parts, a Core built in Rust, which drives as much of the business logic as possible, and a Shell, built in the platform native language (Swift, Kotlin, TypeScript), which provides all interfaces with the external world, including the human user, and acts as a platform on which the core runs.

Crux

The aim is to separate three kinds of code in a typical app, which have different goals:

  • the presentation layer in the user interface,
  • the pure logic driving behaviour and state updates in response to the user's actions, and
  • the effects (or I/O) layer where network communication, storage, interactions with real-world time, and other similar things are handled

The Core handles the behaviour logic, the Shell handles the presentation layer and effect execution (but not orchestration, that is part of the behaviour and therefore in the Core). This strict separation makes the behaviour logic much easier to test without any of the other layers getting involved.

The interface between the Core and the Shell is a native FFI (Foreign Function Interface) with message passing semantics, where simple data structures are passed across the boundary, supported by cross-language code generation and type checking.

Get to know Crux

To get playing with Crux quickly, follow Part I of the book, from the Getting Started chapter onward. It will take you from zero to a basic working app on your preferred platform. From there, continue on to Part II – building the Weather App, which builds on the basics and covers the more advanced features and patterns needed in a real-world app.

If you just want to understand why we set out to build Crux in the first place and what problems it tries to solve, before you spend any time trying it (no hard feelings, we would too), read our original Motivation.

API docs

There are two places to find API documentation: the latest published version on docs.rs, or the very latest master docs if you too like to live dangerously.

You can see the latest version of this book (generated from the master branch) on Github Pages.

Crux is open source on Github. A good way to learn Crux is to explore the code, play with the examples, and raise issues or pull requests. We'd love you to get involved.

You can also join the friendly conversation on our Zulip channel.

Design overview

Logical architecture

The architecture is event-driven, with state management based on event sourcing, similar to Elm or Redux. The Core holds the majority of state, which is updated in response to events happening in the Shell. The interface between the Core and the Shell is message-based.

Native UI

The user interface layer is built natively, with modern declarative UI frameworks such as Swift UI, Jetpack Compose and React/Svelte or a WASM based framework on the web. The UI layer is as thin as it can be, and all behaviour logic is implemented by the shared Core. The one restriction is that the Core is side-effect free. This is both a technical requirement (to be able to target WebAssembly), and an intentional design goal, to separate behaviour from effects and make them both easier to test in isolation.

Managed effects

Crux uses managed side-effects – the Core requests side-effects from the Shell, which executes them. The basic difference is that instead of doing the asynchronous work, the core describes the intent for the work with data (which also serves as the input for the effect), and passes this to the Shell to be performed. The Shell performs the work, and returns the outcomes back to the Core. This approach using deferred execution is inspired by Elm, and similar to how other purely functional languages deal with effects and I/O (e.g. the IO monad in Haskell). It is also similar in its laziness to how iterators work in Rust.
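The idea can be sketched in plain Rust. The following is an illustration with made-up types (`HttpRequest`, `Effect`), not the actual Crux API: the core returns a description of the work as data, and the shell matches on it and performs it.

```rust
// Hypothetical types for illustration; not the real Crux API.
struct HttpRequest {
    method: &'static str,
    url: String,
}

enum Effect {
    Http(HttpRequest),
}

// Core side: pure. It describes the intent as data and performs nothing.
fn update() -> Vec<Effect> {
    vec![Effect::Http(HttpRequest {
        method: "GET",
        url: "https://example.com/data".to_string(),
    })]
}

// Shell side: impure. It actually performs the requested work.
fn main() {
    for effect in update() {
        match effect {
            Effect::Http(req) => println!("shell performs: {} {}", req.method, req.url),
        }
    }
}
```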

Type generation

The Core exports types for the messages it can understand. The Shell can call the Core and pass one of the messages. In return, it receives a set of side-effect requests to perform. When the work is completed, the Shell sends the result back into the Core, which responds with further requests if necessary.

Updating the user interface is considered one of the side-effects the Core can request. The entire interface is strongly typed and breaking changes in the core will result in build failures in the Shell.
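The request/response loop described above can be sketched roughly like this (all types here are simplified stand-ins, not the generated Crux interface): the shell sends a message in, performs any requested effects, and feeds the outcomes back until the core asks for nothing more.

```rust
// Simplified stand-ins for illustration; not the generated Crux interface.
enum Event {
    Tick,
}

enum Msg {
    Event(Event),
    TimeResult(u64), // outcome of a completed effect, sent back to the core
}

enum Effect {
    Render(String),
    FetchTime,
}

// Core step: responds to an inbound message with further effect requests.
fn core_step(msg: Msg, model: &mut u64) -> Vec<Effect> {
    match msg {
        Msg::Event(Event::Tick) => vec![Effect::FetchTime],
        Msg::TimeResult(t) => {
            *model = t;
            vec![Effect::Render(format!("time is {t}"))]
        }
    }
}

// Shell loop: executes effects and sends results back in.
fn main() {
    let mut model = 0;
    let mut inbox = vec![Msg::Event(Event::Tick)];
    while let Some(msg) = inbox.pop() {
        for effect in core_step(msg, &mut model) {
            match effect {
                Effect::Render(view) => println!("{view}"),
                Effect::FetchTime => inbox.push(Msg::TimeResult(42)), // shell does the work
            }
        }
    }
}
```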

Goals

We set out to find a better way of building apps across platforms. You can read more about our motivation. The overall goals of Crux are to:

  • Build the majority of the application code once, in Rust
  • Encapsulate the behavior of the app in the Core for reuse
  • Follow the Ports and Adapters pattern, also known as Hexagonal Architecture to facilitate pushing side-effects to the edge, making behavior easy to test
  • Strictly separate the behavior from the look and feel and interaction design
  • Use the native UI tool kits to create user experience that is the best fit for a given platform
  • Use the native I/O libraries to be good citizens of the ecosystem and get the benefit of any OS-provided services

Path to 1.0

Crux is used in production apps today, and we consider it production ready. However, we still have a number of things to work on to call it 1.0, with a stable API and excellent DX expected from a mature framework.

Below is a list of some of the things we know we want to do before 1.0:

  • Better code generation with additional features, and support for more languages (e.g. C#, Dart, even C++...) and in turn more Shells (e.g. .NET, Flutter) which will also enable Desktop apps for Windows
  • Improved documentation, code examples, and example apps for newcomers
  • Improved onboarding experience, with less boilerplate code that end users have to write or copy from an example

Until then, we hope you will work with us on the rough edges, and adapt to the necessary API updates as we evolve. We strive to minimise the impact of changes as much as we can, but before 1.0, some breaking changes will be unavoidable.

Motivation

We set out to prove this approach to building apps largely because we've seen the drawbacks of all the other approaches in real life, and thought "there must be a better way". The two major available approaches to building the same application for iOS and Android are:

  1. Build a native app for each platform, effectively doing the work twice.
  2. Use React Native or Flutter to build the application once¹ and produce native looking and feeling apps which behave nearly identically.

The drawback of the first approach is doing the work twice. In order to build every feature for iOS and Android at the same time, you need twice the number of people, either people who happily do Swift and Kotlin (and they are very rare), or more likely a set of iOS engineers and another set of Android engineers. This typically leads to forming two separate, platform-focused teams. We have witnessed situations first-hand where those teams struggle with the same design problems, and despite one encountering and solving the problem first, the other one can learn nothing from their experience (and that's despite long design discussions).

We think such experiences with the platform native approach are common, and the reason why people look to React Native and Flutter.

The issues with the second approach are two-fold:

  • Only mostly native user interface
  • In the case of React Native, the JavaScript ecosystem tooling disaster

React Native (we'll focus the discussion on it, but most of the below applies to Flutter too) effectively takes over, and works hard to insulate the engineer from the native platform underneath and pretend it doesn't really exist, but of course, inevitably, it does exist, and the user interface ends up being built in a combination of 90% JavaScript/TypeScript and 10% Kotlin/Swift. This was a major win when React Native was first introduced, because the platform native UI toolkits were imperative, following a version of MVC architecture, and generally made it quite difficult to get UI state management right. React, on the other hand, is declarative, leaving much less space for errors stemming from the UI getting into an undefined state (although as apps got more complex and codebases grew, React's state management model got more complex with them).

The benefit of declarative UI was clearly recognised by iOS and Android, and both introduced their own declarative UI toolkits – Swift UI and Jetpack Compose. Both of them are quite good, matching that particular advantage of React Native and leaving it with just one remaining advantage: building things once (in theory). But in exchange, the apps have to be written in JavaScript (and its adjacent tools and languages).

Why not build all apps in JavaScript?

The main issue with the JavaScript ecosystem is that it's built on sand. The underlying language is quite loose and has a lot of inconsistencies. It came with no package manager originally; now it has three. To serve code to the browser, it gets bundled, and the list of bundlers is too long to include here. Even 10 years since the introduction of ES modules, the ecosystem is still split, and the competing module standards make all tooling more complex and difficult to configure.

JavaScript was built as a dynamic language. This means a lot of basic human errors made while writing the code are only discovered when running it. Static type systems aim to solve that problem, and TypeScript adds one onto JavaScript, but the types only go so far (until they hit an any type, or dependencies with no type definitions), and they disappear at runtime, so you don't get type-based conditionals (well, kind of).

In short, upgrading JavaScript to something modern, capable of handling a large app codebase with multiple people or even teams working on it, is possible, but takes a lot of tooling. Getting all this tooling set up and ready to build things is an all-day job, and so more tooling, like Vite, has popped up providing this configuration in a box, batteries included. Perhaps the final admission of this problem is the Biome toolchain (formerly the Rome project), attempting to bring all the various tools under one roof (and Biome itself is built in Rust...).

It's no wonder that even a working setup of all the tooling has sharp edges, and cannot afford to be nearly as strict as tooling designed with strictness in mind, such as Rust's. The heart of the problem is that computers are strict and precise instruments, and humans are sloppy creatures. With enough humans (more than 10, being generous) and no additional help, the resulting code will be sloppy, full of unhandled edge cases, undefined behaviour being relied on, circular dependencies preventing testing in isolation, etc. (and yes, these are not hypotheticals).

Contrast that with Rust, which is as strict as it gets, and generally backs up the claim that if it compiles it will work (and if you struggle to get it past the compiler, it's probably a bad idea). The tooling and package management is built in with cargo. There are fewer decisions to make when setting up a Rust project.
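As a small illustration of that strictness (a made-up example, not from any real codebase): Rust's exhaustive match means the compiler flags every unhandled case at build time, not in production.

```rust
// Made-up example: a new enum variant breaks every non-exhaustive match
// at compile time, so unhandled cases can't silently ship.
enum PaymentState {
    Pending,
    Settled,
    Failed,
}

fn describe(state: PaymentState) -> &'static str {
    // Adding a `Refunded` variant later would make this match a compile
    // error until the new case is handled.
    match state {
        PaymentState::Pending => "pending",
        PaymentState::Settled => "settled",
        PaymentState::Failed => "failed",
    }
}

fn main() {
    assert_eq!(describe(PaymentState::Settled), "settled");
}
```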

In short, we think the JS ecosystem has jumped the shark, the "complexity toothpaste" is out of the tube, and it's time to stop. But there's no real viable alternative.

Crux is our attempt to provide one.


  1. In reality it's more like 1.4x the effort to build the same app for two platforms.

Getting started

We generally recommend building Crux apps from inside out, starting with the Core.

This part will first take you through setting up the tools and building the Core, and writing tests to make sure everything works as expected. Finally, once we're confident we have a working core, we'll set up the necessary bindings for the shell and build the UI for your chosen platform.

But first, we need to make sure we have all the necessary tools.

Install the tools

This is an example of a rust-toolchain.toml file, which you can add at the root of your repo. It should ensure that the correct Rust channel and compile targets are installed automatically for you when you use any Rust tooling within the repo.

You may not need all the targets if you're not planning to build a fully cross platform app.

[toolchain]
channel = "stable"
components = ["rustfmt", "rustc-dev"]
targets = [
    "aarch64-apple-darwin",
    "aarch64-apple-ios",
    "aarch64-apple-ios-sim",
    "aarch64-linux-android",
    "wasm32-unknown-unknown",
    "x86_64-apple-ios",
]
profile = "minimal"

For testing, we also recommend installing cargo-nextest, the test runner we'll be using in the examples.

cargo install cargo-nextest

Create the core crate

We need a crate to hold our application's core, but since one of our shell options later will be Rust-based, we'll set up a Cargo workspace to have some isolation between the core and the other Rust-based modules.

The workspace and library manifests

First, create a workspace and start with a /Cargo.toml file, at the monorepo root, to add the new library to our workspace.

It should look something like this:

# /Cargo.toml
[workspace]
resolver = "3"
members = ["shared"]

[workspace.package]
edition = "2024"
rust-version = "1.88"

[workspace.dependencies]
anyhow = "1.0.100"
crux_core = "0.17.0"
serde = "1.0.228"

The shared library

The first library to create is the one that will be shared across all platforms, containing the behavior of the app. You can call it whatever you like, but we have chosen the name shared here. You can create the shared Rust library like this:

cargo new --lib shared

The library's manifest, at /shared/Cargo.toml, should look something like the following:

# /shared/Cargo.toml
[package]
name = "shared"
version = "0.1.0"
edition.workspace = true
rust-version.workspace = true

[lib]
crate-type = ["cdylib", "lib", "staticlib"]
name = "shared"

[dependencies]
crux_core.workspace = true
serde = { workspace = true, features = ["derive"] }

Note the crate-type in the [lib] section. This is in preparation for linking with the shells:

  • lib is the default Rust library, used when linking into a Rust binary
  • staticlib is a static library (libshared.a) for use with iOS apps
  • cdylib is a C-ABI dynamic library (libshared.so) for use with JNA in an Android app

The basic files

The only missing part now is your src/lib.rs file. This will eventually contain a fair bit of configuration for the shell interface, so we tend to recommend reserving it for this job and creating a src/app.rs module for your app code.

For now, the lib.rs file looks as follows:

// src/lib.rs
pub mod app;

and app.rs can be empty, but let's put our app's main type in it, call it Counter:

// src/app.rs

#[derive(Default)]
pub struct Counter;

Running

cargo build

should build your Core. Let's make it do something now.

A very basic app

The basic app we'll build as an example to demonstrate the interaction between the Shell and the Core and the state management will be the well known and loved counter app. A simple counter we can increment, decrement and reset.

Code of the app

Example

You can find the full code for this part of the guide here

In the last chapter, we started with the main type

#[derive(Default)]
pub struct Counter;

We need to implement Default so that Crux can construct the app for us.

To turn it into a Crux app, we need to implement the App trait from the crux_core crate.

use crux_core::App;

impl App for Counter {

}

If you're following along, the compiler is now screaming at you that you're missing four associated types for the trait — Event, Model, ViewModel, and Effect.

Let's add them and talk about them one by one.

Event

Event defines all the possible events the app can respond to. It is essentially the Core's public API.

In our case it will look as follows:

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, Debug)]
pub enum Event {
    Increment,
    Decrement,
    Reset,
}

Those are the three things we can do with the counter. None of them need any additional information, so this simple enum will do. It is serializable, because it will eventually be crossing the FFI boundary. We will get to that soon.

Model

Model holds our application's internal state. You can probably guess what this will look like:

#[derive(Default)]
pub struct Model {
    count: isize,
}

It is a simple counter after all. Model stays in the core, so it doesn't need to serialize.

You can derive (or implement) Default and have Crux create an instance of your app and your model for you, or you can explicitly create a core with specified App and Model instances (this may be useful if you need to set up some initial state).

ViewModel

ViewModel represents the user interface at any one point in time. This is our indirection between the internal state and the UI on screen. In the case of the counter this is pretty academic; there is no practical reason for making them different, but for the sake of the example, let's add some formatting into the mix and make it a string.

#[derive(Serialize, Deserialize, Clone, Default)]
pub struct ViewModel {
    pub count: String,
}

The difference between Model and ViewModel will get a lot more pronounced once we introduce some navigation into the mix in Part II.

Effect

For now, the counter has no side effects. Except it wants to update the user interface, and that is also a side effect. We'll go with this:

use crux_core::macros::effect;
use crux_core::render::RenderOperation;

#[effect(typegen)]
#[derive(Debug)]
pub enum Effect {
    Render(RenderOperation),
}

We're saying "the only side effect of our behaviour is rendering the user interface".

The Effect type is worth understanding further, but in order to do that we need to talk about what makes Crux different from most UI frameworks.

Managed side-effects

One of the key design choices in Crux is that the Core is free of side-effects (besides its internal state). Your application can never perform anything that directly interacts with the environment around it - no network calls, no reading/writing files, not even updating the screen. Actually doing all those things is the job of the Shell, the core can only ask for them to be done.

This makes the core portable between platforms, and, importantly, very easy to test. It also separates the intent – the "functional" requirements – from the implementation of the side-effects and the "non-functional" requirements (NFRs).

For example, your application knows it wants to store data in a SQL database, but it doesn't need to know or care whether that database is local or remote. That decision can even change as the application evolves, and be different on each platform. We won't go into the detail at this point, because we don't need the full extent of side effects just yet. If you want to know more now, you can jump ahead to the chapter on Managed Effects, but it's probably a bit much at this point. Up to you.

All you need to know for now is that for us to ask the Shell for side effects, it will need to know what side effects it needs to handle, so we will need to list the possible kinds of effects (as an enum). Effects are simply messages describing what should happen. In our case the only option is asking for a UI update (or, more precisely, telling the shell a new view model is available).

That's enough about effects for now, we will spend a lot more time with them later on.

Implementing the App trait

We now have all the building blocks to implement the App trait. Here is where we end up (straight from the actual example code):

impl App for Counter {
    type Event = Event;
    type Model = Model;
    type ViewModel = ViewModel;
    type Effect = Effect;

    fn update(&self, event: Event, model: &mut Model) -> Command<Effect, Event> {
        match event {
            Event::Increment => model.count += 1,
            Event::Decrement => model.count -= 1,
            Event::Reset => model.count = 0,
        }

        render()
    }

    fn view(&self, model: &Model) -> ViewModel {
        ViewModel {
            count: format!("Count is: {}", model.count),
        }
    }
}

The update function is the heart of the app, it manages the state transitions of the app. It responds to events by (optionally) updating the state. You may have noticed the strange return type: Command<Effect, Event>.

This is the request for some side-effects. We seem to be accumulating terminology, so let's do a quick recap:

  • Effect - a request for a type of side-effect (e.g. an HTTP request)
  • Operation - carried by the Effect, specifies the data for the effect (e.g. the URL, method, headers, body...)
  • Command - a bundle of effect requests which execute together, sequentially, in parallel or in a more complex coordination
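To make the layering concrete, here is a rough sketch with simplified stand-in types (not the real crux_core types, which carry more machinery):

```rust
// Simplified stand-ins for illustration; not the real crux_core types.

// Operation: the data for a single effect.
struct HttpOperation {
    url: String,
}
struct RenderOperation;

// Effect: which kind of side-effect is requested, carrying its operation.
enum Effect {
    Http(HttpOperation),
    Render(RenderOperation),
}

// Command: a bundle of effect requests that execute together.
struct Command {
    effects: Vec<Effect>,
}

fn main() {
    let cmd = Command {
        effects: vec![
            Effect::Http(HttpOperation {
                url: "https://example.com/counter".to_string(),
            }),
            Effect::Render(RenderOperation),
        ],
    };
    assert_eq!(cmd.effects.len(), 2);
}
```

The real Command additionally coordinates how its effects run, which is what the next section is about.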

Why so much layering?

In real apps, we typically use a few kinds of effects over and over, and so it's necessary to allow reuse. That's what the Effect enum does: it bundles together effects of the same type, defined by the same module or crate (we call those modules Capabilities, but let's not worry about those yet).

The other thing that happens in real apps is mixing different kinds of effects in workflows, chaining them, running them concurrently, even racing them. That's what commands allow you to do.

Our update function looks at the event it got, updates the model.count, and since the count has changed, the UI needs to update, so it calls render(). The render() call returns a Command, which update just passes on to the caller.

The view function's job is to return the representation of what we want the Shell to show on screen. It's up to the Shell to call it when ready. Our view does a bit of string formatting and wraps it in a ViewModel.

That's a working counter done. It's obviously really basic, but it's enough for us to test it.

Testing the Counter app

In this chapter we'll write some basic tests for our counter app. It is tempting to skip reading this, but please don't. Testing and testability is one of the most important benefits of Crux, and even in this simple case, subtle things are going on, which we'll build on later.

The first test

Technically, we've already broken the rules and written code without having a failing test for it. We're going to let that slip in the name of education, but let's fix that before someone alerts the TDD authorities.

The first test we're going to write will check that resetting the count renders the UI.

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn renders() {
        let app = Counter;
        let mut model = Model::default();

        let mut cmd = app.update(Event::Reset, &mut model);

        // Check update asked us to `Render`
        cmd.expect_one_effect().expect_render();
    }
}

We create an instance of the app, and an instance of the model. Then we call update with the Event::Reset event. As you may remember, we get back a Command, which we expect to carry a request for a render operation. Using the expectation helper API of the Command type, we check we got one effect, and that the effect is a render. Both methods will panic if they don't succeed (they are also #[cfg(test)] only, so don't use them outside of tests).

That test should pass (check with cargo nextest run). Next up, we can check that the view model is rendered correctly:

#[test]
fn shows_initial_count() {
    let app = Counter;
    let model = Model::default();

    let actual_view = app.view(&model).count;
    let expected_view = "Count is: 0";

    assert_eq!(actual_view, expected_view);
}

This is a lot more basic, just a simple equality assertion. Let's try something a bit more interesting:

#[test]
fn increments_count() {
    let app = Counter;
    let mut model = Model::default();

    let mut cmd = app.update(Event::Increment, &mut model);

    // Check update asked us to `Render`
    cmd.expect_one_effect().expect_render();

    let actual_view = app.view(&model).count;
    let expected_view = "Count is: 1";
    assert_eq!(actual_view, expected_view);
}

When we send the increment event, we expect to be told to render, and we expect the view to show "Count is: 1".

You could just as well test the model state directly; it's really up to you which is more convenient, and to what extent you prefer your tests to know about how your state works.

By now you get the gist, so here's all the tests to satisfy ourselves that the app does in fact work:

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn renders() {
        let app = Counter;
        let mut model = Model::default();

        let mut cmd = app.update(Event::Reset, &mut model);

        // Check update asked us to `Render`
        cmd.expect_one_effect().expect_render();
    }

    #[test]
    fn shows_initial_count() {
        let app = Counter;
        let model = Model::default();

        let actual_view = app.view(&model).count;
        let expected_view = "Count is: 0";
        assert_eq!(actual_view, expected_view);
    }

    #[test]
    fn increments_count() {
        let app = Counter;
        let mut model = Model::default();

        let mut cmd = app.update(Event::Increment, &mut model);

        // Check update asked us to `Render`
        cmd.expect_one_effect().expect_render();

        let actual_view = app.view(&model).count;
        let expected_view = "Count is: 1";
        assert_eq!(actual_view, expected_view);
    }

    #[test]
    fn decrements_count() {
        let app = Counter;
        let mut model = Model::default();

        let mut cmd = app.update(Event::Decrement, &mut model);

        // Check update asked us to `Render`
        cmd.expect_one_effect().expect_render();

        let actual_view = app.view(&model).count;
        let expected_view = "Count is: -1";
        assert_eq!(actual_view, expected_view);
    }

    #[test]
    fn resets_count() {
        let app = Counter;
        let mut model = Model::default();

        let _ = app.update(Event::Increment, &mut model);
        let _ = app.update(Event::Reset, &mut model);

        // Was the view updated correctly?
        let actual = app.view(&model).count;
        let expected = "Count is: 0";
        assert_eq!(actual, expected);
    }

    #[test]
    fn counts_up_and_down() {
        let app = Counter;
        let mut model = Model::default();

        let _ = app.update(Event::Increment, &mut model);
        let _ = app.update(Event::Reset, &mut model);
        let _ = app.update(Event::Decrement, &mut model);
        let _ = app.update(Event::Increment, &mut model);
        let _ = app.update(Event::Increment, &mut model);

        // Was the view updated correctly?
        let actual = app.view(&model).count;
        let expected = "Count is: 1";
        assert_eq!(actual, expected);
    }
}

You can see that occasionally we test for the render to be requested. This will be important later, because we'll be able to not only check for the effects, but also resolve them – provide the value they requested, for example the response to an HTTP request.

That will let us test entire user flows calling web APIs, working with local storage and timers, and anything else, all at the speed of unit tests and without ever touching the external world or writing a single fake (and maintaining it later).

For now though, let's actually give this thing some user interface. Time to build a Shell.

Preparing to add the Shell

So far, we've built a basic app in relatively basic Rust. If we now want to expose it to a Shell written in a different language, we'll have to set up the necessary plumbing, starting with the foreign function interface.

The core FFI bindings

From the work so far, you may have noticed the app has a pretty limited API, basically the update and view methods. There's one more for resolving effects (called resolve), but that really is it. We need to make those three methods available to the Shell, but once that's done, we don't have to touch it again.

Let's briefly talk about what we want from this interface. Ideally, in our language of choice we would:

  • have a native equivalent of the update, view and resolve functions
  • have an equivalent for our Event, Effect and ViewModel types
  • not have to worry about what black magic is happening behind the scenes to make that work

Crux provides code generation support for all of the above.

Note

It isn't in any way actual black magic. What happens is Crux exposes FFI calls taking and returning the values serialized with bincode (by default), and generates "foreign" (Swift, Kotlin, ...) types which handle the foreign side of the serialization.

Yes, this introduces some extra work at the FFI boundary, but generally, for each user interaction we make a relatively small number of round-trips (almost certainly fewer than ten), and our benchmarks say we can make thousands of them per second. The real throughput depends on how much data gets serialized, but it only becomes a problem with really large messages, and advanced workarounds exist. You most likely don't need to worry about it, at least not for now.

Preparing the core

We will prepare the core for both kinds of supported shells - native ones and WebAssembly ones.

To help with the native setup, Crux uses Mozilla's UniFFI to generate the bindings. For WebAssembly, it uses wasm-bindgen.

First, let's update our Cargo.toml:

# shared/Cargo.toml
[package]
name = "shared"
version = "0.1.0"
authors.workspace = true
edition.workspace = true
rust-version.workspace = true
repository.workspace = true
license.workspace = true
keywords.workspace = true

[lib]
crate-type = ["cdylib", "lib", "staticlib"]

[[bin]]
name = "codegen"
required-features = ["codegen"]

[features]
uniffi = ["dep:uniffi"]
wasm_bindgen = ["dep:wasm-bindgen", "getrandom/wasm_js"]
codegen = [
    "crux_core/cli",
    "dep:clap",
    "dep:log",
    "dep:pretty_env_logger",
    "uniffi",
]
facet_typegen = ["crux_core/facet_typegen"]

[dependencies]
crux_core.workspace = true
serde = { workspace = true, features = ["derive"] }
facet = "=0.31"

# optional dependencies
clap = { version = "4.5.60", optional = true, features = ["derive"] }
getrandom = { version = "=0.3", optional = true, default-features = false }
# js-sys = { version = "0.3.83", optional = true }
log = { version = "0.4.29", optional = true }
pretty_env_logger = { version = "0.5.0", optional = true }
uniffi = { version = "=0.29.4", optional = true }
wasm-bindgen = { version = "0.2.114", optional = true }

A lot has changed! The key things we added are:

  1. a bin target called codegen, which is how we're going to run all the code generation
  2. feature flags to optionally enable uniffi and wasm_bindgen, grouped under codegen, alongside some dependencies which are only pulled in when the corresponding feature is enabled
  3. dependencies we need for the code generation

And since we've declared the codegen target, we need to add the code for it.

// shared/src/bin/codegen.rs
use std::path::PathBuf;

use clap::{Parser, ValueEnum};
use crux_core::{
    cli::{BindgenArgsBuilder, bindgen},
    type_generation::facet::{Config, TypeRegistry},
};
use log::info;
use uniffi::deps::anyhow::Result;

use shared::Counter;

#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, ValueEnum)]
enum Language {
    Swift,
    Kotlin,
    Typescript,
}

#[derive(Parser)]
#[command(version, about, long_about = None)]
struct Args {
    #[arg(short, long, value_enum)]
    language: Language,
    #[arg(short, long)]
    output_dir: PathBuf,
}

fn main() -> Result<()> {
    pretty_env_logger::init();
    let args = Args::parse();

    let typegen_app = TypeRegistry::new().register_app::<Counter>()?.build()?;

    let name = match args.language {
        Language::Swift => "App",
        Language::Kotlin => "com.crux.examples.simplecounter",
        Language::Typescript => "app",
    };
    let config = Config::builder(name, &args.output_dir)
        .add_extensions()
        .add_runtimes()
        .build();

    match args.language {
        Language::Swift => {
            info!("Typegen for Swift");
            typegen_app.swift(&config)?;
        }
        Language::Kotlin => {
            info!("Typegen for Kotlin");
            typegen_app.kotlin(&config)?;

            info!("Bindgen for Kotlin");
            let bindgen_args = BindgenArgsBuilder::default()
                .crate_name(env!("CARGO_PKG_NAME").to_string())
                .kotlin(&args.output_dir)
                .build()?;
            bindgen(&bindgen_args)?;
        }
        Language::Typescript => {
            info!("Typegen for TypeScript");
            typegen_app.typescript(&config)?;
        }
    }

    Ok(())
}

This is essentially boilerplate for a CLI we can use to run the binding generation and type generation. But it's also a place where you can customize how they work if you have some more advanced needs.

It uses the facet-based type generation from crux_core to scan the App for types which will cross the FFI boundary, collects them, and then, depending on the target language, generates the corresponding code and places it in the specified output_dir.

We will call this CLI from the shell projects shortly.

Codegen, typegen, bindgen, which is it?

You'll hear these terms thrown around here and there in the docs, so it's worth clarifying what we mean by each.

bindgen – "bindings generation" – provides APIs in the foreign language to call the core's Rust FFI APIs. For most platforms we use UniFFI, except for WebAssembly, where we use wasm_bindgen

typegen – "type generation" – The core's FFI interface operates on bytes, but both Rust and the languages we're targeting are generally strongly typed. To facilitate the serialization / deserialization, we generate type definitions reflecting the core's Rust types in the foreign languages (Swift, Kotlin, TypeScript, ...), all of which serialize consistently.

codegen – you guessed it, "code generation" – is the two things above combined.
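
To make the bytes-in, bytes-out contract concrete, here is a minimal, self-contained sketch (toy code, not the crux API): events arrive as serialized bytes, and effect requests and view models leave as serialized bytes.

```rust
// Toy model of the message-passing FFI boundary: only byte buffers cross it.
// A single opcode byte stands in for a serialized Event enum.
fn update(state: &mut i32, event_bytes: &[u8]) -> Vec<u8> {
    match event_bytes {
        [0] => *state += 1, // "increment"
        [1] => *state -= 1, // "decrement"
        _ => {}             // unknown events are ignored in this sketch
    }
    // The returned bytes stand in for serialized effect requests;
    // here, 255 plays the role of a "render" request.
    vec![255]
}

// Serialize the "view model"; here just the count as little-endian bytes.
fn view(state: &i32) -> Vec<u8> {
    state.to_le_bytes().to_vec()
}
```

In the real bridge the payloads are Serde-serialized Event, Request, and ViewModel types, and the foreign-language side reads them with the generated type definitions.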

Bindings code

Now we need to add the Rust side of the bindings into our code. Update your lib.rs to look like this:

// shared/src/lib.rs
mod app;
pub mod ffi;

pub use app::*;
pub use crux_core::Core;

#[cfg(feature = "uniffi")]
const _: () = assert!(
    uniffi::check_compatible_version("0.29.4"),
    "please use uniffi v0.29.4"
);
#[cfg(feature = "uniffi")]
uniffi::setup_scaffolding!();

This code uses our feature flags to conditionally initialize the UniFFI bindings and check the version in use.
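
The version check relies on assert! being usable in const context, so an incompatible version fails the build rather than misbehaving at runtime. A self-contained illustration of the pattern, using a toy comparison function (the real check lives inside uniffi):

```rust
// Compare two version strings byte-by-byte in a const fn, so the result
// is available at compile time. (Toy stand-in for uniffi's version check.)
const fn same_version(a: &str, b: &str) -> bool {
    let a = a.as_bytes();
    let b = b.as_bytes();
    if a.len() != b.len() {
        return false;
    }
    let mut i = 0;
    while i < a.len() {
        if a[i] != b[i] {
            return false;
        }
        i += 1;
    }
    true
}

// Evaluated during compilation: a mismatch is a compile error, not a panic.
const _: () = assert!(same_version("0.29.4", "0.29.4"), "please use uniffi v0.29.4");
```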

More importantly, it introduces a new ffi.rs module. Let's take a closer look at it:

// shared/src/ffi.rs
use crux_core::{
    Core,
    bridge::{Bridge, EffectId},
};

use crate::Counter;

/// The main interface used by the shell
#[cfg_attr(feature = "uniffi", derive(uniffi::Object))]
#[cfg_attr(feature = "wasm_bindgen", wasm_bindgen::prelude::wasm_bindgen)]
pub struct CoreFFI {
    core: Bridge<Counter>,
}

impl Default for CoreFFI {
    fn default() -> Self {
        Self::new()
    }
}

#[cfg_attr(feature = "uniffi", uniffi::export)]
#[cfg_attr(feature = "wasm_bindgen", wasm_bindgen::prelude::wasm_bindgen)]
impl CoreFFI {
    #[cfg_attr(feature = "uniffi", uniffi::constructor)]
    #[cfg_attr(
        feature = "wasm_bindgen",
        wasm_bindgen::prelude::wasm_bindgen(constructor)
    )]
    #[must_use]
    pub fn new() -> Self {
        Self {
            core: Bridge::new(Core::new()),
        }
    }

    /// Send an event to the app and return the effects.
    /// # Panics
    /// If the event cannot be deserialized.
    /// In production you should handle the error properly.
    #[must_use]
    pub fn update(&self, data: &[u8]) -> Vec<u8> {
        let mut effects = Vec::new();
        match self.core.update(data, &mut effects) {
            Ok(()) => effects,
            Err(e) => panic!("{e}"),
        }
    }

    /// Resolve an effect and return the effects.
    /// # Panics
    /// If the `data` cannot be deserialized into an effect or the `effect_id` is invalid.
    /// In production you should handle the error properly.
    #[must_use]
    pub fn resolve(&self, id: u32, data: &[u8]) -> Vec<u8> {
        let mut effects = Vec::new();
        match self.core.resolve(EffectId(id), data, &mut effects) {
            Ok(()) => effects,
            Err(e) => panic!("{e}"),
        }
    }

    /// Get the current `ViewModel`.
    /// # Panics
    /// If the view cannot be serialized.
    /// In production you should handle the error properly.
    #[must_use]
    pub fn view(&self) -> Vec<u8> {
        let mut view_model = Vec::new();
        match self.core.view(&mut view_model) {
            Ok(()) => view_model,
            Err(e) => panic!("{e}"),
        }
    }
}

Broad strokes: we define a CoreFFI type, which holds a Bridge wrapping our Counter, and provide implementations of the three API methods, each taking and returning byte buffers.

The translation between Rust types and the byte buffers is the job of the bridge (it also holds the effect requests inside the core under an id, which can be sent out to the Shell and used to resolve the effect, but more on that later).
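
That id bookkeeping can be pictured with a toy bridge (hypothetical code for illustration, not the actual crux_core internals): each effect request is parked under a fresh id, and resolving it removes it.

```rust
use std::collections::HashMap;

// Toy sketch of the bridge's effect bookkeeping.
struct ToyBridge {
    next_id: u32,
    // The real bridge parks resolvable effect handles; a String stands in here.
    pending: HashMap<u32, String>,
}

impl ToyBridge {
    fn new() -> Self {
        Self { next_id: 0, pending: HashMap::new() }
    }

    // Park an effect request and hand its id out to the Shell.
    fn request(&mut self, effect: &str) -> u32 {
        let id = self.next_id;
        self.next_id += 1;
        self.pending.insert(id, effect.to_string());
        id
    }

    // The Shell resolves by id; resolving twice (or with a bad id) is an error.
    fn resolve(&mut self, id: u32, _output: &[u8]) -> Result<String, String> {
        self.pending
            .remove(&id)
            .ok_or(format!("unknown effect id {id}"))
    }
}
```

This is why resolve on CoreFFI takes an id alongside the output bytes: the id tells the core which parked effect the outcome belongs to.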

Notice the Shell is in charge of creating the instance of this type, so in theory your Shell can have several instances of the app if it wants to.

There are a number of attribute macros annotating the FFI type for uniffi and wasm_bindgen, which generate the actual code exposing it over FFI. We recommend the respective documentation if you're interested in the details of how this works. The notable part is that both libraries have some support for basic and structured data types, but we don't use it; instead, we serialize the data with Serde and generate types with facet_generate to keep the support consistent across languages.

It's not essential for you to understand the detail of the above code now. You won't need to change it, unless you're doing something fairly advanced, by which time you'll understand it.

Platform native part

Okay, with that plumbing in place, the Core part of adding a shell is complete. It's not a one-liner, but you only set this up once and will most likely never touch it again; still, having the ability to change it, should you need to, is important.

Now we can proceed to the actual shell for your platform of choice:

iOS — Swift and SwiftUI

In this section, we'll set up Xcode to build and run the simple counter app we built so far.

Tip

We think that using XcodeGen may be the simplest way to create an Xcode project to build and run a simple iOS app that calls into a shared core.

If you'd rather set up Xcode manually, you can do that, but most of this section will still apply. You just need to add the Swift package dependencies into your project by hand.

When we use Crux to build iOS apps, the Core API bindings are generated in Swift (with C headers) using Mozilla's UniFFI.

The shared core, which we built in previous chapters, is compiled to a static library and linked into the iOS binary.

The shared types are generated by Crux as a Swift package, which we can add to our iOS project as a dependency. The Swift code to serialize and deserialize these types across the boundary is also generated by Crux as Swift packages.

build flow

Compile our Rust shared library

When we build our iOS app, we also want to build the Rust core as a static library so that it can be linked into the binary that we're going to ship.

Other than Xcode and the Apple developer tools, we will use cargo-swift to generate a Swift package for our shared library, which we can add in Xcode.

To match our current version of UniFFI, we need to install version 0.9 of cargo-swift. You can install it with

cargo install cargo-swift --version '=0.9'

To run the various steps, we'll also use the Just task runner.

cargo install just

Let's write the Justfile and we can look at what happens:

# /iOS/Justfile

# Generate types for Swift, build the shared library as a Swift package and rebuild the Xcode project
build: typegen package generate-project

# clean and build
rebuild: clean build

# build and run Xcode
dev: build
    xed .

# remove all the generated artefacts
clean:
    cargo clean
    rm -rf *.xcodeproj generated

# rebuild the Xcode project from the `project.yml` file
generate-project:
    xcodegen

# generate types for Swift
typegen:
    RUST_LOG=info cargo run \
        --package shared \
        --bin codegen \
        --features codegen,facet_typegen \
        -- \
            --language swift \
            --output-dir generated

# use `cargo swift` to build the shared library as a Swift package
[working-directory('../shared')]
package:
    cargo swift --version | grep -q '0.9.0'
    cargo swift package \
        --name Shared \
        --platforms ios \
        --lib-type static \
        --features uniffi
    rm -rf generated
    mkdir -p ../iOS/generated/Shared
    cp -r Shared/* ../iOS/generated/Shared/
    rm -rf Shared

We have quite a few tasks. The main one is dev which we'll use shortly. It runs the build task and opens Xcode in the current directory.

build in turn runs typegen, package and generate-project. typegen will use the codegen CLI we prepared earlier, and package will use cargo swift to create a Shared package with our app binary and the bindgen code. That package will be our Swift interface to the core.

Finally, generate-project runs xcodegen to give us an Xcode project file. Xcode project files are famously fragile and difficult to version control, so generating them from a less arcane source of truth seems like a good idea (yes, even if that source of truth is YAML).

Here's the project file:

# /iOS/project.yml
name: SimpleCounter
packages:
  Shared:
    path: ./generated/Shared
  App:
    path: ./generated/App
options:
  bundleIdPrefix: com.crux.examples.simplecounter
attributes:
  BuildIndependentTargetsInParallel: true
targets:
  SimpleCounter:
    type: application
    platform: iOS
    deploymentTarget: 18.0
    sources: [SimpleCounter]
    dependencies:
      - package: Shared
      - package: App
    info:
      path: SimpleCounter/Info.plist
      properties:
        UISupportedInterfaceOrientations:
          - UIInterfaceOrientationPortrait
          - UIInterfaceOrientationLandscapeLeft
          - UIInterfaceOrientationLandscapeRight
        UILaunchScreen: {}

Nothing too special, other than linking a couple of packages and using them as dependencies.

With that, you can run

just dev

Simple - just dev! So what exactly happened?

The core was built, including the FFI and the extra CLI binary, which was then run to generate Swift code, which in turn was packaged as a Swift package. If you look at the generated directory, you'll see two Swift packages - Shared and App, just as we asked for in project.yml. The Shared package contains our app as a static lib and all the generated FFI bindings, and the App package contains the key types we will need.

No need to spend much time in here, but this is all the low-level glue code sorted out. Now we need to actually build some UI and we can run our app.

Building the UI

To add some UI, we need to do three things: wrap the core with a simple Swift interface, build a basic View to give us something to put on screen, and use that view as our main app view.

Wrap the core

The generated code still works with byte buffers, so let's give ourselves a nicer interface for it:

// iOS/SimpleCounter/core.swift
import App
import UIKit
import Shared

@MainActor
class Core: ObservableObject {
    @Published var view: ViewModel

    private var core: CoreFfi

    init() {
        self.core = CoreFfi()
        self.view = try! .bincodeDeserialize(input: [UInt8](core.view()))
    }

    func update(_ event: Event) {
        let effects = [UInt8](core.update(data: Data(try! event.bincodeSerialize())))

        let requests: [Request] = try! .bincodeDeserialize(input: effects)
        for request in requests {
            processEffect(request)
        }
    }

    func processEffect(_ request: Request) {
        switch request.effect {
        case .render:
            DispatchQueue.main.async {
                self.view = try! .bincodeDeserialize(input: [UInt8](self.core.view()))
            }
        }
    }
}

This is mostly just serialization code. But the processEffect method is interesting. That is where effect execution goes. At the moment the switch statement has a single lonely case updating the view model whenever the .render variant is requested, but you can add more in here later, as you expand your Effect type.

Build a basic view

Xcode should've generated a ContentView file for you in iOS/SimpleCounter/ContentView.swift. Change it to look like this:

import SwiftUI

struct ContentView: View {
    @ObservedObject var core: Core

    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundColor(.accentColor)
            Text(core.view.count)
            HStack {
                ActionButton(label: "Reset", color: .red) {
                    core.update(.reset)
                }
                ActionButton(label: "Inc", color: .green) {
                    core.update(.increment)
                }
                ActionButton(label: "Dec", color: .yellow) {
                    core.update(.decrement)
                }
            }
        }
    }
}

struct ActionButton: View {
    var label: String
    var color: Color
    var action: () -> Void

    init(label: String, color: Color, action: @escaping () -> Void) {
        self.label = label
        self.color = color
        self.action = action
    }

    var body: some View {
        Button(action: action) {
            Text(label)
                .fontWeight(.bold)
                .font(.body)
                .padding(EdgeInsets(top: 10, leading: 15, bottom: 10, trailing: 15))
                .background(color)
                .cornerRadius(10)
                .foregroundColor(.white)
                .padding()
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView(core: Core())
    }
}

And finally, make sure iOS/SimpleCounter/SimpleCounterApp.swift looks like this to use the ContentView:

import SwiftUI

@main
struct SimpleCounterApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView(core: Core())
        }
    }
}

The one interesting part of this is the @ObservedObject var core: Core. Since Core is an ObservableObject, we can subscribe to it to refresh our view. And because we've marked the view property as @Published, the View will redraw whenever we set it.

The view then simply shows the core.view.count in a Text and whenever we press a button, we directly call core.update() with the appropriate action.

Success

You should then be able to run the app in the simulator or on an iPhone, and it should look like this:

simple counter app

Android

Warning

This section has not been updated to match the rest of the documentation and some parts may not match how Crux works any more.

Bear with us while we update and use the iOS section as the template to follow.

When we use Crux to build Android apps, the Core API bindings are generated in Java using Mozilla's UniFFI.

The shared core (that contains our app's behavior) is compiled to a dynamic library, using Mozilla's Rust gradle plugin for Android and the Android NDK. The library is loaded at runtime using Java Native Access.

The shared types are generated by Crux as Java packages, which we can add to our Android project using sourceSets. The Java code to serialize and deserialize these types across the boundary is also generated by Crux as Java packages.

build flow

This section has a guide for building Android apps with Crux:

  1. Kotlin and Jetpack Compose

Android — Kotlin and Jetpack Compose

These are the steps to set up Android Studio to build and run a simple Android app that calls into a shared core.

Sharp edge

We want to make setting up Android Studio to work with Crux really easy. As time progresses we will try to simplify and automate as much as possible, but at the moment there is some manual configuration to do. This only needs doing once, so we hope it's not too much trouble. If you know of any better ways than those we describe below, please either raise an issue (or a PR) at https://github.com/redbadger/crux.

Rust gradle plugin

This walkthrough uses Mozilla's excellent Rust gradle plugin for Android, which uses Python. However, the pipes module was removed from Python in version 3.13, so you may encounter an error linking your shared library.

If you hit this problem, you can either:

  1. use an older Python (<3.13)
  2. wait for a fix (see this issue)
  3. or use a different plugin — there is a PR in the Crux repo that explores the use of cargo-ndk and the cargo-ndk-android plugin that may be useful.

Create an Android App

The first thing we need to do is create a new Android app in Android Studio.

Open Android Studio and create a new project, for "Phone and Tablet", of type "Empty Activity". In this walk-through, we'll call it "SimpleCounter".

  • "Name": SimpleCounter
  • "Package name": com.example.simple_counter
  • "Save Location": a directory called Android at the root of our monorepo
  • "Minimum SDK": API 34
  • "Build configuration language": Groovy DSL (build.gradle)

Your repo's directory structure might now look something like this (some files elided):

.
├── Android
│  ├── app
│  │  ├── build.gradle
│  │  ├── libs
│  │  └── src
│  │     └── main
│  │        ├── AndroidManifest.xml
│  │        └── java
│  │           └── com
│  │              └── example
│  │                 └── simple_counter
│  │                    └── MainActivity.kt
│  ├── build.gradle
│  ├── gradle.properties
│  ├── local.properties
│  └── settings.gradle
├── Cargo.lock
├── Cargo.toml
├── shared
│  ├── build.rs
│  ├── Cargo.toml
│  ├── src
│  │  ├── app.rs
│  │  ├── lib.rs
│  │  └── shared.udl
│  └── uniffi.toml
├── shared_types
│  ├── build.rs
│  ├── Cargo.toml
│  └── src
│     └── lib.rs
└── target

Add a Kotlin Android Library

This shared Android library (aar) is going to wrap our shared Rust library.

Under File -> New -> New Module, choose "Android Library" and give it the "Module name" shared. Set the "Package name" to match the one from your /shared/uniffi.toml, which in this example is com.example.simple_counter.shared.

Again, set the "Build configuration language" to Groovy DSL (build.gradle).

For more information on how to add an Android library see https://developer.android.com/studio/projects/android-library.

We can now add this library as a dependency of our app.

Edit the app's build.gradle (/Android/app/build.gradle) to look like this:

{{#include ../../../../examples/simple_counter/Android/app/build.gradle}}

Note

In our gradle files, we are referencing a "Version Catalog" to manage our dependency versions, so you will need to ensure this is kept up to date.

Our catalog (Android/gradle/libs.versions.toml) will end up looking like this:

{{#include ../../../../examples/simple_counter/Android/gradle/libs.versions.toml}}

The Rust shared library

We'll use the following tools to incorporate our Rust shared library into the Android library added above. This includes compiling and linking the Rust dynamic library and generating the runtime bindings and the shared types.

The NDK can be installed from "Tools, SDK Manager, SDK Tools" in Android Studio.

Let's get started.

Add the four rust android toolchains to your system:

$ rustup target add aarch64-linux-android armv7-linux-androideabi i686-linux-android x86_64-linux-android

Edit the project's build.gradle (/Android/build.gradle) to look like this:

{{#include ../../../../examples/simple_counter/Android/build.gradle}}

Edit the library's build.gradle (/Android/shared/build.gradle) to look like this:

{{#include ../../../../examples/simple_counter/Android/shared/build.gradle}}

Sharp edge

You will need to set the ndkVersion to one you have installed. Go to "Tools, SDK Manager, SDK Tools" and check "Show Package Details" to see your installed versions, or to install the version matching build.gradle above.

Tip

When you have edited the gradle files, don't forget to click "sync now".

If you now build your project you should see the newly built shared library object file.

$ ls --tree Android/shared/build/rustJniLibs
Android/shared/build/rustJniLibs
└── android
   ├── arm64-v8a
   │  └── libshared.so
   ├── armeabi-v7a
   │  └── libshared.so
   ├── x86
   │  └── libshared.so
   └── x86_64
      └── libshared.so

You should also see the generated types — note that the sourceSets directive in the shared library gradle file (above) allows us to build our shared library against the generated types in the shared_types/generated folder.

$ ls --tree shared_types/generated/java
shared_types/generated/java
└── com
   ├── example
   │  └── simple_counter
   │     ├── shared
   │     │  └── shared.kt
   │     └── shared_types
   │        ├── Effect.java
   │        ├── Event.java
   │        ├── RenderOperation.java
   │        ├── Request.java
   │        ├── Requests.java
   │        ├── TraitHelpers.java
   │        └── ViewModel.java
   └── novi
      ├── bincode
      │  ├── BincodeDeserializer.java
      │  └── BincodeSerializer.java
      └── serde
         ├── ArrayLen.java
         ├── BinaryDeserializer.java
         ├── BinarySerializer.java
         ├── Bytes.java
         ├── DeserializationError.java
         ├── Deserializer.java
         ├── Int128.java
         ├── SerializationError.java
         ├── Serializer.java
         ├── Slice.java
         ├── Tuple2.java
         ├── Tuple3.java
         ├── Tuple4.java
         ├── Tuple5.java
         ├── Tuple6.java
         ├── Unit.java
         └── Unsigned.java

Create some UI and run in the Simulator

Example

There is a slightly more advanced example of an Android app in the Crux repository.

However, we will use the simple counter example, which has shared and shared_types libraries that will work with the following example code.

Simple counter example

A simple app that increments, decrements and resets a counter.

Wrap the core to support capabilities

First, let's add some boilerplate code to wrap our core and handle the capabilities that we are using. For this example, we only need to support the Render capability, which triggers a render of the UI.

Let's create a file "File, New, Kotlin Class/File, File" called Core.

Note

This code that wraps the core only needs to be written once — it only grows when we need to support additional capabilities.

Edit Android/app/src/main/java/com/example/simple_counter/Core.kt to look like the following. This code sends our (UI-generated) events to the core, and handles any effects that the core asks for. In this simple example, we aren't calling any HTTP APIs or handling any side effects other than rendering the UI, so we just handle this render effect by updating the published view model from the core.

{{#include ../../../../examples/simple_counter/Android/app/src/main/java/com/example/simple_counter/Core.kt}}

Tip

That when statement, above, is where you would handle any other effects that your core might ask for. For example, if your core needs to make an HTTP request, you would handle that here. To see an example of this, take a look at the counter example in the Crux repository.

Edit /Android/app/src/main/java/com/example/simple_counter/MainActivity.kt to look like the following:

{{#include ../../../../examples/simple_counter/Android/app/src/main/java/com/example/simple_counter/MainActivity.kt}}

Success

You should then be able to run the app in the simulator, and it should look like this:

simple counter app

Web — TypeScript and React (Next.js)

Warning

This section has not been updated to match the rest of the documentation and some parts may not match how Crux works any more.

Bear with us while we update and use the iOS section as the template to follow.

These are the steps to set up and run a simple TypeScript Web app that calls into a shared core.

Note

This walk-through assumes you have already added the shared and shared_types libraries to your repo, as described in Shared core and types.

Info

There are many frameworks available for writing Web applications with JavaScript/TypeScript. We've chosen React with Next.js for this walk-through because it is simple and popular. However, a similar setup would work for other frameworks.

Create a Next.js App

For this walk-through, we'll use the pnpm package manager for no reason other than we like it the most!

Let's create a simple Next.js app for TypeScript, using pnpx (from pnpm). You can probably accept the defaults.

pnpx create-next-app@latest

Compile our Rust shared library

When we build our app, we also want to compile the Rust core to WebAssembly so that it can be referenced from our code.

To do this, we'll use wasm-pack, which you can install like this:

# with homebrew
brew install wasm-pack

# or directly
curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh

Now that we have wasm-pack installed, we can build our shared library to WebAssembly for the browser.

(cd shared && wasm-pack build --target web)

Tip

You might want to add a wasm:build script to your package.json file, and call it when you build your nextjs project.

{
  "scripts": {
    "build": "pnpm run wasm:build && next build",
    "dev": "pnpm run wasm:build && next dev",
    "wasm:build": "cd ../shared && wasm-pack build --target web"
  }
}

Add the shared library as a Wasm package to your web-nextjs project

cd web-nextjs
pnpm add ../shared/pkg

Add the Shared Types

To generate the shared types for TypeScript, we can just run cargo build from the root of our repository. You can check that they have been generated correctly:

ls --tree shared_types/generated/typescript
shared_types/generated/typescript
├── bincode
│  ├── bincodeDeserializer.d.ts
│  ├── bincodeDeserializer.js
│  ├── bincodeDeserializer.ts
│  ├── bincodeSerializer.d.ts
│  ├── bincodeSerializer.js
│  ├── bincodeSerializer.ts
│  ├── mod.d.ts
│  ├── mod.js
│  └── mod.ts
├── node_modules
│  └── typescript -> .pnpm/typescript@4.8.4/node_modules/typescript
├── package.json
├── pnpm-lock.yaml
├── serde
│  ├── binaryDeserializer.d.ts
│  ├── binaryDeserializer.js
│  ├── binaryDeserializer.ts
│  ├── binarySerializer.d.ts
│  ├── binarySerializer.js
│  ├── binarySerializer.ts
│  ├── deserializer.d.ts
│  ├── deserializer.js
│  ├── deserializer.ts
│  ├── mod.d.ts
│  ├── mod.js
│  ├── mod.ts
│  ├── serializer.d.ts
│  ├── serializer.js
│  ├── serializer.ts
│  ├── types.d.ts
│  ├── types.js
│  └── types.ts
├── tsconfig.json
└── types
   ├── shared_types.d.ts
   ├── shared_types.js
   └── shared_types.ts

You can see that it also generates an npm package that we can add directly to our project.

pnpm add ../shared_types/generated/typescript

Create some UI

Example

There are other, more advanced, examples of Next.js apps in the Crux repository.

However, we will use the simple counter example, which has shared and shared_types libraries that will work with the following example code.

Simple counter example

A simple app that increments, decrements and resets a counter.

Wrap the core to support capabilities

First, let's add some boilerplate code to wrap our core and handle the capabilities that we are using. For this example, we only need to support the Render capability, which triggers a render of the UI.

Note

This code that wraps the core only needs to be written once — it only grows when we need to support additional capabilities.

Edit src/app/core.ts to look like the following. This code sends our (UI-generated) events to the core, and handles any effects that the core asks for. In this simple example, we aren't calling any HTTP APIs or handling any side effects other than rendering the UI, so we just handle this render effect by updating the component's view hook with the core's ViewModel.

Notice that we have to serialize and deserialize the data that we pass between the core and the shell. This is because the core is running in a separate WebAssembly instance, and so we can't just pass the data directly.

{{#include ../../../../examples/simple_counter/web-nextjs/src/app/core.ts}}

Tip

That switch statement, above, is where you would handle any other effects that your core might ask for. For example, if your core needs to make an HTTP request, you would handle that here. To see an example of this, take a look at the counter example in the Crux repository.

Create a component to render the UI

Edit src/app/page.tsx to look like the following. This code loads the WebAssembly core and sends it an initial event. Notice that we pass the setState hook to the update function so that we can update the state in response to a render effect from the core.

{{#include ../../../../examples/simple_counter/web-nextjs/src/app/page.tsx}}

Now all we need is some CSS. First add the Bulma package, and then import it in layout.tsx.

pnpm add bulma
{{#include ../../../../examples/simple_counter/web-nextjs/src/app/layout.tsx}}

Build and serve our app

We can build our app, and serve it for the browser, in one simple step.

pnpm dev

Success

Your app should look like this:

simple counter app

Web — Rust and Leptos

Warning

This section has not been updated to match the rest of the documentation and some parts may not match how Crux works any more.

Bear with us while we update and use the iOS section as the template to follow.

These are the steps to set up and run a simple Rust Web app that calls into a shared core.

Note

This walk-through assumes you have already added the shared and shared_types libraries to your repo, as described in Shared core and types.

Info

There are many frameworks available for writing Web applications in Rust. Here we're choosing Leptos for this walk-through as a way to demonstrate how Crux can work with web frameworks that use fine-grained reactivity rather than the conceptual full re-rendering of React. However, a similar setup would work for other frameworks that compile to WebAssembly.

Create a Leptos App

Our Leptos app is just a new Rust project, which we can create with Cargo. For this example we'll call it web-leptos.

cargo new web-leptos

We'll also want to add this new project to our Cargo workspace, by editing the root Cargo.toml file.

[workspace]
members = ["shared", "web-leptos"]

Now we can cd into the web-leptos directory and start fleshing out our project. Let's add some dependencies to web-leptos/Cargo.toml.

{{#include ../../../../examples/simple_counter/web-leptos/Cargo.toml}}

Tip

If using nightly Rust, you can enable the "nightly" feature for Leptos. When you do this, the signals become functions that can be called directly.

However in our examples we are using the stable channel and so have to use the get() and update() functions explicitly.

We'll also need a file called index.html, to serve our app.

{{#include ../../../../examples/simple_counter/web-leptos/index.html}}

Create some UI

Example

There is a slightly more advanced example of a Leptos app in the Crux repository.

However, we will use the simple counter example, which has shared and shared_types libraries that will work with the following example code.

Simple counter example

A simple app that increments, decrements and resets a counter.

Wrap the core to support capabilities

First, let's add some boilerplate code to wrap our core and handle the capabilities that we are using. For this example, we only need to support the Render capability, which triggers a render of the UI.

Note

This code that wraps the core only needs to be written once — it only grows when we need to support additional capabilities.

Edit src/core.rs to look like the following. This code sends our (UI-generated) events to the core, and handles any effects that the core asks for. In this simple example, we aren't calling any HTTP APIs or handling any side effects other than rendering the UI, so we just handle this render effect by sending the new ViewModel to the relevant Leptos signal.

Also note that because both our core and our shell are written in Rust (and run in the same memory space), we do not need to serialize and deserialize the data that we pass between them. We can just pass the data directly.

{{#include ../../../../examples/simple_counter/web-leptos/src/core.rs}}

Tip

That match statement, above, is where you would handle any other effects that your core might ask for. For example, if your core needs to make an HTTP request, you would handle that here. To see an example of this, take a look at the counter example in the Crux repository.

Edit src/main.rs to look like the following. This code creates two signals — one to update the view (which starts off with the core's current view), and the other to capture events from the UI (which starts off by sending the reset event). We also create an effect that sends these events into the core whenever they are raised.

{{#include ../../../../examples/simple_counter/web-leptos/src/main.rs}}

Build and serve our app

The easiest way to compile the app to WebAssembly and serve it in our web page is to use trunk, which we can install with Homebrew (brew install trunk) or Cargo (cargo install trunk).

We can build our app, serve it and open it in our browser, in one simple step.

trunk serve --open

Success

Your app should look like this:

simple counter app

The Weather App

So far, we've explained the basics with a very simple counter app. So simple, in fact, that it barely demonstrated any of the key features of Crux.

Time to ditch the training wheels and dive into something real. We'll need to demonstrate a few key concepts: how the Elm architecture works at a larger scale, how we manage navigation in multi-screen apps, and, as the main focus, managed effects and capabilities. To that end, we'll need an app that does enough interesting things, while staying reasonably small.

So we're going to build a Weather app. It is certainly interesting enough - it needs to call an API, store some data locally, and even use location APIs to show local weather. That's plenty of effects for us to play with and see how Crux supports this.

TODO: add iOS and Android screenshots

The app works very similarly to a system weather utility: you get multiple screens with basic weather information and a forecast, and you can search for locations and save favourites.

You can look at the full example code in the Crux Github repo, but we'll walk through the key parts. As before, we're going to start with the core and once we have it, look at the shells.

Unlike in Part I, we will not build the app step by step; that would be very long and repetitive. Instead, we will do more of a code review of the key parts.

Before we dive in though, let's quickly establish some foundations about the app architecture Crux follows, known most widely as the Elm architecture, after the language which popularised it.

Elm Architecture

Now that we've had a bit of a feel for what writing Crux apps is like, we'll add more context to the different components and the overall architecture of Crux apps. The architecture is heavily inspired by Elm, and if you'd like to compare, the Architecture page of their guide is an excellent starting point.

Event Sourcing as a model for UI

User Interface is fundamentally event-driven. Unlike batch or stream processing, all changes in apps with UI are driven by events happening in the outside world, most commonly the user interface itself – the user touching the screen, typing on a keyboard, executing a CLI command, etc. In response, the app updates its internal state, changes what's shown on the screen, starts an interaction with the outside world, or all of the above.

The Elm architecture is a very direct translation of this pattern into code. User interactions (along with other changes in the outside world, such as time passing) are represented by events, and in response to them, the app updates its internal state, represented by a model. The link between them is a simple, pure function which takes the model and the event, and updates the model based on the event. The actual UI on screen is a projection of (i.e. "is built only from") the model. Because there is virtually no other state in the app, the model must contain enough information to decide what should be on screen. As a more direct representation of that information, we can use a view model as a step between the model and the UI.

That gives us two functions:

fn update(event: Event, model: &mut Model);

fn view(model: &Model) -> ViewModel;

That's enough for a Counter app, but not for our Weather app. What we're missing is for the app to be able to interact with the outside world and respond to events in it. We can't perform side-effects yet. Conceptually, we need to extend the update function to not only mutate the model, but also to emit some side-effects (or just "effects" for short).

fn update(event: Event, model: &mut Model) -> Vec<Effect>;

fn view(model: &Model) -> ViewModel;

This more complete model is a function which takes an event and a model, mutates the model, and optionally produces some effects. This is still quite a simple and pure (well, there is an &mut... call it pure enough) function, and it is completely predictable: for the same inputs, it will always yield the same outputs (and changes to the model, guaranteed by Rust's borrow checker). That is a very important design choice, because it enables very easy testability, and that is what we need to build quality apps.
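To make the shape concrete, here is a minimal, self-contained sketch of the two functions for a counter. All the types here (Model, Event, Effect, ViewModel) are illustrative stand-ins, not Crux's real API:

```rust
// A minimal sketch of the update/view shape, with made-up types.

#[derive(Default, Debug, PartialEq)]
struct Model {
    count: i32,
}

enum Event {
    Increment,
    Decrement,
    Reset,
}

#[derive(Debug, PartialEq)]
enum Effect {
    Render, // ask the shell to redraw the UI
}

struct ViewModel {
    text: String,
}

// Pure and predictable: the same event and model state always yield
// the same model changes and the same effects.
fn update(event: Event, model: &mut Model) -> Vec<Effect> {
    match event {
        Event::Increment => model.count += 1,
        Event::Decrement => model.count -= 1,
        Event::Reset => model.count = 0,
    }
    vec![Effect::Render]
}

// The view model is a projection of the model, built from nothing else.
fn view(model: &Model) -> ViewModel {
    ViewModel {
        text: format!("Count is: {}", model.count),
    }
}

fn main() {
    let mut model = Model::default();
    let effects = update(Event::Increment, &mut model);
    assert_eq!(effects, vec![Effect::Render]);
    assert_eq!(view(&model).text, "Count is: 1");
}
```

Because nothing here touches the outside world, the whole behaviour can be exercised in plain unit tests.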

UI, effects and testability

User interface and effects are normally where testing gets very difficult.

If the application logic can directly cause changes in the outside world (or input/output — I/O, in computer parlance), the only way to verify the logic completely is to look at the result of those changes. The results, however, are pixels on screen, elements in the DOM, packets going over the network and other complex, difficult-to-inspect and often short-lived things. The only viable strategy to test them in this direct scenario is to take on the role of the particular device the app is working with and pretend to be that device – a practice known as mocking (or stubbing, or faking, depending on who you talk to). The APIs used to interact with these things are really complicated though, and rarely built with testing in mind. Even if you emulate them well, tests based on this approach won't be stable against changes in that API. When the API changes, your code and your tests will both have to change, taking any confidence they gave you in the first place with them. What's more, the APIs also differ across platforms, so now we have the same problem two or more times over.

The problem is in how apps are normally written (when written in a direct, imperative style). When it comes time to perform an effect, the most straightforward code just performs it straight away. The solution, as usual, is to add indirection. What Crux does (inspired by Elm, Haskell and others) is separate the intent from the execution, with a managed effects system.

Crux's effect approach focuses on capturing the intent of the effect, not the specific implementation of executing it. The intent is captured as data to benefit from type checking and from all the tools the language already provides for working with data. The business logic can stay pure, but express all the behaviour: state changes and effects. The intent is also the thing that needs to be tested. We can reasonably afford to trust that the authors of a HTTP client library, for example, have tested it and it does what it promises to do — all we need to check is that we're sending the right requests1.
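Because the intent is plain data, a test can assert on it directly, with no network and no mocked HTTP client. A minimal sketch of what such a test looks like, using made-up HttpRequest and Effect types (not the real crux_http API):

```rust
// Sketch: effects captured as data can be asserted on directly.
// HttpRequest and Effect are illustrative stand-ins, not crux types.

#[derive(Debug, PartialEq)]
struct HttpRequest {
    method: String,
    url: String,
}

#[derive(Debug, PartialEq)]
enum Effect {
    Http(HttpRequest),
    Render,
}

#[derive(Default)]
struct Model {
    weather: Option<String>,
}

enum Event {
    FetchWeather,
}

fn update(event: Event, _model: &mut Model) -> Vec<Effect> {
    match event {
        Event::FetchWeather => vec![Effect::Http(HttpRequest {
            method: "GET".to_string(),
            url: "https://example.com/forecast".to_string(),
        })],
    }
}

fn main() {
    // No I/O happens here; we only check that the right request
    // *would* be made. Executing it is the shell's job.
    let effects = update(Event::FetchWeather, &mut Model::default());
    assert_eq!(
        effects,
        vec![Effect::Http(HttpRequest {
            method: "GET".to_string(),
            url: "https://example.com/forecast".to_string(),
        })]
    );
}
```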

Executing the effects: the runtime Shell

In Elm, the responsibility to execute the requested effects falls on the Elm runtime. Crux is very similar, except both the app and (some of) the runtime are your responsibility. This means some more work, but it also means you only bring what you need and nothing more, both in terms of supported platforms and the necessary APIs.

In Crux, business logic written in Rust is captured in the update function mentioned above and the other pieces that the function needs: events, model and effects, each represented by a type. This code forms a Core, which is portable, and really easily testable.

The execution of effects, including drawing the user interface, is done in a native Shell. Its job is to draw the appropriate UI on screen, translate user interactions into events to send to the Core, and when requested, perform effects and return their outcomes back to the Core.

The two sides of the Shell

The Shell thus has two sides: the driving side – the interactions causing events which push the Core to action – and the driven side, which services the Core's requests for side effects. The Core itself is also driven: without being prompted by the Shell, the Core does nothing. It can't – with no other I/O, there are no other triggers which could cause the Core code to run. To the Shell, the Core is a simple library, providing some computation. From the perspective of the Core, the Shell is a platform the Core runs on.
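The driving relationship can be sketched as a simple loop. All of the types below are simplified stand-ins (the real Core/Shell boundary in Crux goes through an FFI layer and the Command machinery), but the shape is the same: the shell sends an event, services the requested effects, and feeds outcomes back in as new events:

```rust
// Sketch of a shell driving a core. Model, Event and Effect are
// illustrative stand-ins, not Crux's real types.

#[derive(Default)]
struct Model {
    log: Vec<String>,
}

enum Event {
    Tick,
    GotTime(String),
}

enum Effect {
    Render,
    AskForTime,
}

fn update(event: Event, model: &mut Model) -> Vec<Effect> {
    match event {
        Event::Tick => vec![Effect::AskForTime],
        Event::GotTime(t) => {
            model.log.push(t);
            vec![Effect::Render]
        }
    }
}

fn main() {
    let mut model = Model::default();

    // The core does nothing until the shell drives it with an event...
    let mut queue = update(Event::Tick, &mut model);

    // ...then the shell services each requested effect, possibly
    // producing new events for the core.
    while let Some(effect) = queue.pop() {
        match effect {
            Effect::AskForTime => {
                // The shell performs the real I/O and resolves the request.
                queue.extend(update(Event::GotTime("12:00".into()), &mut model));
            }
            Effect::Render => { /* redraw the UI from a view of the model */ }
        }
    }

    assert_eq!(model.log, vec!["12:00".to_string()]);
}
```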

The effect runtime is also driven

Note that this driven nature impacts how effects execute in Crux. In the next few chapters, you'll see that you can write effect orchestration with async Rust, but because the entirety of the core is driven, this async code only executes when the core APIs are called by the shell.

Don't worry if this means nothing to you for now, it'll make sense later.

Managed effects: the complex interactions between the core and the shell

While the basic effects are quite simple (e.g. "fetch a response over HTTP"), real world apps tend to compose them in quite complicated patterns with data dependencies between them, and we need to support this use well. In the next chapter, we'll introduce the Command API used to compose the basic effects into more complex interactions, and later we'll build on this with Capabilities, which provide an abstraction on top of these basic building blocks with a more ergonomic API.

Capabilities not only provide a nicer API for creating effects and effect orchestrations; in the future, they will likely also provide implementations of the effect execution for the various supported platforms.

With commands, our API evolves one final time, to the signature in the App trait:

fn update(event: Event, model: &mut Model) -> Command<Effect, Event>;

fn view(model: &Model) -> ViewModel;

Commands are generic over two types: an Effect, describing the interactions with the outside world we want to perform, and our Event, acting as a callback for when those interactions complete and return a value of some kind.

We will look at how effects are created and passed to the shell in the chapter after next; first, we'll have a look at how larger apps fit together in Crux.


  1. In reality, we do need to check that at least one of our HTTP requests executes successfully, but once one does, it is very likely that so long as they are described correctly, all of them will.

Structuring larger apps

Now that we have a better handle on what Crux apps are made of, let's think about how we might build our Weather app. It is certainly small enough to be built by just blindly following the simple counter example; there are only about 25 different events in total. But you will probably agree that some more structure would be good.

Composition

Fortunately, all the key components of the architecture compose. We can have Event variants which carry other event types, Model fields containing other models, and update functions calling other modules' update functions. Looking at the main app.rs module of the Weather app, this is exactly what's going on:

Here's the Event

#[derive(Facet, Serialize, Deserialize, Clone, Debug, PartialEq)]
#[repr(C)]
pub enum Event {
    Navigate(Box<Workflow>),
    Home(Box<WeatherEvent>),
    Favorites(Box<FavoritesEvent>),
}

There are only three options - navigate somewhere, an event on the home screen, or an event in the Favourites section.

The update function reflects this too:

fn update(&self, event: Self::Event, model: &mut Self::Model) -> Command<Effect, Event> {
    match event {
        Event::Navigate(next) => {
            model.workflow = *next;
            render()
        }
        Event::Home(home_event) => {
            let mut commands = Vec::new();
            if let WeatherEvent::Show = *home_event {
                commands.push(
                    favorites::events::update(FavoritesEvent::Restore, model)
                        .map_event(|fe| Event::Favorites(Box::new(fe))),
                );
            }

            commands.push(
                weather::events::update(*home_event, model)
                    .map_event(|we| Event::Home(Box::new(we))),
            );

            Command::all(commands)
        }

        Event::Favorites(fav_event) => favorites::events::update(*fav_event, model)
            .map_event(|e| Event::Favorites(Box::new(e))),
    }
}

We'll look closer at the navigation in the next section, but the other two events simply forward to a different module's update function. In one special case, we actually call two different updates from two different modules in response to the same event. In this example, we pass down the whole model as-is, but we could also pass down just a single field of it.

You can also see another kind of composition - a composition of commands. Both favorites::events::update and weather::events::update return a Command, and the Event::Home branch uses Command::all to run those commands in parallel. You might be wondering what's going on with the .map_event. The Command returned by favorites::events can emit the FavoritesEvent type, but we need our commands to emit them wrapped in Event::Favorites (and boxed, because they are a larger type), so that when they arrive back at this update function, they get recognized as favorites events and sent down the third branch of the match.

The main thing to remember about this is that the events always come in from the top, and they get routed by the layers to the right function which can process them (or they can be processed directly, if the parent module knows better and wants to do something special).

Model can compose in a similar way, but in our case it's more of a mix:

#[derive(Default, Debug)]
pub struct Model {
    pub weather_data: CurrentWeatherResponse,
    pub workflow: Workflow,
    pub favorites: Favorites,
    pub search_results: Option<Vec<GeocodingResponse>>,
    pub location_enabled: bool,
    pub last_location: Option<Location>,
}

The favorites field is a type from the favourites module, but weather_data looks useful globally, and so do search_results and the location-related fields.

The most interesting of these is the Workflow type, which manages our navigation state - which page of the app the user is currently on.

The main takeaway is that Crux is designed such that whole apps can be composed - an existing type implementing App can be used, unchanged, from a "parent" app, by

  1. adding an event variant which carries the child's event
  2. storing the child's model in the model
  3. calling the child's update where appropriate
  4. mapping the commands returned to the parent's event, and effect types (using .map_event and .map_effect)

That doesn't mean you should always subdivide apps in the same way; it is often a lot more convenient to share a model, or even an event type, across two or more modules. Just know that should you need to reuse a whole Crux app later on, you can.
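The four steps above can be sketched in miniature. Here Command is a plain Vec of events and map_event just maps over it; the real crux_core::Command is much richer, but the wiring has the same shape. All names (ChildEvent, ParentEvent, etc.) are hypothetical:

```rust
// Sketch of composing a child app into a parent, with stand-in types.

enum ChildEvent {
    Loaded,
}

// Step 1: an event variant which carries the child's event.
enum ParentEvent {
    Child(Box<ChildEvent>),
}

#[derive(Default)]
struct ChildModel {
    ready: bool,
}

// Step 2: the child's model stored inside the parent's model.
#[derive(Default)]
struct ParentModel {
    child: ChildModel,
}

// Stand-in for crux_core::Command: just a list of emitted events.
struct Command<E>(Vec<E>);

impl<E> Command<E> {
    // Step 4: mapping the child's events into the parent's event type.
    fn map_event<E2, F>(self, f: F) -> Command<E2>
    where
        F: Fn(E) -> E2,
    {
        Command(self.0.into_iter().map(f).collect())
    }
}

fn child_update(_event: ChildEvent, model: &mut ChildModel) -> Command<ChildEvent> {
    model.ready = true;
    Command(vec![ChildEvent::Loaded])
}

fn parent_update(event: ParentEvent, model: &mut ParentModel) -> Command<ParentEvent> {
    match event {
        // Step 3: delegating to the child's update where appropriate.
        ParentEvent::Child(e) => child_update(*e, &mut model.child)
            .map_event(|e| ParentEvent::Child(Box::new(e))),
    }
}

fn main() {
    let mut model = ParentModel::default();
    let cmd = parent_update(ParentEvent::Child(Box::new(ChildEvent::Loaded)), &mut model);
    assert!(model.child.ready);
    assert_eq!(cmd.0.len(), 1);
}
```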

Typical apps involve some kind of geography. The smaller the screen, the more moving between sections the user needs to do. But in principle, this is just more state, typically of an exclusive nature - the user can't be in two places at once. To avoid thinking too much about screens or windows (what if we need to build a CLI or a VR version?), let's generalise this idea into the concept of a Workflow. Workflows are in no way a special type; we're simply modeling our domain in Rust.

In our Weather app, the Workflow is an enum:

#[derive(Facet, Default, Serialize, Deserialize, Clone, Debug, PartialEq)]
#[repr(C)]
pub enum Workflow {
    #[default]
    Home,
    Favorites(FavoritesState),
    AddFavorite,
}

In other words - the user can be either on the Home page, or in the Favorites section (which has some additional state), or they can be adding a favorite. No other options currently exist, and they can only be doing one of those things at once.

At this point, it might be helpful to look at how this is reflected in the view model:

#[derive(Facet, Serialize, Deserialize, Clone, Debug, PartialEq)]
pub struct ViewModel {
    pub workflow: WorkflowViewModel,
}

#[derive(Facet, Serialize, Deserialize, Clone, Debug, PartialEq)]
#[repr(C)]
pub enum WorkflowViewModel {
    Home {
        weather_data: Box<CurrentWeatherResponse>,
        favorites: Vec<FavoriteView>,
    },
    Favorites {
        favorites: Vec<FavoriteView>,
        delete_confirmation: Option<Location>,
    },
    AddFavorite {
        search_results: Option<Vec<GeocodingResponse>>,
    },
}

#[derive(Facet, Serialize, Deserialize, Clone, Debug, PartialEq)]
pub struct FavoriteView {
    name: String,
    location: Location,
    current: Box<Option<CurrentWeatherResponse>>,
}

It is also an enum, because we're currently thinking about the app as separate workflows. If we had a two-panel kind of UX with a list and detail, we might model this differently. It's worth spending some time thinking about this when building the app, and this is part of why we encourage building Crux apps from the inside out.

The ViewModel's variants are a fair bit richer than the Workflow - while the workflow in the model is only concerned with where the user is, the ViewModel also carries the information they see. It is entirely enough for us to draw a user interface from.

To bring it home, let's look at the view function:

fn view(&self, model: &Model) -> ViewModel {
    let favorites = model.favorites.iter().map(From::from).collect();

    let workflow = match &model.workflow {
        Workflow::Home => WorkflowViewModel::Home {
            weather_data: Box::new(model.weather_data.clone()),
            favorites,
        },
        Workflow::Favorites(favorites_state) => match favorites_state {
            FavoritesState::Idle => WorkflowViewModel::Favorites {
                favorites,
                delete_confirmation: None,
            },
            FavoritesState::ConfirmDelete(location) => WorkflowViewModel::Favorites {
                favorites,
                delete_confirmation: Some(*location),
            },
        },
        Workflow::AddFavorite => WorkflowViewModel::AddFavorite {
            search_results: model.search_results.clone(),
        },
    };

    ViewModel { workflow }
}

As you may have guessed, it maps the workflow to a view model, inserting some data from the model along the way.

That's enough to express the idea of navigation, and what workflow the user is meant to be in. How it specifically works on each platform is up to each Shell.

Managed Effects

It's time to get the Weather app to actually fetch some weather information and let us store some favourites. And for that, we will need to interact with the outside world - we will need to perform side-effects.

As we mentioned before, the approach to side-effects Crux uses is sometimes called managed side-effects. Your app's core is not allowed to perform side-effects directly. Instead, whenever it wants to interact with the outside world, it needs to request the interaction from the shell.

It's not quite enough to do one side-effect at a time, however. In our weather app example we may want to load the list of favourite locations in parallel with checking the current location. We may also want to run a sequence, such as checking whether location services are enabled, then fetching a location if they are.

The abstraction Crux uses to capture the potentially complex orchestration of effects in response to an event is a type called Command.

Think of your whole app as a robot, where the Core is the brain of the robot and the Shell is the body of the robot. The brain instructs the body through commands and the body passes information about the outside world back to it with Events.

In this chapter we will explore how commands are created and used, before the next chapter, where we dive into capabilities, which provide a convenient way to create common commands.

Note on intent and execution

Managed effects are the key to Crux being portable across as many platforms as is sensible. Crux apps are, in a sense, built in the abstract, they describe what should happen in response to events, but not how it should happen. We think this is important both for portability, and for testing and general separation of concerns. What should happen is inherent to the product, and should behave the same way on any platform – it's part of what your app is. How it should be executed (and exactly what it looks like) often depends on the platform.

Different platforms may support different ways of executing an effect: biometric authentication, for example, may work very differently on various devices, and some may not support it at all. Different platforms also have different practical restrictions: on one platform (e.g. a smart watch), it may be perfectly appropriate to write things to disk, but internet access can't be guaranteed; on another (e.g. an API service, or an embedded device in a factory), writing to disk may not be possible, but an internet connection is virtually guaranteed. The specific storage solution for persistent caching would be implemented differently on different platforms, but could potentially share the key format and eviction strategy across them.

The hard part of designing effects is working out exactly where to draw the line between what is the intent and what is an implementation detail, and what's common across platforms versus different on each. The former is then implemented in Rust as a set of types, and the latter on the native side in the Shell, in whatever way is appropriate.

Because Effects define the "language" used to express intent, your Crux application code can be portable onto any platform capable of executing the intent in some way. Clearly, the number of different effects we can think of, and platforms we can target is enormous, and Crux doesn't want to force you to implement the entire portfolio of them on every platform.

Instead, your app is expected to define an Effect type which covers the kinds of effects which your app needs in order to work, and every time it responds to an Event, it is expected to return a Command.

Here is the Weather app's Effect type:

#[effect(facet_typegen)]
pub enum Effect {
    Render(RenderOperation),
    KeyValue(KeyValueOperation),
    Http(HttpRequest),
    Location(LocationOperation),
}

This tells us the app performs four kinds of side effects: rendering the UI, storing something in a key-value store, using an HTTP client, and using location services. That's all it does, and that's also all it can possibly do, until we expand this type further.

What is a Command

The Command is a recipe for a side-effect workflow which may perform several effects and also send events back to the app.

Core, update and command

Crux expects a Command to be returned by the update function. A basic Command will result in an effect request to the Shell, and when the request is resolved by the Shell, the Command will pass the output to the app in an Event. The interaction can be more complicated than this, however. You can imagine a command running a set of Effects concurrently (say, a few HTTP requests and a timer), then following some of them with additional effects based on their outputs, and finally sending an event with the result of some of the outputs combined. So in principle, a Command is a state machine which emits Effects (for the Shell) and Events (for the app) according to the internal logic of what needs to be accomplished.

Command provides APIs to iterate over the effects and events emitted so far. This API can be used both in tests and in Rust-based shells, and for some advanced use cases when composing applications.

Effects and Events

Let's look closer at Effects. Each effect carries a request for an Operation (e.g. an HTTP request), which can be inspected and resolved with an operation output (e.g. an HTTP response). After effect requests are resolved, the command may have further effect requests or events, depending on the recipe it's executing.

Types acting as an Operation must implement the crux_core::capability::Operation trait, which ties them to the type of their output. These two types are the protocol between the core and the shell when requesting and resolving effects. The other types involved in the exchange are various wrappers to enable the operations to be defined in separate crates. The operation is first wrapped in a Request, which can be resolved, and then again in an Effect, like we saw above. This allows multiple Operation types from different crates to coexist, and also enables the Shells to "dispatch" to the right implementation to handle them.
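The pairing of an operation with its output type can be sketched like this. The trait below is a simplified stand-in (the real crux_core::capability::Operation trait carries additional bounds, e.g. for serialization), and HttpRequest/HttpResponse are illustrative:

```rust
// Simplified sketch of the operation/output pairing.
// Not the real crux_core trait; types are illustrative.

trait Operation {
    // Each operation type declares what kind of output resolves it.
    type Output;
}

#[derive(Debug)]
struct HttpRequest {
    method: String,
    url: String,
}

#[derive(Debug, PartialEq)]
struct HttpResponse {
    status: u16,
    body: String,
}

// Tying the request to its response lets the type system check that
// a request can only be resolved with the right kind of output.
impl Operation for HttpRequest {
    type Output = HttpResponse;
}

// A "request" wrapper: carries the operation plus a way to resolve it.
struct Request<Op: Operation> {
    operation: Op,
    resolve: Box<dyn FnOnce(Op::Output)>,
}

fn main() {
    let request = Request {
        operation: HttpRequest {
            method: "GET".into(),
            url: "https://example.com".into(),
        },
        resolve: Box::new(|response: HttpResponse| {
            assert_eq!(response.status, 200);
        }),
    };

    println!("requesting {} {}", request.operation.method, request.operation.url);

    // The shell would perform the I/O, then resolve with the output:
    (request.resolve)(HttpResponse {
        status: 200,
        body: "ok".into(),
    });
}
```

Resolving the request with anything other than an HttpResponse would be a compile error, which is exactly the guarantee the real trait provides.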

The Effect type is typically defined with the help of the #[effect] macro. Here is the Weather app's effect again:

#[effect(facet_typegen)]
pub enum Effect {
    Render(RenderOperation),
    KeyValue(KeyValueOperation),
    Http(HttpRequest),
    Location(LocationOperation),
}

The four operations it carries are actually defined by four different Capabilities, so let's talk about those.

Capabilities

Capabilities are developer-friendly, ergonomic APIs to construct commands, from very basic ones all the way to complex stateful orchestrations. Capabilities are an abstraction layer that bundles related operations together with code to create them, and cover one kind of a side-effect (e.g. HTTP, or timers).

We will look at writing capabilities in the next chapter, but for now, it's useful to know that their API often doesn't return Commands straight away, but instead returns command builders, which can be converted into a Command, or converted into a future and used in an async context.

To help that make more sense, let's look at how Commands are typically used.

Working with Commands

The intent behind the Command API is to cover 80% of effect orchestration without asking developers to use async Rust. We will look at async use in a minute, but first let's look at what can be done without it.

A typical use of a Command in an update function will look something like this:

Http::get(API_URL)
    .expect_json()
    .build()
    .then_send(Event::ReceivedResponse)

This code is using an HTTP capability and its API up to the .build() call, which returns a CommandBuilder. This is a lot like a Future – its type carries the output type, and it represents the eventual result of the effect. The difference is that it can be converted either into a Command or into a Future to be used in an async context. In this case, the .then_send part builds the command by binding it to an Event which sends the output of the request back to the app.

Here's an example of the same from the Weather app:

KeyValue::get(FAVORITES_KEY).then_send(FavoritesEvent::Load)

The get() call again returns a command builder, which is used to create a command with .then_send(). The Command is now fully baked and bound to the specific callback event, and can no longer be meaningfully chained into an "effect pipeline".

One special, but common case of creating a command is creating a Command which does nothing, because there are no more side-effects:

Command::done()

Soon enough, your app will get a little more complicated and you will need to run multiple commands concurrently, but your update function only returns a single value. To get around this, you can combine existing commands into one using either the all function or the .and method.

We've seen an example of this already, but here it is again:

let mut commands = Vec::new();
if let WeatherEvent::Show = *home_event {
    commands.push(
        favorites::events::update(FavoritesEvent::Restore, model)
            .map_event(|fe| Event::Favorites(Box::new(fe))),
    );
}

commands.push(
    weather::events::update(*home_event, model)
        .map_event(|we| Event::Home(Box::new(we))),
);

Command::all(commands)

The two update calls involved each return a command, and we want to run them concurrently. The result is another Command, which can be returned from update.

Note

Commands (or more precisely, command builders) can be created without capabilities. That's what capabilities do internally. You shouldn't really need this in your app code, so we will cover that side of Commands in the next chapter, when we look at building Capabilities.

You might also want to run effects in a sequence, passing output of one as the input of another. This is another thing the command builders can facilitate. Let's look at that.

Command builders

Command builders come in three flavours:

  • RequestBuilder - the most common, builds a request expecting a single response from the shell (think HTTP client)
  • StreamBuilder - builds a request expecting a (possibly infinite) sequence of responses from the shell (think WebSockets)
  • NotificationBuilder - builds a shell notification, which does not expect a response. The best example is notifying the shell that a new view model is available

All builders share a common API. Request and stream builders can be converted into commands with .then_send.

Both also support .then_request and .then_stream calls, for chaining on a function which takes the output of the first builder and returns a new builder. This can be used to build things like automatic pagination through an API for example.

You can also .map the output of the request/stream to a new value.

Here's an example of a more complicated chaining from the Command test suite:

#![allow(unused)]
fn main() {
#[test]
fn complex_concurrency() {
    fn increment(output: AnOperationOutput) -> AnOperation {
        let AnOperationOutput::Other([a, b]) = output else {
            panic!("bad output");
        };

        AnOperation::More([a, b + 1])
    }

    let mut cmd = Command::all([
        Command::request_from_shell(AnOperation::More([1, 1]))
            .then_request(|out| Command::request_from_shell(increment(out)))
            .then_send(Event::Completed),
        Command::request_from_shell(AnOperation::More([2, 1]))
            .then_request(|out| Command::request_from_shell(increment(out)))
            .then_send(Event::Completed),
    ])
    .then(Command::request_from_shell(AnOperation::More([3, 1])).then_send(Event::Completed));

// ... the assertions are omitted for brevity, see crux_core/src/command/tests/combinators.rs
}
}

Forgive the abstract nature of the operations involved; these constructions are relatively uncommon in real code and have not been used anywhere in our example code yet.

For more details of this, we recommend the Command API docs.

Combining all these tools provides a fair bit of flexibility to create fairly complex orchestrations of effects. Sometimes you might want to go further still. Attempting to cover every conceivable orchestration with more closure-based APIs would have diminishing returns, so in such cases you probably just want to write async code instead.

Warning

Notice that nowhere in the above examples have we mentioned working with the model during the execution of the command. This is very much by design: once started, commands do not have access to the model, because they execute asynchronously, possibly in parallel, and shared access to the model would introduce data races, which are very difficult to debug.

In order to update state, you should pass the result of the effect orchestration back to your app using an Event (as a kind of callback). It's relatively typical for apps to have a number of "internal" events, which handle results of effects. Sometimes these are also useful in tests, if you want to start a particular journey "from the middle".
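To make the "internal event as callback" pattern concrete, here is a minimal sketch (the `Event` and `Model` names are hypothetical, and the crux `Command` wiring is elided — the stand-in `update` just reports which effect it would start):

```rust
// Hypothetical app types for illustration; crux wiring is elided.
enum Event {
    // Public event, triggered by the UI
    Load,
    // "Internal" event, carrying the result of an effect back to the app
    LoadedData(Result<String, String>),
}

#[derive(Default, Debug)]
struct Model {
    data: Option<String>,
    error: Option<String>,
}

// A stand-in for the real update function; instead of returning a Command,
// it returns a description of the effect it would start, if any.
fn update(event: Event, model: &mut Model) -> Option<&'static str> {
    match event {
        // Start the effect; real code would return a Command whose
        // result comes back as Event::LoadedData
        Event::Load => Some("http_get"),
        // The effect result arrives as an event - only here do we touch the model
        Event::LoadedData(Ok(data)) => {
            model.data = Some(data);
            None
        }
        Event::LoadedData(Err(error)) => {
            model.error = Some(error);
            None
        }
    }
}

fn main() {
    let mut model = Model::default();
    let effect = update(Event::Load, &mut model);
    assert_eq!(effect, Some("http_get"));

    // In a test, we can start "from the middle" by sending the internal event directly
    update(Event::LoadedData(Ok("payload".to_string())), &mut model);
    assert_eq!(model.data.as_deref(), Some("payload"));
}
```

Because the internal event is just another `Event` variant, tests can dispatch it directly, without first running the effect it normally follows.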

Commands with async

The real power of commands comes from the fact that they build on async Rust. Each Command is a little async executor, which runs a number of tasks. The tasks get access to the crux context (represented by CommandContext), which gives them the ability to communicate with the shell and with the app.

TODO: image illustration of the command structure

You can create a raw command like this:

Command::new(|ctx| async move {
    let output = ctx.request_from_shell(AnOperation::One).await;
    ctx.send_event(Event::Completed(output));
    let output = ctx.request_from_shell(AnOperation::Two).await;
    ctx.send_event(Event::Completed(output));
});

Command::new takes a closure, which receives the CommandContext and returns a future, which becomes the Command's main task (it is not expected to return anything; its Output is ()). The provided context can be used to start shell requests and streams, and to send events back to the app.

The Context can also be used to spawn more tasks in the command.

There is a very similar async API in command builders too, except the returned future/stream is expected to return a value.

Builders can be converted into a future/stream for use in the async blocks with .into_future(ctx) and .into_stream(ctx), so long as you hold an instance of a CommandContext (otherwise those futures/streams would have no ability to communicate with the shell or the app).

Crux async vs Tokio, async-std et al.

While commands do execute on an async runtime, the runtime does not run on its own - it's part of the core and needs to be driven by the Shell calling the Core APIs. We use async Rust as a convenient way to build the cooperative multi-tasking state machines involved in managing side effects.

This is also why combining the Crux async runtime with something like Tokio will appear to somewhat work (because the futures involved are mostly compatible), but it will have odd stop-start behaviours, because the Crux runtime doesn't run all the time, and some futures won't work, because they require specific Tokio support.

That said, a lot of universal async code (async channels, for example) works just fine.

There is more to the async effect API than we can or should cover here. Most of what you'd expect in async Rust is supported – join handles, aborting tasks (and even Commands), spawning tasks and communicating between them, etc. Again, we recommend the API docs for the full coverage.

Migrating from previous versions of Crux

You can probably skip this

If you're new to Crux, it's unlikely you need to read this section. The original API for side-effects was very different from Commands, and this section is kept to help migrate from that API.

The migration from the previous API is in two steps - first, make your app compatible with newer versions of Crux, then, when you're done with migrating your effect handling, move away from using Capabilities.

The change to Command is a breaking one for all Crux apps, but the fix is quite minimal.

There are two parts to it:

  1. declare the Effect type on your App
  2. return Command from update

Here's an example:

#![allow(unused)]
fn main() {
impl crux_core::App for App {
    type Event = Event;
    type Model = Model;
    type ViewModel = ViewModel;

    type Capabilities = Capabilities;
    type Effect = Effect; // 1. add the associated type

    fn update(
        &self,
        event: Event,
        model: &mut Model,
        caps: &Capabilities,
    ) -> crux_core::Command<Effect, Event> {
        crux_core::Command::done() // 2. return a Command
    }
}

}

In a typical app the Effect will be derived from Capabilities, so the added line should just work.

To begin with, you can simply return a Command::done() from the update function. Command::done() is a no-op effect.

Testing with managed effects

We have seen how to use effects, and we have seen a little bit of testing, but we should look at it more closely.

Crux was expressly designed to support easy, fast, comprehensive testing of your application. Everyone is generally on board with unit tests and TDD when it comes to basic pure logic. But as soon as any I/O or UI gets involved, the dread sets in. We're going to have to set up some fakes, introduce additional traits just to test things, or just bite the bullet and build tests around a fully integrated app and wait for them to run (and probably fail on a race condition sometimes). So most people give up.

Managed effects smooth over that big hump. You pay for it a little bit in how the code is written, but you reap the reward in testing it. This is because the core that uses managed effects is pure and therefore completely deterministic — all the side effects are pushed to the shell.

It's straightforward to write an exhaustive set of unit tests that give you complete confidence in the correctness of your application code — you can test the behavior of your application independently of platform-specific UI and API calls.

There is no need to mock/stub anything, and there is no need to write integration tests.

Not only are the unit tests easy to write, but they run extremely quickly, and can be run in parallel.

For example, here's a test checking that when the weather screen is shown, a location gets checked and the weather gets refreshed.

#![allow(unused)]
fn main() {
    #[test]
    fn test_show_triggers_set_weather() {
        let mut model = Model::default();

        // 1. Trigger the Show event
        let event = WeatherEvent::Show;
        let mut cmd = update(event, &mut model);

        let mut location = cmd.expect_one_effect().expect_location();

        assert_eq!(location.operation, LocationOperation::IsLocationEnabled);

        // 2. Simulate the Location::is_location_enabled effect (enabled = true)
        location
            .resolve(LocationResult::Enabled(true))
            .expect("to resolve");
        let event = cmd.expect_one_event();

        let mut cmd = update(event, &mut model);

        let mut location = cmd.expect_one_effect().expect_location();
        assert_eq!(location.operation, LocationOperation::GetLocation);

        // 3. Simulate the Location::get_location effect (with a test location)
        let test_location = Location {
            lat: 33.456_789,
            lon: -112.037_222,
        };
        location
            .resolve(LocationResult::Location(Some(test_location)))
            .expect("to resolve");

        let event = cmd.expect_one_event();
        let mut cmd = update(event, &mut model);

        // 4. Resolve the weather HTTP effect
        let mut request = cmd.expect_one_effect().expect_http();

        assert_eq!(&request.operation, &WeatherApi::build(test_location));

        // 5. Resolve the HTTP request with a simulated response from the web API
        request
            .resolve(HttpResult::Ok(
                HttpResponse::ok()
                    .body(test_response_json().as_bytes())
                    .build(),
            ))
            .unwrap();

        // 6. The next event should be SetWeather
        let actual = cmd.expect_one_event();
        assert!(matches!(actual, WeatherEvent::SetWeather(_)));

        // 7. Send the SetWeather event back to the app
        let _ = update(actual.clone(), &mut model);

        // Now check the model in detail
        assert_eq!(model.weather_data, test_response());
    }
}

You can see it's a test of a whole interaction with multiple kinds of effects, and it runs in 11 ms and is entirely deterministic.

Here's the corresponding code it's testing:

#![allow(unused)]
fn main() {
pub fn update(event: WeatherEvent, model: &mut Model) -> Command<Effect, WeatherEvent> {
    match event {
        WeatherEvent::Show => is_location_enabled().then_send(WeatherEvent::LocationEnabled),
        WeatherEvent::LocationEnabled(enabled) => {
            model.location_enabled = enabled;
            if enabled {
                get_location().then_send(WeatherEvent::LocationFetched)
            } else {
                Command::done()
            }
        }
        WeatherEvent::LocationFetched(location) => {
            model.last_location.clone_from(&location);
            if let Some(loc) = location {
                update(WeatherEvent::Fetch(loc), model)
            } else {
                Command::done()
            }
        }

        // Internal events related to fetching weather data
        WeatherEvent::Fetch(location) => WeatherApi::fetch(location)
            .then_send(move |result| WeatherEvent::SetWeather(Box::new(result))),
        WeatherEvent::SetWeather(result) => {
            if let Ok(weather_data) = *result {
                model.weather_data = weather_data;
            }

            update(WeatherEvent::FetchFavorites, model).and(render())
        }
        WeatherEvent::FetchFavorites => {
            if model.favorites.is_empty() {
                return Command::done();
            }

            model
                .favorites
                .iter()
                .map(|f| {
                    let location = f.geo.location();

                    WeatherApi::fetch(location).then_send(move |result| {
                        WeatherEvent::SetFavoriteWeather(Box::new(result), location)
                    })
                })
                .collect()
        }
        WeatherEvent::SetFavoriteWeather(result, location) => {
            if let Ok(weather) = *result {
                // Update the weather data for the matching favorite
                model
                    .favorites
                    .update(&location, |favorite| favorite.current = Some(weather));
            }

            render()
        }
    }
}
}

Hopefully this illustrates that the managed effects let you test entire transactions involving effects, without ever executing any.

The full suite of 18 tests of the Weather app runs in 49 milliseconds. In practice, it's rare for a test suite of a Crux app to take longer than compiling it (even incrementally). Even apps with thousands of tests usually run them in seconds, and sadly they do not yet compile in seconds.

cargo nextest run
   Compiling shared v0.1.0 (/Users/viktor/Projects/crux/examples/weather/shared)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 1.11s
────────────
 Nextest run ID 4f51de83-8f2e-4acf-b75f-03969767e886 with nextest profile: default
    Starting 18 tests across 1 binary
        PASS [   0.020s] shared app::tests::test_navigation
        PASS [   0.020s] shared favorites::events::tests::test_add_multiple_favorites
        PASS [   0.019s] shared favorites::events::tests::test_delete_confirmed
        PASS [   0.020s] shared favorites::events::tests::test_cancel_returns_to_favorites
        PASS [   0.019s] shared favorites::events::tests::test_kv_set_and_load
        PASS [   0.023s] shared favorites::events::tests::test_delete_cancelled
        PASS [   0.023s] shared favorites::events::tests::test_delete_pressed
        PASS [   0.022s] shared favorites::events::tests::test_delete_with_persistence
        PASS [   0.022s] shared favorites::events::tests::test_kv_load_empty
        PASS [   0.013s] shared favorites::events::tests::test_kv_load_error
        PASS [   0.011s] shared favorites::events::tests::test_submit_duplicate_favorite
        PASS [   0.012s] shared favorites::events::tests::test_submit_adds_favorite
        PASS [   0.013s] shared favorites::events::tests::test_submit_persists_favorite
        PASS [   0.011s] shared weather::events::tests::test_fetch_favorites_triggers_fetch_for_all_favorites
        PASS [   0.011s] shared weather::events::tests::test_show_triggers_set_weather
        PASS [   0.012s] shared weather::events::tests::test_fetch_triggers_favorites_fetch_when_favorites_exist
        PASS [   0.027s] shared weather::events::tests::test_current_weather_fetch
        PASS [   0.027s] shared favorites::events::tests::test_search_triggers_api_call
────────────
     Summary [   0.049s] 18 tests run: 18 passed, 0 skipped

The test steps

Crux provides test APIs to make the tests a bit more readable and nicer to write, but it's still up to the test to execute the app loop.

Let's have a look at a simpler test from the Weather app and go through it step by step:

    #[test]
    fn test_delete_with_persistence() {
        let mut model = Model::default();
        let favorite = test_favorite();
        model.favorites.insert(favorite.clone());

        // Set the state to ConfirmDelete with the favorite's coordinates
        model.workflow = Workflow::Favorites(FavoritesState::ConfirmDelete(Location {
            lat: favorite.geo.lat,
            lon: favorite.geo.lon,
        }));

        // Delete and verify KV is updated
        let mut cmd = update(FavoritesEvent::DeleteConfirmed, &mut model);
        let kv_request = cmd.expect_effect().expect_key_value();
        cmd.expect_one_effect().expect_render();

        assert!(matches!(
            kv_request.operation,
            KeyValueOperation::Set { .. }
        ));

        assert!(model.favorites.is_empty());

        cmd.expect_no_effects();
        cmd.expect_no_events();
    }

First, we do some setup - create a model, create a favorite and insert it, and make sure the app is in the right Workflow state.

Then, we call update with FavoritesEvent::DeleteConfirmed and get back a command, which we store in cmd.

The next line is our assertion on the command - we expect an effect, and we expect it to be a key value effect. The expectation either returns the KeyValueRequest or panics.

Then we inspect the request's operation to check it's a Set – for the purposes of this test that's enough.

We can then check the favourites in the model are gone, and there is nothing else to do.

More integrated tests and deterministic simulation testing

We could test the key-value storage in a more integrated fashion too - instead of asserting on the key value operation, we can provide a very basic implementation of a key value store to use in tests, using a HashMap as storage for example. Then we could simply forward the key-value effects to it and make sure the storage is managed correctly. Similarly, we could build a predictable replica of an API service we need to test against, etc.
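As a minimal sketch of that idea, here is an in-memory stand-in for a key-value store (the operation and result types are simplified stand-ins for the real crux_kv protocol, not its actual API):

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real key-value operation/result types
#[derive(Debug, PartialEq)]
enum KvOperation {
    Get { key: String },
    Set { key: String, value: Vec<u8> },
}

#[derive(Debug, PartialEq)]
enum KvResult {
    Value(Option<Vec<u8>>),
    Ok,
}

// A fake key-value store for tests, backed by a HashMap
#[derive(Default)]
struct FakeKv {
    store: HashMap<String, Vec<u8>>,
}

impl FakeKv {
    // In a test, key-value effects would be forwarded here and the
    // returned result used to resolve the request
    fn handle(&mut self, operation: KvOperation) -> KvResult {
        match operation {
            KvOperation::Get { key } => KvResult::Value(self.store.get(&key).cloned()),
            KvOperation::Set { key, value } => {
                self.store.insert(key, value);
                KvResult::Ok
            }
        }
    }
}

fn main() {
    let mut kv = FakeKv::default();
    kv.handle(KvOperation::Set { key: "favorites".into(), value: vec![1, 2, 3] });
    let result = kv.handle(KvOperation::Get { key: "favorites".into() });
    assert_eq!(result, KvResult::Value(Some(vec![1, 2, 3])));
}
```

A test loop can then forward every key-value effect to `handle` and resolve the request with the result, and the storage behaves consistently across the whole test.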

While that's all starting to sound a lot like mocking, remember that we're not implementing Redis or building an actual HTTP server. It's all very simple code. And if we do that for all the different effects our app needs and provide realistic enough implementations to mimic the real things, a very interesting thing happens - we get the entire app stack, with the nitty-gritty technical details taken out, running in a unit test.

Mocking with Crux

With that, we can create an app instance and send it completely random (but deterministic) events, and make sure "nothing bad happens". The definition of what that means is specific to each app, but just to illustrate some options:

  • Introduce randomised errors to your fake API and see they are handled correctly
  • Randomly lose data in storage and make sure the app recovers
  • Make sure timeouts work correctly by randomly firing them first
  • Check that any other invariants hold, e.g. anything time-related only moves forward (counters count up), storage remains referentially consistent, logically impossible states do not happen (ideally they would be impossible to represent, but sometimes that's too hard)

When we do that, we can then run this pseudo random process, for hours if we like, and let it find any bugs for us. To reproduce them, all we need is the random seed used for the specific test run.
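A sketch of what such a seeded run can look like, using a tiny linear congruential generator in place of a real PRNG crate (the events, the toy `update`, and the invariant are all hypothetical):

```rust
// Minimal linear congruential generator - same seed, same event sequence
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0 >> 33
    }
}

#[derive(Default)]
struct Model {
    counter: u64,
}

// Hypothetical update: the invariant we care about is that the counter never goes down
fn update(event: u64, model: &mut Model) {
    if event % 3 == 0 {
        model.counter += 1;
    }
}

fn run(seed: u64, steps: u32) -> u64 {
    let mut rng = Lcg(seed);
    let mut model = Model::default();
    let mut previous = 0;

    for _ in 0..steps {
        update(rng.next(), &mut model);
        // Check the invariant after every event; on failure, the seed is
        // all we need to reproduce the exact run
        assert!(model.counter >= previous, "counter went backwards (seed {seed})");
        previous = model.counter;
    }

    model.counter
}

fn main() {
    // The same seed always produces the same run
    assert_eq!(run(42, 10_000), run(42, 10_000));
}
```

The same structure scales up: replace the toy `update` with the real app core plus fake effect implementations, and let the loop run for as long as you like.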

In practice, Crux apps will mostly be able to run at thousands of events a second, and these tests will explore more of the state space than we ever could with manual unit tests.

This type of testing is usually reserved for consensus algorithms and network protocols (where anything that can happen will happen, and they have to be rock solid), because setting up the test harness is normally just too much work. But with managed effects it's a few hundred lines of additional code. For a modestly sized app, a testing harness like that will only take a few days to write. We may even ship building blocks of such a test harness with Crux in the future.

Building capabilities

The final piece of the puzzle we should look at in our exploration of the Weather app before we move to the Shell is Capabilities.

We looked at effects a fair bit and explored the Commands and CommandBuilders, but in practice, it's quite rare that you'd interact with those directly from your app.

Typically, you'll be working with effects using Capabilities - more developer-friendly APIs which implement a specific kind of side-effect in a generic fashion. They define the core-shell message protocol for the side-effect and provide an ergonomic API to create the right CommandBuilders. Examples include: HTTP client, Timer operations, Key-Value storage, Secrets provider, Geolocation, etc.

In practice, we find there is a limited number of these effect packages, they should be very reusable, and an individual app will typically need around seven of them, almost certainly fewer than ten.

Included capabilities

The weather app uses two out of the three capabilities provided with Crux: HTTP client (crux_http), Key-Value store (crux_kv) (the third is the time capability – crux_time).

These are the most common things we think people will want to use in their apps. There are more, and we will probably build those over time as well, we just haven't worked on a motivating use-case ourselves yet. If you have, and have built a capability you'd like to donate, definitely get in touch!

Let's look at the use of crux_http quickly, as it's the most extensive of the three. The Weather app makes a pretty typical move and centralises the weather API use in a client:

#![allow(unused)]
fn main() {
pub struct WeatherApi;

impl WeatherApi {
    /// Build an `HttpRequest` for testing purposes
    #[cfg(test)]
    pub fn build(location: Location) -> HttpRequest {
        use crate::weather::model::current_response::WEATHER_URL;

        HttpRequest::get(WEATHER_URL)
            .query(&CurrentWeatherQuery {
                lat: location.lat.to_string(),
                lon: location.lon.to_string(),
                units: "metric",
                appid: API_KEY.clone(),
            })
            .expect("could not serialize query string")
            .build()
    }

    /// Fetch current weather for a specific location
    pub fn fetch<Effect, Event>(
        location: Location,
    ) -> RequestBuilder<
        Effect,
        Event,
        impl std::future::Future<Output = Result<CurrentWeatherResponse, WeatherError>>,
    >
    where
        Effect: From<Request<HttpRequest>> + Send + 'static,
        Event: Send + 'static,
    {
        Http::get(WEATHER_URL)
            .expect_json::<CurrentWeatherResponse>()
            .query(&CurrentWeatherQuery {
                lat: location.lat.to_string(),
                lon: location.lon.to_string(),
                units: "metric",
                appid: API_KEY.clone(),
            })
            .expect("could not serialize query string")
            .build()
            .map(|result| match result {
                Ok(mut response) => match response.take_body() {
                    Some(weather_data) => Ok(weather_data),
                    None => Err(WeatherError::ParseError),
                },
                Err(_) => Err(WeatherError::NetworkError),
            })
    }
}
}

The main method there is fetch, which uses Http::get from crux_http to create a GET request expecting a json response which deserialises into a specific type, and provides a URL query to specify the search. At the end of that chained call is a .map unpicking the response and turning it into a more convenient Result type for the app code.

The interesting thing here is that the fetch method returns a RequestBuilder. In a way, this makes it a half-way step to a custom capability, but it also just means the fetch call is convenient to use from both normal and async context.

This is one of the things capabilities do - they map the lower-level FFI protocols into a more convenient API for the app developer.

Let's look at the other thing they do.

Custom capabilities

The Weather app has one specialty - it works with location services. This is an example of a capability which we'd probably struggle to find a cross-platform crate for. It's also not so common and complex that we feel we should develop and maintain an official one. So a custom capability in the app is the way to go.

The capability defines two things:

  1. The protocol for communicating to the Shell
  2. The APIs used by the programmer of the Core

Here is Weather app's Location capability in full:

#![allow(unused)]
fn main() {
// This module defines the effect for accessing location information in a cross-platform way using Crux.
// The structure here is designed to be serializable, portable, and to fit into Crux's command/request architecture.

use std::future::Future;

use crux_core::{Command, Request, capability::Operation, command::RequestBuilder};
use facet::Facet;
use serde::{Deserialize, Serialize};

use super::Location;

// The operations that can be performed related to location.
// Using an enum allows us to easily add more operations in the future and ensures type safety.
#[derive(Facet, Clone, Serialize, Deserialize, Debug, PartialEq)]
#[repr(C)]
pub enum LocationOperation {
    IsLocationEnabled,
    GetLocation,
}

// The response structure for a location request.
// This is serializable so it can be sent across the FFI boundary.

// The possible results from performing a location operation.
// This enum allows us to handle different response types in a type-safe way.
#[derive(Facet, Clone, Serialize, Deserialize, Debug, PartialEq)]
#[repr(C)]
pub enum LocationResult {
    Enabled(bool),
    Location(Option<Location>),
}

#[must_use]
pub fn is_location_enabled<Effect, Event>()
-> RequestBuilder<Effect, Event, impl Future<Output = bool>>
where
    Effect: Send + From<Request<LocationOperation>> + 'static,
    Event: Send + 'static,
{
    Command::request_from_shell(LocationOperation::IsLocationEnabled).map(|result| match result {
        LocationResult::Enabled(val) => val,
        LocationResult::Location(_) => false,
    })
}

#[must_use]
pub fn get_location<Effect, Event>()
-> RequestBuilder<Effect, Event, impl Future<Output = Option<Location>>>
where
    Effect: Send + From<Request<LocationOperation>> + 'static,
    Event: Send + 'static,
{
    Command::request_from_shell(LocationOperation::GetLocation).map(|result| match result {
        LocationResult::Location(loc) => loc,
        LocationResult::Enabled(_) => None,
    })
}

// Implement the Operation trait so that Crux knows how to handle this effect.
// This ties the operation type to its output/result type.
impl Operation for LocationOperation {
    type Output = LocationResult;
}
}

There are two interesting types: LocationOperation and LocationResult - they are the request and response pair for the capability. The capability tells Crux that LocationResult is the expected output for the LocationOperation with the trait implementation at the very bottom. It marks the LocationOperation as an Operation as defined by Crux and associates the output type.

That's number 1 done - protocol defined. This is what the Shell will need to understand and return back in order to implement the location capability.

The rest of the code is the two APIs used by the Core developer - is_location_enabled and get_location. Their type signatures are fairly complex, so let's pick them apart.

First, they are both generic over Effect and Event. This isn't strictly necessary for local capabilities, but it makes the capability reusable for any Effect and Event, not just the ones from the Weather app.

The other interesting thing is the trait bound Effect: From<Request<LocationOperation>>, which says that the Effect type needs to be able to convert from a location Request, or in other words - we need to be able to wrap a Request<LocationOperation> into the app's Effect type. All Effect types generated with the #[effect] macro already do this.
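To make that bound concrete, here is a minimal sketch of the conversion involved (the Request and Effect types are simplified stand-ins, hand-writing the From impl the #[effect] macro would generate):

```rust
// Simplified stand-in for crux_core::Request
struct Request<Op>(Op);

#[derive(Debug, PartialEq)]
enum LocationOperation {
    GetLocation,
}

// The #[effect] macro generates an enum like this, with a From impl
// for each operation's Request type
enum Effect {
    Location(Request<LocationOperation>),
}

impl From<Request<LocationOperation>> for Effect {
    fn from(request: Request<LocationOperation>) -> Self {
        Effect::Location(request)
    }
}

// A capability can now wrap its request into any compatible Effect type,
// without knowing anything about the concrete app
fn wrap<Effect: From<Request<LocationOperation>>>(op: LocationOperation) -> Effect {
    Request(op).into()
}

fn main() {
    let effect: Effect = wrap(LocationOperation::GetLocation);
    match effect {
        Effect::Location(Request(op)) => assert_eq!(op, LocationOperation::GetLocation),
    }
}
```

This is why the capability can stay generic: it only asks for "an Effect I can wrap my request into", and the app's generated Effect type satisfies that.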

Other than that, the APIs just create command builders and return them. Those types are also somewhat gnarly, but it's mostly the impl Future<Output = ...> that's interesting. Notice that the Output types are not LocationResult; they are the specific convenient type the Core developer wants.

And that's all Capabilities do - they provide a convenient API for creating CommandBuilders, and converting between convenient Rust types and an FFI "wire protocol" used to communicate with the Shell.

In the ports and adapters architecture, Capabilities are the ports, and the shell-side implementations are the adapters.

In fact, let's go build one in the next chapter.

The shell

We've looked at how the Weather app fits together and how it's tested, and if you were developing it along the way, you would now have a core with the important business logic, fully tested and rock solid. Time to build the UI.

(Okay sure, in practice, you would not build the whole core first, then the whole UI, you'd probably go feature by feature, but the point stands - we now know for a fact that the core does the right thing.)

The shell will have two responsibilities:

  1. Laying out the UI components, like we've already seen in Part I
  2. Supporting the app's capabilities. This will be new to us

Like in Part I, you can choose which Shell language you'd like to see this in, but first let's talk about what they all have in common.

Message interface between core and shell

In Part I, we learned to use the update and view APIs of the core. We also learned that in their raw form, they take serialized values as byte buffers.

We skimmed over the return value of update very quickly. In that case, it only ever returned a request for a RenderOperation - a signal that a new view model is available.

In the Weather app's case, more options are possible. Recall the effect type:

#![allow(unused)]
fn main() {
#[effect(facet_typegen)]
pub enum Effect {
    Render(RenderOperation),
    KeyValue(KeyValueOperation),
    Http(HttpRequest),
    Location(LocationOperation),
}
}

Those are the four possible variants we'll see in the return from update. It is essentially telling us "I did the state update, and here are some side-effects for you to perform".

Let's say the effect is an HTTP request. We execute it, get a response, and what do we do then? That's what the third core API, resolve, is for:

#![allow(unused)]
fn main() {
pub fn update(data: &[u8]) -> Vec<u8>
pub fn resolve(id: u32, data: &[u8]) -> Vec<u8>
pub fn view() -> Vec<u8>
}

Each effect request comes with an identifier. We use resolve to return the output of the effect back to the app, alongside the identifier, so that it can be paired correctly.
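The shape of the resulting shell loop can be sketched like this (a toy Rust simulation, not the real Core API — real requests carry serialized byte buffers, and the toy update/resolve functions here stand in for a core that first wants an HTTP request and then a render):

```rust
// A toy request: an identifier plus the effect to perform
struct EffectRequest {
    id: u32,
    effect: &'static str,
}

// A toy core: the first update asks for an HTTP request...
fn update() -> Vec<EffectRequest> {
    vec![EffectRequest { id: 1, effect: "http" }]
}

// ...and resolving it asks for a render; resolving anything else ends the exchange
fn resolve(id: u32, _output: &[u8]) -> Vec<EffectRequest> {
    match id {
        1 => vec![EffectRequest { id: 2, effect: "render" }],
        _ => vec![],
    }
}

// The shell loop: perform each effect, feed the output back with the
// matching id, and process any follow-up requests the same way
fn process(requests: Vec<EffectRequest>, log: &mut Vec<&'static str>) {
    for request in requests {
        log.push(request.effect); // stands in for actually performing the effect
        let output = b"...";      // the effect's result, serialized
        process(resolve(request.id, output), log);
    }
}

fn main() {
    let mut log = Vec::new();
    process(update(), &mut log);
    assert_eq!(log, vec!["http", "render"]);
}
```

The key point is that resolve can return further requests, so the shell handles them with the same recursion it uses for the output of update.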

Let's look at how this works in practice.

Platforms

You can continue with your platform of choice:

iOS

Let's start with the new part, and also typically the shorter part – implementing the capabilities.

Capability implementation

This is what the Weather app's Core.swift looks like:

{{#include ../../../../examples/weather/iOS/Weather/Core.swift:core_base}}
        // ...
    }
}

It's slightly more complicated, but broadly the same as the Counter's core. We have an extra logger, which is not really important for us, and we also hold on to a KeyValueStore, which is the storage for the key-value implementation.

We've truncated the processEffect method, because it's fairly long, but the basic structure is this:

    func processEffect(_ request: Request) {
        switch request.effect {
        case .render:
            DispatchQueue.main.async {
                self.view = try! .bincodeDeserialize(input: [UInt8](self.core.view()))
            }
        case .http(let req):
            // ...

        case .keyValue(let keyValue):
            // ...

        case .location(let locationOp):
            // ...
        }
    }

We get a Request and do an exhaustive match on the requested effect. In Swift we have tagged unions, so we can also destructure the requested operation.

We can have a look at what the HTTP branch does:

{{#include ../../../../examples/weather/iOS/Weather/Core.swift:http}}

We start a new Task to run this job off the main thread, then we use the async requestHttp() call to run the request.

Then it takes the response, serializes it and passes it to core.resolve, which returns more effect requests. This is perhaps unexpected, but it's a direct consequence of the Command's async nature. There can easily be a command which does something along the lines of:

Command::new(|ctx| async move {
    let http_req = Http::get(url).expect_json::<Counter>().build().into_future(ctx);
    let result = http_req.await; // effect 1

    let counter = match result {
        Ok(mut response) => match response.take_body() {
            Some(counter) => Ok(counter),
            None => Err(ApiError::ParseError),
        },
        Err(_) => Err(ApiError::NetworkError),
    };

    let _ = KeyValue::set(COUNTER, counter).into_future(ctx).await; // effect 2

    // ...

    ctx.send_event(Event::Done);
})

Once we resolve the http request at the .await point marked "effect 1", this future can proceed and make a KeyValue request at the "effect 2" .await point. So on the shell end, we need to be able to respond appropriately.

What we do is loop through those effect requests (there could easily be multiple at once) and recurse - call processEffect again to handle each one.

Just for completeness, this is what requestHttp looks like:

import App
import SwiftUI

enum HttpError: Error {
    case generic(Error)
    case message(String)
}

func requestHttp(_ request: HttpRequest) async -> Result<HttpResponse, HttpError> {
    var req = URLRequest(url: URL(string: request.url)!)
    req.httpMethod = request.method

    for header in request.headers {
        req.addValue(header.value, forHTTPHeaderField: header.name)
    }

    do {
        let (data, response) = try await URLSession.shared.data(for: req)
        if let httpResponse = response as? HTTPURLResponse {
            let status = UInt16(httpResponse.statusCode)
            let body = [UInt8](data)
            return .success(HttpResponse(status: status, headers: [], body: body))
        } else {
            return .failure(.message("bad response"))
        }
    } catch {
        return .failure(.generic(error))
    }
}

Not that interesting: it's a wrapper around URLRequest and friends, which takes and returns the generated HttpRequest and HttpResponse types, originally defined in Rust by crux_http.

The pattern repeats similarly for key-value store and the location capability.
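To show how similar the key-value handler is in shape, here is an illustrative sketch in TypeScript (the pattern is the same in Swift). The request/response types here are hypothetical simplifications, not crux_kv's exact generated types:

```typescript
// Illustrative only: a Map-backed key-value handler in the same shape as
// requestHttp. The types are hypothetical, not crux_kv's generated ones.

type KeyValueRequest =
  | { kind: "get"; key: string }
  | { kind: "set"; key: string; value: string };

type KeyValueResponse = { value: string | null };

const store = new Map<string, string>();

function handleKeyValue(request: KeyValueRequest): KeyValueResponse {
  switch (request.kind) {
    case "get":
      // Missing keys come back as null; the core decides what that means.
      return { value: store.get(request.key) ?? null };
    case "set":
      store.set(request.key, request.value);
      return { value: null };
  }
}
```

As with HTTP, the shell's only job is to execute the operation and serialize the outcome back for core.resolve; all decisions about what the result means stay in the core.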

User interface and navigation

It's worth looking at how Weather handles the Workflow navigation in Swift UI.

As in the Counter example, the Weather's core has a @Published var view: ViewModel which we can use in the Views.

Here's the root content view:

struct ContentView: View {
    @ObservedObject var core: Core

    init(core: Core) {
        self.core = core
    }

    var body: some View {
        NavigationStack {
            ZStack {
                // Base background that's always present
                Color(.systemGroupedBackground)
                    .ignoresSafeArea()

                // Content views
                switch core.view.workflow {
                case .home:
                    HomeView(core: core)
                        .transition(
                            .opacity.combined(with: .offset(x: 0, y: 10))
                        )
                case .favorites:
                    FavoritesView(core: core)
                        .transition(
                            .opacity.combined(with: .offset(x: 0, y: 10))
                        )
                case .addFavorite:
                    AddFavoriteView(core: core)
                        .transition(
                            .opacity.combined(with: .offset(x: 0, y: 10))
                        )
                }
            }
            .animation(.easeOut(duration: 0.2), value: core.view.workflow)
        }
    }
}

Thanks to the declarative nature of SwiftUI, we can show the view we need to, depending on the workflow, and pass the core down.

We could do this differently: the core could stay in the root view, an update callback could be passed down via the environment, and each view could receive just the appropriate section of the view model. It's up to you how you want to go about it.

Let's look at the HomeView as well, just to complete the picture:

struct HomeView: View {
    @ObservedObject var core: Core
    @State private var hasLoadedInitialData = false
    @State private var selectedPage = 0

    var body: some View {
        Group {
            if case .home(let weatherData, let favorites) = core.view.workflow {
                VStack {
                    TabView(selection: $selectedPage) {
                        // Main weather card
                        Group {
                            if weatherData.cod == 200 && weatherData.main.temp.isFinite {
                                WeatherCard(weatherData: weatherData)
                                    .transition(.opacity)
                            } else {
                                LoadingCard()
                            }
                        }
                        .frame(width: UIScreen.main.bounds.width)
                        .tag(0)

                        // Favorite weather cards
                        ForEach(Array(favorites.enumerated()), id: \.element.name) { idx, favorite in
                            Group {
                                if let current = favorite.current {
                                    WeatherCard(weatherData: current)
                                        .transition(.opacity)
                                } else {
                                    LoadingCard()
                                }
                            }
                            .frame(width: UIScreen.main.bounds.width)
                            .tag(idx + 1)
                        }
                    }
                    .tabViewStyle(PageTabViewStyle(indexDisplayMode: .automatic))
                }
                .padding(.vertical)
                .toolbar {
                    ToolbarItem(placement: .navigationBarTrailing) {
                        Button {
                            withAnimation(.easeOut(duration: 0.2)) {
                                core.update(.navigate(.favorites(FavoritesState.idle)))
                            }
                        } label: {
                            Image(systemName: "star")
                        }
                    }
                }
            } else {
                Color.clear // Placeholder for transition
            }
        }

        .onAppear {
            if !hasLoadedInitialData {
                core.update(.home(.show))
                hasLoadedInitialData = true
            }
        }
    }
}

It simply caters for the possible situations in the view model, draws a weather card for each favorite, and adds a toolbar item which, when tapped, calls core.update with the Swift equivalent of the .navigate event we saw earlier.

This is quite a simple navigation setup, in that we're managing a static set of screens. Sometimes more dynamic navigation is necessary, but SwiftUI's NavigationStack on recent iOS versions supports quite complex scenarios in a declarative fashion using NavigationPath, so the general principle of naively projecting the view model into the user interface broadly works even there.

There isn't much more to it; the rest of the app is rinse and repeat. It is relatively rare to implement a new capability, so most of the work is in finessing the user interface. Crux tends to work reasonably well with SwiftUI previews as well, so you can typically avoid the Simulator or a device for the inner development loop.

What's next

Congratulations! You now know all you will likely need to build Crux apps. The following parts of the book cover advanced topics, other supported platforms, and the internals of Crux, should you be interested in how things work.

Happy building!

Android

We're still working on writing the Android version of this section.

The Android version of the Weather app works, it just needs documenting like the iOS version. If you'd like to help, please feel free to open a PR!

React

The React version of the Weather app is yet to be written.

If you'd like to help, please get in touch and build it!

Leptos

The Leptos version of the Weather app is yet to be written.

If you'd like to help, please get in touch and build it!

Middleware

Middleware is a relatively new and somewhat advanced feature for split effect handling, i.e. handling some effects in the shell and some in the core, but outside of the Crux state loop.

Middleware can be useful when you have an existing third-party library written in Rust which you want to use, but which isn't written in a sans-I/O style with managed effects, or is otherwise incompatible with Crux. Sadly, that describes most libraries with side effects.

It is quite likely most apps will never need to use middleware. Before reaching for middleware, we encourage you to consider:

  • Implementing the side-effect in each Shell using native, platform SDKs. Shared libraries give a productivity boost at first, but for the same reason Crux uses Capabilities, they can't always be the best platform citizens, and often rely on very low-level system APIs which compromise the experience, don't collaborate well with platform security measures, etc.
  • Moving coordination logic from the Rust implementation into a custom capability in the core, implementing it on top of lower-level capabilities, e.g. HTTP. This would be the case for HTTP API SDK-type libraries, but may well not be practical at first.

Only reach for middleware if neither of these is a good option. The cost is that effect handling becomes less straightforward, which may cause some headaches when debugging effect ordering, etc.

We are also still learning how middleware operates in the wild, and the API may change more than the rest of Crux tends to.

All that said, the feature is used in production with success today and should work well.

Info

This chapter is not finished; we're working on it. For examples of middleware use, you can read the API docs, the module's tests, and one example in the Counter Next example code.

Other platforms

This section is a collection of instructions for using Crux with platforms other than the ones we've chosen to write Part I and Part II for. The support is just as mature for all of them; we are simply more familiar with the four we've shown in detail.

You can read about using Crux with:

Web — TypeScript and React (Remix)

Warning

This was written for previous versions of Crux and needs updating. Proceed with caution. If you'd like to help update it, you'd be very welcome!

These are the steps to set up and run a simple TypeScript Web app that calls into a shared core.

Note

This walk-through assumes you have already added the shared and shared_types libraries to your repo, as described in Shared core and types.

Info

There are many frameworks available for writing Web applications with JavaScript/TypeScript. We've chosen React with Remix for this walk-through. However, a similar setup would work for other frameworks.

Create a Remix App

For this walk-through, we'll use the pnpm package manager for no reason other than we like it the most! You can use npm exactly the same way, though.

Let's create a simple Remix app for TypeScript, using pnpx (from pnpm). You can give it a name and then probably accept the defaults.

pnpx create-remix@latest

Compile our Rust shared library

When we build our app, we also want to compile the Rust core to WebAssembly so that it can be referenced from our code.

To do this, we'll use wasm-pack, which you can install like this:

# with homebrew
brew install wasm-pack

# or directly
curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh

Now that we have wasm-pack installed, we can build our shared library to WebAssembly for the browser.

(cd shared && wasm-pack build --target web)

Tip

You might want to add a wasm:build script to your package.json file, and call it when you build your Remix project.

{
  "scripts": {
    "build": "pnpm run wasm:build && remix build",
    "dev": "pnpm run wasm:build && remix dev",
    "wasm:build": "cd ../shared && wasm-pack build --target web"
  }
}

Add the shared library as a Wasm package to your web-remix project

cd web-remix
pnpm add ../shared/pkg

We want to tell the Remix server to bundle our shared Wasm package, so we need to add a serverDependenciesToBundle key to the object exported in remix.config.js:

/** @type {import('@remix-run/dev').AppConfig} */
module.exports = {
  ignoredRouteFiles: ["**/.*"],

  // make sure the server bundles our shared library
  serverDependenciesToBundle: [/^shared.*/],

  serverModuleFormat: "cjs",
};

Add the Shared Types

To generate the shared types for TypeScript, we can just run cargo build from the root of our repository. You can check that they have been generated correctly:

ls --tree shared_types/generated/typescript
shared_types/generated/typescript
├── bincode
│  ├── bincodeDeserializer.d.ts
│  ├── bincodeDeserializer.js
│  ├── bincodeDeserializer.ts
│  ├── bincodeSerializer.d.ts
│  ├── bincodeSerializer.js
│  ├── bincodeSerializer.ts
│  ├── mod.d.ts
│  ├── mod.js
│  └── mod.ts
├── node_modules
│  └── typescript -> .pnpm/typescript@4.8.4/node_modules/typescript
├── package.json
├── pnpm-lock.yaml
├── serde
│  ├── binaryDeserializer.d.ts
│  ├── binaryDeserializer.js
│  ├── binaryDeserializer.ts
│  ├── binarySerializer.d.ts
│  ├── binarySerializer.js
│  ├── binarySerializer.ts
│  ├── deserializer.d.ts
│  ├── deserializer.js
│  ├── deserializer.ts
│  ├── mod.d.ts
│  ├── mod.js
│  ├── mod.ts
│  ├── serializer.d.ts
│  ├── serializer.js
│  ├── serializer.ts
│  ├── types.d.ts
│  ├── types.js
│  └── types.ts
├── tsconfig.json
└── types
   ├── shared_types.d.ts
   ├── shared_types.js
   └── shared_types.ts

You can see that it also generates an npm package that we can add directly to our project.

pnpm add ../shared_types/generated/typescript

Load the Wasm binary when our Remix app starts

The app/entry.client.tsx file is where we can load our Wasm binary. We can import the shared package and then call the init function to load the Wasm binary.

Note

Note that we import the wasm binary as well — Remix will automatically bundle it for us, giving it a cache-friendly hash-based name.

/**
 * By default, Remix will handle hydrating your app on the client for you.
 * You are free to delete this file if you'd like to, but if you ever want it revealed again, you can run `npx remix reveal` ✨
 * For more information, see https://remix.run/file-conventions/entry.client
 */

import { RemixBrowser } from "@remix-run/react";
import { startTransition, StrictMode } from "react";
import { hydrateRoot } from "react-dom/client";
import init from "shared/shared";
import wasm from "shared/shared_bg.wasm";

init(wasm).then(() => {
  startTransition(() => {
    hydrateRoot(
      document,
      <StrictMode>
        <RemixBrowser />
      </StrictMode>
    );
  });
});

Create some UI

Example

We will use the simple counter example, which has shared and shared_types libraries that will work with the following example code.

Simple counter example

A simple app that increments, decrements and resets a counter.

Wrap the core to support capabilities

First, let's add some boilerplate code to wrap our core and handle the capabilities that we are using. For this example, we only need to support the Render capability, which triggers a render of the UI.

Note

This code that wraps the core only needs to be written once — it only grows when we need to support additional capabilities.

Edit app/core.ts to look like the following. This code sends our (UI-generated) events to the core, and handles any effects that the core asks for. In this simple example, we aren't calling any HTTP APIs or handling any side effects other than rendering the UI, so we just handle this render effect by updating the component's view hook with the core's ViewModel.

Notice that we have to serialize and deserialize the data that we pass between the core and the shell. This is because the core is running in a separate WebAssembly instance, and so we can't just pass the data directly.

import { Dispatch, RefObject, SetStateAction } from "react";
import { CoreFFI } from "shared";
import type { Effect, Event } from "shared_types/app";
import {
  EffectVariantRender,
  RenderOperation,
  Request,
  ViewModel,
} from "shared_types/app";
import { BincodeDeserializer, BincodeSerializer } from "shared_types/bincode";
import init_core from "shared/shared";

type Response = RenderOperation;

export class Core {
  core: CoreFFI | null = null;
  setState: Dispatch<SetStateAction<ViewModel>>;

  constructor(setState: Dispatch<SetStateAction<ViewModel>>) {
    // Don't initialize CoreFFI here - wait for WASM to be loaded
    this.setState = setState;
  }

  initialize(should_load: boolean) {
    if (!this.core) {
      const load = should_load ? init_core() : Promise.resolve();
      load
        .then(() => {
          this.core = new CoreFFI();
          this.setState(this.view());
        })
        .catch((error) => {
          console.error("Failed to initialize wasm core:", error);
        });
    }
  }

  view(): ViewModel {
    if (!this.core) {
      throw new Error("Core not initialized. Call initialize() first.");
    }
    return deserializeView(this.core.view());
  }

  update(event: Event) {
    if (!this.core) {
      throw new Error("Core not initialized. Call initialize() first.");
    }
    console.log("event", event);

    const serializer = new BincodeSerializer();
    event.serialize(serializer);

    const effects = this.core.update(serializer.getBytes());

    const requests = deserializeRequests(effects);
    for (const { id, effect } of requests) {
      this.processEffect(id, effect);
    }
  }

  private processEffect(id: number, effect: Effect) {
    console.log("effect", effect);

    switch (effect.constructor) {
      case EffectVariantRender: {
        this.setState(this.view());
        break;
      }
    }
  }
}

function deserializeRequests(bytes: Uint8Array): Request[] {
  const deserializer = new BincodeDeserializer(bytes);
  const len = deserializer.deserializeLen();
  const requests: Request[] = [];
  for (let i = 0; i < len; i++) {
    const request = Request.deserialize(deserializer);
    requests.push(request);
  }
  return requests;
}

function deserializeView(bytes: Uint8Array): ViewModel {
  return ViewModel.deserialize(new BincodeDeserializer(bytes));
}

Tip

That switch statement, above, is where you would handle any other effects that your core might ask for. For example, if your core needs to make an HTTP request, you would handle that here. To see an example of this, take a look at the counter example in the Crux repository.
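As a sketch of what such an HTTP case would involve, the handler first translates the core's request into fetch() arguments. The HttpRequestLike shape below approximates the generated HttpRequest; the exact generated names are an assumption here:

```typescript
// A sketch of the translation an HTTP effect handler would do before
// calling fetch(). HttpRequestLike approximates the generated HttpRequest
// shape; the exact generated type names are an assumption.

type HeaderLike = { name: string; value: string };
type HttpRequestLike = { method: string; url: string; headers: HeaderLike[] };

type FetchArgs = {
  url: string;
  init: { method: string; headers: Record<string, string> };
};

// Pure helper: turn the core's request into arguments for fetch(url, init).
function toFetchArgs(request: HttpRequestLike): FetchArgs {
  const headers: Record<string, string> = {};
  for (const h of request.headers) {
    headers[h.name] = h.value;
  }
  return { url: request.url, init: { method: request.method, headers } };
}
```

The handler would then await fetch(args.url, args.init), wrap the status and body into the generated HttpResponse type, serialize it, and pass it to the core's resolve call, producing any follow-up requests to process.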

Create a component to render the UI

Edit app/routes/_index.tsx to look like the following. Notice that we pass the setState hook to the update function so that we can update the state in response to a render effect from the core (as seen above).

import { useEffect, useRef, useState } from "react";

import {
  ViewModel,
  EventVariantReset,
  EventVariantIncrement,
  EventVariantDecrement,
} from "shared_types/app";
import { Core } from "../core";

export const meta = () => {
  return [
    { title: "New Remix App" },
    { name: "description", content: "Welcome to Remix!" },
  ];
};

export default function Index() {
  const [view, setView] = useState(new ViewModel(""));
  const core = useRef(new Core(setView));

  // Initialize
  useEffect(
    () =>
      core.current.initialize(/* loading is done in entry.client.tsx */ false),
    // eslint-disable-next-line react-hooks/exhaustive-deps
    /*once*/ [],
  );

  return (
    <main>
      <section className="box container has-text-centered m-5">
        <p className="is-size-5">{view.count}</p>
        <div className="buttons section is-centered">
          <button
            className="button is-primary is-danger"
            onClick={() => core.current.update(new EventVariantReset())}
          >
            {"Reset"}
          </button>
          <button
            className="button is-primary is-success"
            onClick={() => core.current.update(new EventVariantIncrement())}
          >
            {"Increment"}
          </button>
          <button
            className="button is-primary is-warning"
            onClick={() => core.current.update(new EventVariantDecrement())}
          >
            {"Decrement"}
          </button>
        </div>
      </section>
    </main>
  );
}

Now all we need is some CSS.

To add a CSS stylesheet, we can add it to the Links export in the app/root.tsx file.

export const links: LinksFunction = () => [
  ...(cssBundleHref ? [{ rel: "stylesheet", href: cssBundleHref }] : []),
  {
    rel: "stylesheet",
    href: "https://cdn.jsdelivr.net/npm/bulma@0.9.4/css/bulma.min.css",
  },
];

Build and serve our app

We can build our app, and serve it for the browser, in one simple step.

pnpm dev

Success

Your app should look like this:

simple counter app

Web — Rust and Yew

Warning

This was written for previous versions of Crux and needs updating. Proceed with caution. If you'd like to help update it, you'd be very welcome!

These are the steps to set up and run a simple Rust Web app that calls into a shared core.

Note

This walk-through assumes you have already added the shared and shared_types libraries to your repo, as described in Shared core and types.

Info

There are many frameworks available for writing Web applications in Rust. We've chosen Yew for this walk-through because it is arguably the most mature. However, a similar setup would work for any framework that compiles to WebAssembly.

Create a Yew App

Our Yew app is just a new Rust project, which we can create with Cargo. For this example we'll call it web-yew.

cargo new web-yew

We'll also want to add this new project to our Cargo workspace, by editing the root Cargo.toml file.

[workspace]
members = ["shared", "web-yew"]

Now we can start fleshing out our project. Let's add some dependencies to web-yew/Cargo.toml.

[package]
name = "web-yew"
version = "0.1.0"
authors.workspace = true
repository.workspace = true
edition.workspace = true
license.workspace = true
keywords.workspace = true
rust-version.workspace = true

[dependencies]
shared = { path = "../shared" }
yew = { version = "0.22.1", features = ["csr"] }

We'll also need a file called index.html, to serve our app.

<!doctype html>
<html>
    <head>
        <meta charset="utf-8" />
        <meta name="viewport" content="width=device-width, initial-scale=1" />
        <title>Yew Counter</title>
        <link
            rel="stylesheet"
            href="https://cdn.jsdelivr.net/npm/bulma@0.9.4/css/bulma.min.css"
        />
        <link data-trunk rel="rust" />
    </head>
</html>

Create some UI

Example

There are several, more advanced, examples of Yew apps in the Crux repository.

However, we will use the simple counter example, which has shared and shared_types libraries that will work with the following example code.

Simple counter example

A simple app that increments, decrements and resets a counter.

Wrap the core to support capabilities

First, let's add some boilerplate code to wrap our core and handle the capabilities that we are using. For this example, we only need to support the Render capability, which triggers a render of the UI.

Note

This code that wraps the core only needs to be written once — it only grows when we need to support additional capabilities.

Edit src/core.rs to look like the following. This code sends our (UI-generated) events to the core, and handles any effects that the core asks for. In this simple example, we aren't calling any HTTP APIs or handling any side effects other than rendering the UI, so we just handle this render effect by sending it directly back to the Yew component. Note that we wrap the effect in a Message enum because Yew components have a single associated type for messages and we need that to include both the events that the UI raises (to send to the core) and the effects that the core uses to request side effects from the shell.

Also note that because both our core and our shell are written in Rust (and run in the same memory space), we do not need to serialize and deserialize the data that we pass between them. We can just pass the data directly.

use shared::{Counter, Effect, Event};
use std::rc::Rc;
use yew::Callback;

pub type Core = Rc<shared::Core<Counter>>;

pub enum Message {
    Event(Event),
    #[allow(dead_code)]
    Effect(Effect),
}

pub fn new() -> Core {
    Rc::new(shared::Core::new())
}

pub fn update(core: &Core, event: Event, callback: &Callback<Message>) {
    for effect in core.process_event(event) {
        process_effect(core, effect, callback);
    }
}

pub fn process_effect(_core: &Core, effect: Effect, callback: &Callback<Message>) {
    match effect {
        render @ Effect::Render(_) => callback.emit(Message::Effect(render)),
    }
}

Tip

That match statement, above, is where you would handle any other effects that your core might ask for. For example, if your core needs to make an HTTP request, you would handle that here. To see an example of this, take a look at the counter example in the Crux repository.

Edit src/main.rs to look like the following. The update function is interesting here. We set up a Callback to receive messages from the core and feed them back into Yew's event loop. Then we test to see if the incoming message is an Event (raised by UI interaction) and if so we use it to update the core, returning false to indicate that the re-render will happen later. In this app, we can assume that any other message is a render Effect and so we return true indicating to Yew that we do want to re-render.

mod core;

use crate::core::{Core, Message};
use shared::Event;
use yew::prelude::*;

#[derive(Default)]
struct RootComponent {
    core: Core,
}

impl Component for RootComponent {
    type Message = Message;
    type Properties = ();

    fn create(_ctx: &Context<Self>) -> Self {
        Self { core: core::new() }
    }

    fn update(&mut self, ctx: &Context<Self>, msg: Self::Message) -> bool {
        let link = ctx.link().clone();
        let callback = Callback::from(move |msg| {
            link.send_message(msg);
        });
        if let Message::Event(event) = msg {
            core::update(&self.core, event, &callback);
            false
        } else {
            true
        }
    }

    fn view(&self, ctx: &Context<Self>) -> Html {
        let link = ctx.link();
        let view = self.core.view();

        html! {
            <section class="box container has-text-centered m-5">
                <p class="is-size-5">{&view.count}</p>
                <div class="buttons section is-centered">
                    <button class="button is-primary is-danger"
                        onclick={link.callback(|_| Message::Event(Event::Reset))}>
                        {"Reset"}
                    </button>
                    <button class="button is-primary is-success"
                        onclick={link.callback(|_| Message::Event(Event::Increment))}>
                        {"Increment"}
                    </button>
                    <button class="button is-primary is-warning"
                        onclick={link.callback(|_| Message::Event(Event::Decrement))}>
                        {"Decrement"}
                    </button>
                </div>
            </section>
        }
    }
}

fn main() {
    yew::Renderer::<RootComponent>::new().render();
}

Build and serve our app

The easiest way to compile the app to WebAssembly and serve it in our web page is to use trunk, which we can install with Homebrew (brew install trunk) or Cargo (cargo install trunk).

We can build our app, serve it and open it in our browser, in one simple step.

trunk serve --open

Success

Your app should look like this:

simple counter app

Web — Rust and Dioxus

Warning

This was written for previous versions of Crux and needs updating. Proceed with caution. If you'd like to help update it, you'd be very welcome!

These are the steps to set up and run a simple Rust Web app that calls into a shared core.

Note

This walk-through assumes you have already added the shared and shared_types libraries to your repo, as described in Shared core and types.

Info

There are many frameworks available for writing Web applications in Rust. We've chosen Dioxus for this walk-through. However, a similar setup would work for other frameworks that compile to WebAssembly.

Create a Dioxus App

Tip

Dioxus has a CLI tool called dx, which can initialize, build and serve our app.

cargo install dioxus-cli

Test that the executable is available.

dx --help

Before we create a new app, let's add it to our Cargo workspace (so that the dx tool won't complain), by editing the root Cargo.toml file.

For this example, we'll call the app web-dioxus.

[workspace]
members = ["shared", "web-dioxus"]

Now we can create a new Dioxus app. The tool asks for a project name, which we'll provide as web-dioxus.

dx create

cd web-dioxus

Now we can start fleshing out our project. Let's add some dependencies to the project's Cargo.toml.

[package]
name = "web-dioxus"
version = "0.1.0"
authors.workspace = true
repository.workspace = true
edition.workspace = true
license.workspace = true
keywords.workspace = true
rust-version.workspace = true

[dependencies]
console_error_panic_hook = "0.1.7"
dioxus = { version = "0.7.3", features = ["web"] }
dioxus-logger = "0.7.3"
futures-util = "0.3.32"
shared = { path = "../shared" }
tracing = "0.1.44"

Create some UI

Example

There is a slightly more advanced example of a Dioxus app in the Crux repository.

However, we will use the simple counter example, which has shared and shared_types libraries that will work with the following example code.

Simple counter example

A simple app that increments, decrements and resets a counter.

Wrap the core to support capabilities

First, let's add some boilerplate code to wrap our core and handle the capabilities that we are using. For this example, we only need to support the Render capability, which triggers a render of the UI.

Note

This code that wraps the core only needs to be written once — it only grows when we need to support additional capabilities.

Edit src/core.rs to look like the following. This code sends our (UI-generated) events to the core, and handles any effects that the core asks for. In this simple example, we aren't calling any HTTP APIs or handling any side effects other than rendering the UI, so we just handle this render effect by updating the component's view hook with the core's ViewModel.

Also note that because both our core and our shell are written in Rust (and run in the same memory space), we do not need to serialize and deserialize the data that we pass between them. We can just pass the data directly.

use std::rc::Rc;

use dioxus::{
    prelude::{Signal, UnboundedReceiver},
    signals::WritableExt as _,
};
use futures_util::StreamExt;
use shared::{Counter, Effect, Event, ViewModel};
use tracing::debug;

type Core = Rc<shared::Core<Counter>>;

pub struct CoreService {
    core: Core,
    view: Signal<ViewModel>,
}

impl CoreService {
    pub fn new(view: Signal<ViewModel>) -> Self {
        debug!("initializing core service");
        Self {
            core: Rc::new(shared::Core::new()),
            view,
        }
    }

    #[allow(clippy::future_not_send)] // WASM is single-threaded
    pub async fn run(&self, rx: &mut UnboundedReceiver<Event>) {
        let mut view = self.view;
        view.set(self.core.view());
        while let Some(event) = rx.next().await {
            self.update(event, &mut view);
        }
    }

    fn update(&self, event: Event, view: &mut Signal<ViewModel>) {
        debug!("event: {:?}", event);

        for effect in &self.core.process_event(event) {
            process_effect(&self.core, effect, view);
        }
    }
}

fn process_effect(core: &Core, effect: &Effect, view: &mut Signal<ViewModel>) {
    debug!("effect: {:?}", effect);

    match effect {
        Effect::Render(_) => {
            view.set(core.view());
        }
    }
}

Tip

That match statement, above, is where you would handle any other effects that your core might ask for. For example, if your core needs to make an HTTP request, you would handle that here. To see an example of this, take a look at the counter example in the Crux repository.

Edit src/main.rs to look like the following. This code sets up the Dioxus app, and connects the core to the UI. Not only do we create a hook for the view state but we also create a coroutine that plugs in the Dioxus "service" we defined above to constantly send any events from the UI to the core.

mod core;

use dioxus::prelude::*;
use tracing::Level;

use shared::{Event, ViewModel};

use core::CoreService;

#[allow(clippy::volatile_composites)] // false positive from Dioxus asset! macro internals
#[component]
fn App() -> Element {
    let view = use_signal(ViewModel::default);

    let core = use_coroutine(move |mut rx| {
        let svc = CoreService::new(view);
        async move { svc.run(&mut rx).await }
    });
    rsx! {
        document::Link {
            rel: "stylesheet",
            href: asset!("../public/css/bulma.min.css")
        }
        main {
            section { class: "section has-text-centered",
                p { class: "is-size-5", "{view().count}" }
                div { class: "buttons section is-centered",
                    button { class:"button is-primary is-danger",
                        onclick: move |_| {
                            core.send(Event::Reset);
                        },
                        "Reset"
                    }
                    button { class:"button is-primary is-success",
                        onclick: move |_| {
                            core.send(Event::Increment);
                        },
                        "Increment"
                    }
                    button { class:"button is-primary is-warning",
                        onclick: move |_| {
                            core.send(Event::Decrement);
                        },
                        "Decrement"
                    }
                }
            }
        }
    }
}

fn main() {
    dioxus_logger::init(Level::DEBUG).expect("failed to init logger");
    console_error_panic_hook::set_once();

    launch(App);
}

We can add a title and a stylesheet by editing examples/simple_counter/web-dioxus/Dioxus.toml.

[application]
name = "web-dioxus"
default_platform = "web"
out_dir = "dist"
asset_dir = "public"

[web.app]
title = "Crux Simple Counter example"

[web.watcher]
reload_html = true
watch_path = ["src", "public"]

Build and serve our app

Now we can build our app and serve it in one simple step.

dx serve

Success

Your app should look like this:

simple counter app

Capability Runtime

Warning

This was written for previous versions of Crux and needs rewriting. Most of the code it references has been removed. Proceed with caution. If you'd like to help update it, you'd be very welcome!

In the previous sections we focused on building applications in Crux and using its public APIs to do so. In this and the following chapters, we'll look at how the internals of Crux work, starting with the capability runtime.

The capability runtime is a set of components that processes effects, presenting the two perspectives we previously mentioned:

  • For the core, the shell appears to be a platform with a message based system interface
  • For the shell, the core appears as a stateful library responding to events with requests for side-effects

There are a few challenges to solve in order to facilitate this interface. First, each run of the update function can call several capabilities. The requested effects are expected to be emitted together, and each batch of effects will be processed concurrently, so the calls can't be blocking. Second, each effect requested from a capability may require multiple round trips between the core and shell to conclude, and we don't want to require a call to update per round trip, so we need some ability to "suspend" execution in capabilities while waiting for an effect to be fulfilled. The ability to suspend effects introduces a further challenge - effects started in a particular capability and suspended need, once resolved, to continue execution in the same capability.

Given this concurrency and execution suspension, an async interface seems like a good candidate. Capabilities request work from the shell, .await the results, and continue their work when the result has arrived. The call to request_from_shell or stream_from_shell translates into an effect request returned from the current core "transaction" (one call to process_event or resolve).

Note

In this chapter, we will focus on the runtime and the core interface and ignore the serialization, bridge and FFI, and return to them in the following sections. The examples will assume a Rust based shell.

Async runtime

One of the fairly unique aspects of Rust's async is the fact that it doesn't come with a bundled runtime. This is recognising that asynchronous execution is useful in various different scenarios, and no one runtime can serve all of them. Crux takes advantage of this and brings its own runtime, tailored to the execution of side-effects on top of a message based interface.

For a deeper background on Rust's async architecture, we recommend the Asynchronous Programming in Rust book, especially the chapter about executing futures and tasks. We will assume you are familiar with the basic ideas and mechanics of async here.

The job of an async runtime is to manage a number of tasks, each driving one future to completion. This management is done by an executor, which is responsible for scheduling the futures and polling them at the right time to drive their execution forward. Most "grown up" runtimes will do this on a number of threads in a thread pool, but in Crux, we run in the context of a single function call (of the app's update function) and potentially in a WebAssembly context which is single threaded anyway, so our baby runtime only needs to poll all the tasks sequentially, to see if any of them need to continue.

Polling all the tasks would work, and in our case wouldn't even be that inefficient, but the async system is set up to avoid unnecessary polling of futures with one additional concept - wakers. A waker is a mechanism which can be used to signal to the executor that something that a given task is waiting on has changed, and the task's future should be polled, because it will be able to proceed. This is how "at the right time" from the above paragraph is decided.

In our case there's a single situation which causes such a change - a result has arrived from the shell, for a particular effect requested earlier.

Warning

Always use the capability APIs provided by Crux for async work (see the capabilities chapter). Using other async APIs can lead to unexpected behaviour, because the resulting futures are not tied to Crux effects. Such futures will resolve, but only after the next shell request causes the Crux executor to execute.

One effect's life cycle

So, step by step, our strategy for the capabilities to handle effects is:

  1. A capability spawns a task and submits a future with some code to run
  2. The new task is scheduled to be polled next time the executor runs
  3. The executor goes through the list of ready tasks until it gets to our task and polls it
  4. The future runs to the point where the first async call is awaited. In capabilities, this should only be a future returned from one of the calls to request something from the shell, or a future resulting from a composition of such futures (through async method calls or combinators like select or join).
  5. The shell request future's first step is to create the request and prepare it to be sent. We will look at the mechanics of the sending shortly, but for now it's only important that part of this request is a callback used to resolve it.
  6. The request future, as part of the first poll by the executor, sends the request to be handed to the shell. As there is no result from the shell yet, it returns a pending state and the task is suspended.
  7. The request is passed on to the shell to resolve (as a return value from process_event or resolve)
  8. Eventually, the shell has a result ready for the request and asks the core to resolve the request.
  9. The request callback mentioned above is executed, puts the provided result in the future's mutable state, and calls the future's waker, also stored in the future's state, to wake the future up. The waker enqueues the future for processing on the executor.
  10. The executor runs again (asked to do so by the core's resolve API after calling the callback), and polls the awoken future.
  11. The future sees there is now a result available and continues the execution of the original task until a further await or until completion.

The cycle may repeat a few times, depending on the capability implementation, but eventually the original task completes and is removed.

This is probably a lot to take in, but the basic gist is that capability futures (the ones submitted to spawn) always pause on request futures (the ones returned from request_from_shell et al.), which submit requests. Resolving requests updates the state of the original future and wakes it up to continue execution.
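The pause-and-resume mechanism can be sketched in isolation. The following is a minimal, self-contained model assuming the shape described above - shared state holding the eventual result and a waker, and a resolve callback which fills the result in and wakes the task - with a stand-in waker in place of the real executor. It is not the actual Crux implementation, just the mechanics:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// Shared state between a request future and its resolve callback: the
// eventual result, and the waker of the suspended task
struct Shared<T> {
    result: Option<T>,
    waker: Option<Waker>,
}

struct RequestFuture<T> {
    shared: Arc<Mutex<Shared<T>>>,
}

impl<T> Future for RequestFuture<T> {
    type Output = T;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
        let mut shared = self.shared.lock().unwrap();
        match shared.result.take() {
            // the resolve callback has run; continue the task
            Some(value) => Poll::Ready(value),
            None => {
                // no result yet: store the waker so resolving can wake us
                shared.waker = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }
}

// Build a request future together with its resolve callback
fn request<T>() -> (RequestFuture<T>, impl FnOnce(T)) {
    let shared = Arc::new(Mutex::new(Shared { result: None, waker: None }));
    let future = RequestFuture { shared: Arc::clone(&shared) };
    let resolve = move |value: T| {
        let mut state = shared.lock().unwrap();
        state.result = Some(value);
        if let Some(waker) = state.waker.take() {
            waker.wake(); // re-enqueue the suspended task on the executor
        }
    };
    (future, resolve)
}

// A waker that just records that it was woken, standing in for the executor
struct WokenFlag(Mutex<bool>);

impl Wake for WokenFlag {
    fn wake(self: Arc<Self>) {
        *self.0.lock().unwrap() = true;
    }
}

fn main() {
    let (mut future, resolve) = request::<i32>();
    let flag = Arc::new(WokenFlag(Mutex::new(false)));
    let waker = Waker::from(Arc::clone(&flag));
    let mut cx = Context::from_waker(&waker);

    // First poll: no result yet, the task suspends
    assert_eq!(Pin::new(&mut future).poll(&mut cx), Poll::Pending);

    // The shell resolves the request: the stored waker fires
    resolve(42);
    assert!(*flag.0.lock().unwrap());

    // Polled again (by the executor), the future completes
    assert_eq!(Pin::new(&mut future).poll(&mut cx), Poll::Ready(42));
}
```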

With that in mind we can look at the individual moving parts and how they communicate.

Spawning tasks on the executor

The first step for anything to happen is spawning a task from a capability. Each capability is created with a CapabilityContext, which holds a couple of sending ends of channels for requests and events (we will get to those soon), and also a spawner, from the executor module. The Spawner looks like this:

{{#include ../../../crux_core/src/capability/executor.rs:spawner}}

also holding a sending end of a channel, this one for Tasks.

A Task is a fairly simple data structure, holding a future and another sending end of the tasks channel, because tasks need to be able to submit themselves when awoken.

{{#include ../../../crux_core/src/capability/executor.rs:task}}

Tasks are spawned by the Spawner as follows:

{{#include ../../../crux_core/src/capability/executor.rs:spawning}}

after constructing a task, it is submitted using the task sender.

The final piece of the puzzle is the executor itself:

{{#include ../../../crux_core/src/capability/executor.rs:executor}}

This is the receiving end of the channel from the spawner.

The executor has a single public method, run_all:

{{#include ../../../crux_core/src/capability/executor.rs:run_all}}

besides the locking and waker mechanics, the gist is quite simple - while there are tasks in the ready_queue, poll the future held in each one.

The last interesting bit of this part is how the waker is provided to the poll call. The waker_ref creates a waker which, when asked to wake up, will call this method on the task:

{{#include ../../../crux_core/src/capability/executor.rs:wake}}

this is where the task resubmits itself for processing on the next run.

While there are a lot of moving pieces involved, the basic mechanics are relatively straightforward - tasks are submitted either by the spawner, or by the futures awoken by arriving responses to the requests they submitted. The queue of tasks is processed whenever run_all is called on the executor. This happens in the Core API implementation: both process_event and resolve call run_all after their respective job - calling the app's update function, or resolving the given request.
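As a self-contained illustration of these mechanics (not the actual Crux code), a toy executor with a shared ready queue, tasks which resubmit themselves on wake, and a run_all loop can be sketched like this:

```rust
use std::collections::VecDeque;
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};

type BoxFuture = Pin<Box<dyn Future<Output = ()> + Send>>;

// A task holds its future and the queue it should resubmit itself to
struct Task {
    future: Mutex<Option<BoxFuture>>,
    ready_queue: Arc<Mutex<VecDeque<Arc<Task>>>>,
}

impl Wake for Task {
    // Waking a task pushes it back onto the ready queue
    fn wake(self: Arc<Self>) {
        let queue = Arc::clone(&self.ready_queue);
        queue.lock().unwrap().push_back(self);
    }
}

struct Executor {
    ready_queue: Arc<Mutex<VecDeque<Arc<Task>>>>,
}

impl Executor {
    fn new() -> Self {
        Executor { ready_queue: Arc::new(Mutex::new(VecDeque::new())) }
    }

    fn spawn(&self, future: impl Future<Output = ()> + Send + 'static) {
        let future: BoxFuture = Box::pin(future);
        let task = Arc::new(Task {
            future: Mutex::new(Some(future)),
            ready_queue: Arc::clone(&self.ready_queue),
        });
        self.ready_queue.lock().unwrap().push_back(task);
    }

    // Poll every ready task; a Pending task has stored its waker and will
    // resubmit itself when woken
    fn run_all(&self) {
        loop {
            let next = self.ready_queue.lock().unwrap().pop_front();
            let Some(task) = next else { break };
            let mut slot = task.future.lock().unwrap();
            if let Some(mut future) = slot.take() {
                let waker = Waker::from(Arc::clone(&task));
                let mut context = Context::from_waker(&waker);
                if future.as_mut().poll(&mut context).is_pending() {
                    // suspended: keep the future for the next wake-up
                    *slot = Some(future);
                }
            }
        }
    }
}

// Suspends once and immediately wakes itself, standing in for a request
// future being resolved
struct YieldOnce(bool);

impl Future for YieldOnce {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

fn main() {
    let executor = Executor::new();
    let log = Arc::new(Mutex::new(Vec::new()));
    let task_log = Arc::clone(&log);
    executor.spawn(async move {
        task_log.lock().unwrap().push("before suspend");
        YieldOnce(false).await;
        task_log.lock().unwrap().push("after wake");
    });
    executor.run_all();
    assert_eq!(*log.lock().unwrap(), vec!["before suspend", "after wake"]);
}
```

Because YieldOnce wakes itself before returning Pending, run_all pops and polls the task twice before the queue empties - the same dance a real request future performs, except that there the wake comes from the resolve callback rather than from the future itself.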

Now we know how the futures get executed, suspended and resumed, we can examine the flow of information between capabilities and the Core API calls layered on top.

Requests flow from capabilities to the shell

The key to understanding how the effects get processed and executed is to name all the various pieces of information, and discuss how they are wrapped in each other.

The basic inner piece of the effect request is an operation. This is the intent which the capability is submitting to the shell. Each operation has an associated output value, with which the operation request can be resolved. There are multiple capabilities in each app, and in order for the shell to easily tell which capability's effect it needs to handle, we wrap the operation in an effect. The Effect type is a generated enum based on the app's set of capabilities, with one variant per capability. It allows us to multiplex (or type erase) the different typed operations into a single type, which can be matched on to process the operations.
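As a minimal sketch of this multiplexing (the operation types and variant names here are illustrative, not generated):

```rust
// Illustrative types: in a real app the Effect enum is macro-generated,
// with one variant per capability
#[derive(Debug, PartialEq)]
struct HttpOperation {
    url: String,
}

#[derive(Debug, PartialEq)]
struct KeyValueOperation {
    key: String,
}

// The multiplexer: two differently typed operations erased into one type
#[derive(Debug, PartialEq)]
enum Effect {
    Http(HttpOperation),
    KeyValue(KeyValueOperation),
}

// Matching recovers the typed operation for the right handler
fn capability_name(effect: &Effect) -> &'static str {
    match effect {
        Effect::Http(_) => "http",
        Effect::KeyValue(_) => "key_value",
    }
}

fn main() {
    let effects = vec![
        Effect::Http(HttpOperation { url: "https://example.com".to_string() }),
        Effect::KeyValue(KeyValueOperation { key: "counter".to_string() }),
    ];
    let names: Vec<_> = effects.iter().map(capability_name).collect();
    assert_eq!(names, vec!["http", "key_value"]);
}
```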

Finally, the effect is wrapped in a request which carries the effect, and an associated resolve callback to which the output will eventually be given. We discussed this callback in the previous section - its job is to update the paused future's state and resume it. The request is the value passed to the shell, and used as both the description of the effect intent, and the "token" used to resolve it.

Now we can look at how all this wrapping is facilitated. Recall from the previous section that each capability has access to a CapabilityContext, which holds the sending ends of two channels: one for events - the app_channel - and one for requests - the shell_channel, whose type is Sender<Request<Op>>. These channels serve as both a thread synchronisation and a queueing mechanism between the capabilities and the core of Crux. As you can see, the requests expected are typed for the capability's operation type.

Looking at the core itself, we see their Receiver ends.

pub struct Core<A>
where
    A: App,
{
    // WARNING: The user controlled types _must_ be defined first
    // so that they are dropped first, in case they contain coordination
    // primitives which attempt to wake up a future when dropped. For that
    // reason the executor _must_ outlive the user type instances

    // user types
    model: RwLock<A::Model>,
    app: A,

    // internals
    root_command: Mutex<Command<A::Effect, A::Event>>,
}

One detail to note is that the receiving end of the requests channel is a Receiver<Ef>. The channel has an additional feature - it can map between the input types and output types, and, in this case, serve as a multiplexer, wrapping the operation in the corresponding Effect variant. Each sending end is specialised for the respective capability, but the receiving end gets an already wrapped Effect.
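The mapping can be modelled with a plain std::sync::mpsc channel: each sending end knows how to wrap its capability's operation type into the Effect enum, while the receiving end only ever sees Effect values. A hedched, illustrative sketch (not the Crux channel implementation):

```rust
use std::sync::mpsc;

#[derive(Debug, PartialEq)]
enum Effect {
    Http(String),
    Render,
}

// The sending end handed to a capability: specialised for one operation
// type, it wraps each operation in the right Effect variant before sending
struct MappedSender<Op> {
    sender: mpsc::Sender<Effect>,
    wrap: Box<dyn Fn(Op) -> Effect>,
}

impl<Op> MappedSender<Op> {
    fn send(&self, operation: Op) {
        self.sender.send((self.wrap)(operation)).unwrap();
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // each capability would get its own specialised sender
    let http_sender = MappedSender {
        sender: tx.clone(),
        wrap: Box::new(|url| Effect::Http(url)),
    };
    let render_sender = MappedSender {
        sender: tx,
        wrap: Box::new(|()| Effect::Render),
    };

    http_sender.send("https://example.com".to_string());
    render_sender.send(());

    // the receiving end gets already-wrapped Effects
    assert_eq!(rx.recv().unwrap(), Effect::Http("https://example.com".to_string()));
    assert_eq!(rx.recv().unwrap(), Effect::Render);
}
```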

A single update cycle

To piece all these things together, let's look at processing a single call from the shell. Both process_event and resolve share a common step which advances the capability runtime.

Here is process_event:

    pub fn process_event(&self, event: A::Event) -> Vec<A::Effect> {
        let mut model = self.model.write().expect("Model RwLock was poisoned.");

        let command = self.app.update(event, &mut model);

        // drop the model here, we don't want to hold the lock for the process() call
        drop(model);

        let mut root_command = self
            .root_command
            .lock()
            .expect("Capability runtime lock was poisoned");
        root_command.spawn(|ctx| command.into_future(ctx));

        drop(root_command);

        self.process()
    }

and here is resolve:

    pub fn resolve<Output>(
        &self,
        request: &mut impl Resolvable<Output>,
        result: Output,
    ) -> Result<Vec<A::Effect>, ResolveError>
    {
        let resolve_result = request.resolve(result);
        debug_assert!(resolve_result.is_ok());

        resolve_result?;

        Ok(self.process())
    }

The interesting things happen in the common process method:

    pub(crate) fn process(&self) -> Vec<A::Effect> {
        let mut root_command = self
            .root_command
            .lock()
            .expect("Capability runtime lock was poisoned");

        let mut events: VecDeque<_> = root_command.events().collect();

        while let Some(event_from_commands) = events.pop_front() {
            let mut model = self.model.write().expect("Model RwLock was poisoned.");
            let command = self.app.update(event_from_commands, &mut model);
            drop(model);

            root_command.spawn(|ctx| command.into_future(ctx));

            events.extend(root_command.events());
        }

        root_command.effects().collect()
    }

First, we run all ready tasks in the executor. There can be new tasks ready because we just ran the app's update function (which may have spawned some task via capability calls) or resolved some effects (which woke up their suspended futures).

Next, we drain the events channel (where events are submitted from capabilities by context.update_app) and one by one, send them to the update function, running the executor after each one.

Finally, we collect all of the effect requests submitted in the process and return them to the shell.

Resolving requests

We've now seen everything other than the mechanics of resolving requests. This is ultimately just a callback carried by the request, but for additional type safety, it is tagged by the expected number of resolutions:

type ResolveOnce<Out> = Box<dyn FnOnce(Out) + Send>;
type ResolveMany<Out> = Box<dyn Fn(Out) -> Result<(), ()> + Send>;

/// Resolve is a callback used to resolve an effect request and continue
/// one of the capability Tasks running on the executor.
pub enum RequestHandle<Out> {
    Never,
    Once(ResolveOnce<Out>),
    Many(ResolveMany<Out>),
}
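To make the tagging concrete, here is a self-contained sketch (not the actual Crux implementation) of resolving through such a handle: a Once handle is consumed by its resolution, so resolving it a second time is an error, while a Many handle stays in place:

```rust
use std::sync::{Arc, Mutex};

type ResolveOnce<Out> = Box<dyn FnOnce(Out) + Send>;
type ResolveMany<Out> = Box<dyn Fn(Out) -> Result<(), ()> + Send>;

enum RequestHandle<Out> {
    Never,
    Once(ResolveOnce<Out>),
    Many(ResolveMany<Out>),
}

impl<Out> RequestHandle<Out> {
    // Resolving a Once handle consumes it, leaving Never behind;
    // resolving a Never handle is an error
    fn resolve(&mut self, value: Out) -> Result<(), ()> {
        match std::mem::replace(self, RequestHandle::Never) {
            RequestHandle::Never => Err(()),
            RequestHandle::Once(resolve) => {
                resolve(value);
                Ok(())
            }
            RequestHandle::Many(resolve) => {
                let result = resolve(value);
                *self = RequestHandle::Many(resolve); // put it back
                result
            }
        }
    }
}

fn main() {
    let total = Arc::new(Mutex::new(0));
    let sink = Arc::clone(&total);
    let mut once = RequestHandle::Once(Box::new(move |n: i32| {
        *sink.lock().unwrap() += n;
    }));

    assert_eq!(once.resolve(5), Ok(()));
    assert_eq!(*total.lock().unwrap(), 5);

    // the Once callback has been consumed; a second resolution fails
    assert_eq!(once.resolve(5), Err(()));
}
```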

We've already mentioned the resolve function itself briefly, but for completeness, here's an example from request_from_shell:

{{#include ../../../crux_core/src/capability/shell_request.rs:resolve}}

Bar the locking and sharing mechanics, all it does is update the state of the future (shared_state) and then call wake on the future's waker to schedule it on the executor.

In the next chapter, we will look at how this process changes when Crux is used via an FFI interface where requests and responses need to be serialised in order to pass across the language boundary.

FFI bridge

Warning

This was written for previous versions of Crux and needs rewriting. Most of the code it references has been removed. Proceed with caution. If you'd like to help update it, you'd be very welcome!

In the previous chapter, we saw how the capability runtime facilitates the orchestration of effect processing by the shell. We looked at the simpler scenario where the shell was built in Rust. Now we'll extend this to the more common scenario where the shell is written in a different language and the core APIs are called over a Foreign Function Interface, passing events, requests and responses back and forth, serialised as bytes.

The FFI bridge

The FFI bridge has two key parts, the serialisation part converting from typed effect requests to serializable types, and the FFI implementation itself, facilitated by UniFFI.

The serialisation part is facilitated by the Bridge. It is a wrapper for the Core with its own definition of Request. Its API is very similar to the Core API - it has an identical set of methods, but their type signatures are different.

For example, here is Core::resolve

    pub fn resolve<Output>(
        &self,
        request: &mut impl Resolvable<Output>,
        result: Output,
    ) -> Result<Vec<A::Effect>, ResolveError>

and here's its counterpart, Bridge::handle_response

    #[deprecated(
        since = "0.17.0",
        note = "Bridge API returning vectors has been deprecated. Please use the 'resolve' method."
    )]
    pub fn handle_response(&self, id: u32, output: &[u8]) -> Result<Vec<u8>, BridgeError<Format>>

where the core expects to be given a Request<Op> to resolve, the bridge expects an id - a unique identifier of the request being resolved.

This makes sense - the Requests include callback closures working with the capability runtime, which can't easily be serialised and sent back and forth across the language boundary. Instead, the bridge "parks" them in a registry, to be picked up later. Like in a theatre cloakroom, the registry returns a unique number under which the request is stored.
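A minimal model of this cloakroom (illustrative only - the real registry stores ResolveSerialized values) might park boxed callbacks in a map under an incrementing id:

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

// A sketch of the "cloakroom": resolve callbacks are parked under a
// numeric id and picked up again when the shell resolves the request
struct ResolveRegistry {
    next_id: u32,
    callbacks: HashMap<u32, Box<dyn FnOnce(&[u8])>>,
}

impl ResolveRegistry {
    fn new() -> Self {
        ResolveRegistry { next_id: 0, callbacks: HashMap::new() }
    }

    // Park a callback, returning the "cloakroom ticket"
    fn register(&mut self, resolve: Box<dyn FnOnce(&[u8])>) -> u32 {
        let id = self.next_id;
        self.next_id += 1;
        self.callbacks.insert(id, resolve);
        id
    }

    // Hand the serialized response to the parked callback
    fn resume(&mut self, id: u32, response: &[u8]) -> Result<(), ()> {
        let resolve = self.callbacks.remove(&id).ok_or(())?;
        resolve(response);
        Ok(())
    }
}

fn main() {
    let mut registry = ResolveRegistry::new();

    let received = Rc::new(RefCell::new(Vec::new()));
    let sink = Rc::clone(&received);
    let id = registry.register(Box::new(move |bytes: &[u8]| {
        sink.borrow_mut().extend_from_slice(bytes);
    }));

    assert_eq!(registry.resume(id, b"response bytes"), Ok(()));
    assert_eq!(*received.borrow(), b"response bytes");

    // an unknown (or already used) id cannot be resolved
    assert_eq!(registry.resume(id, b"again"), Err(()));
}
```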

The implementation of the serialization/deserialization process is slightly complicated by the fact that Crux allows you to supply your own serializer and deserializer should you need to, so the actual bridge implementation does not work on bytes but on serializers. The Bridge type used in the examples and throughout the documentation is a default implementation using bincode serialization, which is also what the type generation subsystem supports.

We won't go into the detail of working with Serde and the erased_serde crate to make all the serialization happen without leaking deserialization lifetimes out of the bridge. You can read the implementation of BridgeWithSerializer if you're interested in the gory details. For our purposes, the type definition will suffice.

The bridge holds an instance of the Core and a ResolveRegistry to store the effect requests in.

The processing of the update loop is quite similar to the Core update loop:

  • When a serialized event arrives, it is deserialized and passed to the Core's process_event
  • When a request response arrives, its id is forwarded to the ResolveRegistry's resume method, and the Core's process method is called to run the capability runtime

You may remember that both these calls return effect requests. The remaining step is to store these in the registry, using the registry's register method, exchanging the core Request for a bridge variant, which looks like this:

#[derive(Facet, Debug, Serialize, Deserialize)]
pub struct Request<Eff>
where
    Eff: Serialize,
{
    pub id: EffectId,
    pub effect: Eff,
}

Unlike the core request, this does not include any closures and is fully serializable.

ResolveRegistry

It is worth pausing for a second on the resolve registry. There is one tricky problem to solve here - storing the generic Requests in a single store. We get around this by making the register method generic and asking the effect to "serialize" itself.

    pub fn register<Eff>(&self, effect: Eff) -> Request<Eff::Ffi>
    where
        Eff: EffectFFI,
    {
        let (effect, resolve) = effect.serialize();

        let id = self
            .0
            .lock()
            .expect("Registry Mutex poisoned.")
            .insert(resolve);

        Request {
            id: EffectId(id.try_into().expect("EffectId overflow")),
            effect,
        }
    }

this is named based on our intent rather than on what actually happens. The serialize method comes from the EffectFFI trait, a companion to the Effect marker trait:

pub trait Effect: Send + 'static {}

Like the Effect type which implements this trait, the implementation is macro generated, based on the Capabilities used by your application. We will look at how this works in the Effect type chapter.

The type signature of the method gives us a hint though - it converts the normal Effect into a serializable counterpart, alongside something with a ResolveSerialized type. This is stored in the registry under an id, and the effect and the id are returned as the bridge version of a Request.

The definition of the ResolveSerialized type is a little bit convoluted:

type ResolveOnceSerialized<T> = Box<dyn FnOnce(&[u8]) -> Result<(), BridgeError<T>> + Send>;
type ResolveManySerialized<T> = Box<dyn FnMut(&[u8]) -> Result<(), BridgeError<T>> + Send>;

/// A deserializing version of Resolve
///
/// `ResolveSerialized` is a separate type because lifetime elision doesn't work
/// through generic type arguments. We can't create a `ResolveRegistry` of
/// Resolve<&[u8]> without specifying an explicit lifetime.
/// If you see a better way around this, please open a PR.
pub enum ResolveSerialized<T: FfiFormat> {
    Never,
    Once(ResolveOnceSerialized<T>),
    Many(ResolveManySerialized<T>),
}

but the gist of it is that it is a mirror of the resolve callback type we already know, except that it takes the serialized response and deserializes it. More about this serialization trickery in the next chapter.

FFI interface

The final piece of the puzzle is the FFI interface itself. All it does is expose the bridge API we've seen above.

Note

You will see that this part, alongside the type generation, is a fairly complicated constellation of various existing tools and libraries, which has a number of rough edges. It is likely that we will explore replacing this part of Crux with a tailor made FFI bridge in the future. If/when we do, we will do our best to provide a smooth migration path.

Here's a typical app's shared crate src/lib.rs file:

pub mod app;
#[cfg(any(feature = "wasm_bindgen", feature = "uniffi"))]
mod ffi;

pub use app::*;
pub use crux_core::Core;

#[cfg(any(feature = "wasm_bindgen", feature = "uniffi"))]
pub use ffi::CoreFFI;

#[cfg(feature = "uniffi")]
const _: () = assert!(
    uniffi::check_compatible_version("0.29.4"),
    "please use uniffi v0.29.4"
);
#[cfg(feature = "uniffi")]
uniffi::setup_scaffolding!();

There are two forms of FFI going on - the wasm_bindgen feature, exposing the core when the crate is built as WebAssembly, and the uniffi feature, which generates the UniFFI scaffolding via the uniffi::setup_scaffolding!() macro. Earlier versions of Crux instead used the line

uniffi::include_scaffolding!("shared");

which refers to the shared.udl file in the same folder

{{#include ../../../examples/bridge_echo/shared/src/shared.udl}}

This is UniFFI's interface definition used to generate the scaffolding for the FFI interface - both the externally callable functions in the shared library, and their counterparts in the "foreign" languages (like Swift or Kotlin).

The scaffolding is built in the build.rs script of the crate

{{#include ../../../examples/bridge_echo/shared/build.rs}}

The foreign language code is built by an additional binary target for the same crate, in src/bin/uniffi-bindgen.rs

{{#include ../../../examples/bridge_echo/shared/src/bin/uniffi-bindgen.rs}}

this builds a CLI which clients of the library can use as part of their build process to generate the foreign-language code.

The details of this process are well documented in UniFFI's tutorial.

The Effect type

Info

Coming soon.

Type generation

Info

Coming soon.

RFC: New side effect API - Command

This is a proposed implementation of a new API for creating (requesting) side-effects in crux apps. It is quite a significant part of the Crux API surface, so we'd really appreciate feedback on the direction this is taking.

Why?

Why a new effect API, you may ask. Was there anything wrong with the original one? Not really. Not critically wrong, anyway. One could get all the necessary work done with it just fine; it enables quite complex effect orchestration with async Rust and ultimately enables Crux cores to stay fully pure and therefore be portable and very cheaply testable. This new proposed API is an evolution of the original, building on it heavily.

However.

There's a number of paper cuts, obscured intentions, limitations, and misaligned incentives which come with the original API:

The original API is oddly imperative, but non-blocking.

A typical call to a capability looks roughly like: caps.some_capability.do_a_thing(inputs, Event::ThingDone). This call doesn't block, but also doesn't return any value. The effect request is magically registered, but is not represented as a value - it's gone. Fell through the floor and into the Crux runtime.

Other than being a bit odd, this has two consequences. One is that it is impossible to combine or modify capability calls in any other way than executing them concurrently. This includes any type of transformation, like a declarative retry strategy or a timeout and, in particular, cancellation.

The more subtle consequence (and my favourite soap box) is that it encourages a style of architecture inside Crux apps, which Crux itself is working quite hard to discourage overall.

The temptation is to build custom types and functions, pass capability instances to them, and call them deep down in the call stack. This obscures the intended mental model, in which the update call mutates state and results in some side-effects which it returns to the shell to execute - except that, in the actual type signature of update, it does not.

The intent of Crux is to very strictly separate code updating state from code interacting with the outside world with side-effects. Borrowing ChatGPT's analogy - separate the brain of the robot from its body. The instructions for movement of the body should be an overall result of the brain's decision making, not sprinkled around it.

The practical consequence of the sprinkling is that code using capabilities is difficult to test fully without the AppTester for the same reason it is difficult to test any other code doing I/O of any sort. In our case it's practically impossible to test code using capabilities without the AppTester, because creating instances of capabilities is not easy.

The original API doesn't make it obvious that the intent is to avoid this mixing and so people using it tend to try to mix it and find themselves in various levels of trouble.

In summary: Effects should be a return value from the update function, so that all code generating side effects makes it very explicit, making it part of its return type. That should in turn encourage more segregation of stateful but pure logic from effects.
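The shape this argues for can be shown with a toy example (illustrative types only, not the proposed Command API): when effects are an ordinary return value, a plain unit test can call update and assert on them directly.

```rust
// A toy model of effects-as-return-values
#[derive(Debug, PartialEq)]
enum Event {
    Increment,
    Reset,
}

#[derive(Debug, PartialEq)]
enum Effect {
    Render,
}

struct Model {
    count: i32,
}

// update mutates state and returns the requested effects as a value
fn update(event: Event, model: &mut Model) -> Vec<Effect> {
    match event {
        Event::Increment => model.count += 1,
        Event::Reset => model.count = 0,
    }
    vec![Effect::Render]
}

fn main() {
    let mut model = Model { count: 0 };

    // no runtime or test harness needed: effects are just the return value
    let effects = update(Event::Increment, &mut model);
    assert_eq!(model.count, 1);
    assert_eq!(effects, vec![Effect::Render]);

    let effects = update(Event::Reset, &mut model);
    assert_eq!(model.count, 0);
    assert_eq!(effects, vec![Effect::Render]);
}
```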

Access to async code is limited to capabilities and orchestration of effects is limited to async code

This, like most things, started with good intentions - people find async code to be "doing Rust on hard mode", and so for the most part Crux is designed to avoid async until you genuinely need it. Unfortunately this turned out to mean that you need async (or loads of extra events) as soon as any level of effect chaining is required.

For example - fetching an API auth token, followed by three HTTP API calls, followed by writing each response to disk (or local storage) is a sequence of purely side-effect operations. The recipe is known up front, and at no point are any decisions using the model being made.

With the original API the choices were either to introduce intermediate events between the steps of the effect state machine OR to implement the work in async Rust in a capability, which gets to spawn async tasks. The limitation there was that the tasks couldn't easily call other capabilities.

With the introduction of the Compose capability that last limitation was removed, allowing composition of effects across capabilities, even within the update function body, so long as the capabilities offer an async API. The result was calls to caps.compose.spawn ending up all over the place, and leading to the creation of a new kind of type - a capability orchestrator, for example an API client, built from a few capabilities (let's say HTTP, KV and a custom authentication) plus Compose. This kind of type is basically untestable on its own.

In summary: It should be possible to do simple orchestration of effects without async code and gradually move into async code when its expressivity becomes more convenient.

Testing code with side-effects requires the AppTester

As covered above, the code producing side effects requires a special runtime in order to run fully and be tested, and so any code using capabilities automatically makes testing impossible without the AppTester. And of course apps themselves are only testable with the AppTester harness.

In summary: It should be possible for tests to call the update function and inspect the side effects like any other return value.

Capabilities have various annoying limitations

  • Capabilities are "tethered" to the crux core, allowing them to submit effects and events and spawn tasks, but it means instances of capabilities have to be created by Crux and injected into the update function.
  • The first point means that capability authors don't have any control over their capability's main type, and therefore it's impossible for capabilities to have any internal state. For capabilities managing ongoing effects, like WebSocket connections for example, this is limiting. It can be worked around by the capability creating and returning values representing such ongoing connections, which communicate with a separate task spawned by the capability over an async channel. The task reads from the channel in a loop, and can therefore keep local state. It works, but it is far from obvious.
  • They have to offer two sets of APIs: one for event callback use, one for async use. This largely just adds boilerplate, but there is a clear pattern where the event callback calls simply use their async twins. That can be done by Crux and the boilerplate removed.
  • They don't compose cleanly. With the exception of Compose, capabilities are expected to emit their own assigned variant of Effect, preordained at the time the capability instance is created. This doesn't specifically stop one capability from asking another to emit an effect, but it is impossible to modify the second capability's requests in any way - block a specific one, retry it, combine it with a timeout - all the way up to completely virtualising them: resolving the effect using other data, like an in-memory cache, instead of passing it to the shell.

In summary: Capabilities should not be special and should simply be code which encapsulates

  • the definition of the core/shell communication protocol for the particular type of effect
  • creation of the requests and any additional orchestration needed for this communication

App composition is not very flexible

This is really just another instance of the limitations of capabilities and the imperative effect API. Any transformations involved in making an app a "child" of another app have to be done up front. Specifically, this involves mapping or wrapping effects and events of the child app onto the effect and event type of the parent, typically with a From implementation which can't make contextual decisions.

In summary: Instead, this mapping should be possible after the fact, and be case specific. It should be possible for apps to partially virtualise effects of the child apps, re-route or broadcast the events emitted by one child app to other children, etc.

How does this improve the situation?

So, with all (or at least most of) the limitations exposed, how is this proposed API better?

Enter Command

Like others before us, such as Elm and TCA, we end up with a solution where update returns effects. Specifically, it is expected to return an effect orchestration - the result of the update call - as an instance of a new type called Command.

In a way, Command is the current effect executor in a box, with some extras. Command is a lot like FuturesUnordered - it holds one or more futures which it runs in the order they get woken up until they all complete.

On top of this, Command provides to the futures a "context" - a type with an API which allows the futures within the command to submit effects and events. This is essentially identical to the current CapabilityContext. The difference is that the effects and events get collected in the Command and can be inspected, modified, forwarded or ignored.

In its most general shape, Command is a stream of Effects or Events created as an orchestration of futures. It also implements Stream, which means commands can wrap other commands and use them for any kind of orchestration.
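As a simplified, synchronous analogy - using Iterator in place of Stream, with invented names rather than the real Crux types - a wrapping command can inspect and transform the outputs of an inner one:

```rust
#[derive(Debug, PartialEq)]
enum CommandOutput {
    Effect(String),
    Event(String),
}

// The "inner command": a sequence of outputs.
fn inner() -> impl Iterator<Item = CommandOutput> {
    vec![
        CommandOutput::Effect("http".to_string()),
        CommandOutput::Event("Loaded".to_string()),
    ]
    .into_iter()
}

fn main() {
    // A "wrapper" forwards events unchanged, but tags every effect.
    let wrapped: Vec<CommandOutput> = inner()
        .map(|output| match output {
            CommandOutput::Effect(e) => CommandOutput::Effect(format!("traced:{e}")),
            other => other,
        })
        .collect();

    assert_eq!(wrapped[0], CommandOutput::Effect("traced:http".to_string()));
    assert_eq!(wrapped[1], CommandOutput::Event("Loaded".to_string()));
}
```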

Orchestration with or without async

Since Commands are values, we can work with them after they are created. It's pretty simple to take several commands and join them into one which runs the originals concurrently:

let command = Command::all([command_a, command_b, command_c]);

Commands provide basic constructors for the primitives:

  • A command that does nothing (a no-op): Command::done()
  • A command that emits an event: Command::event(Event::NextEvent)
  • A command that sends a notification to the shell: Command::notify_shell(my_notification)
  • A command that sends a request to the shell: Command::request_from_shell(my_request).then_send(Event::Response)
  • A command that sends a stream request to the shell: Command::stream_from_shell(my_request).then_send(Event::StreamEvent)

Notice that the latter two use chaining to register the event handler. This is because the other useful orchestration ability is chaining - creating a command from the result of a previous command. This requires a form of the builder pattern: since commands themselves are streams, not futures, a simple .then would require a fair bit of boilerplate.

Instead, to create a request followed by another request you can use the builder pattern as follows:

let command = Command::request_from_shell(a_request)
    .then_request(|response| Command::request_from_shell(make_another_request_from(response)))
    .then_send(Event::Done);

This works just the same with streams or combinations of requests and streams.

.then_* and Command::all are nice, but on occasion, you will need the full power of async. The equivalent of the above with async works like this:

let command = Command::new(|ctx| async move {
    let response = ctx.request_from_shell(a_request).await;
    let second_response = ctx.request_from_shell(make_another_request_from(response)).await;

    ctx.send_event(Event::Done(second_response))
});

Alternatively, you can create the futures from command builders:

let command = Command::new(|ctx| async move {
    let response = Command::request_from_shell(a_request)
        .into_future(ctx).await;
    let second_response = Command::request_from_shell(make_another_request_from(response))
        .into_future(ctx).await;

    ctx.send_event(Event::Done(second_response))
});

You might be wondering why that's useful, and the answer is that it allows capabilities to return the result of Command::request_from_shell for simple shell interactions and not worry about whether they are being used in a sync or async context. It would be ideal if the command builders themselves could implement Future or Stream, but unfortunately, to be useful to us, the futures need access to the context which will only be created once the Command itself is created.
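The constraint can be illustrated with a synchronous toy (all names here are invented, not the Crux API): the builder only describes a request, and becomes runnable work once it is handed the context that the command creates.

```rust
struct Context {
    effects: Vec<String>,
}

struct RequestBuilder {
    operation: String,
}

impl RequestBuilder {
    // The builder cannot do the work by itself - it needs a `Context`,
    // which it receives only when converted into a task.
    fn into_task(self) -> impl FnOnce(&mut Context) -> String {
        move |ctx| {
            ctx.effects.push(self.operation.clone());
            format!("{}-output", self.operation)
        }
    }
}

fn main() {
    let builder = RequestBuilder { operation: "fetch".to_string() };

    // Only the "command" owns a context, so only it can run the task.
    let mut ctx = Context { effects: Vec::new() };
    let output = builder.into_task()(&mut ctx);

    assert_eq!(output, "fetch-output");
    assert_eq!(ctx.effects, vec!["fetch".to_string()]);
}
```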

Testing without AppTester

Commands can be tested by inspecting the resulting effects and events. The testing API consists of essentially three functions: effects(), events() and is_done(). All three first run the Command's underlying tasks until they settle, and then return an iterator over the accumulated effects or events or, in the case of is_done, a bool indicating whether there is any more work to do.

An example test looks like this:

#[test]
fn request_effect_can_be_resolved() {
    let mut cmd = Command::request_from_shell(AnOperation)
        .then_send(Event::Completed);

    let effect = cmd.effects().next();
    assert!(effect.is_some());

    let Effect::AnEffect(mut request) = effect.unwrap() else {
        panic!("Expected AnEffect");
    };

    assert_eq!(request.operation, AnOperation);

    request
        .resolve(AnOperationOutput)
        .expect("Resolve should succeed");

    let event = cmd.events().next().unwrap();

    assert_eq!(event, Event::Completed(AnOperationOutput));

    assert!(cmd.is_done())
}

In apps, this will be very similar, except the cmd will be returned by the app's update function.

This API is mainly for testing, but is available to all consumers in all contexts, as it can easily become very useful for special cases when composing applications and virtualising commands in various ways.

Capabilities are no longer special

With the Command, capabilities become command creators and transformers. This makes them no different from user code in a lot of ways.

The really basic ones can just be a set of functions. Any more complicated ones can now have state, call other capabilities, transform the commands produced by them, etc.
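For illustration - with invented names, not a real Crux capability - with effects as values, a minimal capability could be a plain function that builds a description of the work to be done:

```rust
// Effects as values: a "capability" is just a function producing an
// effect description, no injected runtime object required.
#[derive(Debug, PartialEq)]
enum Effect {
    Http { method: String, url: String },
}

// A plain function standing in for a basic HTTP capability.
fn get(url: &str) -> Effect {
    Effect::Http {
        method: "GET".to_string(),
        url: url.to_string(),
    }
}

fn main() {
    let effect = get("https://example.com");
    assert_eq!(
        effect,
        Effect::Http {
            method: "GET".to_string(),
            url: "https://example.com".to_string(),
        }
    );
}
```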

The expectation is that the majority of low-level capability APIs will return a CommandBuilder, so that they can be used from both event callback context and async context equally easily.

Better app composition

Instead of transforming the app's Capabilities type up front in order to wrap one app in another, when composing apps the resulting commands get transformed. More specifically, this involves two map calls:

let original: Command<Effect, Event> = capability_call();

// Two independent examples - each consumes the original command:
let cmd: Command<NewEffect, Event> = original.map_effect(|effect| effect.into()); // provided there's a From impl
let cmd: Command<Effect, NewEvent> = original.map_event(|event| Event::Child(event));

The basic mapping is pretty straightforward, but can become as complex as required. For example, events produced by a child app can be consumed and re-routed, duplicated and broadcast across multiple children, etc. The mapping can also be done by fully wrapping the original Command in another using async:

let mut original: Command<Effect, Event> = capability_call();

let cmd = Command::new(|ctx| async move {
    while let Some(output) = original.next().await {
        match output {
            CommandOutput::Effect(effect) => {
                // ... do things using `ctx`
            }
            CommandOutput::Event(event) => {
                // ... do things using `ctx`
            }
        }
    }
});

Other benefits and features

A grab bag of other things:

  • Spawn now returns a JoinHandle which can be .awaited
  • Tasks can be aborted by calling .abort() on a JoinHandle
  • Whole commands can be aborted using an AbortHandle returned by .abort_handle(). The handle can be stored in the model and used later.
  • Whole commands can be "hosted" on a pair of channel senders, returning a future compatible with the existing executor, enabling a reasonably smooth migration path
  • This API should in theory enable declarative effect middlewares like caching, retries, throttling, timeouts, etc...
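As a std-only analogy of the abort mechanism (the real crux API differs - these names are invented), a stored handle can share a flag with the running work:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// A handle that can be stored (e.g. in the model) and used later to abort.
struct AbortHandle(Arc<AtomicBool>);

impl AbortHandle {
    fn abort(&self) {
        self.0.store(true, Ordering::Relaxed);
    }
}

fn main() {
    let aborted = Arc::new(AtomicBool::new(false));
    let handle = AbortHandle(Arc::clone(&aborted));

    // Later, possibly from a different update call:
    handle.abort();

    // The running task observes the flag and stops.
    assert!(aborted.load(Ordering::Relaxed));
}
```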

Limitations and drawbacks

I'm sure we'll find some. :)

For one, the return type signature for capabilities is not great, for example: RequestBuilder<Effect, Event, impl Future<Output = AnOperationOutput>>.

One major perceived limitation which still remains is that the model is not accessible from the effect code. This is by design, to avoid data races from concurrent access to the model. It should hopefully be a bit more obvious now that the effect code is returned from the update function wrapped in a Command.

Open questions and other considerations

  • The command API expects the Effect type to implement From<Request<Op>> for any capability Operations it is used with. This is derived by the Effect macro, and is expected to be supported by a derive macro even in the future state.
  • We have not fully thought about back-pressure in the Commands (for events, effects and spawned tasks) even to the level of "is any needed?"
  • We will explore ways to make the code that interleaves effects and state updates more "linear" - require fewer intermediate events - separately at a later stage

Type generation migration

This note proposes a roadmap for migrating Crux and the apps to a new type generation system, based on rustdoc JSON.

Why?

The current system has limitations, is quite fiddly to set up, and the generated code is not great.

The new system removes a lot of the limitations, but to get the most benefit, we will want to generate code which is fundamentally incompatible with the original output. Given this output is used by existing shell implementations, we need to provide a smooth upgrade path.

At the same time, due to the limitations, we know of Crux users who opted out of the typegen and use something like 1Password's Typeshare. They may not want to continue doing so, and we should enable a migration path from this to the new typegen as well.

To coordinate this transition, this RFC aims to provide a roadmap we can follow, which the majority of people are happy with and have had a chance to comment on.

Current system

The current system is based on Serde. It's a combination of two related crates: serde-reflection and serde-generate. Serde-reflection uses the derived Deserialize implementations on types to discover their shape and captures it as data. This data is used by serde-generate to generate equivalent types in TypeScript, Swift, Kotlin and some other languages, alongside an implementation of serialization on the "foreign" side.

This system has limitations:

  • Only supports types which are Deserialize (this is not a big problem in practice).
  • Only supports types expressed using the Serde data model. This means it fundamentally can't capture certain kinds of metadata, e.g. use of generic types and their trait bounds, any additional decorations on the foreign side (implemented protocols in Swift, etc.), any other information not provided to the deserializer.
  • We have no control over the generated code. This is especially problematic in TypeScript and the representation of enums.

Why not use Typeshare?

Typeshare is great, but has its own set of limitations. The key one is that it analyses Rust source code directly: it only operates on types annotated with the #[typeshare] proc macro and doesn't see into dependency crates. It also doesn't understand macro generated code, which is problematic for discovering relevant types when some of the code is generated by derive macros (especially the Effect type cluster).

Rustdoc based system

The system we've been working on is based on rustdoc, specifically its ability to output the type metadata as JSON. This actually covers all the bases:

  • It sees all the types in all the used crates (including core and std)
  • It can see macro generated code
  • It has all the relevant metadata - generic arguments, trait bounds, implementors of traits, etc.

It is only a data set, however. We need to do the work of finding the types, understanding them, and generating the foreign equivalents.

This broadly happens in three phases.

  1. Discover entry points – this is a Crux specific part, which can find implementations of crux_core::App and find their associated types, to use as entry points to start the type discovery
  2. Walk the type graph from entry points down to primitives – this is the key component of the work, which translates the raw type information into an "intermediate representation" (IR), which can be used in step 3
  3. Generation – converting the IR into equivalent types in the selected foreign language
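To make steps 2 and 3 concrete, here is a deliberately tiny, hypothetical sketch: an intermediate representation of discovered types and a backend rendering it as TypeScript. The IR shape and all names are invented for illustration; the real design is still in progress.

```rust
// A minimal, invented IR for discovered types.
enum Ir {
    // A type that maps directly to a foreign primitive, e.g. "number".
    Primitive(&'static str),
    // A struct with named, typed fields.
    Struct { name: String, fields: Vec<(String, Ir)> },
}

// Phase 3: convert the IR into an equivalent foreign type.
fn to_typescript(ir: &Ir) -> String {
    match ir {
        Ir::Primitive(name) => name.to_string(),
        Ir::Struct { name, fields } => {
            let body: String = fields
                .iter()
                .map(|(field, ty)| format!("  {}: {};\n", field, to_typescript(ty)))
                .collect();
            format!("export interface {} {{\n{}}}", name, body)
        }
    }
}

fn main() {
    // An IR instance as it might come out of the type graph walk.
    let view_model = Ir::Struct {
        name: "ViewModel".to_string(),
        fields: vec![
            ("count".to_string(), Ir::Primitive("number")),
            ("title".to_string(), Ir::Primitive("string")),
        ],
    };

    let ts = to_typescript(&view_model);
    assert!(ts.starts_with("export interface ViewModel {"));
    assert!(ts.contains("count: number;"));
    println!("{ts}");
}
```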

Key benefits and features to enable

Because we can discover the apps and find entry points from there, we no longer need the separate shared_types crate, in which developers had to register extra types and direct the typegen.

We can support a wider feature set:

  • Better generated code, including "decorations" (e.g. Swift protocols)
  • Generic types support (in languages which support them)
  • Additional foreign code extensions:
    • Different serialisation format
    • Data based interior mutability support (to support fine-grained updates of the view model in the future)
    • Custom code extensions

The goal of this work is to make the type generation completely transparent to the developers most of the time. It should Just Work ™️.

Reducing boilerplate

The other change we'll want to make is how the typegen is executed. Instead of having a separate crate with a build.rs file (which was necessary in order to see the code of the core crate while running build.rs), the new type generation will come as a separate CLI tool, produced as an additional target on the core crate, similar to the uniffi-bindgen tool today. In fact as part of this, we will try to subsume the uniffi-bindgen tool into the same CLI tool, and move from writing a .udl file for the FFI interface to using annotations, likely Crux ones, so that we can take control of how we generate FFI for different platforms (UniFFI, WebAssembly and others).

Ultimately, creating the full FFI interface for the Crux core should become a single build command.

Ways to migrate which we want to support

There are a few migrations involved in this transition:

  1. Migrating the core from original type generation to the new one
  2. Migrating the shells from original generated types to the new ones
  3. Migrating from another type generation system to Crux typegen

We want to support all of them independently, and, crucially, gradually, type by type. For the second and third migration above, it MUST be possible to mix and match approaches at all times.

From old typegen

The initial implementation of the new type generation will aim for parity with the original, while removing all the configuration boilerplate in the process.

It is likely that this stage will remain experimental for some time, because the type discovery mechanism works quite differently to the Serde based one. We will invite community feedback on how well the system is working on their existing code and collect examples where it doesn't work too well.

The second stage of the migration, which is likely to happen concurrently, is the implementation of various improvements to the generated code. To facilitate this being opt-in by type, we're likely to use an annotation driven optionality, along the lines of (the names and specific format may of course change):

// Generate legacy output at definition site
#[cruxgen(legacy = true)]
struct MyType(bool);

// Enable future output at reference site
struct SomeType {
    other: cruxgen! { future = true, OtherType },
}

This should allow apps to migrate slowly, one type at a time, making necessary changes to the consuming code on the shell side.

From a different typegen (e.g. Typeshare)

The strategy to support migrating from a different type generation system is similar, but by using a full opt-out of the Crux type generation:

// Skip at definition site
#[cruxgen(skip)]
struct MyType(bool);

// Skip at reference site
struct SomeType {
    other: cruxgen! { skip = true, OtherType },
}

In order to cover discrepancies in feature set, we will also do our best to support custom code generation extensions quite early on, but the strategy for specifying them is not yet very clear.

Migration roadmap

The migration needs somewhat careful orchestration, so that big step changes are not required for Crux users to adopt. It should go something like this:

Phase 1 - develop the frontend and IR

In this phase we still use serde-generate as a backend, and focus on getting the frontend - the type discovery and the developer interface.

1 - serde-generate feature parity and start validating

Gets us to a working, reliable type generation front end, able to discover all the relevant types and capture the metadata. This is likely to require a period of testing with real-world codebases.

2 - enable annotation controlled feature selection

Support annotations to skip types, ignore fields and similar basic things which previously relied on serde annotations. Both ways should work for the time being, but with future mode enabled, the serde annotations should start being ignored. This is the start of the two modes diverging.

If possible, the annotations should be allowed on both definition sites and reference sites. We need to think about how conflict resolution works in this case, if multiple sites are annotated but with different directions.

Phase 2 - replace the backend, stabilise the IR

In this phase, we replace the serde-generate backend and gradually change what the output looks like. At the same time we gain features.

1 – take over generation of the code to parity

Replace or vendor in serde-generate in order to support outputting all the original code in supported languages. At this point, we can retire the serde implementation fully, so long as we're confident in the replacement.

We should also introduce the legacy switch which forces backwards compatibility.

2 – change future output to be more idiomatic

Make changes to the generated code to better represent the types idiomatically to the language.

3 - stabilise the IR

To enable extension points on the backend side, we'll need to stabilise the intermediate representation of the discovered types and their relationships.

4 - enable custom extensions

Allow users to add extensions to the generated code, given the IR. This is almost like a derive macro but for the foreign language(s). The exact mechanism is to be decided, but it should be possible to make them language specific (e.g. Swift-only).

5 - support optionality, especially in serialisation

The goal of the output is to be idiomatic to the target codebase, which will likely require some optionality (e.g. which serialisation library to use in Kotlin). We should do our best to pick sensible defaults, and delegate as much as possible to custom extensions, otherwise we risk an explosion in the features we need to support.

One specific optionality we should enable support for is the serialisation format over the FFI boundary.

Phase 3 - enable as default

At some point, when the base future output stops evolving too much, we can make it the default (when neither future nor legacy is specified).

Further down the line, we retire the legacy support as the final step.