
Preparing to add the Shell

So far, we've built a basic app in relatively basic Rust. If we now want to expose it to a Shell written in a different language, we'll have to set up the necessary plumbing, starting with the foreign function interface.

The core FFI bindings

From the work so far, you may have noticed the app has a pretty limited API, basically the update and view methods. There's one more for resolving effects (called resolve), but that really is it. We need to make those three methods available to the Shell, but once that's done, we don't have to touch it again.

Let's briefly talk about what we want from this interface. Ideally, in our language of choice we would:

  • have a native equivalent of the update, view and resolve functions
  • have an equivalent for our Event, Effect and ViewModel types
  • not have to worry about what black magic is happening behind the scenes to make that work

Crux provides code generation support for all of the above.

Note

It isn't in any way actual black magic. What happens is that Crux exposes FFI calls that take and return values serialized with bincode (by default), and generates "foreign" (Swift, Kotlin, ...) types that handle the foreign side of the serialization.

Yes, this introduces some extra work at the FFI boundary, but generally, each user interaction results in a relatively small number of round-trips (almost certainly fewer than ten), and our benchmarks say we can make thousands of them per second. The real throughput depends on how much data gets serialized, but this only becomes a problem with really large messages, and advanced workarounds exist. You most likely don't need to worry about it, at least not for now.
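
To make the serialization idea concrete, here is a toy, std-only sketch. The Event variants and the byte layout are invented for illustration, and the hand-rolled encoding stands in for what bincode and the generated foreign types do for you; the point is that both sides of the FFI boundary must agree on the wire format, which is exactly what the generated types guarantee.

```rust
// A toy Event and a hand-rolled wire format standing in for bincode.
#[derive(Debug, PartialEq)]
enum Event {
    Increment,
    Decrement,
    Set(i32),
}

// "Core side": turn an Event into bytes for the FFI boundary.
fn encode(event: &Event) -> Vec<u8> {
    match event {
        Event::Increment => vec![0],
        Event::Decrement => vec![1],
        Event::Set(n) => {
            let mut bytes = vec![2];
            bytes.extend_from_slice(&n.to_le_bytes());
            bytes
        }
    }
}

// "Shell side": turn bytes back into an Event, rejecting malformed input.
fn decode(data: &[u8]) -> Option<Event> {
    match data {
        [0] => Some(Event::Increment),
        [1] => Some(Event::Decrement),
        [2, rest @ ..] => {
            let n = i32::from_le_bytes(rest.try_into().ok()?);
            Some(Event::Set(n))
        }
        _ => None,
    }
}

fn main() {
    let event = Event::Set(42);
    let wire = encode(&event);
    // The shell sends bytes; the core decodes them back into an Event.
    assert_eq!(decode(&wire), Some(Event::Set(42)));
    println!("round-trip ok");
}
```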

Preparing the core

We will prepare the core for both kinds of supported shells - native ones and WebAssembly ones.

To help with the native setup, Crux uses Mozilla's UniFFI to generate the bindings. For WebAssembly, it uses wasm-bindgen.

First, let's update our Cargo.toml:

# shared/Cargo.toml
[package]
name = "shared"
version = "0.1.0"
authors.workspace = true
edition.workspace = true
rust-version.workspace = true
repository.workspace = true
license.workspace = true
keywords.workspace = true

[lib]
crate-type = ["cdylib", "lib", "staticlib"]

[[bin]]
name = "codegen"
required-features = ["codegen"]

[features]
uniffi = ["dep:uniffi"]
wasm_bindgen = ["dep:wasm-bindgen", "getrandom/wasm_js"]
codegen = [
    "crux_core/cli",
    "dep:clap",
    "dep:log",
    "dep:pretty_env_logger",
    "uniffi",
]
facet_typegen = ["crux_core/facet_typegen"]

[dependencies]
crux_core.workspace = true
serde = { workspace = true, features = ["derive"] }
facet = "=0.31"

# optional dependencies
clap = { version = "4.5.60", optional = true, features = ["derive"] }
getrandom = { version = "=0.3", optional = true, default-features = false }
# js-sys = { version = "0.3.83", optional = true }
log = { version = "0.4.29", optional = true }
pretty_env_logger = { version = "0.5.0", optional = true }
uniffi = { version = "=0.29.4", optional = true }
wasm-bindgen = { version = "0.2.114", optional = true }

A lot has changed! The key things we added are:

  1. a bin target called codegen, which is how we're going to run all the code generation
  2. feature flags to optionally enable uniffi and wasm_bindgen, grouped under a codegen feature together with the optional dependencies that code generation needs
  3. dependencies we need for the code generation
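
The optional dependencies work because Cargo only compiles feature-gated code when the corresponding flag is enabled. A minimal illustration of the mechanism (the function here is made up; only the `cfg` attribute is the real feature):

```rust
// Code behind a cfg flag is compiled out entirely when the feature is off,
// so its dependencies are never needed in a default build.
#[cfg(feature = "uniffi")]
fn describe_bindings() -> &'static str {
    "uniffi bindings enabled"
}

#[cfg(not(feature = "uniffi"))]
fn describe_bindings() -> &'static str {
    "uniffi bindings disabled"
}

fn main() {
    // Built without `--features uniffi`, this prints the disabled branch.
    println!("{}", describe_bindings());
}
```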

And since we've declared the codegen target, we need to add the code for it.

// shared/src/bin/codegen.rs
use std::path::PathBuf;

use clap::{Parser, ValueEnum};
use crux_core::{
    cli::{BindgenArgsBuilder, bindgen},
    type_generation::facet::{Config, TypeRegistry},
};
use log::info;
use uniffi::deps::anyhow::Result;

use shared::Counter;

#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, ValueEnum)]
enum Language {
    Swift,
    Kotlin,
    Typescript,
}

#[derive(Parser)]
#[command(version, about, long_about = None)]
struct Args {
    #[arg(short, long, value_enum)]
    language: Language,
    #[arg(short, long)]
    output_dir: PathBuf,
}

fn main() -> Result<()> {
    pretty_env_logger::init();
    let args = Args::parse();

    let typegen_app = TypeRegistry::new().register_app::<Counter>()?.build()?;

    let name = match args.language {
        Language::Swift => "App",
        Language::Kotlin => "com.crux.examples.simplecounter",
        Language::Typescript => "app",
    };
    let config = Config::builder(name, &args.output_dir)
        .add_extensions()
        .add_runtimes()
        .build();

    match args.language {
        Language::Swift => {
            info!("Typegen for Swift");
            typegen_app.swift(&config)?;
        }
        Language::Kotlin => {
            info!("Typegen for Kotlin");
            typegen_app.kotlin(&config)?;

            info!("Bindgen for Kotlin");
            let bindgen_args = BindgenArgsBuilder::default()
                .crate_name(env!("CARGO_PKG_NAME").to_string())
                .kotlin(&args.output_dir)
                .build()?;
            bindgen(&bindgen_args)?;
        }
        Language::Typescript => {
            info!("Typegen for TypeScript");
            typegen_app.typescript(&config)?;
        }
    }

    Ok(())
}

This is essentially boilerplate for a CLI we can use to run the binding generation and type generation. But it's also a place where you can customize how they work if you have some more advanced needs.

It uses the facet-based type generation from crux_core to scan the app for types which will cross the FFI boundary, collects them, and then, depending on the target language, generates the code and places it in the specified output_dir directory.

We will call this CLI from the shell projects shortly.

Codegen, typegen, bindgen, which is it?

You'll hear these terms thrown around here and there in the docs, so it's worth clarifying what we mean by each.

bindgen – "bindings generation" – provides APIs in the foreign language to call the core's Rust FFI APIs. For most platforms we use UniFFI, except for WebAssembly, where we use wasm_bindgen

typegen – "type generation" – The core's FFI interface operates on bytes, but both Rust and the languages we're targeting are generally strongly typed. To facilitate the serialization / deserialization, we generate type definitions reflecting the core's Rust types in the foreign language (Swift, Kotlin, TypeScript, ...), all of which serialize consistently.

codegen – you guessed it, "code generation" – is the two things above combined.

Bindings code

Now we need to add the Rust side of the bindings into our code. Update your lib.rs to look like this:

// shared/src/lib.rs
mod app;
pub mod ffi;

pub use app::*;
pub use crux_core::Core;

#[cfg(feature = "uniffi")]
const _: () = assert!(
    uniffi::check_compatible_version("0.29.4"),
    "please use uniffi v0.29.4"
);
#[cfg(feature = "uniffi")]
uniffi::setup_scaffolding!();

This code uses our feature flags to conditionally initialize the UniFFI bindings and check the version in use.
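
The version check relies on Rust's const assertions, which turn what would otherwise be a runtime check into a build failure. A simplified, standalone illustration of the pattern (the version numbers and function are invented; this is not the real uniffi API):

```rust
// A const fn can be evaluated at compile time.
const fn versions_compatible(expected: u32, actual: u32) -> bool {
    expected == actual
}

// Combined with a const assertion, a mismatch fails the build,
// not the running program.
const _: () = assert!(
    versions_compatible(29, 29),
    "please use a compatible version"
);

fn main() {
    println!("version check passed at compile time");
}
```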

More importantly, it introduces a new ffi.rs module. Let's take a closer look:

// shared/src/ffi.rs
use crux_core::{
    Core,
    bridge::{Bridge, EffectId},
};

use crate::Counter;

/// The main interface used by the shell
#[cfg_attr(feature = "uniffi", derive(uniffi::Object))]
#[cfg_attr(feature = "wasm_bindgen", wasm_bindgen::prelude::wasm_bindgen)]
pub struct CoreFFI {
    core: Bridge<Counter>,
}

impl Default for CoreFFI {
    fn default() -> Self {
        Self::new()
    }
}

#[cfg_attr(feature = "uniffi", uniffi::export)]
#[cfg_attr(feature = "wasm_bindgen", wasm_bindgen::prelude::wasm_bindgen)]
impl CoreFFI {
    #[cfg_attr(feature = "uniffi", uniffi::constructor)]
    #[cfg_attr(
        feature = "wasm_bindgen",
        wasm_bindgen::prelude::wasm_bindgen(constructor)
    )]
    #[must_use]
    pub fn new() -> Self {
        Self {
            core: Bridge::new(Core::new()),
        }
    }

    /// Send an event to the app and return the effects.
    /// # Panics
    /// If the event cannot be deserialized.
    /// In production you should handle the error properly.
    #[must_use]
    pub fn update(&self, data: &[u8]) -> Vec<u8> {
        let mut effects = Vec::new();
        match self.core.update(data, &mut effects) {
            Ok(()) => effects,
            Err(e) => panic!("{e}"),
        }
    }

    /// Resolve an effect and return the effects.
    /// # Panics
    /// If the `data` cannot be deserialized into an effect or the `effect_id` is invalid.
    /// In production you should handle the error properly.
    #[must_use]
    pub fn resolve(&self, id: u32, data: &[u8]) -> Vec<u8> {
        let mut effects = Vec::new();
        match self.core.resolve(EffectId(id), data, &mut effects) {
            Ok(()) => effects,
            Err(e) => panic!("{e}"),
        }
    }

    /// Get the current `ViewModel`.
    /// # Panics
    /// If the view cannot be serialized.
    /// In production you should handle the error properly.
    #[must_use]
    pub fn view(&self) -> Vec<u8> {
        let mut view_model = Vec::new();
        match self.core.view(&mut view_model) {
            Ok(()) => view_model,
            Err(e) => panic!("{e}"),
        }
    }
}

Broad strokes: we define a CoreFFI type for the core, which holds a Bridge wrapping our Counter, and provide implementations of the three API methods, each taking and returning byte buffers.

The translation between Rust types and the byte buffers is the job of the bridge (it also holds the effect requests inside the core under an id, which can be sent out to the Shell and used to resolve the effect, but more on that later).
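
To illustrate that bookkeeping, here is a deliberately simplified, std-only sketch. Everything in it is invented for illustration (the real Bridge works with serialized bytes and typed effects): effects handed to the shell are parked under an id, and resolving that id completes them.

```rust
use std::collections::HashMap;

// A toy core: an "increment" event asks the shell to fetch an amount,
// and the model is only updated once the shell resolves that request.
struct ToyBridge {
    model: i32,
    next_id: u32,
    pending: HashMap<u32, &'static str>, // id -> what the effect wants
}

impl ToyBridge {
    fn new() -> Self {
        Self { model: 0, next_id: 0, pending: HashMap::new() }
    }

    // Like `update`: take an "event", return ids of requested effects.
    fn update(&mut self, event: &str) -> Vec<u32> {
        match event {
            "increment" => {
                let id = self.next_id;
                self.next_id += 1;
                // Park the request under an id until the shell answers.
                self.pending.insert(id, "fetch_amount");
                vec![id]
            }
            _ => vec![],
        }
    }

    // Like `resolve`: the shell answers a pending effect by id.
    fn resolve(&mut self, id: u32, amount: i32) -> Result<(), String> {
        self.pending
            .remove(&id)
            .ok_or_else(|| format!("unknown effect id {id}"))?;
        self.model += amount;
        Ok(())
    }

    // Like `view`: expose the current state.
    fn view(&self) -> i32 {
        self.model
    }
}

fn main() {
    let mut bridge = ToyBridge::new();
    let effects = bridge.update("increment");
    // The shell performs the side effect, then resolves it by id.
    bridge.resolve(effects[0], 5).unwrap();
    println!("count = {}", bridge.view());
}
```

Resolving the same id twice fails, which is the point of keeping the pending requests keyed by id.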

Notice the Shell is in charge of creating the instance of this type, so in theory your Shell can have several instances of the app if it wants to.

There are many attribute macros annotating the FFI type for uniffi and wasm_bindgen, which generate the actual code making it available over FFI. We recommend the respective documentation if you're interested in the details of how this works. The notable part is that both libraries have a level of support for various basic and structured data types, but we don't use it; instead, we serialize the data with Serde and generate types with facet_generate to keep the support consistent across platforms.

It's not essential for you to understand the details of the above code now. You won't need to change it unless you're doing something fairly advanced, by which time you'll understand it.

Platform native part

Okay, with that plumbing in place, the Core part of adding a shell is complete. It's not a one-liner, but you only need to set this up once and will most likely never touch it again; still, having the ability to, should you need it, is important.

Now we can proceed to the actual shell for your platform of choice: