Create a Browser-Based Frontend UI

The last piece of our application is the UI, which will be based on the seed framework. We'll run this in the browser by cross-compiling our rust code to WebAssembly (wasm). Please note that if you just want to play with seed, you should check out the quickstart repo that's linked from their documentation. We're going to set things up from scratch below so that you can get a feel for how everything is put together.

Cargo Workspace

The frontend build works by generating a library that gets compiled to wasm. Cargo only allows one library per crate, and we already have a library. That seems like a problem, right?

No problem -- cargo supports "workspaces", where we can build multiple crates. We will build our backend (db + REST) into one library crate and our frontend into a separate crate. Any shared structs that we define will live in the root crate.

First, create a new crate as a subdirectory under our existing project directory:

$ cargo new --lib frontend

Then we need to move our existing code into a new crate:

$ cargo new --lib backend
$ mv src/lib.rs src/db src/bin/ backend/src/

And fix up crate references in backend/src/bin/backend.rs and backend/src/bin/todo.rs:

--- backend/src/bin/backend.rs
+++ backend/src/bin/backend.rs
@@ -9,8 +9,8 @@ extern crate serde;

 use rocket_contrib::json::Json;

-use mytodo::db::{query_task, establish_connection};
-use mytodo::db::models::Task;
+use backend::db::{query_task, establish_connection};
+use backend::db::models::Task;

 #[derive(Serialize)]
 struct JsonApiResponse {
--- backend/src/bin/todo.rs
+++ backend/src/bin/todo.rs
@@ -1,5 +1,5 @@
 use std::env;
-use mytodo::db::{create_task, query_task, establish_connection};
+use backend::db::{create_task, query_task, establish_connection};

 fn help() {
     println!("subcommands:");

We can build the backend and frontend together by adding them as workspace members. We will modify the root Cargo.toml to look like this:

[package]
name = "mytodo"
version = "0.1.0"
authors = ["Your Name <[email protected]>"]
edition = "2018"

[workspace]
members = ["backend", "frontend"]

Notice that we've dropped the dependencies -- there is no longer any need for them in our root crate, but we need to add them to our backend/Cargo.toml:

[package]
name = "backend"
version = "0.1.0"
authors = ["Your Name <[email protected]>"]
edition = "2018"

[dependencies]
diesel = { version = "1.0.0", features = ["sqlite"] }
rocket = "0.4.2"
serde = { version = "1.0", features = ["derive"] }

[dependencies.rocket_contrib]
version = "0.4.2"
default-features = false
features = ["json"]

Now you can do cargo build --all to build all of the workspace members, or specify just one with the -p flag. For example, cargo build -p frontend builds the frontend crate -- although it's empty at the moment. (If cargo complains that the root package has no targets now that src/lib.rs has moved, just create an empty src/lib.rs as a placeholder -- we'll be putting shared code there later in this chapter.)

Install wasm toolchain

I mentioned above that we're going to be cross-compiling our code to wasm32. In order to do that we need to add the wasm32 target to our toolchain:

$ rustup target add wasm32-unknown-unknown

We also need to set up our frontend crate to build as a cdylib (the crate type wasm-pack needs) and add mytodo, seed, wasm-bindgen, and web-sys as dependencies. Modify frontend/Cargo.toml:

[package]
name = "frontend"
version = "0.1.0"
authors = ["Your Name <[email protected]>"]
edition = "2018"

[lib]
crate-type = ["cdylib"]

[dependencies]
mytodo = { path = ".." }
seed = "^0.4.0"
wasm-bindgen = "^0.2.50"
web-sys = "^0.3.27"

We need to install wasm-pack, which requires some host-system support packages in order for the installation to work:

$ sudo apt install libssl-dev pkg-config
$ cargo install wasm-pack

We can build the wasm package by doing:

$ cd frontend
$ wasm-pack build --target web --out-name package --dev

This will leave output in frontend/pkg/. Having this extra command to run is kind of tedious, especially with the two workspace members. Let's automate that out of our way.

cargo make

cargo make is a tool we can use to automate our build tasks. If you've ever written a Makefile you have an idea of what cargo make can do -- though cargo make is quite a bit more verbose. On the bright side, its syntax is much easier to fathom.

To install it, run cargo install cargo-make.

To configure it, create a new file Makefile.toml in the root of our project directory:

[env]
CARGO_MAKE_EXTEND_WORKSPACE_MAKEFILE = "true"

[tasks.default]
clear = true
dependencies = ["build"]

This file just defines one task: the default task is what gets run when you just say cargo make. (You can optionally specify a task name to run, like cargo make build.) We've added clear = true to this task because the tool has a builtin default task that runs a bunch of other tasks -- those are convenient, but we don't want to get distracted by them right now. Our default task depends on the build task.

The build task is a built-in task that runs cargo build --all-features, which is perfect for what we need so we don't need to override it.

cargo make knows about workspaces, and will run each task in each member crate. But so far all we've got is what we had before -- we don't have it running wasm-pack yet. That's where the env variable set at the top of the file comes in. It means that cargo make will look in the member directories for Makefile.toml files, and any tasks defined there will override the tasks in the workspace-level Makefile.toml.

So let's override default in frontend/Makefile.toml to do what we need:

[tasks.default]
dependencies = ["create_wasm"]

[tasks.create_wasm]
command = "wasm-pack"
args = ["build", "--target", "web", "--out-name", "package", "--dev"]
dependencies = ["build"]

Here the default task depends on create_wasm which runs wasm-pack as mentioned above.

With all that in place, now just running cargo make in the root will give us:

  • backend library and binaries under target/debug
  • browser-loadable WebAssembly package in frontend/pkg/package_bg.wasm

With all that build infrastructure out of the way, we can move on to coding the UI.

Behind the Scenes

The way our frontend app is going to work:

  • we write some rust
  • wasm-pack generates some files
    • the .wasm file is a WebAssembly binary
    • the .js file is a javascript loader that will pull in the wasm, and it acts as the gatekeeper between javascript and rust
    • package.json has some metadata in case we want to integrate with npm and friends
  • we write an html stub file, that loads the .js, which loads the .wasm
  • our app attaches itself to a DOM element in the html file
  • the browser shows our app's UI elements
  • our users rejoice

Create a Stub App

Create frontend/index.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>MyTODO</title>
  </head>
  <body>
    <div id="app"></div>
    <script type="module">
      // https://rustwasm.github.io/docs/wasm-bindgen/examples/without-a-bundler.html
      import init from '/pkg/package.js';
      init('/pkg/package_bg.wasm');
    </script>
  </body>
</html>

As you can see from the comment, this is based on a wasm-bindgen example that doesn't use a bundler (like webpack). That keeps this example simple -- in a larger app we may have other web assets that we want to bundle together with our application.

All our html needs to do is provide a div with an id of app and the script snippet that loads the package. Then the loader will take over and inject our elements into the DOM.

Then we just need to add some code in frontend/src/lib.rs:

#[macro_use]
extern crate seed;
use seed::prelude::*;

#[derive(Clone, Debug)]
enum Direction {
    Coming,
    Going,
}

struct Model {
    direction: Direction,
}

#[derive(Clone, Debug)]
enum Msg {
    Click,
}

fn update(msg: Msg, model: &mut Model, _orders: &mut impl Orders<Msg>) {
    match msg {
        Msg::Click => {
            model.direction = match model.direction {
                Direction::Coming => Direction::Going,
                Direction::Going => Direction::Coming,
            }
        }
    }
}

fn view(model: &Model) -> impl View<Msg> {
    let greeting = match model.direction {
        Direction::Coming => "Hello, World!",
        Direction::Going => "¡Hasta la vista!",
    };
    h1![
        class! {"heading"},
        style!["height" => "100vh",
               "width" => "100vw",
        ],
        { greeting },
        simple_ev(Ev::Click, Msg::Click),
    ]
}

fn init(_url: Url, _orders: &mut impl Orders<Msg>) -> Model {
    Model {
        direction: Direction::Coming,
    }
}

#[wasm_bindgen(start)]
pub fn render() {
    seed::App::build(init, update, view).finish().run();
}

Let's walk through this starting from the bottom. Everything kicks off with our render function because we added the start attribute to the #[wasm_bindgen] macro. This sets things up so that our function is called as soon as the module is loaded.

This function creates a seed app, passing in our init, update, and view functions, and then launches the app.

Our init function gets called first, and is responsible for potentially doing something with the URL path that the app was started from (we don't handle that here -- we won't handle any routing at all in this guide). It then needs to create and return a Model that will store state for the app. Our app just has two silly states so that we can see how basic event handling works.

Moving up a block, our view function takes the model and returns a DOM node. Here we're simply matching on coming or going and setting an appropriate greeting in our <h1>. Seed provides macros for all valid HTML5 tags, and as you can see in the example it also has macros for things like class and style.

Also you can see here how we've attached a simple event handler: whenever a click occurs on our h1 (which is the entire size of the viewport thanks to the styling) it will send a click message to our update function.

In our update function, we simply dispatch on the message type (there's only one for this tiny example) and then toggle the model's direction. Our view will get called again and the DOM will be re-rendered (we could call orders.skip() to prevent this), and we will see the greeting toggle.
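
For example, if we wanted a click to update the model without triggering a re-render, we could rename the _orders parameter to orders and call skip() in that arm -- just a sketch, since for our toggle we do want the view to refresh:

fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
    match msg {
        Msg::Click => {
            model.direction = match model.direction {
                Direction::Coming => Direction::Going,
                Direction::Going => Direction::Coming,
            };
            // Tell seed not to call view() after this update.
            orders.skip();
        }
    }
}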

And now that we've gone over the basics, we can move on to fetching and displaying some tasks so that we can be more productive!

Fetch Tasks from Backend

Seed provides some useful tools for fetching data, so the first thing we need to do is import those from the seed namespace in frontend/src/lib.rs:

use seed::{fetch, Request};

Then, since the first thing we want to do is load the tasks from the backend, we'll change our init function (and add a new function):

fn fetch_drills() -> impl Future<Item = Msg, Error = Msg> {
    Request::new("http://localhost:8000/tasks/").fetch_json_data(Msg::FetchedTasks)
}

fn init(_url: Url, orders: &mut impl Orders<Msg>) -> Model {
    orders.perform_cmd(fetch_drills());
    Model {
        direction: Direction::Coming,
    }
}

Let's talk a little bit about what's going on here, because it's not necessarily obvious at first glance. In our original init we just ignored the Orders object. Now we're going to use it. Orders provides a mechanism for us to be able to add messages or futures to a queue. We can send multiple messages or futures and they will be performed in the order that we call the functions, with futures being scheduled after the model update.
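
To make that concrete, inside init (or update) we could queue both kinds of work -- a minimal sketch using the message and future we already have:

    // Queue a plain message: it gets handled by update() once this function returns.
    orders.send_msg(Msg::Click);
    // Queue a future: the Msg it resolves to is delivered when the request completes.
    orders.perform_cmd(fetch_drills());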

Since we want to fetch our tasks, we create a future using the Request struct, which is seed's wrapper around the Fetch API. We create a new request for a hard-coded (gasp!) url, and then call its fetch_json_data method, which returns a Future. This future will create the Msg we provided, which will then get pumped into our update function when the request completes (or fails).

If we try compiling now, we get several errors. First, we haven't imported Future. Second, we forgot to define Msg::FetchedTasks. The first one is simplest, so let's tackle it by adding a dependency on the futures crate to frontend/Cargo.toml (keeping the dependencies that are already there):

[dependencies]
mytodo = { path = ".." }
futures = "^0.1.28"

and then add a use in frontend/src/lib.rs:

use futures::Future;

To fix the second error we have to dive into what fetch_json_data is really doing with the Msg that we give it. What we're really providing (as shown in the api docs) is a FnOnce that takes a ResponseDataResult<T> where the latter is really a type alias for a Result<T, FailReason<T>>. Sheesh, that's a mouthful. But really all we need to provide is an enum member that takes a ResponseDataResult<T> where T is a serde Deserialize. (I think that's simpler. A little bit. How about an example?)

Back near the top of lib.rs, let's remove Msg::Click because we don't need it any more, and add FetchedTasks:

#[derive(Clone, Debug)]
enum Msg {
    FetchedTasks(fetch::ResponseDataResult<JsonApiResponse>),
}

If we try to build now... oh no, we made it worse. Several of the errors are about the newly-missing Click. As an exercise: go through and get rid of those errors -- modify every place there's a reference to Click. It's easy. I'll be here when you're done.

Hint: completely remove the simple_ev call in view, and the entire match arm in update.

Ok, now when we build we're down to just two compile errors. Both of them are about structs that we defined earlier in the backend crate and can't reach from here. It seems like the right thing to do would be to add a dependency on the backend crate (you can try it and see what happens).

However, that is not the right thing. It is in fact the wrong thing. The biggest and most immediately obvious reason is that the backend pulls in dependencies that won't even build for wasm. The second is really kind of the same reason: we don't want to be forced to build those extra dependencies, and while certain techniques can be used to keep dependencies that we're not actually using out of our final package, we don't want to risk bloating our package with a bunch of unneeded stuff.

A better solution is to simply move the structs up into the root of our project.

We need serde, so add it to the root Cargo.toml:

[dependencies]
serde = { version = "1.0", features = ["derive"] }

Create a new file src/lib.rs:

#[macro_use]
extern crate serde;

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Task {
    pub id: i32,
    pub title: String,
}

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct JsonApiResponse {
    pub data: Vec<Task>,
}

This is pretty straightforward: we just define the two structs we need, deriving Serialize and Deserialize from serde so that we have bidirectional serialization, and Clone and Debug so that our Msg type can easily derive them as well.
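
Just to illustrate what bidirectional serialization buys us, here's a quick round trip through JSON (not part of the app -- it assumes serde_json is available somewhere to play with):

fn main() {
    let resp = mytodo::JsonApiResponse {
        data: vec![mytodo::Task { id: 1, title: "learn wasm".to_string() }],
    };
    // Serialize: this is the shape the backend hands to the frontend.
    let json = serde_json::to_string(&resp).unwrap();
    assert_eq!(json, r#"{"data":[{"id":1,"title":"learn wasm"}]}"#);
    // Deserialize: this is what fetch_json_data will do for us on the frontend.
    let round_trip: mytodo::JsonApiResponse = serde_json::from_str(&json).unwrap();
    assert_eq!(round_trip.data[0].title, "learn wasm");
}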

Then we can modify our backend REST API to use the new structs. The backend needs to add the root crate as a dependency in backend/Cargo.toml:

[dependencies]
mytodo = { path = ".." }

And in backend.rs we need to remove our existing definition of JsonApiResponse, remove the use of backend::db::models::Task, and add a use from mytodo:

use mytodo::JsonApiResponse;

With this change, the backend almost builds. Unfortunately, almost doesn't count with compilers. This error is interesting:

error[E0308]: mismatched types
  --> backend/src/bin/backend.rs:21:28
   |
21 |         response.data.push(task);
   |                            ^^^^ expected struct `mytodo::Task`, found struct `backend::db::models::Task`
   |
   = note: expected type `mytodo::Task`
              found type `backend::db::models::Task`

Our loop is trying to push a database-task, but our response object wants an api-task. Obviously we should get rid of the db::models::Task struct and just have it use the mytodo::Task struct instead, right? The way it is now is repetitive, and that violates DRY, and we want to stay DRY!

Well, let's think about how the different pieces are potentially going to change. We might enhance our application in many different ways. Some of those ways might change our database schema -- which will require changes to db::models structs. We would like to avoid being forced to change our REST API models every time our database changes.

Right now it seems like it's repetitive, but that's only because our task model is ultra-simple. If our app grows new features it's very likely we will need two different models, so we will keep them separate. And since they're separate, we need to manually convert from db_task into api_task in our loop over the query:

    for db_task in query_task(&conn) {
        let api_task = mytodo::Task {
            id: db_task.id,
            title: db_task.title,
        };
        response.data.push(api_task);
    }
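
As a side note: if this conversion starts showing up in more than one place, we could centralize it with a From impl. It would have to live in the backend library (the orphan rules won't let us write it in the binary, since both types are foreign there) -- a sketch, assuming the db Task fields we already have:

// In backend/src/db/models.rs (or anywhere in the backend library):
impl From<Task> for mytodo::Task {
    fn from(t: Task) -> Self {
        mytodo::Task {
            id: t.id,
            title: t.title,
        }
    }
}

With that in place the loop body collapses to response.data.push(db_task.into()). For now the explicit conversion is fine.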

All right! Now our backend builds cleanly, and there's only one more (easy) error to fix up in the frontend!

error[E0004]: non-exhaustive patterns: pattern `FetchedTasks` of type `Msg` is not handled
  --> frontend/src/lib.rs:25:11
   |
20 | / enum Msg {
21 | |     FetchedTasks(fetch::ResponseDataResult<JsonApiResponse>),
   | |     ------------ variant not covered
22 | | }
   | |_- `Msg` defined here
...
25 |       match msg {
   |             ^^^
   |
   = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms

We just need to handle Msg::FetchedTasks in update:

fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
    match msg {
        Msg::FetchedTasks(Ok(result)) => {
            // TODO: update the model
        }
        Msg::FetchedTasks(Err(reason)) => {
            log!(format!("Error fetching: {:?}", reason));
            orders.skip();
        }
    }
}

Here we pattern-match on the Ok or Err of the Result, logging the latter to the console. (In production we'd probably want to notify the user or retry the operation.) If we get an Ok then the fetch has succeeded and we should update the model so we can render the tasks in the DOM. But we're stubbing it now so we can finally have a successful build after that long refactoring.
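
If we did want a crude retry, the error arm could queue another fetch before skipping the render -- a sketch only, since a real app would cap the attempts or add a backoff delay:

        Msg::FetchedTasks(Err(reason)) => {
            log!(format!("Error fetching: {:?}", reason));
            // Naive retry: queue the fetch again. Without a counter or a delay
            // this will hammer the backend forever if it keeps failing.
            orders.perform_cmd(fetch_drills());
            orders.skip();
        }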

A Side Note On Refactoring and Testing

In retrospect, we could have made that refactoring in smaller, safer chunks if we had known in advance how many things we were going to have to change. A better approach would have been to move the structs up to the root first, then fix up the backend, add the fetched message, and finally remove the click message. Each of those steps could have been built and tested separately. But since we're stumbling through the changes a little bit we let the compiler guide us through the refactorings to a large extent.

Also, refactorings like this get scary as the app gets bigger. It's still small enough that we can easily test the backend, the frontend, and the end-to-end by hand. If we are serious about this at all, we'd definitely want some unit and integration tests wrapped around our app. I've left tests out of the scope of this book for the sake of brevity, clarity, and forward momentum, but they're a critical piece of any development effort, and they make up a big part of an upcoming book ("Engineering Rust Web Applications") that will be the big brother this little guide always wanted.

Displaying the Tasks

We have some tasks we want to display. Our displaying machinery lives in the view function. We need a way to get the tasks from our update function (where we get the fetch result) to the view function (where we make the nodes). The one thing these have in common is our Model. Let's remove the now-useless Direction enum and replace it with a Vec<Task>. We have to touch a few different places, so here's the whole frontend/src/lib.rs:

#[macro_use]
extern crate seed;
use futures::Future;
use seed::prelude::*;
use seed::{fetch, Request};

use mytodo::{JsonApiResponse, Task};

struct Model {
    tasks: Vec<Task>,
}

#[derive(Clone, Debug)]
enum Msg {
    FetchedTasks(fetch::ResponseDataResult<JsonApiResponse>),
}

fn update(msg: Msg, model: &mut Model, _orders: &mut impl Orders<Msg>) {
    match msg {
        Msg::FetchedTasks(Ok(mut result)) => {
            model.tasks.clear();
            model.tasks.append(&mut result.data);
        }
        Msg::FetchedTasks(Err(reason)) => {
            log!(format!("Error fetching: {:?}", reason));
        }
    }
}

fn view(model: &Model) -> impl View<Msg> {
    let tasks: Vec::<Node<Msg>> = model.tasks.iter().map(|t| {
        li![{t.title.clone()}]
    }).collect();

    h1![
        {"Tasks"},
        ul![
            tasks,
        ],
    ]
}

fn fetch_drills() -> impl Future<Item = Msg, Error = Msg> {
    Request::new("http://localhost:8000/tasks/").fetch_json_data(Msg::FetchedTasks)
}

fn init(_url: Url, orders: &mut impl Orders<Msg>) -> Model {
    orders.perform_cmd(fetch_drills());
    Model {
        tasks: vec![],
    }
}

#[wasm_bindgen(start)]
pub fn render() {
    seed::App::build(init, update, view).finish().run();
}

Notice that we've deleted the Direction enum and replaced its presence in Model with a vector of tasks.

In update we set the model to contain the vec from the result.

In view, the h1 now just contains a heading "Tasks" and we've set up a ul underneath it. At the top of the function we're mapping over the tasks in the model to create some li elements that we can hang off the ul.

Everything builds cleanly! Let's test it. In one window, start the backend:

$ cargo run -p backend --bin backend

In another window, serve the frontend using a convenient rust crate that simply serves the current directory from a small web server:

$ cd frontend
$ cargo install microserver
$ microserver

Browse to http://localhost:9090/ and... and... nothing!? So disappointing.

We can see in the window that's running our backend that a GET request came in. So we know something is happening.

Let's open developer tools (in Chrome, Ctrl+Shift+I) and look first at the console (whole lot of nothing) and then at the network tab to see what it's doing with the json request to our backend. Hmm, the tasks request is showing as red -- that can't be good.

Ahh, we forgot about CORS (Cross-Origin Resource Sharing). Since our REST API is being served from port 8000 and our main page is being served from 9090, they're two separate origins, and we have to make sure our backend is returning the proper CORS headers.

This is a bad news / good news thing. Great news, really. The bad news is that we have a little more work to do. The great news is that it's really simple to get it working for our small example.

Adding CORS Support in the Backend

Diving right in, the support we need for CORS is in the rocket_cors crate, so change backend/Cargo.toml:

[dependencies]
rocket_cors = { version = "0.5.0", default-features = false }

And at the top of backend.rs we need to use rocket_cors:

use rocket_cors::{AllowedHeaders, AllowedOrigins, Error};

Then add the CORS options to main:

fn main() -> Result<(), Error> {
    let allowed_origins = AllowedOrigins::all();

    let cors = rocket_cors::CorsOptions {
        allowed_origins,
        allowed_headers: AllowedHeaders::some(&["Authorization", "Accept"]),
        allow_credentials: true,
        ..Default::default()
    }
    .to_cors()?;

    rocket::ignite()
        .mount("/", routes![tasks_get])
        .attach(cors)
        .launch();

    Ok(())
}

It's worth noting that we could be more restrictive with our options here -- for our purposes today we just want to open it up, but for a public-facing application we would want to carefully examine the options in the rocket_cors docs.
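
For example, instead of allowing every origin we could lock things down to just the dev server that serves our frontend -- a sketch assuming rocket_cors 0.5's some_exact constructor (double-check the crate docs):

    // Only allow the origin that serves our frontend during development.
    let allowed_origins = AllowedOrigins::some_exact(&["http://localhost:9090"]);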

Now we can try serving the backend and frontend in separate windows again, and refreshing our browser.

Victory! You should now see the two tasks we defined earlier. If you open yet another window, run:

$ cargo run -p backend --bin todo new celebrate

and then refresh the browser, you will see the new task.

Frontend Wrap-Up

We've built a functional web application, from the bottom up, almost entirely in rust!

But ... there are also an awful lot of things that have been left out of this guide:

  • The frontend doesn't do any kind of user input. None. Nada.
    • (I am going to fix this in an update. Stay tuned.)
  • The frontend doesn't do any kind of routing -- no browsing to multiple pages within the app, no pagination, etc.
  • There are ZERO tests. I feel kind of dirty.
  • Also, ZERO attempts at error handling. Any component will panic if anything goes the least bit wrong.
  • I only scratched the surface of cargo make.
  • There's nary a mention of continuous integration.
  • Nothing about app deployment, upgrades, troubleshooting/debugging, or maintenance.
  • The data model in the database and REST API is trivial; there are interesting ways that this could be made more instructive.
  • It's out of compliance with the JSON API spec.
  • No users or authentication, or security of any sort.
  • No web-side external plugins (e.g. npm packages).
  • Nothing about interfacing directly with javascript.

But ... my intent for the scope of this guide was to show how to put together an all-rust stack for getting an application skeleton built from end-to-end. In a short guide. With the exception of user input in the web ui, I think I've done that.

Those missing pieces are important! I have a rough outline for a full-length follow-up to this book ("Engineering Rust Web Applications") that will cover those topics and more, with a more rigorous approach, but similar style. It won't be based around a todo app -- it's tentatively a library management system.

I'd love to hear your feedback and/or corrections. Please email info at erwabook.com or open an issue at https://gitlab.com/bstpierre/irwa/.

Full-Stack Exercise

With all of that out of the way, let's try one last exercise. This is going to be a feature that slices all the way up the stack: due dates.

There are two ways to do this.

The Right Way™ -- store a proper date type in the database, carry it up through the models as a date type, and convert it to a string at the last minute before presenting it to the user. (Obviously this is the only way to do it for a real app.) Give yourself 3 stars if you tackle it this way.

The Easy Way™ -- if you're just interested in seeing all the places you have to touch to make a feature like this work, just store it as a string in the db, and bring it up through the stack as a string that you can show directly to the user. As a bonus, if you implement this method, you can have a task like "exercise more" or "start a budget" with a due date of "tomorrow" and then you never have to follow through! (If you do this and try to use it for real, you're going to end up filled with self loathing. "Fix the ****** app" will be one of your tasks, with a due date of "yesterday".)
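
Whichever way you go, the first visible change is to the shared structs in src/lib.rs. Here's a sketch of the Easy Way version (the field name is my choice; the Right Way would use a real date type such as chrono's NaiveDate instead of a String):

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Task {
    pub id: i32,
    pub title: String,
    // Easy Way: a free-form string like "tomorrow" that flows straight from
    // the db column to the li! in the frontend view.
    pub due_date: String,
}

From there the work fans out in both directions: a schema change and model update in the db layer, the conversion in the REST handler, and an extra bit of markup in the frontend view.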