Introduction

This book provides a taste of the full-stack, all-rust approach to building web apps.

The world doesn't need another todo app, but we're going to make one anyway because it's become something of a tradition, and because it's easily understood.

We will walk through the layers of the application, starting at the bottom of the stack with the database. We will set up a database file, using SQLite3 as our database engine.

Then we will write a database access library that the higher layers will build on. For our convenience as developers, and for debugging and troubleshooting purposes, we will build cli-driven programs that use the database access layer to read and write the database. This is generally more convenient -- and safer -- than modifying or querying the database directly.

The next layer is the REST API. This will be built on Rocket, which is a rust web framework. Our frontend client will send HTTP requests to our Rocket program. When Rocket calls our code, we will use the database access layer to read or write the database. The REST API code will massage the data into an appropriate format (JSON) for returning to the client.

The top layer, or frontend, is the Web UI that we present to the user. This runs in a web browser as WebAssembly (wasm). We will use the Seed framework to build our rust code into a wasm app that we can load into the browser.

Finally we'll look at some of the limitations of this simple app, discuss improvements, and provide some pointers to resources that can help in addressing those issues.

Assumptions

I'm assuming that you are somewhat comfortable with rust. This is not an introduction to rust and I won't go into the details of any syntax or meaning of code that is presented. With that said, we won't be using any hardcore rust language features so you should be able to follow along if you're still a rust beginner.

I'm assuming that you have general knowledge about web architecture at a high level: how databases work, what an ORM is, what a REST API is, and how HTML, CSS, and JavaScript interact in the browser. I'll go over details for the specific tools we're using -- you don't need to be an SQL or JavaScript expert, for example.

I'm assuming that you have access to a platform that is supported by rust and the other tools used in this book. I've used Ubuntu 18.04 to prepare and test all of the code that will be presented, along with current rust-nightly builds and current releases of all crates presented as of the time of this writing (August 2019).

Let's Get Started

Pre-Requisite Packages

In order to install rustup using the instructions on the website, you will need curl:

$ sudo apt update
$ sudo apt install curl

In order to run the compiler and other tools, we need some basic development tools installed:

$ sudo apt install build-essential

Installation on Windows

(NOTE: I haven't tested these instructions; they were contributed by reader Miodrag Milić. I don't run Windows, but server stats show that a lot of you do, so I wanted to add these instructions in the hope they are helpful. If you have problems, please open an issue.)

Windows users should have chocolatey: iwr https://chocolatey.org/install.ps1 | iex.

Subsequent operations are run in an admin shell.

Install general prereqs:

c++ tools, rustup, rust

cinst -y visualstudio2017-workload-vctools
iwr https://win.rustup.rs/x86_64 -outf rustup-init.exe
./rustup-init -y
refreshenv

Install project prereqs

sqlite

cd ...\mytodo

cinst -y sqlite
sal lib "${Env:ProgramFiles(x86)}\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.11.25503\bin\HostX64\x64\lib.exe"
lib /def: $env:ChocolateyInstall\lib\SQLite\tools\sqlite3.def /machine:X64 /out:sqlite3.lib
$Env:SQLITE3_LIB_DIR = $pwd
cargo install diesel_cli --no-default-features --features sqlite

cp $env:ChocolateyInstall\lib\SQLite\tools\sqlite3.dll . # diesel.exe requires it

'DATABASE_URL=./testdb.sqlite3' | Out-File .env  # doesn't seem to work, use next line
$Env:DATABASE_URL = "$pwd/testdb.sqlite3"
diesel setup

Install via rustup

Go to https://rustup.rs and follow the instructions. Note that some linux distributions provide stable rust releases, but for the purposes of what we're doing here we want to use a nightly toolchain.

Install the nightly toolchain: rustup toolchain install nightly.

And then set it to be the global default: rustup default nightly.

Generate Our App

We can verify that our installation was successful and generate the skeleton for our app:

$ cargo new mytodo
$ cd mytodo
$ cargo run

You should see messages as your app builds, and then the "Hello, world!" greeting. If not, go back to https://rustup.rs and verify that you've followed the directions correctly.

Set up ORM & database

At the bottom layer of our stack is the database. For our todo app we are using sqlite3.

Install sqlite3

We need the SQLite3 libraries, and it is convenient to have the SQLite3 cli. On Ubuntu, this is easy to install via:

$ sudo apt install sqlite3 libsqlite3-0 libsqlite3-dev

For installation on other linux distributions, check your distribution's documentation. For installation on other platforms, see sqlite.org.

Install MySQL and PostgreSQL Libs

Diesel's CLI needs to link against MySQL and PostgreSQL libs, so we need to install these:

$ sudo apt install libpq-dev libmysqlclient-dev

Install diesel

We could just make raw queries into the database and do some ad hoc marshalling of the data into the types that will be used by our application, but that's complicated and error prone. We don't like things that are messy and error prone -- one of the big things that rust gives us is a way to verify certain aspects of correctness at compile time.

Diesel is an ORM for rust that provides us with a type-safe way to interact with the database. It supports SQLite3, PostgreSQL, and MySQL. Take a look at the documentation for guides on using a different backend if you don't want to use SQLite3.

Modify the dependencies section of your Cargo.toml so that the file looks like this:

[package]
name = "myblog"
version = "0.1.0"
authors = ["Your Name <[email protected]>"]
edition = "2018"

[dependencies]
diesel = { version = "1.0.0", features = ["sqlite"] }

Now if you cargo run it should fetch diesel (and everything it needs), build it, and run your example.

But that's not all we want from diesel. There's also a cli that we can use to help manage our project. Since this is only used by us during development, and not by the app during runtime, we can install it onto our workstation instead of into our Cargo.toml. Do this via cargo install diesel_cli. This installs a cli tool -- see diesel help for a list of subcommands.

We need to tell the diesel cli what our database filename is. We can do this using the DATABASE_URL environment variable, or by setting the variable in a file called .env. Let's do the latter:

$ echo 'DATABASE_URL=./testdb.sqlite3' > .env

Now we can get diesel to set up our workspace for us:

$ diesel setup

If you inspect your directory contents, you will see a file called testdb.sqlite3.

Write the first migration

Now that we've got our database and ORM installed, we should do something with them. Maybe store some data, in some kind of structured way?

Diesel makes changes to the database structure as a series of "migrations". For each migration, we need to provide the SQL commands required both to implement whatever database change we're making, and to undo that change.

So whenever we write a migration that creates a new table, we also need to provide the SQL to remove that table if our migration needs to be undone.

Generate the skeleton

We'll use diesel's cli to generate our first migration:

$ diesel migration generate task

This will create a subdirectory under ./migrations/ which is named with a date/time stamp and the suffix "_task" (which you gave the generate command). In that timestamped subdirectory are two stub files. One is the "up" migration where we will create our new table, and the other is the "down" migration where we will remove the new table.
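
For example (your timestamp will differ):

$ ls migrations/2019-08-19-023055_task/
down.sql  up.sql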

Write the SQL

Make up.sql look like this:

CREATE TABLE task (
    id INTEGER NOT NULL,
    title TEXT NOT NULL,
    PRIMARY KEY (id)
);

This will create a very simple table for storing our task list.

Test the migration

Let's test our migration:

$ diesel migration run
Running migration 2019-08-19-023055_task
$ echo .dump | sqlite3 testdb.sqlite3

That last command should show the table we created. Great!

The other part of testing a migration is ensuring that rollbacks work. This is done via the redo command, which reverts the latest migration and then re-runs it.

$ diesel migration redo
Rolling back migration 2019-08-19-023055_task
Running migration 2019-08-19-023055_task
Executing migration script /projects/mytodo/migrations/2019-08-19-023055_task/up.sql
Failed with: table task already exists

Oh no! Our redo failed because we forgot to modify down.sql. Let's fix that -- to revert the change we made in the up migration, make down.sql look like this:

DROP TABLE task;

Now try running diesel migration redo again. This time it should be successful.

At this point, I don't know about you, but I'm itching to write some code. Good news: the next chapter has a bunch of code.

Create a Database Access Layer

For convenience of our upper-layer application, we'll write a set of thin wrappers around the diesel code and expose it as a module.

As one way of testing these wrappers, we'll also build an executable with subcommands that exercise each of the wrapper functions. Then this suite of subcommands can be used as a debugging/troubleshooting/experimentation tool.

Inserting a Task

Our database is kind of sad. Just one lonely table, and no signs of a row to be found anywhere. Let's fix this situation -- but first, there are two little surprises waiting for you.

You may have already noticed that you have diesel.toml in your directory:

# For documentation on how to configure this file,
# see diesel.rs/guides/configuring-diesel-cli

[print_schema]
file = "src/schema.rs"

This was generated when we ran diesel setup. Note that it refers to src/schema.rs. Let's take a peek at that:

table! {
    task (id) {
        id -> Integer,
        title -> Text,
    }
}

We got this for free when we ran our schema migration -- it will automatically get updated every time we run a migration (in either direction). The table! macro that you see there generates a bunch of code that we can use when we work with the tables in our database.
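
To get a feel for what that generated code buys us, here's a hypothetical query using the DSL items the macro produces -- task for the table, id and title as typed column handles. (This assumes mod schema; has been declared in the crate root; we'll wire up the real module structure in a moment.)

use diesel::prelude::*;
use diesel::sqlite::SqliteConnection;

// hypothetical: fetch one task's title by id through the generated DSL
// (returns None on any error, which is fine for a sketch)
fn find_title(conn: &SqliteConnection, task_id: i32) -> Option<String> {
    use crate::schema::task::dsl::{id, task, title};

    task.filter(id.eq(task_id))
        .select(title)
        .first::<String>(conn)
        .ok()
}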

The next thing we need is just a little bit of glue. Before we start typing, we need to think about where we want to keep this code. Right now all we have is a binary crate, but we want to build up a little library. Let's plan on having the following module structure:

mytodo
  +-- db
  |    +-- models
  |    +-- schema
  +-- rest

We'll create the db module now, and the rest later. (Pun intended. I'm so sorry.) We have to let rust know we're making the db module by creating a src/lib.rs:

#[macro_use]
extern crate diesel;

pub mod db;

That also pulls in diesel's macros -- our code will heavily rely on these.

And we want to get diesel to make changes to schema.rs in a new location, so let's change diesel.toml:

# For documentation on how to configure this file,
# see diesel.rs/guides/configuring-diesel-cli

[print_schema]
file = "src/db/schema.rs"

We can test that we've got diesel set up correctly by removing the existing schema and rerunning the migration to generate a new one:

$ rm src/schema.rs
$ diesel migration redo
Rolling back migration 2019-08-19-023055_task
Running migration 2019-08-19-023055_task
$ ls src/db/
schema.rs

That little bit of glue I mentioned is the models module shown in the tree above. This is where we define some types that we can use for reading and writing the database. And now we're ready to write some code. Create src/db/models.rs:

use super::schema::task;

#[derive(Insertable)]
#[table_name = "task"]
pub struct NewTask<'a> {
    pub title: &'a str,
}

By deriving our struct from Insertable and setting the table_name to what we've got in our schema, diesel will automagically give us code to perform database inserts -- in src/db/mod.rs we can add our function to take advantage of this:

use diesel::{prelude::*, sqlite::SqliteConnection};

pub mod models;
pub mod schema;

pub fn establish_connection() -> SqliteConnection {
    let db = "./testdb.sqlite3";
    SqliteConnection::establish(db)
        .unwrap_or_else(|_| panic!("Error connecting to {}", db))
}

pub fn create_task<'a>(connection: &SqliteConnection, title: &'a str) {
    let task = models::NewTask { title };

    diesel::insert_into(schema::task::table)
        .values(&task)
        .execute(connection)
        .expect("Error inserting new task");
}

The first line pulls in things from diesel that we need. The next two lines expose our models and schema submodules. Then there's a convenience function to create an SqliteConnection to our database. (Note: anything more serious than a toy application like we're building will need a better mechanism for setting the path to the database!)
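
For example, a small step up (a sketch, not what we'll use here) would be to honor the same DATABASE_URL variable the diesel cli reads, falling back to the hardcoded path:

use std::env;

// sketch: respect DATABASE_URL if set, otherwise use the local test db
pub fn establish_connection() -> SqliteConnection {
    let db = env::var("DATABASE_URL").unwrap_or_else(|_| "./testdb.sqlite3".to_string());
    SqliteConnection::establish(&db)
        .unwrap_or_else(|_| panic!("Error connecting to {}", db))
}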

And finally, the create_task function is almost anti-climactic in its simplicity. We create an object from the struct we declared in models. Diesel's insert_into takes the table from our schema, gives us back an object that we can add to with values, which gives us back an object that we can execute.

In a real application, we'd do a better job of error handling. Also note that we're intentionally throwing away the return value, which would be a usize representing the number of rows created. Other database engines (e.g. Postgres) can make the id of the just-inserted row available in the query result, but we don't get that with SQLite.
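
If we wanted create_task to propagate errors instead of panicking, a variant might look like this -- execute already hands us diesel's QueryResult<usize>, so we just stop unwrapping it:

// sketch: let the caller decide how to handle insert failures
pub fn try_create_task(connection: &SqliteConnection, title: &str) -> diesel::QueryResult<usize> {
    let task = models::NewTask { title };

    diesel::insert_into(schema::task::table)
        .values(&task)
        .execute(connection)
}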

We can make sure we've got everything written correctly by building with cargo build. Expect to see a couple of dead_code warnings about unused functions -- we'll fix that soon by putting them to use. But first we need a hook to hang that usage on.

Create a Development Tool

Earlier I mentioned a tool that we could use to read and write from the database. We'll create the infrastructure for that now. First let's create a new binary:

$ mkdir src/bin/

Here's src/bin/todo.rs, piece by piece. First we need std::env so we can process the command line arguments, and we need the db functions we just wrote.

use std::env;
use mytodo::db::{create_task, establish_connection};

Then we've got a function to print help, in case the user provides some unparseable set of arguments.

fn help() {
    println!("subcommands:");
    println!("    new<title>: create a new task");
}

In our main function we just match the first argument against our set of possible subcommands and dispatch the remaining arguments to a handler -- calling help if there's no argument, or if we can't make sense of it.

fn main() {
    let args: Vec<String> = env::args().collect();

    if args.len() < 2 {
        help();
        return;
    }

    let subcommand = &args[1];
    match subcommand.as_ref() {
        "new" => new_task(&args[2..]),
        _ => help(),
    }
}

And finally we have our subcommand handler that opens a database connection and passes the title down to the db layer.

fn new_task(args: &[String]) {
    if args.is_empty() {
        println!("new: missing <title>");
        help();
        return;
    }

    let conn = establish_connection();
    create_task(&conn, &args[0]);
}

Now we can test it!

$ cargo run --bin todo new 'do the thing'
   Compiling mytodo v0.1.0 (/projects/rust/mytodo)
    Finished dev [unoptimized + debuginfo] target(s) in 1.47s
     Running `target/debug/todo new 'do the thing'`
$ cargo run --bin todo new 'get stuff done'
    Finished dev [unoptimized + debuginfo] target(s) in 0.01s
     Running `target/debug/todo new 'get stuff done'`
$ echo 'select * from task;' | sqlite3 testdb.sqlite3
1|do the thing
2|get stuff done

Be careful with that last command -- if you get a little SQL on your hands there, you should go wash up before it stains. When you come back we'll make a safer way to query tasks.

Querying Tasks

When we wrote our insertion function, we used a struct that was derived from Insertable. Maybe you won't find it surprising that we will want to use a struct derived from Queryable to perform queries. Add this to src/db/models.rs:

#[derive(Queryable)]
pub struct Task {
    pub id: i32,
    pub title: String,
}

It's worth noting -- because it's such an attractive thing to want -- that we can't just derive both Queryable and Insertable on a single struct here. Doing so would require us to set the id when we perform the insert, and we almost always want to let the database engine automatically assign the id.

In src/db/mod.rs we can add a function that returns a Vec of this new model struct when querying the task table:

pub fn query_task(connection: &SqliteConnection) -> Vec<models::Task> {
    schema::task::table
        .load::<models::Task>(connection)
        .expect("Error loading tasks")
}

Add a new subcommand handler to src/bin/todo.rs:

fn show_tasks(args: &[String]) {
    if !args.is_empty() {
        println!("show: unexpected argument");
        help();
        return;
    }

    let conn = establish_connection();
    println!("TASKS\n-----");
    for task in query_task(&conn) {
        println!("{}", task.title);
    }
}

Adding the match in main and a help output line is a simple exercise for the reader.
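
For reference, the match ends up looking like this (and the use line at the top of todo.rs grows a query_task import):

    match subcommand.as_ref() {
        "new" => new_task(&args[2..]),
        "show" => show_tasks(&args[2..]),
        _ => help(),
    }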

Now we can test both our query function and our insertion function without getting too close to any SQL.

$ cargo run --bin todo show
Finished dev [unoptimized + debuginfo] target(s) in 0.01s
Running `target/debug/todo show`
TASKS
-----
do the thing
get stuff done

Database Layer Wrap-Up

We've written a very simple (i.e. absolute minimum) data abstraction layer, and we've exercised it by writing a CLI tool for poking and peeking the database.

Our data model is so simple the app isn't actually capable of being useful (there's no mechanism to mark a task done, or even to delete a task). Exercises are suggested below to fix this situation.

We are completely missing:

  • comments
  • documentation (outside of help)
  • tests
  • continuous integration

and other quality-maintenance kind of stuff. In a production app we'd definitely add these as we go along.

However, even with these shortcomings we have enough infrastructure upon which to build our next layer, the REST API. Try your hand at the exercises, and we'll work on the REST API in the next chapter.

Database Layer Exercises

Add a "done" column to the table

Write and run a migration, update the models, update the insertion code to set the task as pending (not done) on creation, and update the show subcommand to show done/pending status.

Add a subcommand to mark tasks done. You'll need to decide whether to let the user provide the title or the id.

If the former, then in the db layer, you may need to add a function to look up a task by title, and call that from the subcommand so that you can pass the id to the other new db layer function you'll add that updates the record to set done to true.

If the latter, you should probably modify the show subcommand to display ids.

This sounds like a lot of steps but they're all fairly simple. Check out the diesel guides for help with queries that filter, and with update operations.
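
To get you started, here's a sketch of the db layer function for marking a task done by id -- assuming your migration added a boolean done column, so the regenerated schema exposes a done DSL item:

// sketch for src/db/mod.rs, assuming a `done` column exists
pub fn mark_done(connection: &SqliteConnection, task_id: i32) {
    use self::schema::task::dsl::{done, task};

    diesel::update(task.find(task_id))
        .set(done.eq(true))
        .execute(connection)
        .expect("Error updating task");
}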

Add a subcommand to delete a task

As above, you'll need to decide whether to have the user provide the id or the title. Then add a db layer function to delete a task, and a subcommand to call it.
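
And a matching sketch for the db layer delete, again going by id:

// sketch for src/db/mod.rs
pub fn delete_task(connection: &SqliteConnection, task_id: i32) {
    use self::schema::task::dsl::task;

    diesel::delete(task.find(task_id))
        .execute(connection)
        .expect("Error deleting task");
}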

Create a REST API Layer

Since our GUI is going to run in the browser, we need something for the browser to talk to. There are other choices we could make, but REST is the most sensible choice we can make for this app in 2019. So that's what we're going to do!

Rocket is a web framework for rust that will make it easy for us to quickly write a rust program to serve our API.

Add rocket to Cargo.toml

We need to add rocket into our app. Make your Cargo.toml's dependencies section look like:

[dependencies]
diesel = { version = "1.0.0", features = ["sqlite"] }
rocket = "0.4.2"

Run cargo build to compile all the new dependencies.

Create the Backend Binary

We'll write our backend in src/bin/backend.rs. Let's make a first rough pass:

#![feature(proc_macro_hygiene, decl_macro)]

#[macro_use]
extern crate rocket;

use mytodo::db::{query_task, establish_connection};

#[get("/tasks")]
fn tasks_get() -> String {
    "this is a response\n".into()
}

fn main() {
    rocket::ignite()
        .mount("/", routes![tasks_get])
        .launch();
}

The first line enables a couple of features that rocket's macros need. These are experimental features, and this is part of why we need to build with rust nightly instead of stable.

Then we pull in rocket's macros.

Then we've got the handler for our route. For the moment it just returns a static string.

In our main we create a rocket instance, mount our handler, and start it.

Now if you cargo run --bin backend it will compile everything and start our backend listening on localhost port 8000. If you open your browser to http://localhost:8000/tasks (or use curl) you will see "this is a response".
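
For example:

$ curl http://localhost:8000/tasks
this is a response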

Query the Database

So far we've got a pretty lame API. It just spits out a static string. It's cool that we can create a functioning web server from so little code, but we want to return dynamic data -- info pulled from the database.

We've basically already written this code -- it's nearly the same as the subcommand in our CLI program:

#[get("/tasks")]
fn tasks_get() -> String {
    let mut response: Vec<String> = vec![];

    let conn = establish_connection();
    for task in query_task(&conn) {
        response.push(task.title);
    }

    response.join("\n")
}

Run this (cargo will rebuild the changes for you), and refresh your browser -- you should see the task titles we inserted earlier. Great! But also, not so great -- we might want to add more data to the tasks in the future as our app gets popular. Like done-ness, due dates, and priority.

Just printing a bunch of lines of output is going to be tedious and error-prone for our frontend to parse.

We can make it easier for our frontend and our backend to communicate with each other if we use a standard data serialization format for our API, and likewise if we use some standard tools to do it.

That format will be JSON. There are lots of tools for dealing with JSON, and luckily for us one of them is part of rocket. The other tool we need is the excellent serde framework for serializing and deserializing data (in JSON and many other formats).

Serializing to JSON

We need to add serde and JSON support from rocket_contrib to our Cargo.toml:

[package]
name = "mytodo"
version = "0.1.0"
authors = ["Your Name <[email protected]>"]
edition = "2018"

[dependencies]
diesel = { version = "1.0.0", features = ["sqlite"] }
rocket = "0.4.2"
serde = { version = "1.0", features = ["derive"] }

[dependencies.rocket_contrib]
version = "0.4.2"
default-features = false
features = ["json"]

And then we need to use rocket_contrib and serde in backend.rs:

#[macro_use]
extern crate rocket_contrib;
#[macro_use]
extern crate serde;

Let's think about how we want to format our response. We could just send back an array of tasks, where each task is an object with a title key that has a string value:

[
    { "title": "do the thing" },
    { "title": "get stuff done" }
]

One problem with this is that we have no uniform way to indicate an error. We'll be better off if we're closer to complying with the JSON API spec, which requires an object at the top level, and a data key at that level -- something like this:

{
    "data": [
        { "id": 1, "title": "do the thing" },
        { "id": 2, "title": "get stuff done" },
    ]
}

Note that this isn't strictly conforming with the JSON API spec because the resource objects (the stuff in the data array) aren't formatted properly. But for the sake of keeping this exposition simple we're going to settle for being non-compliant for now -- see the exercises for an approach to getting this done the right way.

Now that we know what we want to return, let's put together a rust structure to represent it:

#[derive(Serialize)]
struct JsonApiResponse {
    data: Vec<Task>,
}

Here we're using the Serialize derive-macro from serde for our struct. This makes it so that we can magically get JSON out of it. However, if we try to build this we get an error:

error[E0277]: the trait bound `mytodo::db::models::Task: serde::ser::Serialize`
    is not satisfied

We can only Serialize a struct if all the things in the struct also implement Serialize -- and Task doesn't do that... yet.

Can we fix it? YES WE CAN!

At the top of both lib.rs and backend.rs we need to enable serde macros:

#[macro_use]
extern crate serde;

And then in db/models.rs we need to slap a Serialize on the Task struct:

#[derive(Queryable, Serialize)]
pub struct Task {
    pub id: i32,
    pub title: String,
}

Our handler function will use the Json type from rocket_contrib -- not to be confused with the JSON types elsewhere in the serde ecosystem. We need to add a use declaration for it at the top of backend.rs:

use rocket_contrib::json::Json;

With all of that in place we can modify our handler function to push the tasks we get back from query_task onto a response object, and then convert that to Json on the way out:

#[get("/tasks")]
fn tasks_get() -> Json<JsonApiResponse> {
    let mut response = JsonApiResponse { data: vec![], };

    let conn = establish_connection();
    for task in query_task(&conn) {
        response.data.push(task);
    }

    Json(response)
}

Run the backend, refresh your browser, and you should see json similar to the sample above. (It won't be pretty-printed -- you can curl --silent http://localhost:8000/tasks/ | jq . if you're into that.)

REST API Layer Wrap-Up

We just built a functional REST API backend, and I don't know about you, but I didn't even break a sweat. Of course, there are things we'd do differently in a production app:

  • testing, of both the unit and integration varieties
  • documentation, from comments to docstrings to REST API user (developer) docs
  • stricter conformance to JSON API
  • API versioning
  • database connection pooling so that we don't have to do establish_connection for every request, which would be important under load
  • make our response object use a parameterized type so that we can return different object types as the API grows new features

In the next chapter we will place the final layer -- a browser-based UI based on the Seed framework. But first -- some exercises!

REST API Layer Exercises

These exercises are more challenging and will require consulting more external documentation than in the last chapter.

Bring the API in conformance with the JSON API Spec

Specifically, this part:

A resource object MUST contain at least the following top-level members:

  • id
  • type

Exception: The id member is not required when the resource object originates at the client and represents a new resource to be created on the server.

In addition, a resource object MAY contain any of these top-level members:

  • attributes: an attributes object representing some of the resource’s data.

To do this, create a new Task wrapper struct that contains the id, type, and attributes, modify the JsonApiResponse struct to contain a vector of that wrapper, and modify the loop around query_task to create and push this type onto the response.
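
A sketch of what those structs might look like -- note that JSON API ids are strings, and type has to be written with rust's raw-identifier syntax (serde serializes r#type as plain "type"):

// sketch: JSON API-style resource objects for the exercise
#[derive(Serialize)]
struct TaskResource {
    id: String,
    r#type: String,
    attributes: TaskAttributes,
}

#[derive(Serialize)]
struct TaskAttributes {
    title: String,
}

#[derive(Serialize)]
struct JsonApiResponse {
    data: Vec<TaskResource>,
}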

Use Connection Pooling

Read the rocket documentation on database connection pooling and implement it in backend.rs.
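
To sketch the shape of it: with rocket_contrib's diesel_sqlite_pool feature enabled, and a sqlite_db entry pointing at our database file in Rocket.toml (both assumptions on my part -- check the docs for the details), the connection becomes a request guard:

// sketch: a pooled connection managed by rocket_contrib
#[database("sqlite_db")]
struct DbConn(diesel::SqliteConnection);

#[get("/tasks")]
fn tasks_get(conn: DbConn) -> Json<JsonApiResponse> {
    let mut response = JsonApiResponse { data: vec![], };

    // DbConn derefs to SqliteConnection, so the db layer is unchanged
    for task in query_task(&conn) {
        response.data.push(task);
    }

    Json(response)
}

In main, you'd also attach DbConn::fairing() to the rocket instance.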

Create a Browser-Based Frontend UI

The last piece of our application is the UI, which will be based on the seed framework. We'll run this in the browser, by cross-compiling our rust code to WebAssembly (wasm). Please note that if you just want to play with seed, you should check out the quickstart repo that's linked from their documentation. We're going to set things up from scratch below so that you can get a feel for how everything is put together.

Cargo Workspace

This build works by generating a library, and cargo only allows one library per crate. We already have a library. That seems like a problem, right?

No problem -- cargo supports "workspaces", where we can build multiple crates. We will build our backend (db + rest) into one library crate and our frontend into a separate crate. Any shared structs that we define will be in the root crate.

First, create a new crate as a subdirectory under our existing project directory:

$ cargo new --lib frontend

Then we need to move our existing code into a new crate:

$ cargo new --lib backend
$ mv src/lib.rs src/db src/bin/ backend/src/

And fix up crate references in backend/src/bin/backend.rs and backend/src/bin/todo.rs:

--- backend/src/bin/backend.rs
+++ backend/src/bin/backend.rs
@@ -9,8 +9,8 @@ extern crate serde;

 use rocket_contrib::json::Json;

-use mytodo::db::{query_task, establish_connection};
-use mytodo::db::models::Task;
+use backend::db::{query_task, establish_connection};
+use backend::db::models::Task;

 #[derive(Serialize)]
 struct JsonApiResponse {
--- backend/src/bin/todo.rs
+++ backend/src/bin/todo.rs
@@ -1,5 +1,5 @@
 use std::env;
-use mytodo::db::{create_task, query_task, establish_connection};
+use backend::db::{create_task, query_task, establish_connection};

 fn help() {
     println!("subcommands:");

We can build the backend and frontend by adding them as workspace members. We will modify the Cargo.toml to look like this:

[package]
name = "mytodo"
version = "0.1.0"
authors = ["Your Name <[email protected]>"]
edition = "2018"

[workspace]
members = ["backend", "frontend"]

Notice that we've dropped the dependencies -- there is no longer any need for them in our root crate, but we need to add them to our backend/Cargo.toml:

[package]
name = "backend"
version = "0.1.0"
authors = ["Your Name <[email protected]>"]
edition = "2018"

[dependencies]
diesel = { version = "1.0.0", features = ["sqlite"] }
rocket = "0.4.2"
serde = { version = "1.0", features = ["derive"] }

[dependencies.rocket_contrib]
version = "0.4.2"
default-features = false
features = ["json"]

Now you can do cargo build --all to build both workspace members, or specify just one with the -p flag. For example, cargo build -p frontend builds just the frontend crate -- although it's empty at the moment.

Install wasm toolchain

I mentioned above that we're going to be cross-compiling our code to wasm32. In order to do that we need to install the toolchain:

$ rustup target add wasm32-unknown-unknown

We also need to set up our crate to build wasm32 and add mytodo, seed, wasm-bindgen, and web-sys as dependencies. Modify frontend/Cargo.toml:

[package]
name = "frontend"
version = "0.1.0"
authors = ["Your Name <[email protected]>"]
edition = "2018"

[lib]
crate-type = ["cdylib"]

[dependencies]
mytodo = { path = ".." }
seed = "^0.4.0"
wasm-bindgen = "^0.2.50"
web-sys = "^0.3.27"

We need to install wasm-pack, which requires some host-system support packages in order for the installation to work:

$ sudo apt install libssl-dev pkg-config
$ cargo install wasm-pack

We can build the wasm package by doing:

$ cd frontend
$ wasm-pack build --target web --out-name package --dev

This will leave output in frontend/pkg/. Having this extra command to run is kind of tedious, especially with the two workspace members. Let's automate that out of our way.

cargo make

cargo make is a tool we can use to automate our build tasks. If you've ever written a Makefile you have an idea of what cargo make can do -- but the modern version adds about 100x more verbosity. On the bright side, cargo make's syntax is much easier to fathom.

To install it, run cargo install cargo-make.

To configure it, create a new file Makefile.toml in the root of our project directory:

[env]
CARGO_MAKE_EXTEND_WORKSPACE_MAKEFILE = "true"

[tasks.default]
clear = true
dependencies = ["build"]

This file just defines one task: the default task is what gets run when you just say cargo make. (You can optionally specify a task name to run, like cargo make build.) We've added clear = true to this task because the tool has a builtin default task that runs a bunch of other tasks -- those are convenient, but we don't want to get distracted by them right now. The default task depends on the build task.

The build task is a built-in task that runs cargo build --all-features, which is perfect for what we need so we don't need to override it.

cargo make knows about workspaces, and will run each task in each workspace. But so far all we've got is what we had before -- we don't have it running wasm-pack yet. That's where the env variable that is set at the top of the file comes in. It means that cargo make will look in workspace directories for Makefile.toml files, and any tasks in those files will override the tasks in the workspace-level Makefile.toml.

So let's override default in frontend/Makefile.toml to do what we need:

[tasks.default]
dependencies = ["create_wasm"]

[tasks.create_wasm]
command = "wasm-pack"
args = ["build", "--target", "web", "--out-name", "package", "--dev"]
dependencies = ["build"]

Here the default task depends on create_wasm which runs wasm-pack as mentioned above.

With all that in place, now just running cargo make in the root will give us:

  • backend library and binaries under target/debug
  • browser-loadable web assembly package in frontend/pkg/package_bg.wasm

With all that build infrastructure out of the way, we can move on to coding the UI.

Behind the Scenes

The way our frontend app is going to work:

  • we write some rust
  • wasm-pack generates some files
    • the .wasm file is a WebAssembly binary
    • the .js file is a javascript loader that will pull in the wasm, and it acts as the gatekeeper between javascript and rust
    • package.json has some metadata in case we want to integrate with npm and friends
  • we write an html stub file, that loads the .js, which loads the .wasm
  • our app attaches itself to a DOM element in the html file
  • the browser shows our app's UI elements
  • our users rejoice

Create a Stub App

Create frontend/index.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>MyTODO</title>
  </head>
  <body>
    <div id="app"></div>
    <script type="module">
      // https://rustwasm.github.io/docs/wasm-bindgen/examples/without-a-bundler.html
      import init from '/pkg/package.js';
      init('/pkg/package_bg.wasm');
    </script>
  </body>
</html>

As you can see from the comment, this is based on a wasm-bindgen example that doesn't use a bundler (like webpack). This is for simplicity in this example -- in a larger app we may want other web assets that we want to pack together with our application.

All our html needs to do is provide a div with an id of app and the script snippet that loads the package. Then the loader will take over and inject our elements into the DOM.

Then we just need to add some code in frontend/src/lib.rs:

#[macro_use]
extern crate seed;
use seed::prelude::*;

#[derive(Clone, Debug)]
enum Direction {
    Coming,
    Going,
}

struct Model {
    direction: Direction,
}

#[derive(Clone, Debug)]
enum Msg {
    Click,
}

fn update(msg: Msg, model: &mut Model, _orders: &mut impl Orders<Msg>) {
    match msg {
        Msg::Click => {
            model.direction = match model.direction {
                Direction::Coming => Direction::Going,
                Direction::Going => Direction::Coming,
            }
        }
    }
}

fn view(model: &Model) -> impl View<Msg> {
    let greeting = match model.direction {
        Direction::Coming => "Hello, World!",
        Direction::Going => "¡Hasta la vista!",
    };
    h1![
        class! {"heading"},
        style!["height" => "100vh",
               "width" => "100vw",
        ],
        { greeting },
        simple_ev(Ev::Click, Msg::Click),
    ]
}

fn init(_url: Url, _orders: &mut impl Orders<Msg>) -> Model {
    Model {
        direction: Direction::Coming,
    }
}

#[wasm_bindgen(start)]
pub fn render() {
    seed::App::build(init, update, view).finish().run();
}

Let's walk through this starting from the bottom. Everything kicks off with our render function because we added the start attribute to the #[wasm_bindgen] macro. This sets things up so that our function is called as soon as the module is loaded.

This function creates a seed app, passing in our init, update, and view functions, and then launches the app.

Our init function gets called first, and is responsible for handling the URL path that the app was started from, if we care about it (we don't here -- we won't handle any routing at all in this guide). It then needs to create and return a Model that will store state for the app. Our app just has two silly states so that we can see how basic event handling works.

Moving up a block, our view function takes the model and returns a DOM node. Here we're simply matching on coming or going and setting an appropriate greeting in our <h1>. Seed provides macros for all valid HTML5 tags, and as you can see in the example it also has macros for things like class and style.

Also you can see here how we've attached a simple event handler: whenever a click occurs on our h1 (which is the entire size of the viewport thanks to the styling) it will send a click message to our update function.

In our update function, we simply dispatch on the message type (there's only one for this tiny example) and then toggle the model's direction. Our view will get called again and the DOM will be re-rendered (we could call orders.skip() to prevent this), and we will see the greeting toggle.

And now that we've gone over the basics, we can move on to fetching and displaying some tasks so we can be more productive!

Fetch Tasks from Backend

Seed provides some useful tools for fetching data, so the first thing we need to do is import those from the seed namespace in frontend/src/lib.rs:

use seed::{fetch, Request};

Then, since the first thing we want to do is load the tasks from the backend, we'll change our init function (and add a new function):

fn fetch_drills() -> impl Future<Item = Msg, Error = Msg> {
    Request::new("http://localhost:8000/tasks/").fetch_json_data(Msg::FetchedTasks)
}

fn init(_url: Url, orders: &mut impl Orders<Msg>) -> Model {
    orders.perform_cmd(fetch_drills());
    Model {
        direction: Direction::Coming,
    }
}

Let's talk a little bit about what's going on here, because it's not necessarily obvious at first glance. In our original init we just ignored the Orders object. Now we're going to use it. Orders provides a mechanism for adding messages or futures to a queue. We can send multiple messages or futures, and they will be performed in the order that we call the functions, with futures being scheduled after the model update.

Since we want to fetch our tasks, we create a future using the Request struct, which is seed's wrapper around the Fetch API. We create a new request for a hard-coded (gasp!) url, and then call its fetch_json_data method, which returns a Future. This future will create the Msg we provided, which will then get pumped into our update function when the request completes (or fails).

If we try compiling now, we get several errors. First, we haven't imported Future. Second, we forgot to define Msg::FetchedTasks. The first one is simplest so let's tackle that. First add a dependency on the futures crate to frontend/Cargo.toml:

[dependencies]
mytodo = { path = ".." }
futures = "^0.1.28"

and then add a use in frontend/src/lib.rs:

use futures::Future;

To fix the second error we have to dive into what fetch_json_data is really doing with the Msg that we give it. What we're really providing (as shown in the api docs) is a FnOnce that takes a ResponseDataResult<T> where the latter is really a type alias for a Result<T, FailReason<T>>. Sheesh, that's a mouthful. But really all we need to provide is an enum member that takes a ResponseDataResult<T> where T is a serde Deserialize. (I think that's simpler. A little bit. How about an example?)

Back near the top of lib.rs, let's remove Msg::Click because we don't need it any more, and add FetchedTasks:

#[derive(Clone, Debug)]
enum Msg {
    FetchedTasks(fetch::ResponseDataResult<JsonApiResponse>),
}

If we try to build now... oh no, we made it worse. Several of the errors are about the newly-missing Click. As an exercise: go through and get rid of those errors -- modify every place there's a reference to Click. It's easy. I'll be here when you're done.

Hint: completely remove the simple_ev call in view, and the entire match arm in update.

Ok now when we build we're down to just two compile errors. Both of these are missing structs that we defined earlier in the backend crate. It seems like the right thing to do would be to add a dependency on the backend crate (you can try it and see what happens).

However, that is not the right thing. It is in fact the wrong thing. The biggest and most immediately obvious reason is that the backend pulls in dependencies that won't even build for wasm. The second is really kind of the same reason: we don't want to be forced to build those extra dependencies, and while certain techniques can be used to keep dependencies that we're not actually using out of our final package, we don't want to risk bloating our package with a bunch of unneeded stuff.

A better solution is to simply move the structs up into the root of our project.

We need serde, so add it to Cargo.toml:

[dependencies]
serde = { version = "1.0", features = ["derive"] }

Create a new file src/lib.rs:

#[macro_use]
extern crate serde;

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Task {
    pub id: i32,
    pub title: String,
}

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct JsonApiResponse {
    pub data: Vec<Task>,
}

This is pretty straightforward: we just define the two structs we need, deriving from serde so that we have bidirectional serialization, and from Clone and Debug so we can have that easily on our Msg type.

Then we can modify our backend REST API to use the new structs. The backend needs to add the root crate as a dependency in backend/Cargo.toml:

[dependencies]
mytodo = { path = ".." }

And in backend.rs we need to remove our existing definition of JsonApiResponse, remove the use of backend::db::Task, and add a use from mytodo:

use mytodo::JsonApiResponse;

With this change, the backend almost builds. Unfortunately, almost doesn't count with compilers. This error is interesting:

error[E0308]: mismatched types
  --> backend/src/bin/backend.rs:21:28
   |
21 |         response.data.push(task);
   |                            ^^^^ expected struct `mytodo::Task`, found struct `backend::db::models::Task`
   |
   = note: expected type `mytodo::Task`
              found type `backend::db::models::Task`

Our loop is trying to push a database-task, but our response object wants an api-task. Obviously we should get rid of the db::models::Task struct and just have it use the mytodo::Task struct instead, right? The way it is now is repetitive, and that violates DRY, and we want to stay DRY!

Well, let's think about how the different pieces are potentially going to change. We might enhance our application in many different ways. Some of those ways might change our database schema -- which will require changes to db::models structs. We would like to avoid being forced to change our REST API models every time our database changes.

Right now it seems like it's repetitive, but that's only because our task model is ultra-simple. If our app grows new features it's very likely we will need two different models, so we will keep them separate. And since they're separate, we need to manually convert from db_task into api_task in our loop over the query:

    for db_task in query_task(&conn) {
        let api_task = mytodo::Task {
            id: db_task.id,
            title: db_task.title,
        };
        response.data.push(api_task);
    }
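
If that conversion starts spreading around the codebase, one tidy option (just a sketch -- we don't need it here) is a From impl. It has to live in the backend library (say, backend/src/db/mod.rs, which already depends on mytodo) rather than in the binary, because the orphan rule requires one of the two types to be local to the implementing crate:

// sketch for backend/src/db/mod.rs
impl From<models::Task> for mytodo::Task {
    fn from(t: models::Task) -> Self {
        mytodo::Task {
            id: t.id,
            title: t.title,
        }
    }
}

With that in place, the loop body shrinks to response.data.push(db_task.into());.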

All right! Now our backend builds cleanly, and there's only one more (easy) error to fix up in the frontend!

error[E0004]: non-exhaustive patterns: pattern `FetchedTasks` of type `Msg` is not handled
  --> frontend/src/lib.rs:25:11
   |
20 | / enum Msg {
21 | |     FetchedTasks(fetch::ResponseDataResult<JsonApiResponse>),
   | |     ------------ variant not covered
22 | | }
   | |_- `Msg` defined here
...
25 |       match msg {
   |             ^^^
   |
   = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms

We just need to handle Msg::FetchedTasks in update:

fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
    match msg {
        Msg::FetchedTasks(Ok(result)) => {
            // TODO: update the model
        }
        Msg::FetchedTasks(Err(reason)) => {
            log!(format!("Error fetching: {:?}", reason));
            orders.skip();
        }
    }
}

Here we pattern-match on the Ok or Err of the Result, logging the latter to the console. (In production we'd probably want to notify the user or retry the operation.) If we get an Ok then the fetch has succeeded and we should update the model so we can render the tasks in the DOM. But we're stubbing it now so we can finally have a successful build after that long refactoring.

A Side Note On Refactoring and Testing

In retrospect, we could have made that refactoring in smaller, safer chunks if we had known in advance how many things we were going to have to change. A better approach would have been to move the structs up to the root first, then fix up the backend, add the fetched message, and finally remove the click message. Each of those steps could have been built and tested separately. But since we're stumbling through the changes a little bit we let the compiler guide us through the refactorings to a large extent.

Also, refactorings like this get scary as the app gets bigger. It's still small enough that we can easily test the backend, the frontend, and the end-to-end by hand. If we are serious about this at all, we'd definitely want some unit and integration tests wrapped around our app. I've left tests out of the scope of this book for the sake of brevity, clarity, and forward momentum, but they're a critical piece of any development effort, and they make up a big part of an upcoming book ("Engineering Rust Web Applications") that will be the big brother this little guide always wanted.

Displaying the Tasks

We have some tasks we want to display. Our displaying machinery lives in the view function. We need a way to get the tasks from our update function (where we get the fetch result) to the view function (where we make the nodes). The one thing these have in common is our Model. Let's remove the now-useless Direction struct and replace it with a Vec<Task>. We have to touch a few different places, so here's the whole frontend/src/lib.rs:

#[macro_use]
extern crate seed;
use futures::Future;
use seed::prelude::*;
use seed::{fetch, Request};

use mytodo::{JsonApiResponse, Task};

struct Model {
    tasks: Vec<Task>,
}

#[derive(Clone, Debug)]
enum Msg {
    FetchedTasks(fetch::ResponseDataResult<JsonApiResponse>),
}

fn update(msg: Msg, model: &mut Model, _orders: &mut impl Orders<Msg>) {
    match msg {
        Msg::FetchedTasks(Ok(mut result)) => {
            model.tasks.clear();
            model.tasks.append(&mut result.data);
        }
        Msg::FetchedTasks(Err(reason)) => {
            log!(format!("Error fetching: {:?}", reason));
        }
    }
}

fn view(model: &Model) -> impl View<Msg> {
    let tasks: Vec::<Node<Msg>> = model.tasks.iter().map(|t| {
        li![{t.title.clone()}]
    }).collect();

    h1![
        {"Tasks"},
        ul![
            tasks,
        ],
    ]
}

fn fetch_drills() -> impl Future<Item = Msg, Error = Msg> {
    Request::new("http://localhost:8000/tasks/").fetch_json_data(Msg::FetchedTasks)
}

fn init(_url: Url, orders: &mut impl Orders<Msg>) -> Model {
    orders.perform_cmd(fetch_drills());
    Model {
        tasks: vec![],
    }
}

#[wasm_bindgen(start)]
pub fn render() {
    seed::App::build(init, update, view).finish().run();
}

Notice that we've deleted the Direction struct and replaced its presence in Model with a vector of tasks.

In update we set the model to contain the vec from the result.

In view, the h1 now just contains a heading "Tasks" and we've set up a ul underneath it. At the top of the function we're mapping over the tasks in the model to create some li elements that we can hang off the ul.

Everything builds cleanly! Let's test it. In one window, start the backend:

$ cargo run -p backend --bin backend

In another window, serve the frontend using a convenient rust crate that simply serves the current directory from a small web server:

$ cd frontend
$ cargo install microserver
$ microserver

Browse to http://localhost:9090/ and... and... nothing!? So disappointing.

We can see in the window that's running our backend that a GET request came in. So we know something is happening.

Let's open developer tools (in Chrome, Ctrl+Shift+I) and look first at the console (whole lot of nothing) and then at the network tab to see what it's doing with the json request to our backend. Hmm, the tasks request is showing as red, that can't be good.

Ahh, we forgot about CORS (Cross-Origin Resource Sharing). Since our REST API is being served from port 8000 and our main page is being served from 9090, they're two separate origins, and we have to make sure our backend is returning the proper CORS headers.

This is a bad news / good news thing. Great news, really. The bad news is that we have a little more work to do. The great news is that it's really simple to get it working for our small example.

Adding CORS Support in the Backend

Diving right in, the support we need for CORS is in the rocket_cors crate, so add it to the dependencies section of backend/Cargo.toml:

[dependencies]
rocket_cors = { version = "0.5.0", default-features = false }

And at the top of backend.rs we need to use rocket_cors:

use rocket_cors::{AllowedHeaders, AllowedOrigins, Error};

Then add the CORS options to main:

fn main() -> Result<(), Error> {
    let allowed_origins = AllowedOrigins::all();

    let cors = rocket_cors::CorsOptions {
        allowed_origins,
        allowed_headers: AllowedHeaders::some(&["Authorization", "Accept"]),
        allow_credentials: true,
        ..Default::default()
    }
    .to_cors()?;

    rocket::ignite()
        .mount("/", routes![tasks_get])
        .attach(cors)
        .launch();

    Ok(())
}

It's worth noting that we could be more restrictive with our options here -- for our purposes today we just want to open it up, but for a public-facing application we would want to carefully examine the options in the rocket_cors docs.

Now we can try serving the backend and frontend in separate windows again, and refreshing our browser.

Victory! You should now see the two tasks we defined earlier. If you open yet another window, run:

$ cargo run -p backend --bin todo new celebrate

and then refresh the browser, you will see the new task.

Frontend Wrap-Up

We've built a functional web application, from the bottom up, almost entirely in rust!

But ... there are also an awful lot of things that have been left out of this guide:

  • The frontend doesn't do any kind of user input. None. Nada.
    • (I am going to fix this in an update. Stay tuned.)
  • The frontend doesn't do any kind of routing -- no browsing to multiple pages within the app, no pagination, etc.
  • There are ZERO tests. I feel kind of dirty.
  • Also, ZERO attempts at error handling. Any component will panic if anything goes the least bit wrong.
  • I only scratched the surface of cargo make.
  • There's nary a mention of continuous integration.
  • Nothing about app deployment, upgrades, troubleshooting/debugging, or maintenance.
  • The data model in the database and REST API is trivial; there are interesting ways that this could be made more instructive.
  • It's out of compliance with the JSON API spec.
  • No users or authentication, or security of any sort.
  • No web-side external plugins (e.g. npm packages).
  • Nothing about interfacing directly with javascript.

But ... my intent for the scope of this guide was to show how to put together an all-rust stack for getting an application skeleton built from end-to-end. In a short guide. With the exception of user input in the web ui, I think I've done that.

Those missing pieces are important! I have a rough outline for a full-length follow-up to this book ("Engineering Rust Web Applications") that will cover those topics and more, with a more rigorous approach, but similar style. It won't be based around a todo app -- it's tentatively a library management system.

I'd love to hear your feedback and/or corrections. Please email info at erwabook.com or open an issue at https://gitlab.com/bstpierre/irwa/.

To get updates on this book, email-only draft chapters of "Engineering Rust Web Applications", or other Rust articles that land on this site, subscribe on the book's website.

Full-Stack Exercise

With all of that out of the way, let's try one last exercise. This is going to be a feature that slices all the way up the stack: due dates.

There are two ways to do this.

The Right Way™ -- store a proper date type in the database, carry it up through the models as a date type, and convert it to a string at the last minute before presenting it to the user. (Obviously this is the only way to do it for a real app.) Give yourself 3 stars if you tackle it this way.
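
For the db model, the Right Way might start with something like this sketch -- assuming a migration adds a nullable DATE column and Cargo.toml enables diesel's chrono feature (both assumptions here):

use chrono::NaiveDate;

#[derive(Queryable)]
pub struct Task {
    pub id: i32,
    pub title: String,
    // a NULL due date in the database maps to None here
    pub due_date: Option<NaiveDate>,
}

Then convert to a string only at the last minute, e.g. task.due_date.map(|d| d.to_string()), before presenting it to the user.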

The Easy Way™ -- if you're just interested in seeing all the places you have to touch to make a feature like this work, just store it as a string in the db, and bring it up through the stack as a string that you can show directly to the user. As a bonus, if you implement this method, you can have a task like "exercise more" or "start a budget" with a due date of "tomorrow" and then you never have to follow through! (If you do this and try to use it for real, you're going to end up filled with self loathing. "Fix the ****** app" will be one of your tasks, with a due date of "yesterday".)