Create a Database Access Layer

For the convenience of our upper-layer application, we'll write a set of thin wrappers around the diesel code and expose them as a module.

As one way of testing these wrappers, we'll also build an executable with subcommands that exercise each of the wrapper functions. Then this suite of subcommands can be used as a debugging/troubleshooting/experimentation tool.

Inserting a Task

Our database is kind of sad. Just one lonely table, and no signs of a row to be found anywhere. Let's fix this situation -- but first, there are two little surprises waiting for you.

You may have already noticed that you have diesel.toml in your directory:

# For documentation on how to configure this file,
# see diesel.rs/guides/configuring-diesel-cli

[print_schema]
file = "src/schema.rs"

This was generated when we ran diesel setup. Note that it refers to src/schema.rs. Let's take a peek at that:

table! {
    task (id) {
        id -> Integer,
        title -> Text,
    }
}

We got this for free when we ran our schema migration -- it will automatically get updated every time we run a migration (in either direction). The table! macro that you see there generates a bunch of code that we can use when we work with the tables in our database.

The next thing we need is just a little bit of glue. Before we start typing, we need to think about where we want to keep this code. Right now all we have is a binary crate, but we want to build up a little library. Let's plan on having the following module structure:

mytodo
  +-- db
  |    +-- models
  |    +-- schema
  +-- rest

We'll create the db module now, and the rest later. (Pun intended. I'm so sorry.) We have to let Rust know we're making the db module by creating a src/lib.rs:

#[macro_use]
extern crate diesel;

pub mod db;

That also pulls in diesel's macros -- our code will heavily rely on these.

And we want to get diesel to make changes to schema.rs in a new location, so let's change diesel.toml:

# For documentation on how to configure this file,
# see diesel.rs/guides/configuring-diesel-cli

[print_schema]
file = "src/db/schema.rs"

We can test that we've got diesel set up correctly by removing the existing schema and rerunning the migration to generate a new one:

$ rm src/schema.rs
$ diesel migration redo
Rolling back migration 2019-08-19-023055_task
Running migration 2019-08-19-023055_task
$ ls src/db/
schema.rs

That little bit of glue I mentioned is the models module shown in the tree above. This is where we define some types that we can use for reading and writing the database. And now we're ready to write some code. Create src/db/models.rs:

use super::schema::task;

#[derive(Insertable)]
#[table_name = "task"]
pub struct NewTask<'a> {
    pub title: &'a str,
}

By deriving Insertable for our struct and setting table_name to match what we've got in our schema, diesel will automagically give us code to perform database inserts -- in src/db/mod.rs we can add our function to take advantage of this:

use diesel::{prelude::*, sqlite::SqliteConnection};

pub mod models;
pub mod schema;

pub fn establish_connection() -> SqliteConnection {
    let db = "./testdb.sqlite3";
    SqliteConnection::establish(db)
        .unwrap_or_else(|_| panic!("Error connecting to {}", db))
}

pub fn create_task<'a>(connection: &SqliteConnection, title: &'a str) {
    let task = models::NewTask { title };

    diesel::insert_into(schema::task::table)
        .values(&task)
        .execute(connection)
        .expect("Error inserting new task");
}

The first line pulls in things from diesel that we need. The next two lines expose our models and schema submodules. Then there's a convenience function to create an SqliteConnection to our database. (Note: anything more serious than a toy application like we're building will need a better mechanism for setting the path to the database!)

And finally, the create_task function is almost anti-climactic in its simplicity. We create an object from the struct we declared in models. Diesel's insert_into takes the table from our schema and gives back an object we can add values to, which in turn gives back an object we can execute.

In a real application, we'd do a better job of error handling. Also note that we're intentionally throwing away the return value, which would be a usize representing the number of rows created. Other database engines (e.g. Postgres) can make the id of the just-inserted row available in the query result, but we don't get that with SQLite.
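For comparison, here's a sketch (not part of our app, and untested) of what that could look like on Postgres, where diesel's get_result takes advantage of INSERT ... RETURNING to hand back the inserted row. It assumes a PgConnection and the Queryable Task model we'll define in the querying section below -- it won't compile against our SQLite setup:

```rust
// Postgres-only sketch: RETURNING lets diesel give us the new row, id and all.
// Assumes `use diesel::pg::PgConnection;` and the Queryable Task model.
pub fn create_task_returning(connection: &PgConnection, title: &str) -> models::Task {
    let task = models::NewTask { title };

    diesel::insert_into(schema::task::table)
        .values(&task)
        .get_result(connection) // returns the freshly inserted row
        .expect("Error inserting new task")
}
```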

We can make sure we've got everything written correctly by building with cargo build. Expect to see a couple of dead_code warnings about unused functions -- we'll fix that soon by putting them to use. But first we need a hook to hang that usage on.

Create a Development Tool

Earlier I mentioned a tool that we could use to read from and write to the database. We'll create the infrastructure for that now. First let's create a new binary:

$ mkdir src/bin/

Here's src/bin/todo.rs, piece by piece. First we need std::env so we can process the command line arguments, and we need the db functions we just wrote.

use std::env;
use mytodo::db::{create_task, establish_connection};

Then we've got a function to print help, in case the user provides some unparseable set of arguments.

fn help() {
    println!("subcommands:");
    println!("    new <title>: create a new task");
}

In our main function we just match the first argument against our set of possible subcommands and dispatch the remaining arguments to a handler, calling help if there's no argument or we can't make sense of it.

fn main() {
    let args: Vec<String> = env::args().collect();

    if args.len() < 2 {
        help();
        return;
    }

    let subcommand = &args[1];
    match subcommand.as_ref() {
        "new" => new_task(&args[2..]),
        _ => help(),
    }
}

And finally we have our subcommand handler that opens a database connection and passes the title down to the db layer.

fn new_task(args: &[String]) {
    if args.is_empty() {
        println!("new: missing <title>");
        help();
        return;
    }

    let conn = establish_connection();
    create_task(&conn, &args[0]);
}

Now we can test it!

$ cargo run --bin todo new 'do the thing'
   Compiling mytodo v0.1.0 (/projects/rust/mytodo)
    Finished dev [unoptimized + debuginfo] target(s) in 1.47s
     Running `target/debug/todo new 'do the thing'`
$ cargo run --bin todo new 'get stuff done'
    Finished dev [unoptimized + debuginfo] target(s) in 0.01s
     Running `target/debug/todo new 'get stuff done'`
$ echo 'select * from task;' | sqlite3 testdb.sqlite3
1|do the thing
2|get stuff done

Be careful with that last command -- if you get a little SQL on your hands there, you should go wash up before it stains. When you come back we'll make a safer way to query tasks.

Querying Tasks

When we wrote our insertion function, we used a struct that derives Insertable. Maybe you won't find it surprising that we'll want a struct deriving Queryable to perform queries. Add this to src/db/models.rs:

#[derive(Queryable)]
pub struct Task {
    pub id: i32,
    pub title: String,
}

It's worth noting -- because it's such an attractive thing to want -- that we can't simply use one struct with both Queryable and Insertable derived on it here. That would require us to set the id when we perform the insert, and we almost always want to let the database engine automatically assign the id.

In src/db/mod.rs we can add a function that returns a Vec of this new model struct when querying the task table:

pub fn query_task(connection: &SqliteConnection) -> Vec<models::Task> {
    schema::task::table
        .load::<models::Task>(connection)
        .expect("Error loading tasks")
}

Add a new subcommand handler to src/bin/todo.rs:

fn show_tasks(args: &[String]) {
    if !args.is_empty() {
        println!("show: unexpected argument");
        help();
        return;
    }

    let conn = establish_connection();
    println!("TASKS\n-----");
    for task in query_task(&conn) {
        println!("{}", task.title);
    }
}

Adding the match arm in main and a help output line is a simple exercise for the reader.
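If you want to check your work, a sketch of those additions to src/bin/todo.rs (reusing the handler names from above) might look something like this:

```rust
fn help() {
    println!("subcommands:");
    println!("    new <title>: create a new task");
    println!("    show: list tasks");
}

fn main() {
    let args: Vec<String> = env::args().collect();

    if args.len() < 2 {
        help();
        return;
    }

    let subcommand = &args[1];
    match subcommand.as_ref() {
        "new" => new_task(&args[2..]),
        "show" => show_tasks(&args[2..]),
        _ => help(),
    }
}
```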

Now we can test both our query function and our insertion function without getting too close to any SQL.

$ cargo run --bin todo show
    Finished dev [unoptimized + debuginfo] target(s) in 0.01s
     Running `target/debug/todo show`
TASKS
-----
do the thing
get stuff done

Database Layer Wrap-Up

We've written a very simple (i.e. absolute minimum) data abstraction layer, and we've exercised it by writing a CLI tool for poking and peeking the database.

Our data model is so simple the app isn't actually capable of being useful (there's no mechanism to mark a task done, or even to delete a task). Exercises are suggested below to fix this situation.

We are completely missing:

  • comments
  • documentation (outside of help)
  • tests
  • continuous integration

and other quality-maintenance kind of stuff. In a production app we'd definitely add these as we go along.

However, even with these shortcomings we have enough infrastructure upon which to build our next layer, the REST API. Try your hand at the exercises, and we'll work on the REST API in Chapter @ref(create-a-rest-api-layer).

Database Layer Exercises

Add a "done" column to the table

Write and run a migration, update the models, update the insertion code to set the task as pending (not done) on creation, and update the show subcommand to show done/pending status.

Add a subcommand to mark tasks done. You'll need to decide whether to let the user provide the title or the id.

If the former, you may need to add a db layer function that looks a task up by title, and call it from the subcommand so that you can pass the id to the other new db layer function you'll add that updates the record to set done to true.

If the latter, you should probably modify the show subcommand to display ids.

This sounds like a lot of steps but they're all fairly simple. Check out the diesel guides for help with queries that filter, and with update operations.
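To give you a head start on the update half, here's a hedged sketch (untested; the function name and column are my own suggestions) of what could go in src/db/mod.rs, assuming your migration has added a done column of type Bool to the task table:

```rust
// Hypothetical helper -- assumes a `done -> Bool` column added by your
// migration, and that `use diesel::{prelude::*, sqlite::SqliteConnection};`
// is already at the top of src/db/mod.rs.
pub fn mark_done(connection: &SqliteConnection, id: i32) {
    use schema::task::dsl;

    diesel::update(dsl::task.filter(dsl::id.eq(id)))
        .set(dsl::done.eq(true))
        .execute(connection)
        .expect("Error updating task");
}
```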

Add a subcommand to delete a task

As above, you'll need to decide whether to have the user provide the id or the title. Then add a db layer function to delete a task, and a subcommand to call it.