Chapter 6 Introduction to dplyr

6.1 Intro

dplyr is a package for data manipulation, developed by Hadley Wickham and Romain Francois. It is built to be fast, highly expressive, and open-minded about how your data is stored. It is installed as part of the tidyverse meta-package and, as a core package, it is among those loaded via library(tidyverse).

dplyr’s roots are in an earlier package called plyr, which implements the “split-apply-combine” strategy for data analysis (Wickham 2011b). Where plyr covers a diverse set of inputs and outputs (e.g., arrays, data frames, lists), dplyr has a laser-like focus on data frames or, in the tidyverse, “tibbles”. dplyr is a package-level treatment of the ddply() function from plyr, because “data frame in, data frame out” proved to be so incredibly important.

Have no idea what I’m talking about? Not sure if you care? If you use these base R functions: subset(), apply(), [sl]apply(), tapply(), aggregate(), split(), do.call(), with(), within(), then you should keep reading. Also, if you use for() loops a lot, you might enjoy learning other ways to iterate over rows or groups of rows or variables in a data frame.

6.1.2 Say hello to the gapminder tibble

The gapminder data frame is a special kind of data frame: a tibble.
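A quick look (a minimal sketch, assuming the tidyverse and the gapminder data package are installed):

    library(tidyverse)   # loads dplyr, tibble, and friends
    library(gapminder)   # provides the gapminder data

    gapminder            # prints compactly: 1,704 rows, 6 variables, only the first rows shown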

Its tibble-ness is why we get nice compact printing. For a reminder of the problems with base data frame printing, go type iris in the R Console or, better yet, print a data frame to screen that has lots of columns.

Note how gapminder’s class() includes tbl_df; the “tibble” terminology is a nod to this.
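For example (with gapminder loaded as above):

    class(gapminder)
    #> [1] "tbl_df"     "tbl"        "data.frame"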

There will be some functions, like print(), that know about tibbles and do something special. There will be others that do not, like summary(), in which case the regular data frame treatment happens, because every tibble is also a regular data frame.

To turn any data frame into a tibble use as_tibble():
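A sketch, using the built-in iris data frame:

    as_tibble(iris)   # now prints as a 150 x 5 tibble instead of flooding the console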

6.2 Think before you create excerpts of your data …

If you feel the urge to store a little snippet of your data:
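For instance, a hypothetical example of the pattern:

    snippet <- subset(gapminder, country == "Canada")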

Stop and ask yourself …

Do I want to create mini datasets for each level of some factor (or unique combination of several factors) … in order to compute or graph something?

If YES, use proper data aggregation techniques or faceting in ggplot2 – don’t subset the data. Or, more realistically, only subset the data as a temporary measure while you develop your elegant code for computing on or visualizing these data subsets.

If NO, then maybe you really do need to store a copy of a subset of the data. But seriously consider whether you can achieve your goals by simply using the subset = argument of, e.g., the lm() function, to limit computation to your excerpt of choice. Lots of functions offer a subset = argument!
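A sketch of the subset = approach (the model formula here is purely illustrative):

    # fit only to one country's rows, with no standalone excerpt created
    canada_fit <- lm(lifeExp ~ year, data = gapminder, subset = country == "Canada")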

Copies and excerpts of your data clutter your workspace, invite mistakes, and sow general confusion. Avoid whenever possible.

Reality can also lie somewhere in between. You will find the workflows presented below can help you accomplish your goals with minimal creation of temporary, intermediate objects.

6.3 Use filter() to subset data row-wise

filter() takes logical expressions and returns the rows for which all are TRUE.
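For example (the thresholds and countries here are just illustrations):

    filter(gapminder, lifeExp < 29)
    filter(gapminder, country == "Rwanda", year > 1979)
    filter(gapminder, country %in% c("Rwanda", "Afghanistan"))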

Compare with some base R code to accomplish the same things:
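A sketch of base R equivalents to the filter() calls above:

    gapminder[gapminder$lifeExp < 29, ]
    subset(gapminder, country == "Rwanda" & year > 1979)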

Under no circumstances should you subset your data the way I did at first:
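Reconstructed from the discussion below, it looked something like this:

    excerpt <- gapminder[241:252, ]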

Why is this a terrible idea?

  • It is not self-documenting. What is so special about rows 241 through 252?
  • It is fragile. This line of code will produce different results if someone changes the row order of gapminder, e.g. sorts the data earlier in the script.
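For reference, rows 241 through 252 of gapminder hold Canada’s data, so the same subset can be expressed as:

    filter(gapminder, country == "Canada")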

This call explains itself and is fairly robust.

6.4 Meet the new pipe operator

Before we go any further, we should exploit the new pipe operator that the tidyverse imports from the magrittr package by Stefan Bache. This is going to change your data analytical life. You no longer need to enact multi-operation commands by nesting them inside each other, like so many Russian nesting dolls. This new syntax leads to code that is much easier to write and to read.

Here’s what it looks like: %>%. The RStudio keyboard shortcut: Ctrl+Shift+M (Windows), Cmd+Shift+M (Mac).

Let’s demo, then I’ll explain.
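    gapminder %>% head()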

This is equivalent to head(gapminder). The pipe operator takes the thing on the left-hand side and pipes it into the function call on the right-hand side – literally, drops it in as the first argument.

Never fear, you can still specify other arguments to this function! To see the first 3 rows of gapminder, we could say head(gapminder, 3) or this:
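    gapminder %>% head(3)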

I’ve advised you to think “gets” whenever you see the assignment operator, <-. Similarly, you should think “then” whenever you see the pipe operator, %>%.

You are probably not impressed yet, but the magic will soon happen.

6.5 Use select() to subset the data on variables or columns.

Back to dplyr….

Use select() to subset the data on variables or columns. Here’s a conventional call:
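    select(gapminder, year, lifeExp)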

And here’s the same operation, but written with the pipe operator and piped through head():
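    gapminder %>%
      select(year, lifeExp) %>%
      head(4)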

Think: “Take gapminder, then select the variables year and lifeExp, then show the first 4 rows.”

6.7 Pure, predictable, pipeable

We’ve barely scratched the surface of dplyr, but I want to point out key principles you may start to appreciate. If you’re new to R or “programming with data”, feel free to skip this section and move on.

dplyr’s verbs, such as filter() and select(), are what’s called pure functions. To quote from Wickham’s Advanced R Programming book (2015):

The functions that are the easiest to understand and reason about are pure functions: functions that always map the same input to the same output and have no other impact on the workspace. In other words, pure functions have no side effects: they don’t affect the state of the world in any way apart from the value they return.

In fact, these verbs are a special case of pure functions: they take the same flavor of object as input and output. Namely, a data frame or one of the other data receptacles dplyr supports.

And finally, the data is always the very first argument of the verb functions.

This set of deliberate design choices, together with the new pipe operator, produces a highly effective, low friction domain-specific language for data analysis.
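A small sketch of these properties working together (the particular pipeline is just for demonstration):

    gapminder %>%                       # data frame in ...
      filter(continent == "Asia") %>%   # ... data frame out, so the next verb can chain
      select(country, year, lifeExp)    # the data is always the first argument, supplied by %>%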

Go to the next chapter, dplyr functions for a single dataset, for more dplyr!

6.8 Resources

dplyr official stuff:

  • Package home on CRAN – note there are several vignettes, including Introduction to dplyr.
  • Package home on GitHub.

RStudio Data Transformation Cheat Sheet, covering dplyr. Remember you can get to these via Help > Cheatsheets.

Data transformation chapter of R for Data Science (Wickham and Grolemund 2016).

Excellent slides on pipelines and dplyr by TJ Mahr, from a talk given to the Madison R Users Group.

Blog post Hands-on dplyr tutorial for faster data manipulation in R by Data School, which includes a link to an R Markdown document and links to videos.

Chapter 15: cheatsheet I made for dplyr join functions (not relevant yet but soon).