The scientific process is naturally incremental, and many projects start life as random notes, some code, then a manuscript, and eventually everything is a bit mixed together.
Managing your projects in a reproducible fashion doesn't just make your science reproducible, it makes your life easier.
— Vince Buffalo (@vsbuffalo) April 15, 2013
Most people tend to organize their projects like this:
There are many reasons why we should ALWAYS avoid this:
A good project layout will ultimately make your life easier:
Fortunately, there are tools and packages which can help you manage your work effectively.
One of the most powerful and useful aspects of RStudio is its project management functionality. We'll be using this today to create a self-contained, reproducible project.
The first thing we're going to do is to install a third-party package, packrat. This allows RStudio to create self-contained projects: any further packages you download will be contained within their respective projects. This is really useful, as different versions of packages can change results as new knowledge is gained. This allows you to easily keep track of the versions used for your analyses.
Packages can be installed using the install.packages function:
install.packages("packrat")
Now we're going to create a new project in RStudio:
Now when we start R in this project directory, or open this project with RStudio, all of our work on this project will be entirely self-contained in this directory. By installing packrat and telling RStudio to use packrat with this project, any third-party packages will be installed in a separate library in the packrat/ subdirectory of our project. This means we don't have to worry about package versions changing, especially when returning to a project after a long period of time (for example when writing up your thesis!).
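If you prefer the console to RStudio's dialogs, packrat can also be switched on for an existing project with packrat::init(). A minimal sketch, assuming a made-up project path:

# initialise packrat for the given project directory
# (run with no arguments from inside the project instead, if you prefer)
packrat::init("~/projects/my_project")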
Any libraries you already have installed outside of your project will need to be reinstalled in each packrat project.
Packrat will also analyse your script files and warn you if you're using any libraries not installed and managed inside your project. This is useful if you reuse code between projects.
Packrat also allows you to easily bundle up a project to share with someone else.
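For example, a minimal sketch (the output file name is just an illustration):

# create a tarball containing the project sources, data, and private library
packrat::bundle(file = "~/my_project.tar.gz")
# a collaborator can then restore it elsewhere with packrat::unbundle()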
RStudio has a more detailed packrat tutorial available on its website.
Now let's load the packrat library:
library("packrat")
Here we've called the function library and used it to load the packrat package into our local namespace (our interactive R session). This means all of its functions are now available to us.
The main function you'll encounter in packrat is the status function:
packrat::status()
Up to date.
Here I've put the name of the library in front of its function, separated by ::. This explicitly tells R to call the function from that library. This can make your code clearer (status is a fairly generic function name and might be used by other packages), and it is useful when two packages have functions with the same name (in which case the order of library loading becomes important), or when you've written your own function or variable with the same name (you should try to avoid this).
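As a minimal illustration, the stats and dplyr packages both happen to define a function called filter, so the :: prefix makes it unambiguous which one you mean (this assumes dplyr is installed):

stats::filter(1:10, rep(1/3, 3))  # moving-average filter from the stats package
dplyr::filter(mtcars, mpg > 30)   # row subsetting from the dplyr package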
You'll want to run packrat::status() periodically (after installing libraries and writing new code) to make sure your project is still self-contained.
Although there is no "best" way to lay out a project, there are some general principles to adhere to that will make project management easier:
This is probably the most important goal of setting up a project. Data are typically time consuming and/or expensive to collect. Working with them interactively (e.g., in Excel), where they can be modified, means you are never sure of where the data came from, or how they have been modified since collection. It is therefore a good idea to treat your data as "read-only".
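On Linux and OS X you can even enforce this from within R by removing write permission on the raw files; a sketch with a hypothetical file name (on Windows this has limited effect):

# make the raw data file read-only so it can't be changed accidentally
Sys.chmod("data/gapminder-raw.csv", mode = "0444")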
In many cases your data will be "dirty": it will need significant preprocessing to get into a format R (or any other programming language) will find useful. This task is sometimes called "data munging". I find it useful to store these scripts in a separate folder, and create a second "read-only" data folder to hold the "cleaned" data sets.
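A cleaning script might look something like the sketch below; the file names and the lifeExp column are assumptions for illustration:

# munging/01-clean-data.R: read the raw data, drop incomplete rows,
# and write the result into a separate read-only "cleaned" folder
raw <- read.csv("data/gapminder-raw.csv", stringsAsFactors = FALSE)
cleaned <- raw[!is.na(raw$lifeExp), ]
write.csv(cleaned, "data-cleaned/gapminder.csv", row.names = FALSE)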
Anything generated by your scripts should be treated as disposable: it should all be able to be regenerated from your scripts.
There are lots of different ways to manage this output. I find it useful to have an output folder with different sub-directories for each separate analysis. This makes things easier later, as many of my analyses are exploratory and don't end up being used in the final project, and some of the analyses get shared between projects.
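A sketch of this layout, using made-up analysis names:

# one disposable sub-directory per analysis
dir.create("output/01-exploratory-plots", recursive = TRUE, showWarnings = FALSE)
dir.create("output/02-model-fits", recursive = TRUE, showWarnings = FALSE)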
The next thing we're going to do is to install the third-party package, ProjectTemplate. This package will set up an ideal directory structure for project management. This is very useful as it enables you to have your analysis pipeline/workflow organised and structured. Together with the default RStudio project functionality and Git, you will be able to keep track of your work as well as share it with collaborators.
install.packages("ProjectTemplate")
library(ProjectTemplate)
create.project("../my_project", merge.strategy = "allow.non.conflict")
For more information on ProjectTemplate and its functionality, visit the ProjectTemplate home page.
The most effective way I find to work in R is to play around in the interactive session, then copy commands across to a script file when I'm sure they work and do what I want. You can also save all the commands you've entered using the history command, but I don't find it useful because when I'm typing it's 90% trial and error.
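For completeness, here is a quick sketch of the history functions (not every R front-end supports saving history, and the file name here is just an example):

history(max.show = 25)               # show the 25 most recent commands
savehistory("exploration.Rhistory")  # write the session history to a file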
When your project is new and shiny, the script file usually contains many lines of directly executed code. As it matures, reusable chunks get pulled into their own functions. It's a good idea to separate these into separate folders; one to store useful functions that you'll reuse across analyses and projects, and one to store the analysis scripts.
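A common pattern, with hypothetical file names, is to keep helpers in a functions folder and source them at the top of each analysis script:

# at the top of an analysis script: load reusable helpers
source("functions/plotting-helpers.R")
source("functions/data-helpers.R")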
You may find yourself using data or analysis scripts across several projects. Typically you want to avoid duplication, both to save space and to avoid having to update code in multiple places.
In this case I find it useful to make "symbolic links", which are essentially shortcuts to files somewhere else on a filesystem. On Linux and OS X you can use the ln -s command, and on Windows you can either create a shortcut or use the mklink command from the Windows terminal.
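You can also create symbolic links from within R itself; a sketch with made-up paths (on Windows this may require extra permissions):

# link a shared data file into this project's data/ folder
file.symlink(from = "~/shared-data/gapminder.csv", to = "data/gapminder.csv")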
Now that we have a good directory structure, we will save the data file in the data/ directory.
Download the gapminder data from https://github.com/resbaz/r-novice-gapminder-files. Use the "Download ZIP" option, located in the right-hand side menu (last option), to download the .zip file to your downloads folder. Unzip it and move the files into the data/ folder within your project. We will load and inspect these data later today.
We will also set up our project to integrate with git, putting it under version control. RStudio has a nicer interface to git than the shell, but it is very limited in what it can do, so you will occasionally find yourself needing to use the shell. Let's go through and make an initial commit of our template files.
The workspace/history pane has a tab for "Git". We can stage each file by checking the box: you will see a green "A" next to staged files and folders, and yellow question marks next to files or folders git doesn't know about yet. RStudio also nicely shows you the difference between files from different commits.
Generally you do not want to version disposable output (or read-only data). You should modify the .gitignore file to tell git to ignore these files and directories.
Use packrat to install the packages we'll be using later:
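For example (the package names here are placeholders for whatever the later lessons need):

install.packages(c("ggplot2", "plyr", "gapminder"))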
Note: if you run packrat::status() it will warn you that these libraries are unnecessary because they're not used in any project code.
Modify the .gitignore file to contain cache/, graphs/, reports/ and logs/ so that disposable output isn't versioned.
Add the newly created folders to version control using the git interface.