Real world datasets are messy. There is no way around it: datasets have "holes" (missing data), the number of formats in which data can be stored is endless, and the best structure to share data is not always the best one to analyze it, hence the need to munge it. As has been correctly pointed out in many outlets (e.g.), much of the time spent in what is called (Geo-)Data Science is devoted not to sophisticated modeling and insight, but to much more basic and less exotic tasks: obtaining data, processing them, turning them into a shape that makes analysis possible, and exploring them to get to know their basic properties.
For how labor intensive and relevant this aspect is, there is surprisingly little published on patterns, techniques, and best practices for quick and efficient data cleaning, manipulation, and transformation. In this session, you will use a few real world datasets and learn how to load them into Python so they can be transformed, manipulated, and analyzed. For this, we will introduce some of the bread and butter of data analysis and scientific computing in Python. These are fundamental tools that are used constantly in almost any task relating to data analysis.
This notebook covers the basics and the content that every student is expected to learn. We use a prepared dataset that saves us much of the more intricate processing that goes beyond the introductory level this session is aimed at. As a companion to this introduction, there is an additional notebook (see link on the website page for Lab 01) that covers how the dataset used here was prepared from raw data downloaded from the internet, and includes some additional exercises you can do if you want to dig deeper into the content of this lab.
In this notebook, we discuss several patterns to clean and structure data properly, including tidying, subsetting, and aggregating; and we finish with some basic visualization. An additional extension presents more advanced tricks to manipulate tabular data.
Before we get our hands data-dirty, let us import all the additional libraries we will need, so we can get that out of the way and focus on the task at hand:
# This ensures visualizations are plotted inside the notebook
%matplotlib inline
import os # This provides several system utilities
import pandas as pd # This is the workhorse of data munging in Python
import seaborn as sns # This allows us to efficiently and beautifully plot
We will be exploring some of the characteristics of the population in Liverpool. To do that, we will use a dataset that contains population counts, split by ethnic origin. These counts are aggregated at the Lower Layer Super Output Area (LSOA from now on). LSOAs are an official Census geography defined by the Office for National Statistics that is small enough to create variation within cities, but large enough to preserve privacy. For that reason, many data products (Census, deprivation indices, etc.) use LSOAs as one of their main geographies.
Let us first set the path to the file where we store the data we will use:
# Important! You need to specify the path to the data in *your* machine
# If you have placed the data folder in the same directory as this notebook,
# you would do:
# f = 'liv_pop.csv'
f = 'data/liv_pop.csv' # Path to file containing the table
IMPORTANT: the path above might look different on your computer. See this introductory notebook for more details about how to set your paths.
Alternatively, you can read this file from its web location too (do not run the following cell if you want to read the data locally or are currently offline):
f = 'http://darribas.org/gds18/content/labs/data/liv_pop.csv'
To read a "comma separated values" (.csv
) file, we can run:
db = pd.read_csv(f, index_col='GeographyCode') # Read the table in
Let us stop for a minute to learn how we have read the file. Here are the main aspects to keep in mind:
- We are using the method read_csv from the pandas library, which we have imported with the alias pd.
- Here, f is the path to the file we want to read.
- The argument index_col is not strictly necessary, but allows us to choose one of the columns as the index of the table. More on indices below.
- We are using read_csv because the file we want to read is in the csv format. However, pandas allows many more formats to be read (and written; just replace read by to! For example, read_csv reads in, to_csv writes out). A full list of formats supported may be found here.

Now we are ready to start playing and interrogating the dataset! What we have at our fingertips is a table that summarizes, for each of the LSOAs in Liverpool, how many people live in each, by the region of the world where they were born. Let us learn a few cool tricks built into pandas that work out of the box with a table like ours.
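As a brief aside, here is a minimal sketch of what index_col saves us from doing manually: reading the table without an index and then promoting a column to be the index afterwards (db_alt is just an illustrative name, and f is assumed to still point to the csv file).
# Equivalent, more manual route: read the file with the default integer index...
db_alt = pd.read_csv(f)
# ...and then set `GeographyCode` as the index explicitly
db_alt = db_alt.set_index('GeographyCode')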
We can check the top (bottom) lines of the table with the methods head (tail). For example, for the top/bottom five lines:
db.head()
db.tail()
We can get a quick overview of the structure of the table (columns, number of observations, data types) with:
db.info()
And a summary of the numerical values in the table with:
db.describe()
Note how the output is also a DataFrame
object, so you can do with it the same things you would with the original table (e.g. writing it to a file).
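For instance, a minimal sketch of writing that summary out to disk (the output filename is just an illustration):
# Write the summary table to a csv file
db.describe().to_csv('liv_pop_summary.csv')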
In this case, the summary might be better presented if the table is "transposed":
db.describe().T
# Obtain minimum values for each column
db.min()
# Obtain minimum value for the column `Europe`
db['Europe'].min()
Note here how we have restricted the calculation of the minimum value to one column only.
Similarly, we can restrict the calculations to a single row:
# Obtain standard deviation for the row `E01006512`,
# which represents a particular LSOA
db.loc['E01006512', :].std()
We can also create new variables by applying operations to existing ones. For example, we can calculate the total population of each area by adding up all the groups, either by hand or in one shot:
# Longer, hardcoded
total = db['Europe'] + db['Africa'] + db['Middle East and Asia'] + \
db['The Americas and the Caribbean'] + db['Antarctica and Oceania']
# Print the top of the variable
total.head()
# One shot
total = db.sum(axis=1)
# Print the top of the variable
total.head()
Note how we are using the command sum, just like we did with min before but, in this case, we are not applying it over columns (e.g. the minimum of each column) but over rows (axis=1), so we get the total population of each area.
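For comparison, here is a minimal sketch of the same command applied over columns (axis=0, the default), which instead returns the city-wide total for each origin group:
# Sum down each column: total population of each group across all areas
db.sum(axis=0)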
Once we have created the variable, we can make it part of the table:
db['Total'] = total
db.head()
We can also assign a single (scalar) value to a whole new column:
# New variable with all ones
db['ones'] = 1
db.head()
And we can modify specific values too:
db.loc['E01006512', 'ones'] = 3
db.head()
Finally, deleting a column permanently is also one command away:
del db['ones']
db.head()
We have already seen how to subset parts of a DataFrame
if we know exactly which bits we want. For example, if we want to extract the total and European population of the first four areas in the table, we use loc
with lists:
eu_tot_first4 = db.loc[['E01006512', 'E01006513', 'E01006514', 'E01006515'], \
['Total', 'Europe']]
eu_tot_first4
However, sometimes, we do not know exactly which observations we want, but we do know what conditions they need to satisfy (e.g. areas with more than 2,000 inhabitants). For these cases, DataFrames
support selection based on conditions. Let us see a few examples. Suppose we want to select...
... areas with more than 2,500 people in Total:
m5k = db.loc[db['Total'] > 2500, :]
m5k
... areas where there are fewer than 750 Europeans:
nm5ke = db.loc[db['Europe'] < 750, :]
nm5ke
... areas with exactly ten people from Antarctica and Oceania:
oneOA = db.loc[db['Antarctica and Oceania'] == 10, :]
oneOA
Pro-tip: these queries can grow in sophistication with almost no limits. For example, here is a case where we want to find out the areas where European population is less than half the population:
eu_lth = db.loc[(db['Europe'] * 100. / db['Total']) < 50, :]
eu_lth
Now all of these queries can be combined with each other, for further flexibility. For example, imagine we want areas with more than 25 people from the Americas and Caribbean, but less than 1,500 in total:
ac25_l500 = db.loc[(db['The Americas and the Caribbean'] > 25) & \
(db['Total'] < 1500), :]
ac25_l500
Among the many operations DataFrame
objects support, one of the most useful ones is to sort a table based on a given column. For example, imagine we want to sort the table by total population:
db_pop_sorted = db.sort_values('Total', ascending=False)
db_pop_sorted.head()
If you inspect the help of db.sort_values, you will find that you can pass more than one column to sort the table by. This allows you to do so-called hierarchical sorting: sort first based on one column; if values are equal, then sort based on another column; and so on.
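As an illustration, here is a minimal sketch of hierarchical sorting with two of our columns (db_sorted2 is just an illustrative name): sort first by the European count and, within ties, by total population.
# Sort by `Europe` first and, if values are equal, by `Total`
db_sorted2 = db.sort_values(['Europe', 'Total'], ascending=False)
db_sorted2.head()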
The next step in exploring a dataset is to get a feel for what it looks like, visually. We have already learnt how to uncover and inspect specific parts of the data, to check for particular cases we might be interested in. Now we will see how to plot the data to get a sense of the overall distribution of values. For that, we will be using the Python library seaborn.
One of the most common graphical devices to display the distribution of values in a variable is the histogram. Values are assigned to groups of equal intervals, and the groups are plotted as bars rising as high as the number of values in the group.
A histogram is easily created with the following command. In this case, let us have a look at the shape of the overall population:
_ = sns.distplot(db['Total'], kde=False)
Note we are using sns instead of pd, as the function belongs to seaborn instead of pandas.
We can quickly see most of the areas contain somewhere between 1,200 and 1,700 people, approx. However, there are a few areas that have many more, even up to 3,500 people.
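To put a rough number on that impression, a quick sketch (the 3,000 threshold is arbitrary) counts how many areas are that large:
# Count the areas whose total population exceeds 3,000 people
(db['Total'] > 3000).sum()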
An additional feature to visualize the density of values is called a rug, and adds a little tick for each value on the horizontal axis:
_ = sns.distplot(db['Total'], kde=False, rug=True)
Histograms are useful, but they are artificial in the sense that a continuous variable is made discrete by turning the values into discrete groups. An alternative is kernel density estimation (KDE), which produces an empirical density function:
_ = sns.kdeplot(db['Total'], shade=True)
Another very common way of visually displaying a variable is with a line or a bar chart. For example, if we want to generate a line plot of the (sorted) total population by area:
_ = db['Total'].sort_values(ascending=False).plot()
For a bar plot all we need to do is to change an argument of the call:
_ = db['Total'].sort_values(ascending=False).plot(kind='bar')
Note that the large number of areas makes the horizontal axis unreadable. We can try to turn the plot around by displaying the bars horizontally (see how it is just a matter of changing bar for barh). To make it readable, let us also expand the plot's height:
_ = db['Total'].sort_values().plot(kind='barh', figsize=(6, 20))
Happy families are all alike; every unhappy family is unhappy in its own way.
Leo Tolstoy.
Once you can read your data in, explore specific cases, and have a first visual approach to the entire set, the next step can be preparing it for more sophisticated analysis. Maybe you are thinking of modeling it through regression, or of creating subgroups in the dataset with particular characteristics, or maybe you simply need to present summary measures that relate to a slightly different arrangement of the data than the one you have been handed.
For all these cases, you first need what statistician, and general R wizard, Hadley Wickham calls "tidy data". The general idea of "tidying" your data is to convert them from whatever structure they were handed to you in into one that allows convenient and standardized manipulation, and that supports directly feeding the data into what he calls "tidy" analysis tools. But, at a more practical level, what exactly is "tidy data"? In Wickham's own words:
Tidy data is a standard way of mapping the meaning of a dataset to its structure. A dataset is messy or tidy depending on how rows, columns and tables are matched up with observations, variables and types.
He then goes on to list the three fundamental characteristics of "tidy data":
- Each variable forms a column.
- Each observation forms a row.
- Each type of observational unit forms a table.
If you are further interested in the concept of "tidy data", I recommend you check out the original paper (open access) and the public repository associated with it.
Let us bring in the concept of "tidy data" to our own Liverpool dataset. First, remember its structure:
db.head()
Thinking through tidy lenses, this is not a tidy dataset. It is not so for each of the three conditions:
Starting with the last one (each type of observational unit forms a table), the table actually contains not one but two observational units: the different areas of Liverpool, identified by GeographyCode; and the population subgroups of each area. To tidy up this aspect, we can create two different tables:
# Assign column `Total` into its own as a single-column table
db_totals = db[['Total']]
db_totals.head()
# Create a table `db_subgroups` that contains every column in `db` without `Total`
db_subgroups = db.drop('Total', axis=1)
db_subgroups.head()
Note we use drop to exclude "Total", but we could also use a list with the names of all the columns to keep. Additionally, notice how, in this case, the use of drop (which leaves db untouched) is preferred to that of del (which permanently removes the column from db).
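For reference, a minimal sketch of that keep-list alternative, spelling out every column we want to retain (db_subgroups_alt is just an illustrative name; it should contain the same columns as db_subgroups):
# Same result as `drop`, but listing explicitly the columns to keep
db_subgroups_alt = db[['Europe', 'Africa', 'Middle East and Asia',
                       'The Americas and the Caribbean', 'Antarctica and Oceania']]
db_subgroups_alt.head()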
At this point, the table db_totals is tidy: every row is an observation, every column is a variable, and there is only one observational unit in the table.
The other table (db_subgroups), however, is not entirely tidied up yet: there is only one observational unit in the table, true; but every row is not an observation, and there are variable values as the names of columns (in other words, every column is not a variable). To obtain a fully tidy version of the table, we need to re-arrange it in a way that every row is a population subgroup in an area, and there are three variables: GeographyCode, population subgroup, and population count (or frequency).
Because this is actually a fairly common pattern, there is a direct way to solve it in pandas:
tidy_subgroups = db_subgroups.stack()
tidy_subgroups.head()
The method stack, well, "stacks" the different columns into rows. This fixes our "tidiness" problems, but the type of object it returns is not a DataFrame:
type(tidy_subgroups)
It is a Series, which really is like a DataFrame but with only one column. The additional information (GeographyCode and population group) is stored in what is called a multi-index. We will skip multi-indices for now; all we really want is to get a DataFrame as we know it out of the Series. This is also one line of code away:
# Unfold the multi-index into different, new columns
tidy_subgroupsDF = tidy_subgroups.reset_index()
tidy_subgroupsDF.head()
We can then rename the columns to make the table look better:
tidy_subgroupsDF = tidy_subgroupsDF.rename(columns={'level_1': 'Subgroup', 0: 'Freq'})
tidy_subgroupsDF.head()
Now our table is fully tidied up!
One of the advantages of tidy datasets is that they allow you to perform advanced transformations in a more direct way. One of the most common is the so-called "group-by" operation. Originating in the world of databases, these operations allow you to group observations in a table by one of its labels, indices, or categories, and apply operations on the data group by group.
For example, given our tidy table with population subgroups, we might want to compute the total sum of population by each group. This task can be split into two different ones:
- Group the rows of the table according to the subgroup they belong to.
- Compute the sum of Freq for each of them.
To do this in pandas, meet one of its workhorses, and also one of the reasons why the library has become so popular: the groupby operator.
pop_grouped = tidy_subgroupsDF.groupby('Subgroup')
pop_grouped
The object pop_grouped
still has not computed anything; it is only a convenient way of specifying the grouping. But this then allows us to perform a multitude of operations on it. For our example, the sum is calculated as follows:
pop_grouped.sum()
Similarly, you can also obtain a summary of each group:
pop_grouped.describe()
We will not get into it today as it goes beyond the basics we want to cover, but keep in mind that groupby allows you to call not only generic functions (like sum or describe), but also your own functions. This opens the door to virtually any kind of transformation and aggregation possible.
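To give a flavor of that, here is a minimal sketch of applying a made-up custom function to each group: the share of a group's population concentrated in its single largest area.
# A custom (illustrative) function: share of a group's population
# accounted for by its largest single area
def share_of_largest(group):
    return group['Freq'].max() / group['Freq'].sum()

# Apply it group by group
pop_grouped.apply(share_of_largest)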
Practice your data tidying skills with a different dataset. For example, you can have a look at the Guardian's version of Wikileaks' Afghanistan war logs. The table is stored in a Google Doc at the following address:
https://docs.google.com/spreadsheets/d/1EAx8_ksSCmoWW_SlhFyq2QrRn0FNNhcg1TtDFJzZRgc/edit?hl=en#gid=1
And its structure is as follows:
from IPython.display import IFrame
url = 'https://docs.google.com/spreadsheets/d/1EAx8_ksSCmoWW_SlhFyq2QrRn0FNNhcg1TtDFJzZRgc/edit?hl=en#gid=1'
IFrame(url, 700, 400)
Follow these steps:
- Download the table as a csv file (File --> Download as --> .csv, current sheet).

This notebook, as well as the entire set of materials, code, and data included in this course, is available as an open Github repository at: https://github.com/darribas/gds18
Geographic Data Science'18 by Dani Arribas-Bel (http://darribas.org) is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.