Handling Data#

%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
import pandas as pd
import os.path
import subprocess

Locate Course Data Files#

def wget_data(url):
    local_path='./tmp_data'
    subprocess.run(["wget", "-nc", "-P", local_path, url])
wget_data('https://courses.physics.illinois.edu/phys398dap/fa2023/data/pong_data.hf5')
def locate_data(name, check_exists=True):
    local_path='./tmp_data'
    path = os.path.join(local_path, name)
    if check_exists and not os.path.exists(path):
        raise RuntimeError('No such data file: {}'.format(path))
    return path
locate_data('pong_data.hf5')

Data files are stored in the industry standard binary HDF5 and text CSV formats, with extensions .hf5 and .csv, respectively. HDF5 is more efficient for larger files but requires specialized software to read. CSV files are just plain text:

wget_data('https://courses.physics.illinois.edu/phys398dap/fa2023/data/line_data.csv')
with open(locate_data('line_data.csv')) as f:
    # Print the first 5 lines of the file.
    for lineno in range(5):
        print(f.readline(), end='')

The first line specifies the names of each column (“feature”) in the data file. Subsequent lines are the rows (“samples”) of the data file, with values for each column separated by commas. Note that values might be missing (for example, at the end of the third row).

Read Files with Pandas#

We will use the Pandas package to read data files into DataFrame objects in memory. This will only be a quick introduction. For a deeper dive, start with Data Manipulation with Pandas in the Python Data Science Handbook.

pong_data = pd.read_hdf(locate_data('pong_data.hf5'))
line_data = pd.read_csv(locate_data('line_data.csv'))

You can think of a DataFrame as an enhanced 2D numpy array, with most of the same capabilities:

line_data.shape
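If you need a plain numpy array, for example to pass to code that does not know about DataFrames, you can convert one (a quick aside, not needed for the rest of this section):

# Convert the DataFrame to a plain 2D numpy array of its values.
values = line_data.to_numpy()
values.shape, values.dtype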

Individual columns also behave like enhanced 1D numpy arrays:

line_data['y'].shape
line_data['x'].shape
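Because a column is a pandas Series backed by a numpy array, numpy functions accept it directly. A quick check, using the columns loaded above:

# Numpy works on columns directly, e.g. the mean of x and the largest |y|.
np.mean(line_data['x']), np.max(np.abs(line_data['y']))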

For a first look at some unknown data, start with some basic summary statistics:

line_data.describe()

Jot down a few things you notice about this data from this summary.

  • The values of x and y are symmetric about zero.

  • The values of x look uniformly distributed on [-1, +1], judging by the percentiles.

  • The value of dy is always > 0, as you might expect if it represents the “error on y”.

  • The dy column is missing 150 entries.
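The last point is easy to verify directly, since isna() flags missing entries:

# Count missing (NaN) entries in each column; dy should match the count noted above.
line_data.isna().sum()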

Summarize pong_data the same way. Does anything stick out?

pong_data.describe()

Some things that stick out from this summary are:

  • The mean and median values of the xn columns increase from left to right.

  • Column y0 is always zero, so not very informative.

  • The mean and median values of the yn columns increase from y0 to y4, then decrease through y9.

Work with Subsets of Data#

A subset is specified by limiting the rows and/or columns. We have already seen how to pick out a single column, e.g. with line_data['x'].

We can also pick out specific rows (for details on why we use iloc see here):

line_data.iloc[:4]

Note how the missing value in the CSV file is represented as NaN = “not a number”. This is generally how Pandas handles any data that is missing / invalid or otherwise not available (NA).
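Besides iloc (integer positions), loc selects by index label and lets you pick rows and columns at the same time. A small illustration, not needed for the exercises below:

# Label-based selection: rows with index labels 0 through 3 (inclusive) and two columns.
line_data.loc[:3, ['x', 'dy']]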

We may not want to use any rows with missing data. Select the subset of useful data with:

line_data_valid = line_data.dropna()
line_data_valid[:4]
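Dropping rows is not the only option. As a sketch of an alternative (whether it is appropriate depends on your analysis), you could fill the missing values instead, for example with the median of the column:

# Replace missing dy values with the median dy instead of dropping those rows.
line_data_filled = line_data.fillna({'dy': line_data['dy'].median()})
line_data_filled[:4]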

You can also select rows using any logical test on column values. For example, to select all rows with dy > 0.5 and y < 0:

selection = line_data[(line_data['dy'] > 0.5) & (line_data['y'] < 0)]
selection[:4]
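An equivalent, sometimes more readable, way to express the same selection is query(), which parses the condition from a string:

# Same selection as above, written as a query string.
line_data.query('dy > 0.5 and y < 0')[:4]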

Use describe to compare the summary statistics for rows with x < 0 and x >= 0. Do they make sense?

line_data[line_data['x'] < 0].describe()
line_data[line_data['x'] >= 0].describe()
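As an aside (not required for the exercise), groupby can produce both summaries in a single call by grouping rows on the sign of x:

# Group rows by whether x < 0, then summarize each group.
line_data.groupby(line_data['x'] < 0).describe()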

Extend Data with New Columns#

You can easily add new columns derived from existing columns, for example:

line_data['yprediction'] = 1.2 * line_data['x'] - 0.1

The new column is only in memory, and not automatically written back to the original file.
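If you do want to persist the extended DataFrame, you can write it out explicitly; for example (the output filename here is just an illustration):

# Write the extended DataFrame to a new CSV file in the course data directory.
line_data.to_csv(locate_data('line_data_extended.csv', check_exists=False), index=False)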

EXERCISE: Add a new column for the “pull”, defined as:

$$ y_{pull} \equiv \frac{y - y_{prediction}}{\delta y} \; . $$

What would you expect the mean and standard deviation (std) of this new column to be if the prediction is accurate? What do the actual mean, std values indicate?

line_data['ypull'] = (line_data['y'] - line_data['yprediction']) / line_data['dy']

The mean should be close to zero if the prediction is unbiased. The std should be close to one if the prediction is unbiased and the errors are accurate. The actual values indicate that the prediction is unbiased, but the errors are overestimated.

line_data.describe()

Combine Data from Different Sources#

Most of the data files for this course come in data/targets pairs (for reasons that will become clear soon).

Verify that the files pong_data.hf5 and pong_targets.hf5 have the same number of rows but different column names.

wget_data('https://courses.physics.illinois.edu/phys398dap/fa2023/data/pong_targets.hf5')
pong_data = pd.read_hdf(locate_data('pong_data.hf5'))
pong_targets = pd.read_hdf(locate_data('pong_targets.hf5'))

print('#rows: {}, {}.'.format(len(pong_data), len(pong_targets)))
assert len(pong_data) == len(pong_targets)

print('data columns: {}.'.format(pong_data.columns.values))
print('targets columns: {}.'.format(pong_targets.columns.values))

Use pd.concat to combine the (different) columns, matching row by row. Verify that your combined data has the expected number of rows and column names.

pong_both = pd.concat([pong_data, pong_targets], axis='columns')
print('#rows: {}'.format(len(pong_both)))
print('columns: {}.'.format(pong_both.columns.values))

Prepare Data from an External Source#

Finally, here is an example of taking data from an external source and adapting it to the standard format we are using. The data is from the 2014 ATLAS Higgs Challenge which is now documented and archived here. More details about the challenge are in this writeup.

EXERCISE:

  1. Download the compressed CSV file (~62 MB) atlas-higgs-challenge-2014-v2.csv.gz using the link at the bottom of this page.

  2. You can leave the file compressed: pd.read_csv uncompresses (gunzips) .gz files on the fly, as the function below does.

  3. Skim the description of the columns here. The details are not important, but the main points are that:

  • There are two types of input “features”: 17 primary + 13 derived.

  • The goal is to predict the “Label” from the input features.

  4. Examine the function defined below and determine what it does. Look up the documentation of any functions you are unfamiliar with.

  5. Run the function below, which should create two new files in your course data directory:

  • higgs_data.hf5: Input data with 30 columns, ~100 MB.

  • higgs_targets.hf5: Output targets with 1 column, ~8.8 MB.

wget_data('http://opendata.cern.ch/record/328/files/atlas-higgs-challenge-2014-v2.csv.gz')
def prepare_higgs(filename='atlas-higgs-challenge-2014-v2.csv.gz'):
    # Read the input file, uncompressing on the fly.
    df = pd.read_csv(locate_data(filename), index_col='EventId', na_values='-999.0')
    # Prepare and save the data output file.
    higgs_data = df.drop(columns=['Label', 'KaggleSet', 'KaggleWeight']).astype('float32')
    higgs_data.to_hdf(locate_data('higgs_data.hf5', check_exists=False), 'data', mode='w')
    # Prepare and save the targets output file.
    higgs_targets = df[['Label']]
    higgs_targets.to_hdf(locate_data('higgs_targets.hf5', check_exists=False), 'targets', mode='w')
prepare_higgs()

Check that locate_data can find the new files:

locate_data('higgs_data.hf5')
locate_data('higgs_targets.hf5')

Now you can load these data files and explore their contents:

higgs_data = pd.read_hdf(locate_data('higgs_data.hf5'))
higgs_data.describe()
higgs_targets = pd.read_hdf(locate_data('higgs_targets.hf5'))
higgs_targets.describe()
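Since the targets file holds a single column of class labels, a count of how many rows carry each label is a useful complement to describe():

# Count how many events have each target label.
higgs_targets['Label'].value_counts()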

You can now safely remove the tmp_data directory if you like. The line below is a shell command; uncomment it if you want to do this. (On Colab, the directory is cleaned up automatically when the session ends.)

#!rm -rf ./tmp_data