Sample This: Sampling Randomly from the Streets

23 Jun

Say you want to learn about the average number of potholes per unit length of paved street in a city. To estimate that quantity, the following sampling plan can be employed:

1. Get all the streets in a city from Google Maps or OSM
2. Starting from one end, split each street into 0.5 km segments until you reach the other end. The last segment (or the only segment, if the street is shorter than 0.5 km) can be shorter than 0.5 km.
3. Get the lat/long of start/end of the segments.
4. Create a database of all the segments: segment_id, street_name, start_lat, start_long, end_lat, end_long
5. Sample from rows of the database
6. Produce a CSV of the sampled segments (subset of step 4)
7. Plot the lat/long on Google Maps, shading the area within each segment.
8. Collect data on the highlighted segments.

For a Python package that implements this, see https://github.com/soodoku/geo_sampling.
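If you want to prototype the pipeline before reaching for the package, here is a minimal sketch of steps 2 through 6. It assumes each street has already been pulled from OSM as an ordered list of (lat, long) points, approximates distances with the haversine formula, and cuts segments at the nearest vertex past the 0.5 km mark rather than interpolating exact cut points (the package handles the projection details properly).

```python
import csv
import math
import random

def haversine_km(p1, p2):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def split_into_segments(street_name, points, seg_km=0.5):
    """Walk along the street, cutting a new segment roughly every seg_km."""
    segments, start, dist = [], points[0], 0.0
    for prev, cur in zip(points, points[1:]):
        dist += haversine_km(prev, cur)
        if dist >= seg_km:
            segments.append((street_name, start, cur))
            start, dist = cur, 0.0
    if dist > 0:                         # last (possibly shorter) segment
        segments.append((street_name, start, points[-1]))
    return segments

# streets = {"Main St": [(12.97, 77.59), (12.98, 77.60), ...], ...}
def sample_segments(streets, n, out_csv="sampled_segments.csv"):
    rows = []
    for name, pts in streets.items():
        for i, (street, start, end) in enumerate(split_into_segments(name, pts)):
            rows.append([f"{street}_{i}", street, *start, *end])
    sampled = random.sample(rows, min(n, len(rows)))   # step 5: sample rows
    with open(out_csv, "w", newline="") as f:          # step 6: CSV of sampled segments
        writer = csv.writer(f)
        writer.writerow(["segment_id", "street_name", "start_lat",
                         "start_long", "end_lat", "end_long"])
        writer.writerows(sampled)
    return sampled
```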

The Missing Plot

27 May

Datasets often contain missing values. And often enough—at least in social science data—values are missing systematically. So how do we visualize missing values? After all, they are missing.

Some analysts simply listwise delete points with missing values. Others impute, replacing missing values with the mean or median. Yet others use more sophisticated methods to impute missing values. None of these methods, however, produces visualizations that acknowledge that any of the data are missing.

It is important to acknowledge missing data.

One way to do it is to provide, in a small table within the graph, a tally of how much data are missing on each of the variables. Another, perhaps better, method is to plot the missing values as a function of a covariate. For bivariate graphs, the solution is pretty simple: create a dummy vector that tallies missing values, and plot the dummy vector in addition to the data. For instance, see:
[Figure: plot of the data with missing values marked]

(The script to produce the graph can be downloaded from the following GitHub Gist.)

In cases where missing values are imputed, the dummy vector can also be used to ‘color’ the points that were imputed.
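For concreteness, here is a minimal sketch (not the gist linked above) of the bivariate version: simulate data with systematically missing y values, build the dummy vector, and draw the missing cases as a rug along the bottom of the plot. The simulated data and plotting choices are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2 * x + rng.normal(size=500)
y[x > 1] = np.nan                    # y is missing systematically in x

miss = np.isnan(y)                   # the dummy vector tallying missingness

fig, ax = plt.subplots()
ax.scatter(x[~miss], y[~miss], s=10, label="observed")
# draw the missing cases as a rug just below the observed data so the
# pattern of missingness in x is visible
y_floor = np.nanmin(y) - 1
ax.plot(x[miss], np.full(miss.sum(), y_floor), "|", color="red", label="y missing")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
plt.show()
```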

Where’s the news?: Classifying News Domains

23 Jul

We select an initial universe of news outlets (i.e., web domains) via the Open Directory Project (ODP, dmoz.org), a collective of tens of thousands of editors who hand-label websites into a classification hierarchy. This gives 7,923 distinct domains labeled as: news, politics/news, politics/media, and regional/news. Since the vast majority of these news sites receive relatively little traffic, to simplify our analysis we restrict to the one hundred domains that attracted the largest number of unique visitors from our sample of toolbar users. This list of popular news sites includes every major national news source, well-known blogs and many regional dailies, and collectively accounts for over 98% of page views of news sites in the full ODP list (as estimated via our toolbar sample). The complete list of 100 domains is given in the Appendix.

From Filter Bubbles, Echo Chambers, and Online News Consumption by Flaxman, Goel and Rao.

When using rich browsing data, scholars often rely on ad hoc lists of domains to estimate consumption of certain kinds of media. Using these lists to estimate consumption raises three obvious concerns: 1) even sites classified as ‘news sites,’ such as the NYT, carry a fair bit of non-news content; 2) at the domain level, there is the danger of ‘false positives’ (domains on the list that aren’t news sites); and 3) there is the danger of ‘false negatives’ (news domains missing from the list).

FGR address the first concern by exploiting the URL structure: the URL of a NY Times story contains information about the section it belongs to. (The classifier is assumed to be perfect, but likely isn’t. False positive and false negative rates for this kind of classification can be estimated using raw article data.) This leaves us with concerns about false positives and negatives at the domain level. Lists like those published by DMOZ appear to be curated well enough not to contain too many false positives. The real question is how to calibrate false negatives. Here’s one procedure. Take a large random sample of the browsing data (at least 10,000 unique domain names). Compare it to a large, comprehensive database like Shallalist. For the domains that aren’t in the database, query a URL classification service such as Trusted Source. (The initial step of comparing against Shallalist is there to reduce the amount of querying.) Using the results, estimate the proportion of missing domain names (the net number of missing domain names is likely much, much larger). Also estimate missed visitation time, page views, etc.
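Here is a rough sketch of that calibration step. The Shallalist file format and the classifier call are placeholders (pass in whatever function wraps the classification service you use); only the sampling and bookkeeping logic is meant literally.

```python
import random

def load_shallalist_domains(path):
    """One domain per line in a local Shallalist dump (assumed format)."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def estimate_missed_news_share(browsing_domains, shallalist_path,
                               classify_domain, sample_size=10_000):
    """Share of sampled browsing domains that a classification service
    labels as news but that are absent from the curated list."""
    universe = sorted(set(browsing_domains))
    sample = random.sample(universe, min(sample_size, len(universe)))
    known = load_shallalist_domains(shallalist_path)
    unknown = [d for d in sample if d not in known]   # only these get queried
    news_hits = sum(1 for d in unknown if classify_domain(d) == "news")
    return news_hits / len(sample)
```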

A Quick Scan: From Paper to Digital

28 May

Additional Note: See this presentation on paper to digital (pdf).

There are two ways of converting paper to digital data: ‘human OCR and input,’ and machine-assisted. Here are some useful pointers about the latter.

Scanning

Since the success of so much of what comes after depends on the quality of the scanned document, invest a fair bit of effort in obtaining high-quality scans. If the input is a bound book, especially a thick one, it is hard to scan text close to the spine without destroying the spine. If the book can be bought at a reasonable price, do exactly that: destroy the spine, i.e., cut the book, and then scan. Many automated scanning services will cut the spine by default. Other things to keep in mind: scan at a reasonably high resolution (since storage is cheap, go for at least 600 dpi), and if choosing PDF as the output option, see whether the scan can be saved as a ‘searchable PDF’ (scanning + OCR in one go).

An average person working without too much distraction can scan 60–100 images per hour. If two pages of the book can be scanned at once, that amounts to 120–200 pages an hour. Assuming you can hire someone to scan for $10/hr, that comes to 12–20 pages per dollar, or 5 to 8 cents per page. However, scanning manually is boring, and people who are punctilious about quality page after page are few and far between. Relying on automated scanning companies may be better, and cheaper: 1dollarscan.com charges 2 cents per page with OCR. But there is no free lunch. Most automated scanning services cut the spines of books, and many don’t send back the paper copy for reasons to do with copyright, so you may have to factor in the cost of the book. And typically, the scanning services do all or nothing; you can’t give directions to scan the first 10 pages, followed by the middle 120, etc. Thus per-relevant-page costs may exceed those of manual scanning.

Scan to Text

If the text is clear and laid out simply, most commonly available OCR software will do just fine. Acrobat Professional’s own facility for recognizing text in images, found under ‘Tools,’ is reasonable. Acrobat also flags words it is unsure about (it calls them ‘suspects’); you can click through the lineup of suspects, correcting them as needed. The interface for making corrections is suboptimal, but it is likely to prove satisfactory for small documents with few errors.

Those without ready access to Acrobat Professional can extract text from ‘searchable PDFs’ using xpdf or any of the popular programming languages. In Python, pyPdf (see script) and pdfminer (among other libraries) are popular. If the document is a set of images, one can use libraries based on Tesseract (see script). PDF documents need to be converted to images using Ghostscript or similar rasterization software before being fed to Tesseract.
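For illustration, here is a minimal sketch of both routes, assuming pdfminer.six and pytesseract (plus a local Tesseract install) rather than the scripts linked above.

```python
from pdfminer.high_level import extract_text   # text layer of a searchable PDF
from PIL import Image
import pytesseract

def text_from_searchable_pdf(path):
    """Pull the embedded text layer; no OCR is performed."""
    return extract_text(path)

def text_from_page_image(path, lang="eng"):
    """OCR a single scanned page image with Tesseract."""
    return pytesseract.image_to_string(Image.open(path), lang=lang)
```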

But what if the quality of the scans is poor or the page layout complicated? First, try enhancing the images: fix the orientation, apply filters to improve readability, etc. This process can be automated if the images are distorted in similar ways. Second, try extracting bounding boxes for columns/paragraphs and words/characters (position/size) and font styles (name/size) by choosing XML/HTML as the output format. This information can be exploited later for alignment, etc. However, how much you gain from extracting style and bounding-box information depends heavily on the quality of the original PDF; for low-quality PDFs, mislabeling of font size and style is common, which means the information cannot be used reliably. Third, explore training the OCR engine. Tesseract can be trained to improve OCR, though training it isn’t straightforward. Fourth, explore good professional OCR engines such as Abbyy FineReader (see the R package connecting to the Abbyy FineReader API). OCR in Abbyy FineReader can be improved by adding training data and by tuning the options for identifying the proper ‘area order’ (which text area follows which, which portions of the page aren’t part of any text area, etc.).
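A sketch of the first two suggestions, again assuming Pillow and pytesseract; the particular filters and threshold are illustrative, not a recipe.

```python
from PIL import Image, ImageFilter, ImageOps
import pytesseract

def preprocess(path):
    """Light cleanup of a scanned page: grayscale, contrast, denoise, binarize."""
    img = Image.open(path).convert("L")
    img = ImageOps.autocontrast(img)
    img = img.filter(ImageFilter.MedianFilter(3))        # mild denoising
    return img.point(lambda p: 255 if p > 160 else 0)    # crude binarization

def word_boxes(img):
    """Word-level text, confidence, and bounding boxes from Tesseract."""
    data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
    return [(t, c, (left, top, w, h))
            for t, c, left, top, w, h in zip(data["text"], data["conf"],
                                             data["left"], data["top"],
                                             data["width"], data["height"])
            if t.strip()]
```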

Post-processing: Correcting Errors

The OCR process makes certain kinds of errors. For instance, ‘i’ may be mistaken for an ‘l’ or for a pipe (‘|’). And these errors are often repeated, with the result that some words are misspelled systematically. One way to deal with such errors is to simply search and replace particular strings (see script). When using search and replace, be mindful of the problems that can ensue from searching and replacing short strings. For instance, replacing ‘lt’ with ‘it’ would convert ‘salt’ to ‘sait.’

Note: For a more general account on matching dirty data, read this article.

It is typically useful to write a search-and-replace script alongside a database of search terms and the terms you propose to replace them with. For one, it allows you to apply the same set of corrections to many documents. For two, it can be easily amended and re-run. While writing these scripts (see script), it is important to keep text encoding in mind; OCR yields some ligatures (e.g., fi) and other Unicode characters.
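A minimal sketch of such a script: read a table of corrections and apply them, as whole words, to a batch of documents. The CSV layout and file handling are assumptions; the word-boundary regex is what guards against the ‘salt’ to ‘sait’ problem mentioned above.

```python
import csv
import glob
import re

def load_corrections(path):
    """Two-column CSV, assumed format: wrong,right."""
    with open(path, encoding="utf-8") as f:
        return dict(row for row in csv.reader(f) if len(row) == 2)

def apply_corrections(text, corrections):
    for wrong, right in corrections.items():
        # \b keeps short replacements from firing inside longer words
        text = re.sub(r"\b" + re.escape(wrong) + r"\b", right, text)
    return text

def correct_documents(doc_glob, corrections_csv):
    corrections = load_corrections(corrections_csv)
    for path in glob.glob(doc_glob):
        with open(path, encoding="utf-8") as f:     # OCR output is often UTF-8
            text = f.read()
        with open(path, "w", encoding="utf-8") as f:
            f.write(apply_corrections(text, corrections))
```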

Searching and replacing particular strings can prove time-consuming, as the same word can be misspelled in tens of different ways. For instance, in a recent project, the word “programming” was misspelled in no fewer than 28 ways. The easiest extension of this approach is to search for patterns, for instance, some number of sequential errors at any point in the string. One can extend that to various ‘edit distances,’ e.g., the Levenshtein distance, the number of single-character edits needed to convert one word into another, which lets you handle non-sequential errors (see script). Again, the trick is to keep the length of the string in mind, as false positives may abound. One can do that by choosing a metric that factors in the size of the string and the number of errors, for instance, (Levenshtein distance)/(length of the original string). Lastly, rather than use edit distance, one can apply ‘better’ pattern-matching algorithms, such as Ratcliff/Obershelp.
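For instance, here is a sketch of the two metrics just mentioned, with illustrative thresholds; difflib’s SequenceMatcher implements a Ratcliff/Obershelp-style ratio.

```python
import difflib

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like(word, target, max_norm_dist=0.25, min_ratio=0.8):
    """Flag `word` as a likely misspelling of `target`."""
    norm = levenshtein(word, target) / len(target)   # length-adjusted distance
    ratio = difflib.SequenceMatcher(None, word, target).ratio()
    return norm <= max_norm_dist or ratio >= min_ratio

# e.g., looks_like("prograrnming", "programming") -> True  ('rn' read for 'm')
```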

Spell checkers, which combine pattern-matching libraries with databases (dictionaries), are another common way of searching and replacing strings. There are various freely available spell-check databases (to be used with your own pattern-matching algorithm) and libraries, including pyEnchant for Python. One can even use VB to call MS Word’s fairly reasonable spell-checking functions. However, if the document contains lots of unique proper nouns, a spell check is liable to create more problems than it solves. One way to reduce these errors is to (manually) amend the dictionary. Another is to limit corrections to words of a certain size, or to cases where the suggested word and the word in the text differ only by certain kinds of letters (or non-letters, e.g., a pipe for the letter ‘l’). One can also leverage ‘Google Suggest’ to fix spelling errors. Spell checkers built for particular OCR errors, such as ocrspell, can also be used. If these methods still yield too many false corrections, go for a semi-automated approach: use spell checks to harvest problematic words and recommended replacements, and then let people pick the right version. A few tips for creating human-assisted spell-check versions: eliminate duplicates, and provide the user with the neighboring words (two words before and after works for some projects). Lastly, one can use M-Turk to iteratively proofread the document (see TurkIt).
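A sketch of the conservative, semi-automated flavor of this, assuming pyEnchant and an installed English dictionary; the length and similarity thresholds are illustrative, and the output is meant to be reviewed by a person before any replacement is made.

```python
import difflib
import enchant

def conservative_spellcheck(words, min_len=5, min_ratio=0.8):
    """Harvest likely misspellings and their best dictionary suggestions."""
    d = enchant.Dict("en_US")
    fixes = {}
    for w in set(words):
        if len(w) < min_len or not w.isalpha() or d.check(w):
            continue
        suggestions = d.suggest(w)
        if not suggestions:
            continue
        best = suggestions[0]
        # accept only near-identical suggestions to limit false corrections
        if difflib.SequenceMatcher(None, w.lower(), best.lower()).ratio() >= min_ratio:
            fixes[w] = best
    return fixes   # hand this dict to a human reviewer before applying
```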

Bad Weather: Getting Data on Weather in a ZIP Code on a Particular Date

27 Jun

High-quality weather data are public. But they aren’t easy to make use of.

Some thoughts and some software for finding out the weather in a particular ZIP Code on a particular day (or a set of dates).

Some brief ground clearing before we begin. Weather data come from weather stations, which can belong to any of five or more “networks.” Each network collects somewhat different data, sometimes labels the same data differently, and has different reporting protocols. The only geographic information that typically comes with weather stations is their latitude and longitude. By “weather,” we may mean temperature, rain, wind, snow, etc., and we may want data on these for every second, minute, hour, day, month, etc. It is good to keep in mind that not all weather stations report data for all units of time, and there can be a fair bit of missing data. Getting data at coarse time units like day or month typically involves deciding which statistic is most useful. For instance, you may want the minimum and maximum for daily temperature and totals for rainfall and snow. With that primer, let’s begin.

We begin with what not to do. Do not use the NOAA web service. The API provides a straightforward way to get “weather” data for a particular ZIP Code for a particular month. Except that the requests often return nothing, and it isn’t clear why. The documentation doesn’t say whether the search for the closest weather station is limited to some radius; without such a limit, there should be data for all ZIP Codes and all dates. Nor does the API return the distance to the weather station from which it got the data, though one can get that post hoc using the Google Geocoding API. However, given the possibility that the backend for the API will improve over time, here’s a script for getting the daily weather data, and hourly precipitation data.

On to what can be done. The “web service” that you can use is Farmer’s Almanac’s. Sleuthing using scripts that we discuss later reveals that the Almanac reports data from the NWS-USAF-NAVY stations (FTP link to the data file). And it appears to have data for most dates, though it provides no information on the weather station from which it got the data or on that station’s distance to the ZIP Code.

If you intend to look for data from GHCND, COOP, or ASOS, there are two kinds of crosswalks you can create: 1) from ZIP Codes to weather stations, and 2) from weather stations to ZIP Codes. I assume that we don’t have access to shapefiles (for Census ZIP Codes) and that postal ZIP Codes encompass a geographic region. To create a weather-station-to-ZIP-Code crosswalk, a web service such as GeoNames or the Google Geocoding API can be used. If the station’s lat/long is within the ZIP Code, the distance comes up as zero; otherwise, the distance is calculated from the centroid of the ZIP Code (see the GeoNames script that finds the 5 nearest ZIPs for each weather station). To create a ZIP-Code-to-weather-station crosswalk, get the centroid of each ZIP using a web service such as Google’s (or use the centroids that come with free ZIP databases), and then find the “nearest” weather stations by calculating the distance to each of the weather stations. For a given set of ZIP Codes, you can get a list of the closest weather stations (you can choose to get the n closest stations, or all weather stations within an x-kilometer radius, and/or restrict to particular network(s)) using the following script. The output lists, for each ZIP Code, the weather stations arranged by proximity. Getting weather data from the closest station is simple thereon: get data (on a particular set of columns of your choice) from the closest weather station for which the data are available. You can do that for a particular ZIP Code and date (or date range) using the following script.
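A minimal sketch of the ZIP-Code-to-weather-station crosswalk (not the linked script): given ZIP centroids and station coordinates, list the n nearest stations per ZIP, computing great-circle distances with the haversine formula.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearest_stations(zip_centroids, stations, n=3, max_km=None):
    """zip_centroids: {zip: (lat, lon)}; stations: {station_id: (lat, lon)}.
    Returns, for each ZIP, the n closest stations (optionally within max_km)."""
    crosswalk = {}
    for z, (zlat, zlon) in zip_centroids.items():
        dists = sorted(
            (haversine_km(zlat, zlon, slat, slon), sid)
            for sid, (slat, slon) in stations.items())
        if max_km is not None:
            dists = [d for d in dists if d[0] <= max_km]
        crosswalk[z] = dists[:n]   # list of (distance_km, station_id), nearest first
    return crosswalk
```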

Join Or Die

8 Oct

How to join datasets correctly is well understood. But we continue to do it badly. Here are notes on how to do fuzzy joins and long-to-wide transformations, starting with some general preprocessing advice.

Preprocessing Keys and Tables

  1. Remove stuff irrelevant to matching, e.g., font style, spaces, articles (the, a, an), etc.
  2. Standardize different versions of the same word. One common problem is abbreviations, e.g., Township and Twp., Saint and St., etc. Common misspellings can be handled similarly. But do all this with care; for example, St. may stand for both State and Saint. (A sketch of this kind of key cleanup follows the list.)
  3. Remove duplicates if many-to-one (or one-to-many or many-to-many) matches are not allowed. Note that our ability to detect duplicates will be limited by how dirty the data are within each dataset.
  4. Often, a few common problems are at the heart of most issues. For instance, say that you are matching on country, state, and city. Noise in how the countries are spelled across datasets may explain missed matches. Fix those.
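A sketch of the first two steps, with an illustrative (and deliberately tiny) abbreviation table:

```python
import re

ABBREVIATIONS = {"twp": "township", "st": "saint"}   # use with care: "st" is ambiguous
ARTICLES = {"the", "a", "an"}

def normalize_key(s):
    """Lowercase, drop punctuation and articles, expand known abbreviations."""
    s = re.sub(r"[^\w\s]", " ", s.lower())
    tokens = [ABBREVIATIONS.get(t, t) for t in s.split() if t not in ARTICLES]
    return " ".join(tokens)

# normalize_key("St. Louis Twp.") -> "saint louis township"
```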

Fuzzy Join

Problems with joining data are mostly uniqueness problems. Fuzzy joins add to the complication because they implicitly do a 1-to-many and a many-to-1 join. After a fuzzy join with a reasonably generous distance threshold, one common workflow is to remove rows where the left-table key is duplicated and rows where the right-table key is duplicated. (You likely want to remove all rows with a duplicated key rather than keep the first row, which is the default behavior of many deduplication functions.) Another common tactic is to manually assess these cases, which also addresses the case where the closest match is not the right match. If manually reviewing all the rows isn’t an option, code a small random sample and figure out the distance threshold that gives a reasonable false-positive/false-negative tradeoff. If you are building software for manual review:

  1. Arrange matches intelligently – for example, string + number, followed by string, etc. When there are more than 5 matches, arranging alphabetically works well as a default.
  2. If you have lots of data, consider Mechanical Turk or Captcha.

To improve the speed and tractability of fuzzy joins, there are three common tricks: 1) reduce the number of candidate strings by exact-matching on some variables (which you can hand-curate), 2) use multi-processing with shared memory, and 3) take care of the exact matches first by doing an inner join and removing those keys.
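Putting the pieces together, here is a sketch of a fuzzy join with pandas and difflib that blocks on an exact key, takes the closest match above a cutoff, and then drops any row whose key matched more than once on either side. The column names and cutoff are illustrative.

```python
import difflib
import pandas as pd

def fuzzy_join(left, right, block_on="state", fuzzy_on="city", cutoff=0.9):
    """Match left to right within exact-match blocks; keep only 1-to-1 matches."""
    matches = []
    for block, lgrp in left.groupby(block_on):
        candidates = right.loc[right[block_on] == block, fuzzy_on].tolist()
        for _, lrow in lgrp.iterrows():
            best = difflib.get_close_matches(lrow[fuzzy_on], candidates,
                                             n=1, cutoff=cutoff)
            if best:
                matches.append((lrow[fuzzy_on], best[0], block))
    m = pd.DataFrame(matches, columns=["left_key", "right_key", block_on])
    # drop every row whose key appears more than once on either side
    m = m[~m["left_key"].duplicated(keep=False) &
          ~m["right_key"].duplicated(keep=False)]
    return m
```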

Long to Wide

Converting data from long to wide is the same as joining the underlying group datasets. Most software implementations, however, assume too much and do not provide a workflow that treats the operation as a join. In particular, many implementations assume that the “key” has no missing values or duplicates. (For time-series data, the software may assume that all the dates exist.) It is useful to test the non-missingness and uniqueness of keys and to explicitly set a variable as the primary key. It is also good to think about what the value of a column should be if the “key” is missing for some group, e.g., a year. You could also limit the rows to groups for which all the years are present.
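A sketch of a defensive long-to-wide reshape in pandas that makes those checks explicit; the column names are illustrative.

```python
import pandas as pd

def long_to_wide(df, key="country", time="year", value="gdp"):
    """Pivot long data to wide after checking the keys."""
    assert df[key].notna().all(), "missing keys"
    assert df[time].notna().all(), "missing time values"
    assert not df.duplicated([key, time]).any(), "duplicate (key, time) pairs"
    wide = df.pivot(index=key, columns=time, values=value)
    complete = wide.dropna()   # optionally keep only groups observed in every period
    return wide, complete
```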