DPLA Metadata Fun: Compression as a measure of data quality

This past week I had the opportunity to participate in an IMLS-funded workshop about managing local authority records, hosted by Cornell University at the Library of Congress.  It was two days of discussions about issues related to managing local and aggregated name authority records. This meeting got me thinking more about names in our digital library metadata, both locally (at UNT) and in aggregations (DPLA).

It has been a while since I worked on a project with the DPLA metadata dataset that they provide for bulk download, so I figured it was about time to grab a copy and poke around a bit.

This time around I’m interested in looking at some indicators of metadata quality.  Loosely speaking, that means how well a set of metadata conforms to itself.  Specifically, I want to look at how the name values from the dc.creator, dc.contributor, and dc.publisher fields compare with each other.

I’ll give a bit of an overview to get us started.

Say we had these four values in a set of metadata for the dc.creator of an awesome movie:

Alexander Johan Hjalmar Skarsgård
Skarsgård, Alexander Johan Hjalmar
Alexander Johan Hjalmar Skarsgard
Skarsgard, Alexander Johan Hjalmar

If we sort these values, make them unique, and then count the instances, we will get the following.

1  Alexander Johan Hjalmar Skarsgard
1  Alexander Johan Hjalmar Skarsgård
1  Skarsgard, Alexander Johan Hjalmar
1  Skarsgård, Alexander Johan Hjalmar

So we have 4 unique name strings in our dataset.

If we applied a normalization algorithm that turned the letter å into an a and then tried to make our data unique, we would end up with the following:

2  Alexander Johan Hjalmar Skarsgard
2  Skarsgard, Alexander Johan Hjalmar

Now we have only two name strings in the dataset, each with an instance count of two.

We can measure the compression rate by taking the original number of unique values and dividing it by the new number: 4/2 = 2, or a 2:1 compression rate.

Another way to look at it is the amount of space saved by the compression, which is just a different equation: 1 − 2/4 = 0.5, or a 50% space savings.
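To make the arithmetic concrete, here is a minimal Python sketch that applies an å → a style normalization to the four example names and computes both numbers (the Unicode-decomposition approach is my own stand-in for whatever normalization algorithm you might choose):

import unicodedata

names = [
    "Alexander Johan Hjalmar Skarsgård",
    "Skarsgård, Alexander Johan Hjalmar",
    "Alexander Johan Hjalmar Skarsgard",
    "Skarsgard, Alexander Johan Hjalmar",
]

def asciify(value):
    # Decompose accented characters (å -> a + ring) and drop the combining marks.
    return unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode("ascii")

unique_before = set(names)                         # 4 unique strings
unique_after = set(asciify(n) for n in names)      # 2 unique strings

print(len(unique_before) / len(unique_after))      # 2.0, a 2:1 compression rate
print(1 - len(unique_after) / len(unique_before))  # 0.5, a 50% space savings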

If we apply an algorithm similar to the one OpenRefine uses, which it calls a “fingerprint”, we get the following from our first four values.

4 alexander hjalmar johan skarsgard

Now we’ve gone from four values down to one, for a 4:1 compression rate or a 75% space savings.

Relation to Quality

When we go back to our first four examples, we can conclude pretty quickly that these are most likely supposed to be the same name.

Alexander Johan Hjalmar Skarsgård
Skarsgård, Alexander Johan Hjalmar
Alexander Johan Hjalmar Skarsgard
Skarsgard, Alexander Johan Hjalmar

If we saw this in our databases we would want to clean these up.  They would most likely lead to poor faceting in our discovery interface.  If a user wanted to find other items that had a dc.creator of Skarsgård, Alexander Johan Hjalmar, it is possible that they wouldn’t find any of the other three items when they clicked on a link to show more.

If we can agree that reducing the number of “near matches” in the dataset is an improvement, we might be able to use these data compression measures as a way of identifying which parts of a digital library might have consistency problems.

That’s exactly what I’m proposing to do here.  I want to find out what happens when we run a number of different algorithms over the values of dc.creator, dc.contributor, and dc.publisher in the DPLA metadata set and measure how much each one compresses the data.

Preparing the Data

I’m going to start with the all.json.gz file from the DPLA’s bulk metadata download page.

This file is a very large JSON file containing 15,816,573 records from the April 2017 DPLA metadata dump.

The first thing I want to do is reduce this dataset, which is 6.1GB compressed, to something a little more manageable.  I will start with the dc.creator information, using a set of commands for the wonderful tool jq that gets me what I want.

jq -n --stream --compact-output 'fromstream(1|truncate_stream(inputs)) | {provider: ._source.provider["@id"], id: ._source.id, creator: ._source.sourceResource.creator?}'

The command I used above will transform each of the records in the DPLA dataset into something that looks like this:

{"provider":"http://dp.la/api/contributor/uiuc","id":"705e1e5f19331a6c8a554ce707059288","creator":null}
{"provider":"http://dp.la/api/contributor/uiuc","id":"bcae15d47f2544caf0407b1e17bf97cd","creator":["Harlow, G","Rogers, J"]}
{"provider":"http://dp.la/api/contributor/uiuc","id":"96cab3354d942e7ea2030f1452f5beb8","creator":["Drummond, S","Ridley, W"]}
{"provider":"http://dp.la/api/contributor/uiuc","id":"e3ce5090d0a8b3c247c84d6f0d5ff16e","creator":["Barber, J.T","Cardon, A"]}

This is now a large file with one small snippet of JSON on each line.  I can write straightforward Python scripts to process these lines and do some of the heavy lifting for analysis.
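As a sketch of what that processing looks like, something like this would pull the creator values out of the jq output, one per line (the filename is hypothetical, and I guard for creator appearing as either a single string or a list):

import json

# Read the line-delimited JSON produced by the jq command above.
with open("dpla_creators.jsonl") as handle:  # hypothetical filename
    for line in handle:
        record = json.loads(line)
        creators = record.get("creator")
        if not creators:
            continue  # skip records with no creator value
        if isinstance(creators, str):
            creators = [creators]  # normalize a lone string to a list
        for creator in creators:
            print(creator)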

For this first pass I’m interested in all of the dc.creators in the whole DPLA dataset to measure the overall compression.

Here is a short sample of these values.

Henry G. Gilbert Nursery and Seed Trade Catalog Collection
United States. Committee on Merchant Marine and Fisheries
Herdman, W. A. Sir, (William Abbott), 1858-1924
United States. Committee on Merchant Marine and Fisheries
Henderson, Joseph C
Fancher Creek Nurseries
Roeding, George Christian, 1868-1928
Henry G. Gilbert Nursery and Seed Trade Catalog Collection
United States. Animal and Plant Health Inspection Service
United States. Bureau of Entomology and Plant Quarantine
United States. Plant Pest Control Branch
United States. Plant Pest Control Division

The full list is 10,413,292 lines long when I ignore record instances that don’t have any value for creator.

The next thing to do is sort that list and make it unique, which leaves me 1,445,688 unique creators in the DPLA metadata dataset.
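That step is the classic sort | uniq pipeline; a Python equivalent, assuming a file with one creator value per line from the script above, is short enough to show here:

from collections import Counter

# Count each distinct creator string; the keys are the unique creator list.
with open("creators.txt") as handle:  # hypothetical filename
    counts = Counter(line.rstrip("\n") for line in handle)

print(len(counts))  # number of unique creator strings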

Compressing the Data

For the first pass through the data I am going to use the “fingerprint algorithm” that OpenRefine describes in depth here.

The basics are as follows (from OpenRefine’s documentation):

  • remove leading and trailing whitespace
  • change all characters to their lowercase representation
  • remove all punctuation and control characters
  • split the string into whitespace-separated tokens
  • sort the tokens and remove duplicates
  • join the tokens back together
  • normalize extended western characters to their ASCII representation (for example “gödel” → “godel”)

If you’re curious, the code that performs this in OpenRefine is here.
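For readers who would rather stay in Python, here is a rough equivalent of those steps (a sketch, so it may not match OpenRefine’s Java implementation in every edge case):

import re
import unicodedata

def fingerprint(value):
    # Trim, lowercase, fold accented characters to ASCII, strip punctuation,
    # then sort and de-duplicate the whitespace-separated tokens.
    value = value.strip().lower()
    value = unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode("ascii")
    value = re.sub(r"[^\w\s]", "", value)
    tokens = sorted(set(value.split()))
    return " ".join(tokens)

print(fingerprint("Skarsgård, Alexander Johan Hjalmar"))
# -> alexander hjalmar johan skarsgard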

The next steps are to run this fingerprinting algorithm on each of the 1,445,688 creators, sort the resulting hash values, make them unique, and count the remaining lines.  This gives the number of unique creators based on the fingerprint algorithm.

I end up with 1,365,922 unique creator values based on the fingerprint.

That comes to a reduction of 5.52% of the unique values.

To give you an idea of what this looks like in practice: there are eleven different creator instances that share the fingerprint “akademiia imperatorskaia nauk russia”.

  • Imperatorskai︠a︡ akademī︠ia︡ nauk (Russia)
  • Imperatorskai︠a︡ akademīi︠a︡ nauk (Russia)
  • Imperatorskai͡a akademīi͡a nauk (Russia)
  • Imperatorskai͡a akademii͡a nauk (Russia)
  • Imperatorskai͡a͡ akademīi͡a͡ nauk (Russia)
  • Imperatorskaia akademīia nauk (Russia)
  • Imperatorskaia akademiia nauk (Russia)
  • Imperatorskai͡a akademïi͡a nauk (Russia)
  • Imperatorskai︠a︡ akademīi︠a︡ nauk (Russia)
  • Imperatorskai͡a akademīi͡a nauk (Russia)
  • Imperatorskaia akademīia nauk (Russia)
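Finding clusters like this one is just a matter of grouping the unique values by their fingerprint; here is a sketch that reuses the fingerprint() function from above (the input filename is hypothetical):

from collections import defaultdict

# fingerprint() as defined in the sketch above
clusters = defaultdict(set)
with open("unique_creators.txt") as handle:  # hypothetical filename
    for line in handle:
        name = line.rstrip("\n")
        clusters[fingerprint(name)].add(name)

# Print the fingerprints that collapse the most variant name strings.
for key, names in sorted(clusters.items(), key=lambda kv: -len(kv[1]))[:10]:
    print(len(names), key)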

These 11 different versions of this name are distributed among five different DPLA Hubs.

Below is a table showing how the different versions are distributed across hubs.

Name | Records | bhl | hathitrust | internet_archive | nypl | smithsonian
Imperatorskai︠a︡ akademī︠ia︡ nauk (Russia) | 1 | 0 | 1 | 0 | 0 | 0
Imperatorskai︠a︡ akademīi︠a︡ nauk (Russia) | 13 | 0 | 11 | 2 | 0 | 0
Imperatorskai͡a akademīi͡a nauk (Russia) | 7 | 0 | 7 | 0 | 0 | 0
Imperatorskai͡a akademii͡a nauk (Russia) | 3 | 0 | 3 | 0 | 0 | 0
Imperatorskai͡a͡ akademīi͡a͡ nauk (Russia) | 1 | 0 | 1 | 0 | 0 | 0
Imperatorskaia akademīia nauk (Russia) | 13 | 0 | 0 | 0 | 0 | 13
Imperatorskaia akademiia nauk (Russia) | 4 | 0 | 0 | 0 | 4 | 0
Imperatorskai͡a akademïi͡a nauk (Russia) | 1 | 0 | 1 | 0 | 0 | 0
Imperatorskai︠a︡ akademīi︠a︡ nauk (Russia) | 11 | 0 | 11 | 0 | 0 | 0
Imperatorskai͡a akademīi͡a nauk (Russia) | 13 | 0 | 13 | 0 | 0 | 0
Imperatorskaia akademīia nauk (Russia) | 211 | 211 | 0 | 0 | 0 | 0

When you look at the table you will see that bhl, internet_archive, nypl, and smithsonian each have a preferred way of representing this name.  HathiTrust, however, has eight different ways of representing this single creator name in its dataset.

Next Steps

This post hopefully introduced the idea of using “field compressions” for name fields like dc.creator, dc.contributor, and dc.publisher as a way of looking at metadata quality in a dataset.

We calculated the amount of compression for the DPLA creator field using OpenRefine’s fingerprint algorithm; it comes out to 5.52%.

In the next few posts I will look at the individual DPLA Hubs to see how they compare with each other.  I will probably play with a few different algorithms for creating the hash values I use.  Finally, I will calculate a few metrics in addition to just the number of unique values (cardinality) in the field.

If you have questions or comments about this post, please let me know via Twitter.