In the previous post I walked through some of the different ways that we could normalize a subject string and took a look at what effects these normalizations had on the subjects in the entire DPLA metadata dataset that I have been using.
In this post I want to continue along those lines and look at what happens when you apply these normalizations to the subjects in the dataset, this time focusing on the Hub level instead of working with the whole dataset.
I applied the normalizations mentioned in the previous post to the subjects from each of the Hubs in the DPLA dataset. These included total values, unique but un-normalized values, case folded, lowercased, NACO, Porter stemmed, and fingerprint. I applied each normalization to the output of the previous normalization as a series; here is what the normalization chain looked like at each step.
- total
- total > unique
- total > unique > case folded
- total > unique > case folded > lowercased
- total > unique > case folded > lowercased > NACO
- total > unique > case folded > lowercased > NACO > Porter
- total > unique > case folded > lowercased > NACO > Porter > fingerprint
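The chain above can be sketched in a few lines of Python. This is a minimal sketch, not the code used for the post: the `naco_like` step is a simplified stand-in for the real NACO normalization rules, the Porter step is omitted (a full implementation such as NLTK's `PorterStemmer` would be used there), and all function names are my own.

```python
import re
import unicodedata

def case_fold(value):
    # Unicode case folding (slightly more aggressive than lower())
    return value.casefold()

def lowercase(value):
    return value.lower()

def naco_like(value):
    # Simplified stand-in for NACO normalization: strip diacritics and
    # most punctuation, then collapse runs of whitespace.
    stripped = "".join(
        c for c in unicodedata.normalize("NFKD", value)
        if not unicodedata.combining(c)
    )
    stripped = re.sub(r"[^\w\s]", " ", stripped)
    return re.sub(r"\s+", " ", stripped).strip()

def fingerprint(value):
    # OpenRefine-style fingerprint: split into tokens, dedupe, sort, rejoin.
    return " ".join(sorted(set(value.split())))

def chain_counts(subjects):
    # Run each normalization on the unique output of the previous one,
    # recording how many distinct values survive each step.
    counts = {"total": len(subjects)}
    values = set(subjects)  # total > unique
    counts["unique"] = len(values)
    for name, step in [("folded", case_fold),
                       ("lowercase", lowercase),
                       ("naco", naco_like),
                       ("fingerprint", fingerprint)]:
        values = {step(v) for v in values}
        counts[name] = len(values)
    return counts
```

For example, `chain_counts(["Cats", "cats", "CATS", "Dogs."])` starts with four total and four unique values, which collapse to two after case folding.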
The number of subjects after each normalization is presented in the first table below.
| Hub Name | Total Subjects | Unique Subjects | Folded | Lowercase | NACO | Porter | Fingerprint |
| --- | --- | --- | --- | --- | --- | --- | --- |
Here is a table that shows the percentage reduction after the subjects are normalized with each algorithm. The percent reduction makes the effect a little easier to interpret.
| Hub Name | Folded Normalization | Lowercase Normalization | NACO Normalization | Porter Normalization | Fingerprint Normalization |
| --- | --- | --- | --- | --- | --- |
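One plausible reading of the table is that each cell is the reduction relative to the count from the previous step in the chain; a small helper (my own name, not from the post) makes the arithmetic explicit:

```python
def percent_reduction(previous_count, current_count):
    # 100 * (previous - current) / previous, i.e. how much of the
    # previous step's values were collapsed away by this normalization.
    return 100.0 * (previous_count - current_count) / previous_count

# e.g. if a hub had 10,000 unique subjects and 9,500 after case folding:
percent_reduction(10000, 9500)  # -> 5.0
```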
Here is the same data presented as a graph, which I think shows the trends even better.
You can see that for many of the Hubs the biggest reduction happens with the Porter normalization and the fingerprint normalization. One Hub of note is ArtStore, which had the highest percentage reduction of all the Hubs. This was primarily caused by the Porter normalization, which means that a large percentage of its subjects stemmed to the same stem, often the plural and singular versions of the same subject. This may be completely valid for how ArtStore chose to create its metadata, but it is still interesting.
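The plural-vs-singular collapsing can be illustrated with a deliberately tiny stand-in for Porter stemming (the real algorithm has many more rules; NLTK's `PorterStemmer` is a full implementation):

```python
def toy_stem(word):
    # Toy rule: strip a trailing "s" (but not "ss"), just to show how
    # plural and singular forms of a subject collapse to one stem.
    word = word.lower()
    if word.endswith("s") and not word.endswith("ss"):
        return word[:-1]
    return word

subjects = ["Painting", "Paintings", "Portrait", "Portraits"]
stems = {toy_stem(s) for s in subjects}
# four subjects collapse to two stems: {"painting", "portrait"}
```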
Another Hub I found interesting was Harvard, where the biggest reduction happened with the fingerprint normalization. This might suggest that there are a number of values that contain the same tokens in a different order, for example names that occur in both inverted and non-inverted forms.
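A quick sketch of why fingerprinting catches inverted names: because the fingerprint key lowercases, strips punctuation, and sorts the tokens, both orderings of a name produce the same key. This follows the OpenRefine-style fingerprint method; the example name is purely illustrative.

```python
import re

def fingerprint(value):
    # OpenRefine-style fingerprint key: lowercase, strip punctuation,
    # split on whitespace, dedupe, sort, rejoin.
    value = re.sub(r"[^\w\s]", "", value.lower())
    return " ".join(sorted(set(value.split())))

# Inverted and direct forms of the same name share a fingerprint:
fingerprint("Dickens, Charles") == fingerprint("Charles Dickens")  # True
```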
In the end I’m not sure how helpful this is as an indicator of quality within a field. Some fields would benefit from this sort of normalization more than others. For example, fields like subject, creator, contributor, and publisher will normalize very differently than a field like title or description.
Let me know what you think via Twitter if you have questions or comments.