First step analysis of Library of Congress Name Authority File

For a class this last semester I spent a bit of time working with the Library of Congress Name Authority File (LC-NAF) that is available here in a number of downloadable formats.

After downloading the file and extracting only the parts I was interested in, I was left with 7,861,721 names to play around with.

The resulting dataset has three columns: the unique identifier for a name, the category (either PersonalName or CorporateName), and finally the authoritative string for the given name.

Here is an example set of entries in the dataset.

<http://id.loc.gov/authorities/names/no2015159973> PersonalName Thomas, Mike, 1944-
<http://id.loc.gov/authorities/names/n00004656> PersonalName Gutman, Sharon A.
<http://id.loc.gov/authorities/names/no99024929> PersonalName Hornby, Lester G. (Lester George), 1882-1956
<http://id.loc.gov/authorities/names/n86050616> PersonalName Borisi\uFE20u\uFE21k, G. N. (Galina Nikolaevna)
<http://id.loc.gov/authorities/names/no2011132525> PersonalName Cope, Samantha
<http://id.loc.gov/authorities/names/nr92002092> PersonalName Okuda, Jun
<http://id.loc.gov/authorities/names/n2008028760> PersonalName Brandon, Wendy
<http://id.loc.gov/authorities/names/no2008088468> PersonalName Gminder, Andreas
<http://id.loc.gov/authorities/names/nb2013005548> CorporateName Archivo Hist\u00F3rico Provincial de Granada
<http://id.loc.gov/authorities/names/n84081250> PersonalName Mermier, Pierre-Marie, 1790-1862

I was interested in how Personal and Corporate names differ across the whole LC-NAF file and in seeing if there were any patterns that I could tease out.  The final goal was to see if I could train a classifier to automatically classify a name string into either the PersonalName or CorporateName class.

But more on that later.

Personal or Corporate Name

The first thing to take a look at in the dataset is the split between PersonalName and CorporateName strings.

LC-NAF Personal / Corporate Name Distribution

As you can see, the majority of names in the LC-NAF are personal names, 6,361,899 (81%), with just 1,499,822 (19%) being corporate names.

Commas

One of the common formatting rules in library land is to invert names so that they are in the format of Last, First.  This is useful when sorting names as it will group names together by family name instead of ordering them by first name.  Because of this common rule I expected that the majority of the personal names would have a comma.  I wasn’t sure what number of the corporate names would have a comma in them.

Distribution of Commas in Name Strings

In looking at the graph above you can see that it is indeed true that the majority of personal names have commas, 6,280,219 (99%), with a much smaller set of corporate names, 213,580 (14%), having a comma present.

Periods

I next took a look at periods in the name strings.  I wasn’t sure exactly what I would find in doing this, so my only prediction was that there would be fewer name strings with periods present.

Distribution of Periods in Name Strings

This time we see a somewhat different graph.  Personal names have 1,587,999 (25%) instances with periods while corporate names have 675,166 (45%) instances with periods.

Hyphens

Next up are the hyphens that occur in name strings.

Distribution of Hyphens in Name Strings

There are 138,524 (9%) of corporate names with hyphens and 2,070,261 (33%) of personal names with hyphens present in the name string.

I know that there are many name strings in the LC-NAF that have dates in the format of yyyy-yyyy, yyyy-, or -yyyy. Let’s see how many name strings have a hyphen when we remove those.

Date and Non-Date Hyphens

This time we look at just the instances that have hyphens and divide them into two categories: “Date Hyphens” and “Non-Date Hyphens”.  You can see that most of the corporate name strings have hyphens that are not related to dates.  The personal names, on the other hand, have the majority of their hyphens occurring in date strings.
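As a rough illustration of how date hyphens could be separated from non-date ones, a regular expression over the yyyy-yyyy, yyyy-, and -yyyy patterns works on strings like the examples above.  This sketch is my own; the pattern and function name are not from the original analysis:

```python
import re

# Date spans as they appear in LC-NAF name strings: "1882-1956",
# an open-ended "1944-", or a death date only, "-1956".
DATE_SPAN = re.compile(r'\d{4}-\d{4}|\d{4}-|-\d{4}')

def hyphen_type(name):
    """Classify a name string's hyphens as 'date', 'non-date', or 'none'."""
    if '-' not in name:
        return 'none'
    # Strip out the date spans; any hyphen left over is a non-date hyphen.
    remainder = DATE_SPAN.sub('', name)
    return 'non-date' if '-' in remainder else 'date'
```

A name like “Mermier, Pierre-Marie, 1790-1862” still has the hyphen in Pierre-Marie after the date span is removed, so it counts as a non-date hyphen.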

Parentheses

The final punctuation characters we will look at are parentheses.

Distribution of Parentheses in Name Strings

We see that most names overall don’t have parentheses in them.  There are 472,254 (31%) corporate name strings in the dataset with parentheses and 541,087 (9%) personal name strings that have parentheses.

This post is the first in a short series that takes a look at the LC Name Authority File to get a better understanding of how names in library metadata have been constructed over the years.

If you have questions or comments about this post, please let me know via Twitter.

Removing leading or trailing white rows from images

At the library we are working on a project to digitize television news scripts from KXAS, the NBC affiliate in Fort Worth.  These scripts were read on the air during the broadcast and are a great entry point into the vast collection of film and tape that is housed at the UNT Libraries.

To date we’ve digitized and made available over 13,000 of these scripts.

In looking at workflows we noticed that sometimes the scanners and scanning software would leave several rows of white pixels at the leading or trailing end of the image.

It is kind of hard to see because the page has a white background, so I’ll include a closeup for you.  I put a black border around the image to help the white stand out a bit.

Detail of leading white edge

One problem with these white rows is that they happen some of the time but not all of the time.  Another problem is that the number of white lines isn’t uniform; it varies from image to image when it occurs.  The final problem is that the white rows don’t consistently appear at the top or the bottom of the image.  They could be at the top, the bottom, or both.

Probably the best solution to this problem is going to be getting different control software for the scanners that we are using.  But that won’t help the tens of thousands of these images that we have already scanned and that we need to process.

Trimming white lines

Manual

There are a number of ways that we can approach this task.  First we can do what we are currently doing, which is to have our imaging students open each image and manually crop it if needed.  This is very time consuming.

Photoshop

There is a tool in Photoshop that can sometimes be useful for this kind of work.  It is the “Trim” tool.  Here is the dialog box you get when you select this tool.

Photoshop Trim Dialog Box

This allows you to select whether you want to remove from the top or bottom (or left or right).  The tool wants you to select a place on the image to grab a color sample and then it will try to trim off rows of the image that match that color.

Unfortunately this wasn’t an ideal solution because you still had to know if you needed to crop from the top or bottom.

Imagemagick

ImageMagick has an option called “-trim” that does a very similar thing to the Photoshop Trim tool.  It is well described on this page.

By default the -trim option will remove edges around the whole image that match a pixel value.  You are able to loosen the pixel-color match a bit with the -fuzz option, but it isn’t an ideal solution either.

A little Python

My next idea was to use a bit of Python to identify the number of rows in an image that are white.

With this script you feed it an image filename and it will return the number of rows from the top of the image that are at least 90% white.

The script will convert the incoming image into a grayscale image and then, line by line, count the number of pixels that have a value greater than 225 (from slightly off-white all the way to pure white).  It will then count a line as “white” if more than 90% of the pixels on that line have a value greater than 225.

Once the script reaches a row that isn’t white, it ends and returns the number of white lines it has found.  If the first row of the image is not a white row it will immediately return with a value of 0.
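The script itself isn’t reproduced here, but based on the description a minimal version might look something like this, using Pillow.  The thresholds (225 and 90%) come from the text; the function name and the choice of Pillow are my own assumptions:

```python
from PIL import Image

def count_leading_white_rows(filename, pixel_threshold=225, row_fraction=0.9):
    """Count the white rows at the top of an image.

    A pixel counts as white if its grayscale value is greater than
    pixel_threshold; a row counts as white if more than row_fraction
    of its pixels are white.  Stops at the first non-white row.
    """
    img = Image.open(filename).convert('L')  # convert to grayscale
    width, height = img.size
    pixels = img.load()
    white_rows = 0
    for y in range(height):
        white = sum(1 for x in range(width) if pixels[x, y] > pixel_threshold)
        if white / width > row_fraction:
            white_rows += 1
        else:
            break  # first non-white row ends the count
    return white_rows
```

If the very first row isn’t white this returns 0 immediately, matching the behavior described above.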

The next step is to go back to ImageMagick, but this time using the -chop flag to remove the number of rows from the image that the previous script reported.

mogrify -chop 0x15 UNTA_AR0787-010-1959-06-14-07_01.tif

We tell mogrify to chop off the first fifteen rows of the image with the 0x15 value.  This geometry means zero columns and fifteen rows of pixels are removed.

Here is what the final image looks like without the leading white edge.

Corrected image

In order to count the rows from the bottom you have to adjust the script in one place.  Basically you reverse the order of the rows in the image so that you work from the bottom up.  This allows you to apply the same logic for finding white rows as before.
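A sketch of that adjustment, under the same assumptions as before (Pillow, and the 225 / 90% thresholds from the description), with the row order reversed:

```python
from PIL import Image

def count_trailing_white_rows(filename, pixel_threshold=225, row_fraction=0.9):
    """Count the white rows at the bottom of an image by scanning
    the rows in reverse order, bottom row first."""
    img = Image.open(filename).convert('L')  # convert to grayscale
    width, height = img.size
    pixels = img.load()
    white_rows = 0
    for y in reversed(range(height)):  # the one change: reverse the row order
        white = sum(1 for x in range(width) if pixels[x, y] > pixel_threshold)
        if white / width > row_fraction:
            white_rows += 1
        else:
            break
    return white_rows
```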

You have to adjust the Imagemagick command as well so that you are chopping the rows from the bottom of the image and not the top.  You do this by specifying -gravity in the command.

mogrify -gravity bottom -chop 0x15 UNTA_AR0787-010-1959-06-14-07_01.tif

With a little bit of bash scripting these scripts can be used to process a whole folder full of images, only modifying the images that actually have rows that need to be removed.
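One way to sketch that glue step is to build the mogrify commands in Python and only run them for images that actually need trimming.  The original workflow used bash; this function, including its dry_run switch, is my own illustration:

```python
import subprocess

def trim_images(row_counts, dry_run=False):
    """row_counts maps an image filename to its number of leading
    white rows (as reported by the counting script).  Images with
    zero white rows are skipped; the rest get their top rows chopped
    off with mogrify.  With dry_run=True the commands are only
    built and returned, not executed."""
    commands = [['mogrify', '-chop', '0x{}'.format(rows), name]
                for name, rows in sorted(row_counts.items()) if rows > 0]
    if not dry_run:
        for cmd in commands:  # requires mogrify on the PATH
            subprocess.run(cmd, check=True)
    return commands
```

For example, `trim_images({'UNTA_AR0787-010-1959-06-14-07_01.tif': 15, 'other.tif': 0}, dry_run=True)` builds a single mogrify call for the first image and skips the second.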

This combination of a small Python script to gather image information and then passing that info on to Imagemagick has been very useful for this project and there are a number of other ways that this same pattern can be used for processing images in a digital library workflow.

If you have questions or comments about this post, please let me know via Twitter.