Influencers of the past

Abstract

The goal of this project is to show who the notable people in Paris were in 1884 and 1908 and where they lived. Our expected output is a webpage showing maps from both 1884 and 1908, with clusters indicating the number of inhabitants. The more you zoom in, the more details you can see; clicking on a point shows more information about a person (i.e. their name). We also provide an analysis of the results.
The initial sketch of our project is available here: Sketch of Influencers of the past, and the final website at the following link: Project website.

Preview of our website

Planning

Task                             | Status | Deadline
Extract the data                 | Done   | (:
Clean the data                   | Done   | (:
Get coordinates of the addresses | Done   | 22.11.19
Georeference old maps            | Done   | 22.11.19
Display people on maps           | Done   | 29.11.19
Web interface and analysis       | Done   | 06.12.19

Historical sources

In this project we are dealing with two main sources: the Annuaire du grand monde parisien of 1884 and the Paris-mondain: annuaire du grand monde parisien et de la colonie étrangère of 1908. These annuaires contain lists of people considered famous and influential at the time, giving for each of them their name and address. As stated in the preface by the author of the 1884 annuaire, Pol Hanin, the goal of such a book was to honor the high society of Paris and create a truly useful list of famous people[1].

For the visualisation, we have used two old maps of Paris. For the year 1884, we have the Nouveau plan complet illustré de la ville de Paris en 1884 by Alexandre Aimé Vuillemin and Charles Dyonnet. For the year 1908, we have the Plan de Paris, Mars 1908 et du chemin de fer métropolitain, distinguant les lignes déclarées d'utilité publique; les lignes concédées à titre éventuel et la concession de la Cie Nord-Sud by L. Wuhrer. Both maps are stored at the Bibliothèque nationale de France, in the Cartes et plans department. Note that the second map also shows the subway network of Paris at the time, whose first line was opened on 19 July 1900 for the Olympic Games of that year[2].

Annuaire du grand monde parisien (1884)
Paris-mondain: annuaire du grand monde parisien et de la colonie étrangère (1908)

Main steps

Extracting the data from the directories

Our first step is to extract all the names and addresses from the two directories. To do so, we use Transkribus[3] to obtain the OCR output and then start parsing the information.

Cleaning the data

This is the principal step in our project. The data the OCR gives us is quite messy: there are a lot of errors, and we definitely need to correct them if we hope to obtain the coordinates of our addresses. We also need to harmonize our results. For instance, we want to treat 'r.' and 'rue' (the French word for 'street') in the same way, or 'bd' and 'boulevard'. Having all our addresses in a standardized form also helps us easily retrieve the corresponding coordinates. The principal challenge of this step is that we have two different OCR outputs for the two years (1884 and 1908), so we had to implement two specific parsers.

The code for the parsers can be found here: 1884 parser and 1908 parser.

Cleaning the 1884 annuaire

The 1884 OCR output came in xlsx format. The results were given in small tables of a dozen elements each, but with a varying number of columns (2, 3, 4 or 5), so we first had to combine them all into one single dataframe with two columns, Names and Addresses. We then proceeded to clean each column.
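A minimal sketch of this merging step with pandas; the file name, the one-table-per-sheet layout, and the assumption that the last column holds the address are all hypothetical:

```python
import pandas as pd

frames = []
# Hypothetical layout: each small OCR table exported to its own sheet.
for sheet in pd.read_excel("ocr_1884.xlsx", sheet_name=None, header=None).values():
    # Tables have 2 to 5 columns; assume the last column holds the address
    # and the remaining ones the (possibly split) name.
    names = sheet.iloc[:, :-1].astype(str).agg(" ".join, axis=1).str.strip()
    addresses = sheet.iloc[:, -1]
    frames.append(pd.DataFrame({"Names": names, "Addresses": addresses}))

df = pd.concat(frames, ignore_index=True)
```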

Addresses

For the addresses, we based our cleaning on the spelling corrector by Peter Norvig[4]. The idea is to manually correct some addresses first, in order to learn the correct spellings and store the corrected words. Once this is done, the actual corrections are performed: word by word, we find all the candidates at edit distance 1 or 2 that appear in the list of corrected words and replace the word with the most probable correction. This is well suited for correcting addresses, since the same words (such as "rue" or "bd") occur many times. With a few final corrections for specific cases (simply using Python's replace() method for strings), we can then get the coordinates of each address.
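A minimal sketch of such a Norvig-style corrector, assuming a Counter of manually corrected words (the sample words and alphabet shown here are illustrative):

```python
from collections import Counter

# Hypothetical corpus of manually corrected address words.
WORDS = Counter(["rue", "rue", "rue", "boulevard", "avenue", "place"])
LETTERS = "abcdefghijklmnopqrstuvwxyzéèàâ'"

def edits1(word):
    """All strings at edit distance 1 from `word`."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in LETTERS]
    inserts = [L + c + R for L, R in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)

def edits2(word):
    """All strings at edit distance 2 from `word`."""
    return {e2 for e1 in edits1(word) for e2 in edits1(e1)}

def known(words):
    """Keep only candidates seen in the corrected corpus."""
    return {w for w in words if w in WORDS}

def correct(word):
    """Most frequent known candidate at distance 0, then 1, then 2."""
    candidates = known([word]) or known(edits1(word)) or known(edits2(word)) or {word}
    return max(candidates, key=WORDS.get)
```

For example, correct("ruc") returns "rue" once "rue" dominates the corpus.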

Names

We also need to clean the names, but this is much harder since there cannot be a list of "corrected names". What we can do, however, is correct the titles. Indeed, in the 1884 annuaire, many people's names come with a title (such as "Cte" for "Comte"). We therefore use a simple dict to map the abbreviations (and their different variants, due to OCR errors) to the full title. We also correct some specific cases such as "cl'" instead of "d'". Finally, we split spouses, as in many cases they were listed together; this increased the number of people by approximately 500. In the end, we decided to keep the households unsplit in the visualization, but these results could still be used for further analysis.
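For illustration, the title mapping could look like the following sketch; the abbreviations and OCR variants listed are examples, not the actual table used in the project:

```python
# Map title abbreviations (and illustrative OCR variants) to full titles.
TITLES = {
    "Cte": "Comte", "C1e": "Comte", "Cle": "Comte",
    "Ctesse": "Comtesse",
    "Bon": "Baron",
    "Mis": "Marquis",
}

def clean_name(name):
    name = name.replace("cl'", "d'")   # recurring OCR confusion
    first, sep, rest = name.partition(" ")
    # Expand a leading title abbreviation, leave the rest untouched.
    return TITLES.get(first, first) + sep + rest

# e.g. clean_name("Cte cl'Artois") -> "Comte d'Artois"
```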

Cleaning the 1908 annuaire

The 1908 OCR output came as a txt file. Looking at the raw data, we saw that most people were separated by a '\n', or were at least on different lines. Also, each new entry starts with the person's name in uppercase, and every piece of information about them is separated by commas.

We used these criteria to separate the people and put them into an array. Concretely, to detect a new person we check:

1) whether the line is a '\n';
2) whether the line starts with a family name, i.e. whether at least 70% of the characters of the first word are uppercase. This threshold allows us to tolerate small OCR errors.

We also split each person's entry by commas. Each element of our array is then itself an array whose first element is the person's name; we retrieve their address with helper methods, as sketched below.
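A sketch of this detection and splitting logic; the 70% threshold comes from the text, while the file name and helper name are ours:

```python
def starts_with_family_name(line):
    """True if at least 70% of the letters of the first word are uppercase."""
    words = line.split()
    if not words:
        return False
    letters = [c for c in words[0] if c.isalpha()]
    if not letters:
        return False
    return sum(c.isupper() for c in letters) / len(letters) >= 0.7

raw_text = open("annuaire_1908.txt", encoding="utf-8").read()  # hypothetical file

people, current = [], ""
for line in raw_text.splitlines():
    # A blank line or a line starting with a family name opens a new entry.
    if line.strip() == "" or starts_with_family_name(line):
        if current.strip():
            people.append([field.strip() for field in current.split(",")])
        current = line
    else:
        current += " " + line
if current.strip():
    people.append([field.strip() for field in current.split(",")])
# Each entry is now an array whose first element is the person's name.
```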

Addresses

To find the address of a person, we iterate over the entry and look for a digit (or a digit followed by "bis"), since we noticed that most addresses start with the street number. Once we have found the digit, we concatenate all the following elements of the array until we hit a '(': every address ends with "('arrondissement' number)", which we do not need.
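A minimal sketch of this extraction, operating on the comma-split entries produced above (function name and example entry are illustrative):

```python
import re

def extract_address(entry):
    """Return the address from a comma-split entry, or None.

    The address starts at the first field beginning with a street number
    (optionally followed by 'bis') and ends just before the
    '(arrondissement number)' part.
    """
    for i, field in enumerate(entry):
        if re.match(r"\s*\d+(\s*bis)?\b", field):
            parts = []
            for later in entry[i:]:
                cut = later.find("(")
                if cut != -1:
                    parts.append(later[:cut])  # drop the arrondissement suffix
                    break
                parts.append(later)
            return ", ".join(p.strip() for p in parts).strip(" ,")
    return None

# e.g. extract_address(["DUPONT Jean", "avocat", "12 bis", "rue de Rivoli (1er arr.)"])
#      -> "12 bis, rue de Rivoli"
```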

Names cleaning

Once we had the addresses, we needed to clean the names.

Finding the geolocation of the addresses

To show the addresses on the map, we need their geolocation (latitude/longitude coordinates). We proceeded in two steps. First, we used the list of addresses of Paris created by the DHLab. This database provides old Paris addresses with their start and end dates (if known) and their coordinates (latitude and longitude, directly in the EPSG:3857 format handled by Leaflet[5]). The difficulty was to match the addresses between our lists and the database. To do so, we designed a "normalized" representation of the addresses: removing all accents and punctuation, lowercasing all letters, and moving the street number to the end.
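A minimal sketch of such a normalization function, implementing exactly the four operations listed above:

```python
import re
import unicodedata

def normalize(address):
    """Normalized matching key: no accents, no punctuation,
    all lowercase, street number moved to the end."""
    # Strip accents via Unicode decomposition.
    address = unicodedata.normalize("NFKD", address)
    address = "".join(c for c in address if not unicodedata.combining(c))
    # Replace punctuation with spaces and lowercase everything.
    address = re.sub(r"[^\w\s]", " ", address).lower()
    tokens = address.split()
    # Move a leading street number to the end.
    if tokens and tokens[0].isdigit():
        tokens = tokens[1:] + tokens[:1]
    return " ".join(tokens)

# e.g. normalize("12, Rue de l'Élysée") -> "rue de l elysee 12"
```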

To complete our database, we then used the GeoPy API[6], which simply takes our remaining addresses and gives back their geocoordinates.
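The text does not say which geocoding backend was used; a minimal sketch with GeoPy's Nominatim geocoder (backend choice and user agent are our assumptions):

```python
from geopy.geocoders import Nominatim

# Nominatim is one of several backends GeoPy supports; chosen here for illustration.
geolocator = Nominatim(user_agent="influencers-of-the-past")

location = geolocator.geocode("35 boulevard Haussmann, Paris, France")
if location is not None:
    print(location.latitude, location.longitude)
```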

Georeferencing old maps of Paris

Once we have the coordinates of our addresses, we need to georeference old maps of Paris. To do so, we use Georeferencer[7] to obtain our old maps in TIFF format. By locating homologous points on the old map and a present-day map, this tool allows us to project coordinates onto the old map. The result can then be used with the Leaflet library to visualise our results.

Visualising the results

Once we have all our elements, we can start visualising our results. At first we tried to continue using Python with the Folium module[8] (which implements Leaflet). However, the results were not great: the maps took a long time to load and we did not have much control over how the people were visualised. We therefore decided to switch to Javascript, which also makes it much simpler to embed the maps in our website.

Then we had to decide how to display the famous people on the map. The naive way would be to simply put all our addresses on the map as markers, but due to the large number of addresses (a few thousand) this would result in an overcrowded map. Our first idea is therefore to cluster addresses that are near each other, using the Leaflet plugin MarkerCluster[9]. At low zoom levels, this allows us to visualise 'influential' neighbourhoods, for instance. When one zooms in, one eventually reaches a level where each person is shown as a dot; clicking on a dot opens a pop-up with additional information on the person (such as the name).

This is a first step towards showing how "clustered" the famous people are, but we wanted to implement other visualisations to show it better. The first uses the Leaflet plugin Heat[10], a simple heatmap plugin, to represent the density of famous people. The second adds the arrondissements of Paris[11] to the map, coloring them according to the number of famous people within. Finally, the same thing is done with the quarters of Paris[12]. Note that both the arrondissements and the quarters date from 1860[13] and have not changed much up to the present day, meaning that finding the fanciest quarters remains meaningful (even without knowing the precise history of Paris).
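Although the final site uses Leaflet directly in Javascript, the clustering and heatmap ideas can be sketched in Python with Folium, the module the team first tried; the coordinates and names below are illustrative:

```python
import folium
from folium.plugins import HeatMap, MarkerCluster

# Illustrative data: (name, latitude, longitude).
people = [("Comte de X", 48.8738, 2.2950), ("Mme Y", 48.8566, 2.3522)]

m = folium.Map(location=[48.8566, 2.3522], zoom_start=13)

# Clustered markers: aggregated counts at low zoom, one dot per person
# (with a name pop-up) at high zoom.
cluster = MarkerCluster().add_to(m)
for name, lat, lon in people:
    folium.Marker([lat, lon], popup=name).add_to(cluster)

# Density view, analogous to the Leaflet.heat layer.
HeatMap([(lat, lon) for _, lat, lon in people]).add_to(m)

m.save("paris_map.html")
```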

Quality assessment

In this section, we assess the quality of our processes. First we evaluate the quality of our parsing of the OCR output, comparing the number of entries in the annuaire with the number of addresses we have to clean. Note that for the 1908 list, the OCR gave us a text file, so this evaluation combines the quality of the OCR with the quality of our parsing methods for extracting each name/address pair. For the 1884 list, the OCR directly gives us a table of names and addresses, so this evaluation assesses the quality of the OCR alone, over which we have no control. To estimate the number of entries in the actual annuaire, we counted them manually on a few pages to get the mean number of people per page (a fairly constant number, thanks to the clear structure of the annuaires) and multiplied it by the number of pages. We get the following results:

Year | Entries per page | Number of pages | Total number of entries | Output of the OCR | After removing missing values | Quality assessment [%]
1884 | 35               | 182             | ~6400                   | 5709              | 5590                          | 87
1908 | 40               | 310             | ~12400                  | 11045             | 8394                          | 68

Then we need to evaluate the quality of our cleaning of the addresses and how many coordinates we managed to get. These two steps are evaluated together, as the quality of our cleaning directly reflects on the number of coordinates we obtain. Here are the results:

Year | Entries pre-cleaning | Entries with coordinates | Quality assessment [%]
1884 | 5590                 | 3572                     | 64
1908 | 8394                 | 5295                     | 63
Overall, we managed to get the coordinates of 56% of the people in the 1884 annuaire and 43% of the people in the 1908 annuaire. These numbers may seem quite low, but it is important to stress that, especially for the 1908 entries, the OCR output was of poor quality. This was to be expected, as the 1908 annuaire itself is less structured than the 1884 one and the quality of the images on Gallica is poorer. In some cases, even a human would find it hard to read the exact information, as in the following example.

Example of a hard-to-read address in the 1908 annuaire

Links

References