Influencers of the past

In this page, we will discuss and present our project Influencers of the past. Our goal is to show who the notable people in Paris were in 1884 and 1908 and where they lived. Here is the sketch of our project:

[Figure: Sketch of Influencers of the past]

Abstract

Our expected output is a webpage showing both maps, from 1884 and 1908, with clusters indicating the number of inhabitants per neighbourhood. The more you zoom in, the more details you can see. You can click on a point to see more information about a person (e.g. their name). We will also provide an analysis of the results.

Planning

{| class="wikitable"
|-
! Task
! Status
! Deadline
|-
| Extract the data
| Done
| (:
|-
| Clean the data
| Done
| (:
|-
| Get coordinates of the addresses
| Done
| 22.11.19
|-
| Georeference old maps
| Done
| 22.11.19
|-
| Display people on maps
| Done
| 29.11.19
|-
| Web interface and analysis
| In progress
| 06.11.19
|}

Historical sources

Main steps

Extracting the data from the directories

Our first step is to extract all the names and addresses from the two directories. To do so, we use Transkribus to obtain the OCR output and then start to parse the information.
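As a rough illustration of the parsing step, here is a minimal sketch assuming the OCR output is a plain-text file with one "name, address" entry per line; this entry format is an assumption, and the real parsers (one per directory, as explained in the next section) are more involved.

<syntaxhighlight lang="python">
import re

# Assumed entry format: "Name, address" on a single line. The real OCR
# output is messier and each directory needs its own parser.
ENTRY_RE = re.compile(r"^(?P<name>[^,]+),\s*(?P<address>.+)$")

def parse_entries(path):
    entries = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            match = ENTRY_RE.match(line.strip())
            if match:  # skip lines too garbled to split into name/address
                entries.append((match.group("name"), match.group("address")))
    return entries
</syntaxhighlight>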

Cleaning the data

This is the principal step in our project. The data the OCR gives us is quite messy: there are a lot of errors and we definitely need to correct them if we hope to obtain the geocoordinates of our addresses. We also need to harmonise our results. For instance, we want to treat 'r.' and 'rue' (the French word for 'street') in the same way, and likewise 'bd' and 'boulevard'. Having all our addresses in a standardized form also makes it easier to retrieve the corresponding geocoordinates. The principal challenge of this step is that we have two different OCR outputs for the two years (1884 and 1908). We thus had to implement two specific parsers. A minimal sketch of the harmonisation idea is shown below.
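The sketch expands abbreviations through a lookup table. Only 'r.' and 'bd' come from the examples above; the other entries and the function name are illustrative assumptions, and the real cleaning also corrects OCR errors.

<syntaxhighlight lang="python">
# Hypothetical abbreviation table: 'r.' and 'bd' are from the examples
# above, the other entries are added for illustration only.
ABBREVIATIONS = {
    "r.": "rue",
    "bd": "boulevard",
    "av.": "avenue",
    "pl.": "place",
}

def normalise_address(raw):
    """Lowercase an address and expand known abbreviations."""
    tokens = raw.lower().strip().split()
    return " ".join(ABBREVIATIONS.get(t, t) for t in tokens)

# normalise_address("18 r. de Rivoli") -> "18 rue de rivoli"
</syntaxhighlight>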

Finding the geolocation of the addresses

To be able to show the addresses on the map, we need to find their geolocation (latitude/longitude coordinates). We proceeded in two steps. First, we used the list of addresses of Paris created by the DHLab. This database provides a list of old Paris addresses with the start and end date (if known) and the geocoordinates (latitude and longitude, directly in the EPSG:3857 format handled by Leaflet). This first step has given us ADD PERCENTAGE % of our addresses. To complete our database, we then used the GeoPy API [1]. This API simply takes our remaining addresses and gives back the geocoordinates. With this second step, we have managed to geolocate 92% of our addresses.
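For the second step, a minimal GeoPy sketch could look as follows. GeoPy wraps several geocoding services; the choice of Nominatim here, the user_agent string and the ", Paris, France" suffix are assumptions, not necessarily the project's actual setup.

<syntaxhighlight lang="python">
from geopy.geocoders import Nominatim

# Nominatim is one of the geocoders GeoPy offers; the user_agent string
# is a placeholder.
geolocator = Nominatim(user_agent="influencers-of-the-past")

def geocode(address):
    location = geolocator.geocode(address + ", Paris, France")
    if location is None:  # e.g. an address with uncorrected OCR errors
        return None
    return (location.latitude, location.longitude)

# geocode("18 rue de rivoli") -> (48.85..., 2.35...)
</syntaxhighlight>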

Georeference old maps of Paris

Once we have the geocoordinates of our addresses, we need to georeference old maps of Paris. To do so, we use Georeferencer. By locating homologous points between the old map and the present-day map, this tool allows us to project geocoordinates onto the old map. The result can then be used with the Leaflet library to visualise our results.
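Assuming the georeferenced map is served as standard XYZ tiles, it can be overlaid on a Leaflet map. The sketch below uses Folium (the Python wrapper mentioned in the next section) and a placeholder tile URL, not the project's actual endpoint.

<syntaxhighlight lang="python">
import folium

# Placeholder tile URL for the georeferenced 1884 map; the real endpoint
# depends on how Georeferencer serves the tiles.
TILES_1884 = "https://example.org/tiles/paris_1884/{z}/{x}/{y}.png"

m = folium.Map(location=[48.8566, 2.3522], zoom_start=13)
folium.TileLayer(tiles=TILES_1884, attr="Paris 1884 (georeferenced)",
                 name="1884 map", overlay=True).add_to(m)
folium.LayerControl().add_to(m)  # toggle between the base and the old map
m.save("paris_1884.html")
</syntaxhighlight>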

Visualise results

Once we have all our elements, we can start visualising our results. At first we tried to continue using Python with the Python module Folium [2] (which wraps Leaflet). However, the results were not great: the maps took a long time to load and we did not have much control over how the people were visualised. This is why we decided to switch to Javascript, which also makes it much simpler to embed the maps in our website.

Then we had to decide how to display the famous people on the map. The naive way would be to simply put all our addresses on the map, but given the large number of addresses we have (a few thousand), this would result in an overcrowded map. Our first idea is therefore to cluster addresses that are near each other. At low zoom levels, this allows one to visualise 'influential' neighbourhoods, for instance. When one zooms in further, one eventually reaches a level where each person is shown as a dot; clicking on a dot opens a pop-up with additional information on the person (such as the name). To do so, we use the plugin Leaflet.markercluster (the clustering idea is sketched below).

This is a first step towards showing how "clustered" the famous people are, but we want to implement other visualisations to show it better. The first one uses the plugin Leaflet.heat, a simple heatmap plugin, to represent the density of famous people. The second one adds the arrondissements of Paris[3] to the map, colouring them according to the number of famous people within. Finally, the same thing is done with the quarters of Paris[4]. Notice that both the arrondissements and the quarters date from 1860[5] and have not changed much up to the present day, meaning that finding the fanciest quarters is meaningful (even without knowing the precise history of Paris).
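As a sketch of the clustering idea, here is the Folium/Python analogue of what the site does with Leaflet.markercluster; the `people` list, the sample coordinates and the output file name are illustrative assumptions.

<syntaxhighlight lang="python">
import folium
from folium.plugins import MarkerCluster

def build_map(people):
    """people: assumed list of (name, latitude, longitude) tuples."""
    m = folium.Map(location=[48.8566, 2.3522], zoom_start=12)
    cluster = MarkerCluster().add_to(m)
    for name, lat, lon in people:
        # Nearby markers merge into clusters at low zoom levels; at high
        # zoom each person is a dot whose popup shows the name.
        folium.Marker([lat, lon], popup=name).add_to(cluster)
    return m

build_map([("A. Dupont", 48.8649, 2.3499)]).save("influencers.html")
</syntaxhighlight>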

Implementation details

Quality assessment

In this section, we assess the quality of our processes. First, we evaluate the quality of our parsing of the OCR output by comparing the estimated number of entries in the annuaire with the number of addresses we have to clean. Note that for the 1908 list, the OCR gave us a text file, so this evaluation combines the quality of the OCR and the quality of our parsing methods for extracting each name/address pair. For the 1884 list, the OCR directly gives us a table of names and addresses, so this evaluation assesses the quality of the OCR, over which we have no control. To estimate the number of entries in the actual annuaire, we counted them manually on a few pages to get the mean number of people per page (a very stable number, thanks to the clear structure of the annuaires) and multiplied it by the number of pages. We get the following results:

{| class="wikitable"
|-
! Year
! Entries per page
! Number of pages
! Total number of entries
! Output of the OCR
! After removing missing values
! Quality assessment [%]
|-
| 1884
| 35
| 182
| ~6400
| 5709
| 5590
| 87
|-
| 1908
| 40
| 310
| ~12400
| 11045
| 8394
| 68
|}

Then we need to evaluate the quality of our cleaning of the addresses and how many coordinates we managed to get. These two steps are evaluated together, as the quality of our cleaning directly reflects on the number of coordinates we get. Here are the results:

{| class="wikitable"
|-
! Year
! Entries pre-cleaning
! Entries with coordinates
! Quality assessment [%]
|-
| 1884
| 5590
| 3572
| 64
|-
| 1908
| 8394
| 5295
| 63
|}

Links

References