Spatialising Sarah Barclay Johnson's travelogue around Jerusalem (1858)


Abstract

Introduction


Delving into the pages of "Hadji in Syria: or, Three Years in Jerusalem" by Sarah Barclay Johnson (1858), this project sets out to digitally map the toponyms embedded in Johnson's 19th-century exploration of Jerusalem, connecting the past and the present. By visualizing Johnson's recorded toponyms, the project aims to offer a dynamic tool for scholars and enthusiasts, contributing to the ongoing dialogue on the city's historical evolution.

This spatialization pays homage to Johnson's literary contribution, serving as a digital window into a cultural crossroads: Jerusalem. The project invites users to engage with the city's history, fostering a deeper understanding of its rich heritage and of the interconnected narratives that have shaped the city. In this fusion of literature, history, and technology, we hope to embark on a digital odyssey, weaving a narrative tapestry that transcends time and enriches our collective understanding of Jerusalem's intricate past.

Motivation

Deliverables

  • Pre-processed textual dataset of the book.
  • Application results of NER using SpaCy and GPT-4 on the book's text.
  • Comparative analysis report of NER effectiveness between SpaCy and GPT-4.
  • Manually labelled dataset for a selected chapter for NER validation.
  • QGIS mapping files visualizing named locations from selected chapters and the entire book.
  • Visual representative maps of the narrative journey, highlighting key locations and paths.
  • Developed scripts for automating Wikipedia page searches and extracting location coordinates.
  • Dataset with matched coordinates for all identified locations.
  • A developed platform to display project outputs.

Methodology

Assessing and Preparing OCR-Derived Text for Analysis

In our project, which involved analyzing a specific book, the initial step was to acquire a text version of the book. We downloaded the OCR text from Google Books and then assessed the quality of the textual data. A key metric in our assessment was the ratio of words in the text that exist in a dictionary to the total number of words, calculated using the NLTK library. Since the book includes passages in multiple languages, we set a threshold of 70% for this ratio: if it was met or exceeded, we regarded the text's quality as satisfactory for our analysis purposes.
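As a rough sketch of this check (assuming NLTK's English word list and a simple alphabetic tokenizer; the project's exact tokenization is not documented, and the filename is hypothetical):

    import re
    from nltk.corpus import words  # requires a one-time nltk.download("words")

    def dictionary_word_ratio(text):
        """Share of alphabetic tokens found in NLTK's English word list."""
        vocab = set(w.lower() for w in words.words())
        tokens = re.findall(r"[A-Za-z]+", text)
        known = sum(1 for t in tokens if t.lower() in vocab)
        return known / len(tokens) if tokens else 0.0

    # "hadji_in_syria_ocr.txt" is a hypothetical name for the OCR text file
    with open("hadji_in_syria_ocr.txt", encoding="utf-8") as f:
        ratio = dictionary_word_ratio(f.read())

    print(f"dictionary-word ratio: {ratio:.2%}")
    if ratio >= 0.70:  # the 70% threshold used in the project
        print("quality satisfactory for analysis")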

Following this quality assessment, we proceeded with text preprocessing adapted to the specific needs of our study. Notably, we chose not to remove stop words or convert the text to lowercase, maintaining the original structure and form of the text; capitalization in particular is an important cue for NER models.

Detecting Locations

NER with SpaCy

In the initial stages of the project, SpaCy was employed for Named Entity Recognition (NER) to analyze the text and automatically classify entities such as locations, organizations, and people. SpaCy's pre-trained models and linguistic features facilitated the identification of named entities within the text, allowing for the automatic tagging of toponyms. Specifically, our focus was on extracting toponyms, i.e. place names or locations relevant to the geographic context of the travelogue, which usually correspond to the labels "GPE" (geopolitical entities) and "LOC" (other locations) in SpaCy's labeling scheme.
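For illustration, a minimal extraction loop might look as follows (the choice of pre-trained pipeline, here en_core_web_sm, and the sample sentence are assumptions; the project does not state which model was used):

    import spacy

    # assumes the pipeline was installed with: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    text = ("From Bethany we followed the road toward Bethlehem, "
            "pausing within sight of Calvary and Olivet.")

    # keep only the toponym labels; ORG and PERSON are discussed below
    for ent in nlp(text).ents:
        if ent.label_ in {"GPE", "LOC"}:
            print(ent.text, ent.label_)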

However, as the project progressed, it became apparent that SpaCy's performance in accurately labeling toponyms was not entirely satisfactory, with mislabeling issues that could impact the precision of the spatial representation. Here is the SpaCy output for one sample paragraph:


Entity       NER label   Correct label
Bethlehem    ORG         GPE
Bethany      GPE         GPE
Mary         PERSON      PERSON
meek         PERSON      N.A.
Lazarus      ORG         PERSON
Christ       ORG         PERSON
Calvary      ORG         LOC
Olivet       PERSON      LOC


So there is a mislabeling problem. In theory we only need to retrieve the toponyms, i.e. entities labelled "GPE" or "LOC", but SpaCy labelled some of them as "PERSON" or "ORG". In other words, if we only select "GPE" and "LOC", we lose some toponyms; if we also select "ORG" and "PERSON", we pick up some non-toponyms.

Difficulties when working with historical content

When applying Named Entity Recognition (NER) with SpaCy to historical content, we encountered significant challenges. The main problem was the frequent misidentification of locations, a result of place names changing over time, especially in historical and biblical contexts. These names often vary across multiple languages, which adds to the complexity. Even for a human reader, it was occasionally challenging to determine the current identity or significance of these names, given how much they have shifted over the centuries. Beyond the linguistic and technical difficulty, this highlighted how essential it is to understand the relationships between locations and their meaning within the book's narrative, because these relationships strongly shape how the text is interpreted and understood.

GPT-4

Preliminary Analysis for Model Selection - Assessment Focusing on Chapter 3

Manual detection

In our preliminary analysis for model selection, we focused on Chapter 3 for a detailed assessment. The initial step involved manually detecting and labeling named entities, specifically the locations mentioned in the text. This was achieved by highlighting the relevant text segments and gathering the identified locations into a spreadsheet. To ensure the accuracy of our location identification, each location was cross-referenced with its corresponding Wikipedia page. This was an important step, especially where the context of the book made it challenging to understand the exact nature of the places mentioned, even after analyzing the entire paragraph. The resulting list serves as the ground truth for validating the named entity recognition approaches below.

SpaCy Results

With SpaCy, we included results labeled as "GPE", "LOC", "ORG", and "PERSON". To better illustrate the results from SpaCy, we made the Venn diagram below.

[Figure: Venn diagram of the SpaCy results]

A: false negatives, i.e. toponyms in the book that SpaCy missed (16)

B: true positives with correct labels (GPE & LOC) (13)

C: true positives with wrong labels (ORG & PERSON) (17)

D: false positives carrying toponym labels (GPE & LOC) (68)

E: false positives carrying non-toponym labels (ORG & PERSON) (13)

Based on these sets, we proposed several metrics to assess the SpaCy results:

Accuracy (recall over the manually labelled toponyms): (B+C)/(A+B+C) = 30/46 = 0.652

Precision: (B+C)/(B+C+D+E) = 30/111 = 0.270

Mislabelling rate: C/(B+C) = 17/30 = 0.567

Thus, if we exclude the entities labelled as "ORG" and "PERSON", the new accuracy would be B/(A+B) = 13/29 = 0.448, and the new precision would be B/(B+E) = 13/26 = 0.500.
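The arithmetic can be checked directly from the set sizes:

    # Venn set sizes for chapter 3
    A, B, C, D, E = 16, 13, 17, 68, 13

    accuracy = (B + C) / (A + B + C)       # 30/46  = 0.652
    precision = (B + C) / (B + C + D + E)  # 30/111 = 0.270
    mislabelling = C / (B + C)             # 17/30  = 0.567

    # restricted to the GPE & LOC labels only
    accuracy_gpe_loc = B / (A + B)         # 13/29 = 0.448
    precision_gpe_loc = B / (B + E)        # 13/26 = 0.500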

In either case, the results were not satisfactory.


GPT-4 Results


Since the GPT-4 results outperformed those of SpaCy NER, GPT-4 was used to retrieve the locations from the entire book.
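The exact prompt engineered for the project is not reproduced on this page. As a minimal sketch of what such an extraction call could look like with the current OpenAI Python client (the prompt wording, passage, and client version are assumptions):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # hypothetical prompt; the project's actual prompt may differ
    prompt = ("List every toponym (place name) mentioned in the following "
              "passage of an 1858 travelogue about Jerusalem, one per line:\n\n")

    passage = "From Bethany we followed the road toward Bethlehem..."

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt + passage}],
    )
    print(response.choices[0].message.content)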


Matching Wikipedia Pages

Using the Wikipedia API, each location identified by GPT-4 was searched on Wikipedia and the first relevant result was recorded. Additionally, the first image found on the recorded page was retrieved. This approach was primarily used to verify the accuracy of the manually determined locations. Once all locations were obtained, it also served to visualize the author's path and to acquire coordinates for locations that still lacked them.
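One way to implement this step is with the community "wikipedia" package (an assumption; the project only states that the Wikipedia API was used):

    import wikipedia  # pip install wikipedia

    def first_wikipedia_match(toponym):
        """Return (title, url, first image URL) of the top search hit."""
        hits = wikipedia.search(toponym, results=1)
        if not hits:
            return None
        # may raise wikipedia.DisambiguationError for ambiguous names
        page = wikipedia.page(hits[0], auto_suggest=False)
        image = page.images[0] if page.images else None
        return page.title, page.url, image

    print(first_wikipedia_match("Olivet"))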

Tracking Author's Route on Maps

Finalizing the List of Coordinates

1. Fuzzy matching GPT-4 results with an existing location list

We had a list of existing toponyms in Jerusalem, previously extracted from a map together with their coordinates. The list is multilingual, meaning that one place may appear under several names in different languages, including English, Arabic, German, and Hebrew. Since the toponyms retrieved from the travelogue can also be in languages other than English (for instance in quotes or references to historical and cultural sources), and since the same place can be written in slightly different ways, a fuzzy matching algorithm was applied to match the toponyms GPT-4 retrieved against the multilingual names in the list. We used the "fuzzywuzzy" package and set a similarity score above 80 as the passing standard. After fuzzy matching, we got X unique toponyms with Y unique coordinates.
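A minimal sketch of the matching step (the sample names, coordinates, and choice of scorer are illustrative; only the fuzzywuzzy package and the threshold of 80 come from the project):

    from fuzzywuzzy import fuzz, process  # pip install fuzzywuzzy

    # hypothetical sample data; the real gazetteer is multilingual
    gpt_toponyms = ["Mount of Olives", "Bethania"]
    gazetteer = {
        "Mount of Olives": (31.778, 35.245),
        "Bethany": (31.771, 35.261),
    }

    THRESHOLD = 80  # similarity score used as the passing standard

    for name in gpt_toponyms:
        match, score = process.extractOne(
            name, list(gazetteer.keys()), scorer=fuzz.token_sort_ratio)
        if score >= THRESHOLD:
            print(name, "->", match, gazetteer[match], score)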

2. Retrieving coordinates by matched Wikipedia pages

Utilizing the previously obtained Wikipedia links, coordinates were gathered by web scraping. To exclude overly broad locations such as the 'Mediterranean Sea', the obtained coordinates were filtered to focus specifically on the Jerusalem region. For locations whose coordinates could not be determined in the previous step, additional coordinate information was added, culminating in the final version of the dataset.
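A sketch of this scraping step, assuming the coordinates appear in Wikipedia's geo microformat (the span with class "geo") and using an illustrative bounding box for the Jerusalem region (the exact filter used in the project is not documented):

    import requests
    from bs4 import BeautifulSoup

    def page_coordinates(url):
        """Scrape decimal 'lat; lon' coordinates from a Wikipedia article."""
        html = requests.get(url, headers={"User-Agent": "toponym-mapper/0.1"}).text
        geo = BeautifulSoup(html, "html.parser").find("span", class_="geo")
        if geo is None:
            return None
        lat, lon = (float(part) for part in geo.text.split(";"))
        return lat, lon

    coords = page_coordinates("https://en.wikipedia.org/wiki/Mount_of_Olives")

    # illustrative Jerusalem-region filter, excluding broad features like seas
    if coords and 31.6 <= coords[0] <= 32.0 and 35.0 <= coords[1] <= 35.5:
        print(coords)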


Chapter       Geometry count   Location count
1             1                7
2             2                16
3             21               32
4             9                16
5             4                6
6             1                4
7             4                11
8             7                10
9             2                7
10            4                8
11            3                8
12            2                4
13            4                6
14            -                4
16            4                12
18            4                7
19            2                7
20            3                5
Grand Total   77               170

Visualization by GeoPandas


  1. QGIS
  2. GeoPandas
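Maps were produced both in QGIS and with GeoPandas. A minimal GeoPandas sketch of the plotting step (the two sample rows are hypothetical; the real dataset holds every matched toponym with its chapter and coordinates):

    import geopandas as gpd
    import matplotlib.pyplot as plt
    from shapely.geometry import Point

    # hypothetical sample rows: name, chapter, longitude, latitude
    rows = [("Mount of Olives", 3, 35.245, 31.778),
            ("Bethany", 3, 35.261, 31.771)]

    gdf = gpd.GeoDataFrame(
        [(name, chapter) for name, chapter, _, _ in rows],
        columns=["name", "chapter"],
        geometry=[Point(lon, lat) for _, _, lon, lat in rows],
        crs="EPSG:4326",  # WGS 84, i.e. plain lat/lon
    )

    ax = gdf.plot(markersize=30)
    for _, row in gdf.iterrows():
        ax.annotate(row["name"], (row.geometry.x, row.geometry.y))
    plt.show()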

Creating a Platform for Final Output

To display our final output, we built a website that shows interactive maps of the travelogue, organized by chapter.
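The platform's implementation details are not given here; as one lightweight sketch, an interactive per-chapter map could be generated with folium (an assumption, not necessarily the stack the project used):

    import folium

    # hypothetical chapter 3 sample; the real data comes from the matched dataset
    locations = [("Mount of Olives", 31.778, 35.245),
                 ("Bethany", 31.771, 35.261)]

    m = folium.Map(location=[31.776, 35.234], zoom_start=13)  # centred on Jerusalem
    for name, lat, lon in locations:
        folium.Marker([lat, lon], popup=name).add_to(m)

    m.save("chapter_3_map.html")  # hypothetical output filename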

Results

Limitations and Further Work

  • Using an API for a smoother process
  • Improving the process to work with the full book instead of chapter by chapter
  • Automating the approach for other historical contexts
  • Richer interaction on the map

Conclusion

Project Timeline & Milestones

Week 4
  • Exploring the literature on NER
  • Finding the textual data of the book
Week 5
  • Pre-processing text
  • Quality assessment of the data
Week 6
  • Applying NER using SpaCy
Week 7
  • Manually labelling chapter 3
  • GPT-4 Prompt Engineering
Week 8
  • Working on mapping with QGIS
Week 9
  • Finalizing GPT-4 Prompt
  • Automating Wikipedia Page Search
Week 10
  • Finalizing the list of manually detected locations
  • Evaluation of GPT-4 and SpaCy results for chapter 3
Week 11
  • Matching the coordinates of the locations from chapter 3
  • QGIS mapping of the locations from chapter 3
Week 12
  • Visualizing the full chapter 3 journey
  • Retrieving the locations from the entire book
Week 13
  • Matching the coordinates of the locations from the entire book
  • Retrieving coordinates from matched Wikipedia pages
  • QGIS Mapping of the locations from the entire book
  • Visualizing the full journey
Week 14
  • Developing a platform to display outputs
  • Completing the GitHub repository
  • Completing the Wiki page
  • Completing the presentation

GitHub Repository

GitHub Link

Course: Foundation of Digital Humanities (DH-405), EPFL

Professor: Frédéric Kaplan

Supervisors:

Authors:

References