Data Ingestion of Guide Commericiale
Introduction
Project Timeline & Milestones
Timeframe | Task | Completion
--- | --- | ---
Week 4 | | ✓
Week 5 | | ✓
Week 6 | | ✓
Week 7 | | ✓
Week 8 | | ✓
Week 9 | | ✓
Week 10 | | ✓
Week 11 | | ✓
Week 12 | |
Week 13 | |
Week 14 | |
Methodology
During our work, we approached our problem in two ways: a more generalizable way that could process any guide commerciale, and a more streamlined way that requires more manual annotation to improve results. We started with the first approach and worked on it for the first couple of months, but came to realize its margin for error was far too high, which made us pivot toward the second approach. The points below highlight each step of both approaches.
Approach 1: General
Pipeline Overview
This process aims to extract, annotate, standardize, and map data from historical documents. By leveraging OCR, named entity recognition (NER), and geocoding, the pipeline converts unstructured text into geographically and semantically meaningful data.
Pipeline
- Process pages by:
  - Converting a batch of pages into images
  - Pre-processing images for OCR using CV2 and Pillow
  - Using Pytesseract for OCR to convert each image to a string
- Perform named entity recognition by:
  - Prompting GPT-4o to identify names, professions, and addresses, and to turn these into entries in a table format
  - Standardizing addresses that include abbreviations, shorthand, etc.
  - Appending "Venice, Italy" to the end of each address to ensure a feasible location
- Geocode addresses by:
  - Using a geocoding API like LocationHQ to convert each address to coordinates
  - Taking the top result of the search and appending it to the map
Divide Up Pages
Since we wanted this solution to work for any guide commerciale, not just the ones provided to us, we intended to develop a way to parse through a document and identify pages of interest with usable data. While we got a working solution that was able to weed out pages with absolutely no information, we never found a reliable way to identify pages containing the specific type of data we wanted. One idea was to train a model to identify patterns within each page and group pages based on those patterns, but due to timeline restrictions we decided against it. As a result, we never finalized this step for Approach 1 and proceeded with the rest of our pipeline.
Perform OCR
To start, we used PDF Plumber to extract pages from our document and turn them into images. With the goal of extracting text from these pages, we then used libraries such as CV2 and Pillow to pre-process each image with filtering and thresholding to improve clarity, and finally performed OCR using Pytesseract, tweaking certain settings to better accommodate the old typeface.
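A minimal sketch of this pre-processing and OCR step, assuming pdfplumber's page rendering, OpenCV thresholding, and Tesseract's Italian model; the file name, resolution, and --psm setting are illustrative rather than the exact values we used:

```python
import cv2
import numpy as np
import pdfplumber
import pytesseract
from PIL import Image

def ocr_page(pdf_path, page_number):
    """Render one PDF page to an image, clean it up, and run Tesseract on it."""
    with pdfplumber.open(pdf_path) as pdf:
        # Render the page at a higher resolution to give OCR more detail to work with.
        pil_image = pdf.pages[page_number].to_image(resolution=300).original

    # Grayscale + Otsu thresholding with OpenCV to sharpen the contrast
    # between ink and paper on the old scans.
    gray = cv2.cvtColor(np.array(pil_image), cv2.COLOR_RGB2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # "--psm 6" treats the page as a single block of text; this and the Italian
    # language model are assumptions, not necessarily the exact settings we used.
    return pytesseract.image_to_string(Image.fromarray(binary), lang="ita", config="--psm 6")

print(ocr_page("guida_commerciale.pdf", 0))
```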
Named Entity Recognition
After extracting all text from a page, we prompt GPT-4o to identify names, professions, and addresses and to turn the text into a table with one entry per person. Furthermore, since these documents use abbreviations and shorthand to simplify the writing, we need to undo these by replacing them with their full meaning. Finally, to make geocoding possible later on, we specify that each entry belongs to Venice by appending "Venice, Italy" to the end of each address.
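A rough sketch of this prompting step, assuming the current openai Python client; the prompt wording and the pipe-separated output format are illustrative, not the exact prompt we used:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical prompt: the exact wording we used differed, but the idea is the same,
# one row per person, abbreviations expanded, and each address anchored to Venice.
PROMPT = (
    "From the following directory page, extract one row per person as "
    "pipe-separated values: name | profession | address. Expand abbreviations "
    "and shorthand, and append ', Venice, Italy' to each address.\n\n"
)

def extract_entries(page_text):
    """Ask GPT-4o to turn raw page text into (name, profession, address) rows."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT + page_text}],
    )
    rows = response.choices[0].message.content.strip().splitlines()
    return [tuple(cell.strip() for cell in row.split("|")) for row in rows if "|" in row]
```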
Geocoding
For geocoding, we found LocationHQ to be a feasible solution for our purposes, successfully mapping addresses to nearby locations within Venice. To integrate this into our project, we queried each address in our table. While some queries narrowed down to a single address, others returned multiple possible options. Our heuristic for these cases was simply to pick the top option, as it seemed to be the closest match each time. Finally, after gathering the coordinates for our addresses, we plotted them onto a visual map, though we ultimately did not proceed with this visualization.
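The querying and top-result heuristic can be sketched as follows; the endpoint URL, parameter names, and example address are placeholders (we do not reproduce the exact LocationHQ API here), but the structure of querying each address and keeping the first candidate matches the approach described above:

```python
import requests

GEOCODE_URL = "https://geocoder.example.com/v1/search"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

def geocode(address):
    """Return (lat, lon) for the top-ranked candidate, or None if nothing matched."""
    params = {"key": API_KEY, "q": address, "format": "json", "limit": 5}
    results = requests.get(GEOCODE_URL, params=params, timeout=10).json()
    if not results:
        return None
    # Heuristic from the text above: when several candidates come back,
    # keep the first (top-ranked) one.
    top = results[0]
    return float(top["lat"]), float(top["lon"])

print(geocode("Calle Larga S. Marco 123, Venice, Italy"))
```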
Approach 2: Streamlined
Pipeline Overview
This approach outlines a systematic process for extracting, annotating, and analyzing data from historical Venetian guide commerciale documents. The aim is to convert unstructured document data into a clean, structured dataset that supports geographic mapping and data analysis.
Pipeline
- Manually inspect the entire document for page ranges with digestible data
- Perform text extraction using PDF Plumber
- Perform semantic annotation with INCEpTION, separating pages into entries with first names, last names, occupations, addresses, etc.
- Clean and format the data
- Map parishes to provinces using a dictionary
- Plot entries on a map using the parish and number from each entry (a sketch of this idea follows this list)
- Perform data analysis
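The mapping and plotting steps are still in progress; purely as an illustration of the intended idea, the sketch below uses a small hypothetical parish-to-coordinates dictionary and Folium to place entries on a map of Venice. The dictionary keys, coordinates, and entry format are assumptions, and the street number is ignored for simplicity:

```python
import folium

# Hypothetical parish lookup; the real dictionary would cover every parish
# that appears in the entries (coordinates here are approximate).
PARISH_COORDS = {
    "S. Marco": (45.4340, 12.3388),
    "S. Polo": (45.4380, 12.3300),
}

def plot_entries(entries, out_path="entries_map.html"):
    """entries: iterable of (name, occupation, parish) tuples (an assumed format)."""
    venice_map = folium.Map(location=[45.4371, 12.3326], zoom_start=14)
    for name, occupation, parish in entries:
        coords = PARISH_COORDS.get(parish)
        if coords:
            folium.Marker(list(coords), popup=f"{name} ({occupation})").add_to(venice_map)
    venice_map.save(out_path)

plot_entries([("Rossi Giovanni", "calzolaio", "S. Marco")])
```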
Find page ranges of interest
The main difference between this approach and Approach 1 was that instead of relying on a program to group our pages, we manually went through the document and found a range of pages that were of interest to us. This allowed us to go into the next steps of our pipeline with confidence, knowing we did not need to deal with the uncertainty of whether a page contained irrelevant data.
Perform Text Extraction
After testing both OCR and direct text extraction, we found that PDF Plumber's function for extracting text straight from the PDF worked much more reliably. Because of this, we used it for all of our pages, producing output similar to the OCR but with much higher accuracy. This also freed us from having to pre-process images, making the pipeline much simpler.
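A minimal sketch of this extraction step with pdfplumber; the file name and page range are placeholders for the range we selected by manual inspection:

```python
import pdfplumber

def extract_pages(pdf_path, first, last):
    """Pull the embedded text layer for an inclusive, 1-based page range."""
    with pdfplumber.open(pdf_path) as pdf:
        return [page.extract_text() or "" for page in pdf.pages[first - 1:last]]

# The file name and page range are illustrative, not the ones we actually used.
pages = extract_pages("guida_commerciale.pdf", 250, 300)
print(pages[0][:500])
```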
Semantic Annotation
For semantic annotation, we steered toward using INCEpTION in place of GPT-4o, largely because previous projects had already trained a model for extracting names, addresses, and professions from similar text documents. Another large benefit was that we no longer needed to rely on an external API for this step, allowing us to bypass the financial limitations on our requests.
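We do not detail the full INCEpTION workflow here, but as a rough illustration, the sketch below groups BIO-tagged tokens from a CoNLL-style export (one token and tag per line, blank lines between entries) into labelled spans. The file name, the export format, and the label names (PER, OCC, LOC) are assumptions, not necessarily the tag set of the provided model:

```python
def read_spans(conll_path):
    """Collect (label, text) spans from a token-per-line, BIO-tagged export."""
    spans, tokens, label = [], [], None

    def flush():
        nonlocal tokens, label
        if tokens:
            spans.append((label, " ".join(tokens)))
        tokens, label = [], None

    with open(conll_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:                      # blank line = entry/sentence boundary
                flush()
                continue
            parts = line.split()
            token, tag = parts[0], parts[-1]
            if tag.startswith("B-"):          # start of a new span
                flush()
                label, tokens = tag[2:], [token]
            elif tag.startswith("I-") and tokens:
                tokens.append(token)          # continuation of the current span
            else:                             # "O" or stray tag: close any open span
                flush()
    flush()
    return spans

# Hypothetical output: [("PER", "Rossi Giovanni"), ("OCC", "calzolaio"), ("LOC", "S. Marco 123")]
print(read_spans("annotated_page.conll"))
```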
Data Clean Up
[SAHIL WRITE THIS PART PLS]
Plot on a Map
WE HAVEN'T DONE THIS YET
Data Analysis
WE HAVEN'T DONE THIS YET
Results
Limitations & Future Work
Limitations
OCR Results
In Approach 1, we used the pytesseract library for OCR, but came to the realization that this was unfeasible due to the number of mistakes it made. This led us to switch to PDFPlumber's text extraction utility, which let us extract the text straight from the PDF. The documents we used had been processed by Google, meaning their OCR quality is most likely higher than what most people could produce themselves. Relying on text extraction also means that simply taking photos of a document will not work; the text needs to already be part of the PDF.
Time and Money
In Approach 1, we intended to use GPT-4o for named entity recognition, as well as a geolocation API for mapping our addresses to the correct locations in Venice. What we came to realize was that, with thousands of entries to process, each potentially requiring multiple requests to each endpoint, this would not be possible without a financial burden or a sacrifice of time. This became one of the leading causes of our pivot toward Approach 2, which cut these dependencies out of the project.
Metrics
Text Extraction
Overview
For our text extraction, we directly pulled the text from our PDF document using PDFPlumber's built-in functions. As these documents were previously parsed and OCR'd by Google, the accuracy is much higher than if we had done the OCR ourselves, but it is still not perfect.
To measure the performance of this approach, we took five randomly chosen pages, manually checked every word on them, and counted the number of errors we found. Our heuristic for what constitutes an error is as follows:
- If a word does not appear as it does in the original document, it counts as one error.
- If 4 words are clumped together in the PDFPlumber output when they should have spaces between them, this counts as 4 errors.
- If a word has an incorrect letter, it counts as 1 error.
- If any marks are generated that were not in the original document, that counts as 1 error.
Page 1
- Total Words: 312
- Correct Words: 292
- Accuracy: 93.59%
Page 2
- Total Words: 256
- Correct Words: 254
- Accuracy: 99.22%
Page 3
- Total Words: 282
- Correct Words: 266
- Accuracy: 94.33%
Page 4
- Total Words: 278
- Correct Words: 259
- Accuracy: 93.17%
Page 5
- Total Words: 310
- Correct Words: 300
- Accuracy: 96.77%
Conclusion
Averaging the five per-page accuracies, (93.59% + 99.22% + 94.33% + 93.17% + 96.77%) / 5, our final total accuracy comes out to 95.42%
Semantic Annotation
Overview
In the end, we decided to use INCEpTION for semantically annotating each of our pages, using a trained model provided to us by the instructors of the course. This model had been used on a similar Guide Commerciale file and served as a great starting point for us to build on.
To measure the performance of this model, we took five randomly chosen pages that had been manually cleaned up after text extraction, manually checked each entry on them, and counted the number of errors we found.
Page 1
- Total entries: 39
- Incorrect entries: 2
- Accuracy: 94.87%
Page 2
- Total entries: 31
- Incorrect entries: 2
- Accuracy: 93.55%
Page 3
- Total entries: 37
- Incorrect entries: 1
- Accuracy: 97.30%
Page 4
- Total entries: 30
- Incorrect entries: 0
- Accuracy: 100%
Page 5
- Total entries: 38
- Incorrect entries: 0
- Accuracy: 100%
Conclusion
Averaging the five per-page accuracies, (94.87% + 93.55% + 97.30% + 100% + 100%) / 5, our final total accuracy comes out to 97.14%
Github Repository
Data Ingestion of Guide Commericiale: https://github.com/dhlab-class/fdh-2024-student-projects-rasmus-sahil-owen