Europeana: A New Spatiotemporal Search Engine


Introduction

Europeana, the platform for Europe's digital cultural heritage, covers many themes, such as art, photography, and newspapers. Because its topics are so diverse, it is hard to present every kind of digital material in a way suited to its content: searching for a specific topic takes several steps, and the results may still not match the user's intention. After studying the structure of Europeana, we decided to create a new search engine that presents the resources according to their content. Taking our time budget and group size into account, we selected the theme Newspapers as the content for our engine and, to narrow the task further, chose the newspaper La clef du cabinet des princes de l'Europe as our target.

La clef du cabinet des princes de l'Europe was the first magazine in Luxembourg. It appeared monthly from July 1704 to July 1794, and Europeana holds 1,317 of its issues, each 75 to 85 pages long. To reduce the data to a scale we could handle on our laptops, we randomly selected 8,000 pages spanning the magazine's whole run.

To present this specific magazine well on our engine, we mainly implemented OCR, text analysis, database design, and webpage design.

OCR is the electronic or mechanical conversion of images of typed, handwritten, or printed text into machine-encoded text. Converting physical documents to a digital format not only serves the preservation of historical and cultural material but also gives us access to deeper computer-based analysis of it. In our work, we used OCR to convert the magazine's page images to text and stored the text in the database, which makes further processing far more convenient.
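
As an illustration of this step, here is a minimal sketch assuming kraken's documented Python API (the exact interfaces differ somewhat between kraken versions); the model file name is a placeholder for the recognition model actually used:

    # Minimal OCR sketch with kraken; 'model.mlmodel' is a placeholder.
    from PIL import Image
    from kraken import binarization, pageseg, rpred
    from kraken.lib import models

    im = Image.open("page_0001.jpg")
    bw = binarization.nlbin(im)              # binarise the scanned page
    seg = pageseg.segment(bw)                # detect text lines
    model = models.load_any("model.mlmodel") # load a trained recognition model
    records = rpred.rpred(model, bw, seg)    # recognise each line
    text = "\n".join(r.prediction for r in records)
    print(text)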

For the text analysis part of our work, we applied three methods to the extracted text: named entity recognition, LDA topic modeling, and n-grams.
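
As a sketch of the LDA part, the snippet below uses gensim; the sample pages, tokenization, and stopword list are illustrative placeholders rather than our actual preprocessing:

    # LDA topic-modeling sketch with gensim; inputs are placeholders.
    from gensim import corpora, models

    pages = [
        "le roi de france et la cour ...",   # OCR'd page texts would go here
        "la guerre entre les princes ...",
    ]
    stopwords = {"le", "la", "de", "et", "les", "entre"}  # tiny sample list

    docs = [[t for t in page.split() if t.isalpha() and t not in stopwords]
            for page in pages]
    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]

    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=5, passes=10)
    for topic_id, words in lda.print_topics(num_words=6):
        print(topic_id, words)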

For the presentation of the magazine, we developed a webpage that provides the search and analysis functions. The webpage aims to be interactive and to give users an efficient way to reach the content they are looking for.
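
As a minimal sketch of such a search function, the snippet below assumes a Flask app and a hypothetical SQLite table pages(issue_date, page, text) filled with the OCR output; all names are illustrative, not our actual schema:

    # Hypothetical search endpoint; database, table, and column names
    # are placeholders.
    import sqlite3
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/search")
    def search():
        q = request.args.get("q", "")
        con = sqlite3.connect("laclef.db")
        rows = con.execute(
            "SELECT issue_date, page, substr(text, 1, 200) FROM pages "
            "WHERE text LIKE ? LIMIT 20",
            (f"%{q}%",),
        ).fetchall()
        con.close()
        return jsonify([{"date": d, "page": p, "snippet": s} for d, p, s in rows])

    if __name__ == "__main__":
        app.run()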


Deliverables

  • The 8,000 pages of La clef du cabinet des princes de l'Europe from July 1704 to July 1794, downloaded in image format from Europeana's website.
  • The OCR results for the 8,000 pages in text format.
  • The dataset holding the text and the results of the text analysis based on LDA, named entity recognition, and n-grams.
  • The webpage presenting the contents and analysis results for La clef du cabinet des princes de l'Europe.
  • A GitHub repository containing all the code for the whole project.

Motivation

1. Build Dataset

Project Plan and Milestones

By Week 3
  • Brainstorm project ideas.
  • Prepare slides for initial project idea presentation.
By Week 5
  • Discuss the differences between image analysis and text analysis in terms of related algorithms, processing toolkits, implementation difficulties, and display methods.
  • Decide to focus on text processing.
  • Select a subset collection from the "Newspaper collection" of Europeana for our project.
  • Check the content of "La clef du cabinet des princes de l'Europe" and learn its structure and time span.
By Week 6
  • Each of us read some pages of the journal to get an overall understanding of it.
  • Find that the accuracy of the existing OCR results is not satisfying and decide to improve them before the text analysis.
  • Request the data.
By Week 7
  • Research OCR methods and find some suited to Italian italics.
  • Get text by analyzing the web pages.
  • Use DeepL to translate the French text to English and back to French, then check the results.
  • Reproduce the OCR method from the literature and find that recognition improves.
By Week 8
  • Apply OCRopus to a small set of images.
  • Use a grammar checker to analyze the results of OCRopus.
By Week 9
  • Prototype design.
  • Database design.
By Week 10
  • Get access to Europeana's API.
  • Use the API to extract the URL for each page of our specific newspaper.
  • Download each page of our specific newspaper as an image using the URLs we got (see the download sketch after this plan).
By Week 11
  • Run OCR using the improved model and the Kraken engine.
  • Store the text we get in the database.
  • Search for a grammar checker to optimize the text we get.
By Week 12
  • Use the newly selected grammar checker API to optimize the text.
  • Use entropy to evaluate the final text (see the entropy sketch after this plan).
By Week 13
  • Build the web from our prototype.
  • Use different text analysis methods (LDA, n-grams, and named entity recognition) to analyze the text.
By Week 14
  • Final report and presentation.
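
The download step from Week 10 can be sketched as follows, assuming the public Europeana Search API (search.json with a wskey parameter) and its edmIsShownBy field for direct image links; the query string and API key are placeholders:

    # Hypothetical download sketch against the Europeana Search API;
    # query and API key are placeholders, error handling is omitted.
    import requests

    API_KEY = "YOUR_API_KEY"
    SEARCH_URL = "https://api.europeana.eu/record/v2/search.json"

    params = {
        "wskey": API_KEY,
        "query": '"La clef du cabinet des princes"',  # illustrative query
        "rows": 100,
    }
    items = requests.get(SEARCH_URL, params=params, timeout=30).json().get("items", [])

    for i, item in enumerate(items):
        urls = item.get("edmIsShownBy", [])  # direct link(s) to the digitised image
        if urls:
            img = requests.get(urls[0], timeout=60)
            with open(f"page_{i:04d}.jpg", "wb") as f:
                f.write(img.content)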
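
The entropy check from Week 12 can be sketched as the Shannon entropy of a page's character distribution: badly garbled OCR output tends to show an unusual entropy compared with clean French text. The function below is a generic illustration, not our exact criterion:

    # Shannon entropy (bits per character) of a text's character distribution.
    import math
    from collections import Counter

    def char_entropy(text: str) -> float:
        if not text:
            return 0.0
        counts = Counter(text)
        total = len(text)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    print(char_entropy("la clef du cabinet des princes de l'europe"))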

GitHub Repository

https://github.com/XinyiDyee/Europeana-Search-Engine
