Terzani online museum
Introduction
The Terzani online museum is a student course project created in the scope of the DH-405 course. Starting from the archive of digitized photos taken by the journalist and writer Tiziano Terzani, we created a semi-generic method to transform IIIF manifests of photographs into a web application. Through this platform, users can easily navigate the photos based on their location, filter them based on their content, or look for images similar to one they upload. Moreover, greyscale photos can also be colorized.
The Web application is available following this link.
Motivation
Many inventions in human history have set the course for the future. One of them is the film camera, which has transformed the way we share bonds and stories. Storytelling is an essential part of the human journey: from what one does when waking up in the morning to explorations of deep space, stories form the connections. Historically, tales were mostly transmitted orally. Later came writing practices, from the Romans' codices to the petabytes of information stored daily in today's data centers. These means of transmission have always influenced their times and the way historians perceive them. One such means that emerged in the 19th century is the film camera[1]. The invention of the camera[2] started to change the landscape of narrative. The boom of the camera resulted in an abundance of photographs capturing the events of the 20th century. Today, however, the vast majority of them lie in drawers or archives, at risk of wearing out or being destroyed. While digitization prevents photographs from wearing out, it does not by itself make knowledge extraction possible. Hence we have to create a medium for these large collections of photos, such that anyone, anywhere, can easily access them. Such a medium would help us better understand the situations of the time. If not for research, it is always fascinating to visit history and alluring to see pictures of past times.
Our project focuses on Tiziano Terzani, an Italian journalist and writer. During the second half of the 20th century, he traveled extensively in East Asia[3] and witnessed many important events. During these trips, he and his team captured many pictures of immense historical value. The Cini Foundation digitized some of his photo collections as the 'Terzani photo collections'. Nevertheless, having to go through an innumerable number of images is not productive. Our project therefore consists of creating a web application that opens the Terzani photo archive to a 21st-century digital audience, helping them access these photographs and travel through them.
Description of the realization
The Terzani Online Museum is a web application with multiple features allowing users to navigate through Terzani photo collections. The different pages of the website described below are accessible on the top navigation bar of the website.
Home
The home page welcomes the users to the website. It invites them to read about Terzani or to learn about the project on the about page.
About
The about page describes the website's features to the visitors. It guides them through the usage of the gallery, landmarks, text queries, and image queries.
Gallery
The gallery allows users to quickly and easily explore the photo collections of particular countries. On the website's gallery, users find a world map centered on Asia. On top of this map, a red overlay shows the countries for which photo collections are available. By clicking on any country, users can view the pictures associated with it and navigate through the collection by selecting different page numbers. By clicking on a displayed image, users open a modal window showing the full photo - unlike the gallery, where cropped photographs are shown - alongside its IIIF annotation. An option to colorize the image is also available in the modal window.
The next feature of the gallery is the ability to see at a glance the famous landmarks present in the photographs. For that, users can click on the Show landmarks button above the map to display markers at the locations of the landmarks. Clicking on a highlighted spot opens a small pop-up with the location name and a button to show the photos of that landmark.
Search
On the Terzani Online Museum's search page, users can explore the photographs based on their content. Requests can be made either by text, to search for photos whose content corresponds to the input, or by image, to get the set of photographs most similar to the uploaded one. The search results are displayed similarly to the gallery.
Text queries
Users are invited to input in a text field the content they are looking for in the Terzani collection photos. This content can correspond to multiple things: general labels associated with the photographs, specific localized objects in the image, or text recognized in the photos.
Below the text field, users can select two additional parameters to tune their queries. Only show bounding boxes restricts the results to the localized objects and crops their display around them. Search with exact word constrains the search to match the input precisely, thereby not displaying results generated by possible combinations of the words.
Image queries
Users can also upload an image from their machine and obtain the 20 pictures from the collection that are closest to it.
Photo colorisation
To breathe life into the photo collections, we implemented a colorization feature. When the users click on a photo and on the button to colorize it, a new window displays the automatically colorized picture.
Note: this feature is currently disabled on the website because of the lack of a GPU.
Methods
Data Processing
Acquiring IIIF annotations
The IIIF annotations of the photographs form the basis of the project, and the first step is to collect them. The Terzani archive is available on the Cini Foundation server[4]. However, it doesn't provide any specific API to download the IIIF manifests of the needed collections. Therefore, we use Python's Beautiful Soup module to read the root page of the archive[5] and extract all collection IDs from there. Using the collected IDs, we obtain the corresponding IIIF manifest of each collection using urllib. We then read this manifest and keep only the annotations of photographs whose label explicitly says that they represent the recto (front side) of the photo. A minimal sketch of this step is shown below.
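The following sketch illustrates the scraping step only in outline; the archive URL, the link attribute carrying the collection ID, and the manifest URL template are placeholders, not the actual structure of the Cini Foundation pages.

```python
# Illustrative sketch: URLs and the link attribute are hypothetical placeholders.
import json
import urllib.request
from bs4 import BeautifulSoup

ARCHIVE_ROOT = "https://example.org/terzani/archive"           # hypothetical
MANIFEST_URL = "https://example.org/iiif/{cid}/manifest.json"   # hypothetical

def collection_ids(root_url: str) -> list:
    """Parse the archive root page and extract the collection IDs."""
    with urllib.request.urlopen(root_url) as resp:
        soup = BeautifulSoup(resp.read(), "html.parser")
    # Assume each collection link carries its ID in a data attribute.
    return [a["data-collection-id"]
            for a in soup.find_all("a", attrs={"data-collection-id": True})]

def recto_annotations(manifest: dict) -> list:
    """Keep only the annotations whose label marks the recto (front side)."""
    images = []
    for canvas in manifest["sequences"][0]["canvases"]:
        if "recto" in canvas.get("label", "").lower():
            images.extend(canvas["images"])
    return images

for cid in collection_ids(ARCHIVE_ROOT):
    with urllib.request.urlopen(MANIFEST_URL.format(cid=cid)) as resp:
        manifest = json.load(resp)
    annotations = recto_annotations(manifest)
```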
In our project, we display a gallery sorted by country. Although the country information is not present in the IIIF annotations, it is available on the root page of the Terzani archive[6], since the collections are named after their places of origin. As these names are written in Italian and are not formatted consistently, we manually map each photo collection to its country. In this process, we ignored collections whose names contain multiple countries.
Annotating the Photographs
Once in possession of the IIIF annotations of the photographs, we annotate them using Google Cloud Vision. This tool provides a Python API with a myriad of annotation features. For the scope of this project, we decided to use the following (a minimal request sketch follows the list):
- Object localization: Detects which objects are in the image along with their bounding boxes.
- Text detection: OCRs text in the image alongside its bounding boxes.
- Label detection: Provides general labels for the whole image.
- Landmark detection: Detects and returns the name of the place and its coordinates if the image contains a famous landmark.
- Web detection: Searches whether the same photo is on the web and returns its references alongside a description. We use this description as an additional label for the whole image.
- Logo detection: Detects any (famous) product logos within an image along with their bounding boxes.
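The sketch below annotates a single photograph with the Google Cloud Vision Python client (google-cloud-vision 2.x style); the helper name and the returned dictionary layout are our own illustrative choices, not part of the library.

```python
# Sketch of annotating one photograph; error handling omitted.
import urllib.request
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def annotate(image_url: str) -> dict:
    # Read the image data into bytes, as the photos are not on Google Cloud Storage.
    with urllib.request.urlopen(image_url) as resp:
        content = resp.read()
    image = vision.Image(content=content)
    return {
        "objects": client.object_localization(image=image).localized_object_annotations,
        "texts": client.text_detection(image=image).text_annotations,
        "labels": client.label_detection(image=image).label_annotations,
        "landmarks": client.landmark_detection(image=image).landmark_annotations,
        "web": client.web_detection(image=image).web_detection,
        "logos": client.logo_detection(image=image).logo_annotations,
    }
```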
For each IIIF annotation, we first read the image data into byte format and then use the Google Vision API to get the annotations. However, some of the information returned by the API cannot be used as-is. We processed the bounding boxes and all texts in the following way:
- Bounding boxes: To be able to display a bounding box in the IIIF format, we need its top-left corner coordinates and its width and height. For OCR text, logo, and landmark detection, the coordinates of the bounding box are relative to the image, so we use them directly.
- In the case of object localization, the API normalizes the bounding box coordinates between 0 and 1. The width and height of the photo are present in its IIIF annotation, which allows us to "de-normalize" the coordinates.
- Texts: The Google API returns text in English for the various detections and in the other identified languages for OCR text detection. To improve the search results, along with the original annotations returned by the API, we also add tags after performing some cleansing steps. The text preprocessing consists of the following steps (a sketch follows the list):
- Lower case: Convert all the characters in the text to lowercase.
- Tokens: Split the strings into words using the nltk word tokenizer.
- Punctuation: Remove the punctuation from the words.
- Stem: Convert the words into their stem form using the Porter stemmer from nltk.
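A minimal sketch of these cleansing steps with nltk is given below; it assumes the 'punkt' tokenizer data has been downloaded, and the helper name is hypothetical.

```python
# Sketch of the tag-cleansing steps applied to the texts returned by the API.
import string
from nltk.tokenize import word_tokenize   # requires nltk.download('punkt')
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def clean_tags(text: str) -> list:
    lowered = text.lower()                                        # 1. lower case
    tokens = word_tokenize(lowered)                               # 2. tokenize into words
    words = [t for t in tokens if t not in string.punctuation]    # 3. drop punctuation
    return [stemmer.stem(w) for w in words]                       # 4. Porter stemming

# e.g. clean_tags("Temples of Kyoto") -> ['templ', 'of', 'kyoto']
```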
We then store the annotations and bounding box information together in JSON format.
Photo feature vector
The feature vector of a photograph is used in the search for similar images. For each photo in the collection, we generate a 512-dimensional vector using a ResNet to represent the image. This feature vector, the output of a convolutional neural network, is a numeric representation of the photo. Recently there has been a plethora of success in training deep neural networks that perform tasks such as classification and localization with near-human cognition. The hidden layers in these networks learn intermediate representations of the image, and thus they can serve as a representation of the image itself. Hence, for this project, we used a trained ResNet-18 to generate the feature vectors of the photo collections. We chose ResNet because of its relatively small feature size. The feature vector is taken at the average-pooling layer, before flattening, where the feature-learning part of the network ends. As with the annotations, a JSON document stores the vectors. A minimal extraction sketch is shown below.
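The sketch below extracts a 512-dimensional vector with a pre-trained ResNet-18 from torchvision; the preprocessing values are the standard ImageNet ones and may differ from those used to build the actual database.

```python
# Sketch of extracting a 512-dimensional feature vector with ResNet-18.
import torch
from torchvision import models, transforms
from PIL import Image

resnet = models.resnet18(pretrained=True)
resnet.fc = torch.nn.Identity()   # stop after the average-pooling layer (512 values)
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def feature_vector(path: str) -> list:
    image = Image.open(path).convert("RGB")
    with torch.no_grad():
        features = resnet(preprocess(image).unsqueeze(0))   # shape (1, 512)
    return features.squeeze(0).tolist()                     # JSON-serialisable list
```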
Database
As the data is primarily unstructured, owing to the non-definitive number of tags, annotations, and bounding boxes an image can have, we use a NoSQL database and choose MongoDB for its representation of data as documents. Using PyMongo, we created three different collections in this database (a sketch of populating them follows the list).
- Image Annotations: This is the base collection, where each object has a unique ID and contains an IIIF annotation in addition to the Google Vision annotations that have a bounding box (object localization, landmark, and OCR).
- Image Feature Vectors: This collection contains the mapping between an object ID and its corresponding feature vector.
- Image Tags: This is a meta collection on top of Image Annotations that helps process text search queries faster by looking up labels and returning the IDs of the related images. It contains one object for each label, localized object, landmark, and text detected by Google Vision, and each object stores the list of IDs of the photos corresponding to it.
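The sketch below shows one way these collections could be populated with PyMongo; the database name and field names are illustrative rather than the exact schema.

```python
# Sketch of populating the three collections; names and fields are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["terzani"]   # hypothetical database name

def store_photo(photo_id, iiif_annotation, vision_annotations, vector, tags):
    db.image_annotations.insert_one(
        {"_id": photo_id, "iiif": iiif_annotation, **vision_annotations}
    )
    db.image_feature_vectors.insert_one({"_id": photo_id, "vector": vector})
    # The tag collection maps every cleaned tag to the list of photo IDs carrying it.
    for tag in tags:
        db.image_tags.update_one(
            {"_id": tag}, {"$addToSet": {"photo_ids": photo_id}}, upsert=True
        )
```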
Colorization
As fascinating as it is to visit old photographs, a monochromatic image makes it difficult to grasp the nuances. To this end, to colorize the images of the Terzani photo collection, we used DeOldify, a tool specially designed to colorize old black-and-white photographs. The description and code of DeOldify are available here.
Website
Back-end technologies
On account of handling data in a way similar to how it was created, and not having to manage complex features like authentication, we chose the Python web framework Flask, which provides the essential tools to build a web server.
The server primarily processes the users' queries. Along with acting as a bridge between the client and the database, it also takes care of colorizing photos, which is computationally heavy. A minimal route sketch is shown below.
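The sketch below shows such a Flask route; the endpoint, template, and query helper are hypothetical names standing in for the actual ones.

```python
# Minimal Flask sketch: the server receives a query, looks it up, and renders
# the results with Jinja2. Route, template, and helper names are hypothetical.
from flask import Flask, render_template, request

app = Flask(__name__)

def query_photos_by_text(query: str, exact: bool) -> list:
    """Placeholder for the database lookup described in the following sections."""
    return []

@app.route("/search")
def search():
    query = request.args.get("query", "")
    exact = request.args.get("exact") == "true"
    return render_template("results.html",
                           annotations=query_photos_by_text(query, exact))

if __name__ == "__main__":
    app.run(debug=True)
```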
Front-end technologies
To build our web pages, we make use of conventional HTML5 and CSS3. To make the website responsive on all kinds of devices and screen sizes, we use Twitter's CSS framework Bootstrap. The client-side programming uses JavaScript with the help of the jQuery library. Finally, for easy usage of data coming from the server, we use the Jinja2 templating language.
Gallery by country
To create the interactive map, we used the open-source JavaScript library Leaflet. To highlight the countries that Terzani visited, we used the feature that allows displaying GeoJSON on top of the map. We used GeoJSON maps to construct a document containing the countries we mapped manually.
When the user clicks on a country, the client makes an AJAX request to the server. In turn, the server queries the database to get the IIIF annotations of the pictures matching the requested country. When the client gets this information back, it uses the image links from the IIIF annotations to display them to the user. The total number of results for a given country is used to compute the number of pages required to display all of them, with 21 images per page. To create the pagination, we use HTML <a> tags, which, on click, make an AJAX request to the server asking for the relevant IIIF annotations.
Map of landmarks
When the user clicks on the Show landmarks button, an AJAX request is made to the server asking for the name and geolocation of all landmarks in the database. With this information, we create a marker for each unique landmark with the Leaflet library. Additionally, Leaflet allows creating a customized pop-up when clicking on the marker. These pop-ups contain simple HTML with a button which, on click, queries for the IIIF annotations of the corresponding landmark.
Search by Text
Querying photographs by text happens in the steps described below; the numbers correspond to the numbers on the schema on the right. A sketch of the server-side handling follows the list.
1. The user enters their query in the search bar. The client makes a request containing the user input to the server.
2. Upon receiving the text query, the server:
   - Tokenizes it into lower-case words and removes any punctuation.
   - Stems the words if the user did not ask for an exact match.
   - Then queries the Image Tags collection to retrieve the image IDs corresponding to each word.
3. The MongoDB database responds with the desired object IDs.
4. Upon receiving the object IDs, the server:
   - Orders the images by their text-matching score.
   - Then queries the Image Annotations collection to retrieve the IIIF annotations of these objects. If the user checked the Only show bounding boxes checkbox, the server also asks for the bounding box information.
5. The MongoDB database responds with the desired IIIF annotations, and the bounding boxes if requested.
6. When the server gets the IIIF annotations:
   - It constructs the IIIF image URLs of all results so that the resulting images are square.
   - If the user requested to show the bounding boxes only, the server instead creates the IIIF image URL of the bounding box region of the image.
7. The client receives the IIIF annotations and image URLs from the server.
8. Using Jinja2, the client creates an HTML <img> element for each image URL.
9. The image data, hosted on the Cini Foundation server, are queried using the IIIF image URLs.
10. The Cini Foundation server answers with the image data, displaying the results to the user.
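The sketch below illustrates the server-side part of this flow (roughly steps 2-6), reusing the hypothetical db handle and clean_tags helper from the earlier sketches; the stored field names and the IIIF base-URL layout are assumptions, not the exact schema.

```python
# Sketch of the server-side text search; field names are hypothetical.
from collections import Counter

def text_search(query: str, exact: bool, bounding_boxes_only: bool) -> list:
    words = [w.strip().lower() for w in query.split()] if exact else clean_tags(query)

    # Collect, for every query word, the photo IDs stored in the Image Tags collection.
    matches = Counter()
    for word in words:
        tag = db.image_tags.find_one({"_id": word})
        if tag:
            matches.update(tag["photo_ids"])

    # Order photos by how many query words they match, then build the image URLs.
    results = []
    for photo_id, _ in matches.most_common():
        doc = db.image_annotations.find_one({"_id": photo_id})
        base = doc["iiif_image_base"]                     # hypothetical field
        if bounding_boxes_only and doc.get("objects"):
            x, y, w, h = doc["objects"][0]["box"]         # hypothetical field layout
            region = f"{x},{y},{w},{h}"                   # crop to the bounding box
        else:
            region = "square"                             # IIIF Image API square crop
        # IIIF Image API URL pattern: {base}/{region}/{size}/{rotation}/{quality}.{format}
        results.append({"annotation": doc,
                        "url": f"{base}/{region}/!300,300/0/default.jpg"})
    return results
```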
Search by Image
The process for querying by image resembles that of the text queries. A sketch of the similarity ranking follows the list.
1. The user uploads an image from their computer. The client makes a request containing the data of this image to the server.
2. Upon receiving this request, the server:
   - Computes the feature vector of the user's image using a ResNet-18, in the same fashion as when creating the database.
   - Then queries the database for all feature vectors.
3. The database answers with all the feature vectors.
4. When the server has all the feature vectors, it computes the similarity between the uploaded image and every image returned by the database:
   - The server measures the similarity between feature vectors using cosine similarity.
   - Then it selects the top 20 images with the highest similarity and queries the Image Annotations collection for the IIIF annotations of these photos.
5. The remaining steps fall into place as in the text search case, without the bounding box requirement.
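The sketch below illustrates the ranking step, reusing the hypothetical db handle and feature_vector helper from the earlier sketches; it uses plain NumPy cosine similarity rather than any particular library helper.

```python
# Sketch of ranking all stored feature vectors against the uploaded image.
import numpy as np

def most_similar(upload_path: str, top_k: int = 20) -> list:
    query = np.asarray(feature_vector(upload_path))
    scored = []
    for doc in db.image_feature_vectors.find():
        vector = np.asarray(doc["vector"])
        cosine = float(query @ vector / (np.linalg.norm(query) * np.linalg.norm(vector)))
        scored.append((cosine, doc["_id"]))
    scored.sort(key=lambda pair: pair[0], reverse=True)   # highest similarity first
    return [photo_id for _, photo_id in scored[:top_k]]
```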
Image colorization
The colorization process is different from the text/image search and requires the least user intervention. Once a user selects an image to colorize, the server initializes a DeOldify instance, the photo undergoes a series of transformations, and the final colorized image is returned. Each time a user colorizes a photo, we store the result on the server to avoid repeating the heavy computation. A caching sketch is shown below.
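The sketch below shows a colorize-and-cache step; the DeOldify helper names follow its published example notebooks and may differ between versions, and the cache directory, device selection, and model-weight setup are simplified or omitted.

```python
# Sketch of colorizing a photo and caching the result; DeOldify helper names
# are taken from its examples and may vary, cache directory is hypothetical.
from pathlib import Path
from deoldify.visualize import get_image_colorizer

CACHE_DIR = Path("colorized_cache")
CACHE_DIR.mkdir(exist_ok=True)
colorizer = get_image_colorizer(artistic=True)   # initialized once by the server

def colorize(photo_id: str, source_path: str) -> Path:
    cached = CACHE_DIR / f"{photo_id}.jpg"
    if cached.exists():               # reuse earlier results to avoid recomputation
        return cached
    result = colorizer.get_transformed_image(path=source_path, render_factor=35)
    result.save(cached)
    return cached
```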
Quality assessment
Assessing the quality of our product is rather tricky. While our project makes use of many technologies (Google Vision, DeOldify, ...), we did not train any model or modify them in any way, so it is not our place to evaluate their quality. As the Terzani Online Museum is a user-centered project, we thought it made more sense for the users, rather than the developers, to assess its quality. We therefore gathered feedback in the form of guided and non-guided user testing. We nevertheless provide our own critical view of what the users cannot see, namely the data processing part.
Data Processing
In an ideal scenario, we would have liked the data processing part to be fully generic and automated. The scraping of IIIF annotations from the Cini Foundation server, however, requires some manual work. Indeed, the lack of an API to easily access any IIIF manifest forced us to parse the structure of the Terzani archive webpage. This means that if the structure of this page were to change, the code we wrote to scrape the IIIF annotations would be useless. Moreover, as the country of a collection is not available in the IIIF annotations, we have to set it manually from the name of the collection. The rest of the pipeline, however, is fully automated and generic. This is why we consider that we have developed a semi-generic method, where some manual work, scraping the IIIF annotations and assigning a country to each of them, has to be done before running the automated script.
Concerning the creation of new tags and annotations for the photographs using the Google Vision API, we can generally say that the results are sufficiently reliable and coherent. However, as the API doesn't provide any control over the alphabets used for OCR, we noticed that it very often misses words written in Chinese, Japanese, Vietnamese, Hindi, etc. Results for English text, however, are very impressive and sometimes more precise than the human eye.
The annotation step is also very time-consuming. Because the images are not stored on Google Cloud Storage, the process cannot happen asynchronously, which leads to a large amount of waiting time. A further improvement of this project would be to make our own code asynchronous to accelerate the process. Moreover, we could also parallelize the computation of feature vectors to optimize the data processing even further.
Website user feedback
Text queries
The first feedback we got about text queries was that it was counterintuitive that they were not made on the exact text input. We had originally thought it would be too strenuous for users to search for exact words to find a match and therefore resorted to making partial matches. This however creates unexpected results, where you get photos of cathedrals when you were looking for cats, and carving photos when searching for cars. We addressed that concern by adding the Search for exact words checkbox, which disables the partial matches.
Otherwise, the users were mostly happy with this feature and had fun making queries. The failing cases (e.g. the bounding boxes for "dog" also show photos of a pig and a monkey) were seen as more amusing than annoying. We asked the users to rate on a scale of 1 to 7 how relevant the results of their queries were (1 being irrelevant and 7 entirely relevant). The average relevancy score over all testers was approximately 6.2, which allows us to safely say that this feature is working well.
Image queries
Concerning the image queries, we received the remark that it would be practical to display the uploaded image next to the results. We therefore implemented this suggested feature and also display the selected image before the search query is made.
Users were mostly pleased with the results, though not as impressed as with the text queries. They noticed that a picture of a face returns photos of people and a picture of a house gives building pictures, but they did not get an extremely similar photo that amazed them. We also asked them to rate the relevancy of this feature's results on a scale of 1 to 7 and got an average score of 5.8. It should however be noted that this feature was tested on a subset of 1000 images out of the 8500 available ones. Increasing the number of potential results would also increase the chances of finding similar images.
While users did not complain about query time, image queries take about 1-2 seconds to execute. This is because the feature vector of the uploaded image has to be compared sequentially with all the feature vectors from the database. As a further optimization, we could parallelize this computation to make it scale better and run faster overall.
Code Realisation and Github Repository
The GitHub repository of the project is at Terzani Online Museum. The project has two principal components. The first one corresponds to the creation of a database of the images with their corresponding tags, bounding boxes of identified objects, landmarks and text, and their feature vectors. The functions related to these operations are inside the folder (package) terzani and the corresponding script is in the scripts folder. The second component is the website, which is in the website directory. The details of installation and usage are available in the GitHub repository.
Limitations/Scope for Improvement
Apart from the difficulties in data processing mentioned above, the project has some technical limitations, listed below.
- The search engine is not a machine learning model but a simple text-matching tool. The results can therefore deviate from the search query.
- Due to lack of time, we were unable to change the behavior of the Show landmarks button on the gallery page, which keeps adding markers.
Future Scope
As this project brings the possibility of viewing archived photographs, it fills only one half of the pie. The other half, and a future possibility, would be to associate the photos with writings. As Mr. Terzani was a journalist and writer himself, associating the images with his articles and books would enhance the experience of readers exploring the past through his writings.
Schedule
We spent the first couple of weeks setting the scope of our project. The original idea was to only colorize the photographs from the Terzani archive, but we quickly realized that there was already powerful software capable of doing so. Therefore we moved the goalposts during week 5 and made the following schedule for the Terzani online museum.
☑: Completed ☒: Partially completed ☐: Did not undertake
Timeframe | Tasks
---|---
Week 5-6 | 
Week 6-7 | 
Week 7-8 | 
Week 8-9 | 
Week 9-10 | 
Week 10-11 | 
Week 11-12 | 
Week 12-13 | 
Week 13-14 | 