Jerusalem 1840-1949 Road Extraction and Alignment

Introduction

In this work, we present a neural-network-based semantic segmentation model for historical city maps. Based on the Jerusalem Old City corpus, we propose a new automatic map alignment method that surpasses the state of the art in flexibility and performance.

Figure 2: The Old City in OpenStreetMap

Motivation

The creation of large digital databases on urban development is a strategic challenge that could lead to new discoveries in urban planning, environmental sciences, sociology, economics, and many other scientific and social fields. Digital geohistorical data can also be used and valued by cultural institutions. Such historical data could be studied to better understand and optimize the construction of new infrastructure in today's cities, and could provide humanities scholars with accurate variables essential for simulating and analyzing urban ecosystems. Many geographic information system platforms, such as QGIS and ArcGIS, can now be applied directly, so how to digitize and standardize geohistorical data has become a focus of research.

We aim to propose a model that associates geohistorical data with today's digital maps, so that both can be analyzed and studied on the same geographic information platform, with the same coordinate projection and the same scale. The model should eliminate the errors caused by scaling, rotation, and deformation of the map carrier that may exist in historical data, and the entire process should be automated and efficient.

In our project, the scope is restricted to Jerusalem, one of the oldest cities in the world and a city considered holy by the three major Abrahamic religions: Judaism, Christianity, and Islam. We georeference Jerusalem's historical maps from 1840 to 1949 against the modern map from OpenStreetMap. The overlaid maps reveal changes over time and enable map analysis and discovery. We use the wall of the Old City as the feature for georeferencing, because the region outside the Old City has seen much new construction, whereas the Old City has not changed greatly and the shape of its wall is more consistent over time than other features such as road networks.

Methodology

Data collection

  • 126 historical maps of Jerusalem from 1837 to 1938.
  • Modern geographical data of Jerusalem from OpenStreetMap.


Wall Extraction

dhSegment is a generic approach for Historical Document Processing. It relies on a Convolutional Neural Network to predict pixelwise characteristics.

Figure 1: network architecture
  • Preprocessing

To make the image data fit the neural network's input and to make the Old City region more dominant in the image, we cropped and scaled the original images to obtain our training, validation and test sets.
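
The cropping strategy itself is project-specific and not detailed here, but the mechanics are simple. Below is a minimal OpenCV sketch, assuming a hypothetical crop box around the Old City and an assumed target size:

```python
import cv2

def preprocess(path, crop_box, target_size=(512, 512)):
    """Crop a map image to a region of interest and resize it.

    crop_box is a hypothetical (x, y, w, h) box around the Old City;
    the project's actual cropping strategy is not reproduced here.
    """
    img = cv2.imread(path)
    x, y, w, h = crop_box
    roi = img[y:y + h, x:x + w]          # crop so the Old City dominates
    return cv2.resize(roi, target_size, interpolation=cv2.INTER_AREA)
```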

  • Training

For the training and validation images, we labelled each pixel with a different colour according to whether it belongs to the Old City. There are currently 25 training images and 5 validation images. Training settings: learning rate 5e-5, batch size 2, 30 epochs, with data augmentation (rotation, scaling and colour) enabled.

Figure 5: Original Image
Figure 6: Labelled Image
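
The hyperparameters above map naturally onto a dhSegment-style training configuration. The sketch below uses key names from dhSegment's documented config format; the augmentation ranges and data paths are assumptions for illustration:

```python
import json

# A dhSegment-style training configuration mirroring the hyperparameters
# above. Key names follow dhSegment's documented config format; the
# augmentation ranges and paths below are assumed values.
config = {
    "training_params": {
        "learning_rate": 5e-5,
        "batch_size": 2,
        "n_epochs": 30,
        "data_augmentation": True,
        "data_augmentation_max_rotation": 0.2,   # radians (assumed value)
        "data_augmentation_max_scaling": 0.2,    # assumed value
        "data_augmentation_color": True,
    },
    "pretrained_model_name": "resnet50",
    "prediction_type": "CLASSIFICATION",
    "train_data": "data/train",                  # assumed paths
    "eval_data": "data/val",
    "model_output_dir": "wall_model",
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)

# dhSegment's training script can then be launched with this file,
# e.g. `python train.py with config.json` (sacred-style CLI).
```
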
  • Prediction

By running prediction on the test set, we obtain the predicted Old City region and, from it, its contour, i.e. the wall.

Figure 5: Original Image
Figure 6: Predicted Binarized Mask
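
Inference can follow dhSegment's documented loading interface. A minimal sketch, assuming TensorFlow 1.x (which dhSegment builds on), an exported model under a hypothetical wall_model/export directory, and that output channel 1 holds the Old City class:

```python
import tensorflow as tf
from dh_segment.inference import LoadedModel

with tf.Session():
    # "wall_model/export" is an assumed path to the exported model
    model = LoadedModel("wall_model/export", predict_mode="filename")
    out = model.predict("test/1875.jpg")    # hypothetical test image
    probs = out["probs"][0]                 # per-pixel class probabilities
    old_city = probs[:, :, 1] > 0.5         # assume channel 1 = Old City class
```
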
  • Postprocessing

We remove minor noise by retaining the largest connected region and drawing its contour. Overlaying this contour on the original image, we find that it generally fits well, although there are some minor deviations.

Figure 1: After removing noise and drawing the contour
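
A minimal OpenCV sketch of this postprocessing step (the function name is ours, not the project's):

```python
import cv2
import numpy as np

def largest_component_contour(mask):
    """Keep the largest connected region of a binary mask and return its contour."""
    mask = (mask > 0).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # stats row 0 is the background; pick the largest foreground component
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    clean = (labels == largest).astype(np.uint8) * 255
    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return clean, max(contours, key=cv2.contourArea)
```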

After obtaining the wall contour, we fit the wall anchor from OpenStreetMap onto it and obtain the transformation matrix. The algorithm is described in the Wall Alignment section below.


Wall Alignment

  • Scaling

First, we apply an approximate scaling to bring the two walls to nearly the same size. Because at this stage the walls are not yet in the same orientation, the scale obtained here is not accurate enough, so we rescale again after the rotation step. In principle, the translation, rotation and scaling could be iterated many times, but in practice a single pass already gives a good result.

Figure 2: An approximate scaling
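
The text does not specify how the approximate scale factor is estimated; one simple possibility, sketched below, is to take the ratio of the areas enclosed by the two wall contours:

```python
import cv2
import numpy as np

def approximate_scale(contour_map, contour_osm):
    """Estimate an isotropic scale factor from the areas enclosed by two contours.

    Assumes the two shapes are roughly similar, so the area ratio is the
    squared scale factor.
    """
    return np.sqrt(cv2.contourArea(contour_osm) / cv2.contourArea(contour_map))

# The extracted wall image can then be resized by this factor, for example:
# s = approximate_scale(wall_contour, osm_contour)
# scaled = cv2.resize(wall_img, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)
```
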
  • Translation & Rotation

We traverse the point-pair set to select a reference point pair, then apply translation and rotation to maximize the overlapping area of the two polygons.

Figure 2: Translation and rotation

The translation and rotation step finds the translation and rotation angle that maximize the overlapping area of the two walls. Since it is computationally expensive to search translations pixel by pixel and angles degree by degree, we narrow down the search space. We limit the translation search to translations that bring the key point pairs (P00, P01), (P10, P11), (P20, P21), (P30, P31) into alignment; more key points, such as the centroids, could also be used. For each translation, we fix the matched key point and rotate the wall in 5-degree steps from -30 to 30 degrees, computing the overlapping area at each step. To compute the overlapping area, we use the flood-fill algorithm to fill the region bounded by each wall, take the logical AND of the two filled masks, and count the nonzero pixels. After these iterations, we obtain the maximal overlapping area and the corresponding transformation matrix.
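
A sketch of the overlap measure and rotation step in OpenCV, assuming the walls are drawn in white on same-sized black images and do not touch the image border (so flood-filling from the corner fills exactly the outside region):

```python
import cv2
import numpy as np

def fill_wall(wall):
    """Fill the region bounded by a wall drawn in white on a black image."""
    h, w = wall.shape
    flood = wall.copy()
    mask = np.zeros((h + 2, w + 2), np.uint8)   # floodFill needs a 2px border
    cv2.floodFill(flood, mask, (0, 0), 255)     # fill everything outside the wall
    return wall | cv2.bitwise_not(flood)        # wall plus its interior

def overlap_area(wall_a, wall_b):
    """Count the pixels where the two filled wall regions overlap."""
    return cv2.countNonZero(cv2.bitwise_and(fill_wall(wall_a), fill_wall(wall_b)))

def rotate_about(img, center, angle_deg):
    """Rotate an image about a fixed key point."""
    M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    return cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))

# Search sketch: for every candidate translation that matches a key-point
# pair, rotate in 5-degree steps from -30 to 30 degrees and keep the
# transform that maximizes overlap_area().
```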

Results

  • Results – Wall Extraction

For the wall extraction step, the neural network generally gives good predictions, but it still does not work well for some maps. On some maps the predicted region shows spurious concavities and convexities, which may be due to underfitting or overfitting. On other maps the model fails to produce a usable prediction at all, probably because our training set lacks samples with features similar to those maps.

Figure 1: Good Predictions
Figure 2: Poor Predictions
Figure 3: Good Predictions
Figure 4: Poor Predictions
Figure 5: Good Predictions
Figure 6: Poor Predictions
Figure 7: Good Predictions
Figure 8: Poor Predictions
  • Results – Wall Alignment

We plot the overlapped area ratio with respect to the year of the map and find that the alignment results for more recent maps are better.

Figure 1: the overlapped area ratio

We add the main road from OpenStreetMap to the wall and perform the alignment. The results likewise show that alignment is better for more recent maps.

Figure 5: 1845
Figure 6: 1915

Project Plan and Milestones

By Week 4
  • Brainstorm project ideas and come up with at least one feasible, innovative idea.
  • Prepare slides for the initial project idea presentation.
By Week 6
  • Study related work on road extraction.
  • Determine the methods to be used.
  • Use Procreate to create road-tagged images as a training dataset.
By Week 8
  • Use Procreate to create wall-tagged images as a training dataset.
By Week 10
  • Prepare slides for the midterm presentation.
By Week 11
By Week 12
By Week 13
  • Sort out the code and push it to the GitHub repository.
  • Write the project report.
  • Prepare slides for the final presentation.
By Week 14
  • Finish the presentation slides and the report.
  • Presentation rehearsal and final presentation.

Github Link

References