Alignment of XIXth century cadasters
Introduction
About a thousand French Napoleonic cadastres have been scanned and now need to be aligned. The catalogue covers many different cities, such as La Rochelle, Bordeaux, Lyon, Lille and Le Havre, as well as cities that are no longer under French jurisdiction, such as Rotterdam.
Similarly to the work done by the Venice Time Machine project, the idea is to attach all the maps of a cadastre together in order to obtain a single map of each city.
The main challenge of this project is the automation of this process, despite all the inconsistencies between the maps in terms of scale, orientation or conventions. Even though the instructions for producing the maps were quite strict, some differences remain: the scale may vary, for instance when there was nothing to show in some areas, and the maps are not always oriented with north at the top (which is, moreover, not always indicated).
Deliverables
- [ ] Tool
  - [ ] Rectification
  - [ ] Alignment
  - [ ] JSON generation
  - [ ] Reconstruction of covered areas
- [ ] PoC: La Rochelle downtown
This project explores possible ways of automating the alignment of cadastres and develops an interface (in the form of a Jupyter Notebook) to supervise this task.
Project setup
Exploration of methods on the Berney cadastre
For the initial exploration of methods to reattach cadastral maps, the so-called cadastre Berney of Lausanne (1827) was used, since the ground truth for this particular case is known and many processing steps (such as line and class predictions) had already been carried out. A first exercise was performed on the first two maps, using the line prediction files. The challenge was to detect the parts common to both maps, in this case the Rue Pépinet and the top of the Rue du Petit-Chêne. Several methods were tested for this task, mainly with the help of the OpenCV Python library. The research focused on the scale-invariant feature transform (SIFT), the Generalized Hough Transform (GHT)[2] and Template Matching (TM).
Template matching principle
Template matching is an OpenCV[1] function provided under the name cv.matchTemplate(). It takes as input a large image and a smaller one, called the template, which it tries to find in the first image. Concretely, the function slides the template across the initial image and, for each position, computes a closeness score. Note that the template always fully overlaps the image, so the template must be entirely contained in the image. The function then outputs a matrix with a score for each position, from which the position of the best score is easily retrieved.
This function is known to give better results on grayscale images, so it is a natural choice to apply it to the detected-lines files.
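A minimal sketch of this matching step, assuming two grayscale line-prediction images (the file names below are placeholders):

```python
import cv2 as cv

# Hypothetical file names: a full map and a template cut out of another map.
image = cv.imread("map_lines.png", cv.IMREAD_GRAYSCALE)
template = cv.imread("template_lines.png", cv.IMREAD_GRAYSCALE)

# Slide the template over the image; each entry of `scores` is the
# closeness score of one template position.
scores = cv.matchTemplate(image, template, cv.TM_CCOEFF_NORMED)

# With TM_CCOEFF_NORMED the best match is the maximum score.
_, best_score, _, top_left = cv.minMaxLoc(scores)
h, w = template.shape
bottom_right = (top_left[0] + w, top_left[1] + h)
print(f"best score {best_score:.3f} at {top_left} -> {bottom_right}")
```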
First reattachment of two cadastral maps
The method that gave the most satisfying results (in terms of final output and computation time) was template matching. The strategy is to cut a template out of one of the maps and find its best match in the other one. Attaching the two maps together is then an almost straightforward task: since the line prediction files contain only black (0) or white (255) pixels, the final result is simply the sum of the two images, shifted according to the best matching position.
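As an illustration, one possible way of overlaying the two binary line images once the matching positions are known (a sketch, not the exact project code; the `stitch` helper and its arguments are hypothetical):

```python
import numpy as np

def stitch(anchor, target, anchor_tl, target_tl):
    """Overlay two binary line images given matching positions.

    anchor_tl / target_tl: top-left corners (x, y) of the same template
    on each map, e.g. as returned by the matching step above.
    """
    # Offset of the target image relative to the anchor image.
    dx = anchor_tl[0] - target_tl[0]
    dy = anchor_tl[1] - target_tl[1]

    # Size of a canvas large enough to contain both images.
    h = max(anchor.shape[0], target.shape[0] + dy) - min(0, dy)
    w = max(anchor.shape[1], target.shape[1] + dx) - min(0, dx)
    canvas = np.zeros((h, w), dtype=np.uint16)

    # Paste both images; the canvas origin is the min of both origins.
    ay, ax = -min(0, dy), -min(0, dx)
    ty, tx = ay + dy, ax + dx
    canvas[ay:ay + anchor.shape[0], ax:ax + anchor.shape[1]] += anchor
    canvas[ty:ty + target.shape[0], tx:tx + target.shape[1]] += target
    return np.clip(canvas, 0, 255).astype(np.uint8)
```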
Organisation
Timeline summary of first steps
| | Matching method | Template extraction | Growth |
|---|---|---|---|
| Week 4 | SIFT | - | 1 to 1 |
| Week 5 | SIFT and GHT | Manual | 1 to 1 |
| Week 6 | GHT and TM | Manual | Semi-automated 1 to 1 |
| Week 7 | GHT and TM | Manual | Semi-automated 1 to 1 |
| Week 8 | TM | Manual and along lines | Semi-automated 1 to 1 |
| Week 9 | TM | Manual and along lines | 1 to 1 or N |
| Week 10 | TM | Manual and along lines | 1 to 1 or N |
Final milestones of semester
| | Objective |
|---|---|
| Week 11 | Extract lines on a first set of cadastral maps (La Rochelle or Bordeaux) & manage the pipeline |
| Week 12 | Test the pipeline on the newly extracted lines files |
| Week 13 | Adapt our model or extend it to other cities |
| Week 14 | Final presentation |
Automation process
- [ ] Trials
  - [ ] Automatic template extraction
  - [ ] Adjacent cadastres considerations: iterative growth
  - [ ] Taking into account both orientation and scale
- [ ] Low computational efficiency
- [ ] Necessity to manually fine-tune parameters due to the lack of a metric closer to human understanding
- [ ] Sometimes poor results (e.g. less dense areas)
- [ ] Too fragile a method?
- [ ] Still time-consuming and in need of close supervision => change of paradigm
After this preliminary result, many questions nevertheless remained to be answered. For example, are our maps precisely oriented towards the north? Will it be possible to define an explicit order for reattaching the maps? Will the matching score be as good in the countryside as it was in the city? Is the scale homogeneous within an entire city? And what criteria could be used to automate the template selection?
Due to these limitations, we decided to shift paradigms and to develop a more closely supervised tool.
Reattachment interface
- [ ] Jupyter Innotater
- [ ] Rectifying
  - [ ] rotation
  - [ ] name
  - [ ] NOT IMPLEMENTED: scale
- [ ] Matching process
  - [ ] template selection
  - [ ] target selection
  - [ ] orientation dependent or not
  - [ ] Output = "raw" network
- [ ] Rebuilding
  - [ ] homographies
  - [ ] pairwise visualisation
  - [ ] area growth
    - [ ] new network
    - [ ] inverse homographies included
    - [ ] shortest path
The AlignmentTool was developed to offer a highly supervised method to perform the cadastres' alignment.
The backend processing is mainly handled with OpenCV[1], the data structure is managed with NetworkX[3], and the Notebook interface relies on jupyter-innotater[4]. The architecture and the workings of each of these components are detailed below.
Backbone
Matching
Homography
In the context of this project, homographies are restricted to Euclidean transformations (allowing rotations and translations but preserving the distance between every pair of points). However, since the rest of the implementation is more general, slight modifications of the homography computation could enable the integration of more complex transformations (e.g. affine or projective) within the pipeline.
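For reference, such a Euclidean homography can be written as a 3x3 matrix combining a rotation and a translation, applied to homogeneous pixel coordinates (a generic sketch, not the tool's exact code):

```python
import numpy as np

def euclidean_homography(theta, tx, ty):
    """3x3 homography restricted to a rotation by `theta` (radians)
    followed by a translation (tx, ty); distances are preserved."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0., 0., 1.]])

# Map a point (x, y) expressed in homogeneous coordinates (x, y, 1).
H = euclidean_homography(np.pi / 2, 100.0, 0.0)  # 90-degree rotation + shift
print(H @ np.array([10.0, 20.0, 1.0]))
```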
Stitching
Network structure
{ "directed": true, "multigraph": false, "graph": {}, "nodes": [{ "h": int — height of the corresponding image, "w": int — width of the corresponding image, "label": str — name }, { "h": int — height of the corresponding image, "w": int — width of the corresponding image, "label": str — name }], "match": [{ "score": float — score of the template matching process, "anchor_tl": tuple: two int — coordinate of the template top left corner on the anchor, "anchor_br": tuple: two int — coordinate of the template bottom right corner on the anchor, "target_tl": tuple: two int — coordinate corresponding to anchor_tl on target, "target_br": tuple: two int — coordinate corresponding to anchor_br on target, "anchor": str — name of the anchor cadastre\node, "target": str — name of the target cadastre\node }] }
User guide
Lines detection process
Discussion and limitations
- [ ] Lack of convincing metric / purely qualitative results
- [ ] Failing to automate the task
Why doesn't it work?
Future Work
- [ ] Tool roadmap
  - [ ] Integration of other matching methods
  - [ ] Integration of more automation (adjacent matching)
  - [ ] Improve UX
  - [ ] Taking into account matches from all adjacent cadastres and recursive adjustments, e.g. Bundle Adjustment[5]
- [ ] Automation
  - [ ] More heuristics and domain knowledge in traditional ML/CV methods
  - [ ] Investigate "deeper" approaches => add some literature: e.g. [6] [7]
- [ ] Alignment on OpenStreetMap
References
1. Bradski, G. (2000). The OpenCV Library. Dr. Dobb's Journal of Software Tools. https://opencv.org
2. Ballard, D. H. (1981). Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognition, 13(2), 111–122. doi:10.1016/0031-3203(81)90009-1
3. Hagberg, A. A., Schult, D. A., & Swart, P. J. (2008). Exploring network structure, dynamics, and function using NetworkX. In Proceedings of the 7th Python in Science Conference (pp. 11–15). Documentation: https://networkx.org/documentation/stable
4. Lester, D. (danlester). (2021). jupyter-innotater. GitHub repository: https://github.com/ideonate/jupyter-innotater
5. Brown, M., & Lowe, D. G. (2007). Automatic panoramic image stitching using invariant features. International Journal of Computer Vision, 74(1), 59–73. doi:10.1007/s11263-006-0002-3
6. Sun, K., Hu, Y., Song, J., & Zhu, Y. (2021). Aligning geographic entities from historical maps for building knowledge graphs. International Journal of Geographical Information Science, 35(10), 2078–2107. doi:10.1080/13658816.2020.1845702
7. Duan, W., Chiang, Y.-Y., Knoblock, C., Jain, V., Feldman, D., Uhl, J., & Leyk, S. (2017). Automatic alignment of geographic features in contemporary vector data and historical maps. In Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery (pp. 45–54). Association for Computing Machinery. doi:10.1145/3149808.3149816