Switzerland and the Transatlantic Slavery
Revision as of 14:05, 22 December 2021
Introduction
In the last decade, the narrative that Switzerland had nothing to do with the slave trade, slavery and colonialism has been severely challenged.[1] [2]
Between the 16th and the 19th centuries, a number of Swiss were involved in slavery, the slave trade, and colonial activities. Swiss trading companies, banks, city-states, family enterprises, mercenary contractors, soldiers, and private individuals participated in and profited from the commercial, military, administrative, financial, scientific, ideological, and publishing activities necessary for the creation and maintenance of the Transatlantic slavery economy. In this project, focusing on the Caribbean Community (CARICOM) member states, we are interested in discovering the details of the colonial past of Switzerland.
Our primary source is the CARICOM Compilation Archive written by Hans Fässler, MA Zurich University, a historian from St.Gallen (Switzerland).
Motivation
The CCA (CARICOM Compilation Archive) is a single-page website with contents categorized by colonial location. In the body of the text, each entry concerns a different actor and starts with an arrow. The author, Hans Fässler, started compiling all the Swiss involvements in order to provide the CARICOM Reparations Commission (CRC) with arguments and material. In June 2019, the CRC was convinced to recommend that the heads of the Caribbean Community add Switzerland to the list of countries owing reparations for colonial activities.
Hans continues to update the CARICOM material and is expanding his research to North America, the East Indies and other places. He discussed with us the fact that the website provider is warning him about the growing size of the CCA. Although the archive is a very informative source on the colonial past of Switzerland, its form is an obstacle for potentially interested readers who want to learn from it in depth. The motivation of this project is to uncover this previously lesser-known history of Switzerland and to provide a framework that visualizes the content of the archive in a more accessible and more interactive way. The creation of a structured dataset paves the way to quantitative analysis of the data provided by the archive.
In our project, we will extract the following information about each entry in the archive:
- Person's name
- City of origin in Switzerland
- Colonial location
- Date of birth and death of the person or the active date in the location
- Colonial activities that this person was involved in
The above set of properties has been validated as relevant and valuable information by Hans Fassler.
As discussed with Hans, he keeps the full content of each entry because it contains more detailed information. We would like to build a map visualization based on the information we extract. This makes the entries easier to understand and interpret, since the map provides geographic context that helps readers identify the places. The reader can also see the visual connection between the origin in Switzerland and the colonial locations. Finally, based on the extracted information, we can analyze the involvement of the Swiss in the colonial era.
Project Plan and Milestones
Based on the feedback from the midterm presentation, the objectives have been revised. The material traces have been left for future work, and some data analysis on the existing dataset has been suggested instead.
Step I : Information extraction with NLP tools (Stanford NER, NLTK)
Step II : Visualize the connection between Switzerland and Caribbean colonies
Step III : Highlight the material traces (not completed due to lack of time)
| Date | Task | Completion |
|---|---|---|
| By Week 4 (07.10) | | ✓ |
| By Week 6 (21.10) | | ✓ |
| By Week 10 (25.11) | Step I, Step II | ✓ |
| By Week 11 (02.12) | Step I, Step II | ✓ |
| By Week 12 (09.12) | Step II | ✓ |
| By Week 13 (16.12) | Step II, Overall | ✓ |
| By Week 14 (22.12) | Overall | ✓ |
Methodology
The methodology of our project is divided into three steps: text processing, data enrichment with geographical databases and data visualization and analysis.
Text processing
The text source is organized into sections, and within each section the reader can find a list of entries. Most items are separated by a line return and an arrow as starting string (=>). Each item references a different actor of the colonial enterprise. The first step is to retrieve each item separately and append its section index. This index is used for colonial location retrieval: the table of contents is mainly organized by colonial location (some sections don't refer explicitly to a geographical location and are treated separately).
The processing of the text items themselves is done with natural-language tools such as NLTK for tokenization and Stanford NER for Named Entity Recognition and BIO tagging.
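This splitting step can be sketched as follows; this is an illustrative reconstruction, not the project's actual code, and the sample section text is hypothetical:

```python
def split_items(section_text, section_index):
    """Return (item_text, section_index) pairs for one archive section.

    Items start after a newline followed by the "=>" arrow; the text
    before the first arrow (the section header) is discarded. The
    section index is kept so the colonial location can be resolved
    later from the table of contents.
    """
    chunks = section_text.split("\n=>")[1:]
    return [(c.strip(), section_index) for c in chunks if c.strip()]

# Hypothetical section text in the style of the archive
section = """Suriname section header
=> Jean Huguenin (1685-1740) from Le Locle (Canton of Neuchatel)
=> Another actor from Geneva"""
items = split_items(section, 3)
```

Each tuple then carries enough context to be processed independently by the NLP pipeline.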
Named Entity Recognition (NER) is a text processing method that recognizes and tags words referring to named entities. In our case, we are interested in the 'PERSON' tag (person first or last name), as well as locations (city, region, country) for the place of origin, and dates. In addition, we run BIO tagging, where the named entities are labeled based on their position ('BEGIN-INSIDE-OUTSIDE') with respect to other named entities. This allows grouping successive identical tags into a single string. An example of both steps is given below.
NER-tagging
('=', 'O'), ('>', 'O'), ('Jean', 'PERSON'), ('Huguenin', 'PERSON'), ('(', 'O'), ('1685–1740', 'O'), (')', 'O'), ('from', 'O'), ('Le', 'O'), ('Locle', 'ORGANIZATION'), ('(', 'ORGANIZATION'), ('Canton', 'ORGANIZATION'), ('of', 'ORGANIZATION'), ('Neuchâtel', 'ORGANIZATION'), (')', 'O'),
After BIO-tagging
('=', 'O'), ('>', 'O'), ('Jean Huguenin', 'PERSON'), ('(', 'O'), ('1685–1740', 'O'), (')', 'O'), ('from', 'O'), ('Le', 'O'), ('Locle ( Canton of Neuchâtel', 'ORGANIZATION'),
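The grouping of contiguous identical tags can be sketched as below. This is a simplified reconstruction of the merging step, not the authors' code, and the tagged input is a hypothetical example in the style of the output above:

```python
def group_tags(tagged_tokens):
    """Merge runs of tokens sharing the same non-'O' NER tag into a
    single (text, tag) pair, as in the BIO-tagged example above."""
    grouped = []
    for token, tag in tagged_tokens:
        if grouped and tag != "O" and grouped[-1][1] == tag:
            # Extend the previous entity with this token
            grouped[-1] = (grouped[-1][0] + " " + token, tag)
        else:
            grouped.append((token, tag))
    return grouped

tagged = [("Jean", "PERSON"), ("Huguenin", "PERSON"),
          ("(", "O"), ("1685-1740", "O"), (")", "O"),
          ("from", "O"), ("Le", "LOCATION"), ("Locle", "LOCATION")]
grouped = group_tags(tagged)
```

Note that 'O' tokens are never merged, so punctuation and dates stay separate while multi-token names and places collapse into one string.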
The NER isn't completely reliable, and we can already notice some mislabeling ('Le Locle' is tagged as an organization); the limitations of NER are discussed further below.
Retrieving relevant information requires defining which of the person, location and date tags relate to the main protagonist. Indeed, the description mentions multiple persons, such as relatives or employers, and multiple locations: besides the location of origin, a brother's place of baptism or other less relevant places can be found. In order to sort amongst the possibilities, we use pattern matching to compare the structure of the different words and tags to a syntax pattern (note that we don't rely only on the tags for pattern matching, as their accuracy is low). Our model contains the two schemas described below. With these two schemas we can recognize around 75% of the items retrieved.
The pattern matching is efficient enough that it is used on its own to retrieve the origin location. We find the first occurrence of the word from and retrieve the following strings as the location. Our model accounts for several variations (e.g. Lausanne, the City of Geneva, Le Locle). In a similar manner, the extraction of the date is also easier with pattern matching: either we have schema II and the date is the second string in the text, or we have schema I and the date is in parentheses between the person and the location. For both schemas, we retrieve dates indicated as a range, a starting date or an ending date. Our model works for a large range of date formats: 1850-1855, born 1790, after 1878, b. 1989, etc.
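A minimal sketch of this pattern matching is shown below, assuming the author's formatting conventions. The regular expressions are illustrative and cover only the simple cases (a capitalized place name after "from", years and year ranges); the real model handles more variations:

```python
import re

# A year optionally preceded by "b.", "born" or "after", optionally
# followed by a range end ("1685-1740" style)
DATE_RE = re.compile(r"(?:b\.|born|after)?\s*(\d{4})(?:\s*[-\u2013]\s*(\d{4}))?")

def extract_origin(text):
    """Return the capitalized string following the first 'from',
    up to a parenthesis, comma or period."""
    match = re.search(
        r"\bfrom\s+([A-Z\u00C0-\u00DD][\w'-]*(?:\s+[\w'-]+)*?)(?=\s*[(,.]|$)",
        text)
    return match.group(1) if match else None

def extract_dates(text):
    """Return (start_year, end_year_or_None) for the first date found."""
    match = DATE_RE.search(text)
    return (match.group(1), match.group(2)) if match else None

entry = "Jean Huguenin (1685-1740) from Le Locle (Canton of Neuch\u00e2tel)"
```

On the sample entry, this yields "Le Locle" as the origin and the pair ("1685", "1740") as the date range.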
The other information relevant to our dataset is the activities in which the main person was involved. The categorization is difficult, as many characters were involved in multiple activities, and their relatives' activities are often also related in the description. Based on our discussion with Hans Fässler and the study of our primary sources, the following categories are relevant:
trading = ['company', 'companies', 'merchants', 'merchant']
military = ['soldier', 'captain', 'lieutenant', 'commander', 'regiment', 'rebellion', 'troops']
plantation = ['plantation', 'plantations']
slave_trade = ['slave ship', 'slave-ship']
slave_owner = ['slaves', 'slave', 'slave-owner']
racist = ['racism', 'racist', 'races']
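Using these keyword lists, the categorization can be sketched as a simple membership test; this is an illustrative implementation, assuming an entry is assigned every category whose keywords appear in its text:

```python
# Keyword lists from the section above, grouped into one mapping
CATEGORIES = {
    "trading": ["company", "companies", "merchants", "merchant"],
    "military": ["soldier", "captain", "lieutenant", "commander",
                 "regiment", "rebellion", "troops"],
    "plantation": ["plantation", "plantations"],
    "slave_trade": ["slave ship", "slave-ship"],
    "slave_owner": ["slaves", "slave", "slave-owner"],
    "racist": ["racism", "racist", "races"],
}

def categorize(text):
    """Return every category whose keywords occur in the entry text."""
    text = text.lower()
    return [cat for cat, words in CATEGORIES.items()
            if any(w in text for w in words)]

entry = "He was a merchant and owner of a plantation with slaves"
```

A single entry can thus fall into several categories at once, which matches the observation that many actors were involved in multiple activities.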
The last category relates to the structural contributions of Swiss people; they include participation in "Anti-Black Racism and Ideologies Relevant to Caribbean Economic Space", "Marine Navigation" and "African and European Logistics". The Marine Navigation section concerns primarily the development of navigation tools for colonial powers, and the logistics contributions relate to banking or insurance companies.
Finally, the description contains many details that are worth keeping. Once the relevant information for data analysis and visualization is extracted, the full entry is added to the dataset. An example is given below.
Levels of confidence
For origin, date and person, we calculate an accuracy value that indicates the level of confidence we have in the retrieved attribute. Note that there isn't any confidence level for the colonial location property, as it comes directly from the author and is unambiguous.
Origin accuracy: The origin location is found according to the schemas presented above. However, multiple locations may exist in the same portion of text, so the actual location we are looking for might be further away in the text. By counting the total number of Swiss cities present in the text, we can compute a level of confidence inversely proportional to it.
Date accuracy: The date accuracy is calculated by counting how many date instances (i.e. four-digit strings) there are in total in the text.
Person accuracy: For both the date and the person, retrieved based on the NER tags, the accuracy levels are calculated using the tag occurrences. Following the argument presented above, accuracy is calculated as the inverse of the number of tag occurrences.
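The inverse-count rule described above can be sketched as follows; this is an illustrative reconstruction, assuming confidence is simply one over the number of competing candidates:

```python
import re

def date_confidence(text):
    """Inverse of the number of four-digit strings in the text."""
    n = len(re.findall(r"\b\d{4}\b", text))
    return 1.0 / n if n else 0.0

def tag_confidence(tagged_tokens, tag="PERSON"):
    """Inverse of the number of occurrences of a given NER tag."""
    n = sum(1 for _, t in tagged_tokens if t == tag)
    return 1.0 / n if n else 0.0

# Three four-digit years -> date confidence of 1/3
entry = "Jean Huguenin (1685-1740) emigrated in 1720"
```

A text with a single date or a single PERSON tag thus gets full confidence, while crowded entries are flagged as more ambiguous.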
Dataset enrichment with geographical databases
One of the goals of this work is to visualize the archive content on a geographical map. We add geographical information for both colonial and Swiss locations using the following methods.
For colonial locations: Colonial locations are retrieved from the table of contents, which organises the corpus mainly by country. A few exceptions are regions of the Caribbean economic space, states for North America, and other indications for structural contributions. Our model geolocalizes countries based on their capitals' geographical coordinates, and North American states based on their capitals too. Two different datasets are used, for countries and U.S. states respectively. For the regions of the Caribbean economic space, the French West Indies are mapped to Guadeloupe and the Danish West Indies to the U.S. Virgin Islands. Based on the content of the Southern Africa section, we used South Africa as the reference region, and finally the East Indies are mapped to Indonesia. The structural contributions are more difficult to map; indeed, as mentioned by the archive author, "they cannot be assigned to one single Caribbean country". We decided to map them to Switzerland, in order to highlight that some contributions didn't take place abroad but were still part of the European colonial project. A finer-grained retrieval would allow extracting more specific locations for text items in sections covering several locations.
For origin locations (Switzerland): The geolocalization of origin locations is done at the level of cities. We use an additional dataset to map each origin location to a Swiss city with its geographical coordinates.
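The mapping just described can be sketched as a two-step lookup; the coordinate values below are approximate and the fallback-to-Switzerland behaviour mirrors the modelling decision above, but the dictionaries are illustrative, not the project's datasets:

```python
# Region-to-reference-country mapping from the text above
REGION_MAP = {
    "French West Indies": "Guadeloupe",
    "Danish West Indies": "U.S. Virgin Islands",
    "Southern Africa": "South Africa",
    "East Indies": "Indonesia",
}

# Tiny hypothetical subset of a capital-coordinates dataset (lat, lon)
CAPITALS = {
    "Guadeloupe": (16.0104, -61.7055),   # Basse-Terre (approx.)
    "Indonesia": (-6.1751, 106.8650),    # Jakarta (approx.)
    "Switzerland": (46.9480, 7.4474),    # Bern (approx.)
}

def geolocate(section_title):
    """Map a TOC section title to coordinates. Structural-contribution
    sections fall back to Switzerland, per the decision above."""
    country = REGION_MAP.get(section_title, section_title)
    return CAPITALS.get(country, CAPITALS["Switzerland"])
```

Anything not found in either table, such as the structural-contribution sections, resolves to Switzerland's coordinates.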
Data Visualization
We used Javascript, HTML, and CSS to implement the visualisation. In order to display the map and draw the connections between places, the Javascript library Leaflet.js is used. We store the extracted information in GeoJSON format for the map implementation, because it is a simple open standard format that stores both geographical and non-spatial features.
When the "Show All" button is clicked, the map displays all the connections; clicking on a line then pops up key information about the corresponding entry.
With the dropdown on the text panel on the left, the user can filter the list based on the origin city in Switzerland. Clicking on the name of either of the two places zooms the map to the corresponding location. Clicking on the arrow draws the line between the two places.
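The serialization step can be sketched as below: each extracted entry becomes a GeoJSON Feature whose LineString geometry connects the origin and the colonial location. The property names and coordinates are illustrative assumptions, not the project's actual schema:

```python
import json

def to_feature(person, origin, colonial, origin_coord, colonial_coord):
    """Build a GeoJSON Feature linking origin to colonial location.
    Coordinates are (longitude, latitude), as GeoJSON requires."""
    return {
        "type": "Feature",
        "geometry": {
            "type": "LineString",
            "coordinates": [list(origin_coord), list(colonial_coord)],
        },
        # Non-spatial attributes go into "properties"
        "properties": {"person": person, "origin": origin,
                       "colonial_location": colonial},
    }

# Illustrative coordinates for Le Locle and Paramaribo (lon, lat)
feature = to_feature("Jean Huguenin", "Le Locle", "Suriname",
                     (6.75, 47.06), (-55.2038, 5.852))
geojson = json.dumps({"type": "FeatureCollection", "features": [feature]})
```

A FeatureCollection of this shape can be loaded directly by Leaflet's `L.geoJSON` layer, with the properties available in popup callbacks.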
Results
Overall, we extract 464 text items by splitting the initial page. Combining NER and BIO tagging with syntax-structure pattern matching, we can retrieve 75% of the entries.
On this set, 117 items have no person's name or location, which makes them irrelevant (49 entries have no person defined, in 16 entries neither the person nor the location could be defined, and in 52 cases the person and location are in the wrong order with respect to our schemas). We are left with 327 entries.
The average confidence levels are 52%, 52%, and 38% for person, origin, and date respectively. The date average is low, but this means that many dates occur in the text. Indeed, since we used syntax matching, we are fairly confident that this number indicates a high occurrence of dates and a high text complexity rather than bad text processing. This argument holds for the other indicators too.
With this data, we can highlight some features of Swiss involvement in transatlantic slavery. Important cities such as Zurich and Bâle (Basel) have many involvements in activities linked to transatlantic slavery. Surprisingly, smaller localities such as Neuchâtel and Le Locle (which is also in the Canton of Neuchâtel) made major contributions too. Interestingly enough, on the colonial side, the third country with the most contributions is Switzerland, which, as we defined earlier, highlights contributions that cannot be localized in a single country: the structural contributions.
The extracted information also allows further investigation of prominent figures of the transatlantic business from Switzerland. An analysis of the most frequent last names reveals that the Flournoy, Zinzendorf and Zollicoffer families are the most cited in the archive. The archive author insists that "how interconnected the slavery-economies of North America, the Caribbean and Brazil (and beyond) were, is also demonstrated by the fact that several Swiss families globalized into more than one space"; in this citation, the Flournoy family is mentioned.
Finally, another important piece of data extracted from the archive is the activities in which people were involved. The distribution describes well the context of plantations in the Americas and how Swiss actors were usually owners of such enterprises, and thus owners of slaves.
In the end, 106 entries are visualized on the website; the significant drop is due to the lack of geographical coordinates for some Swiss locations. Completing these coordinates would be the first step needed to significantly increase the data available for visualisation.
Limitations
The limitations are presented following the methodology steps.
Text processing
The complete archive has 464 items, i.e. entries about different actors. However, retrieving information such as the name and origin of the actor, as well as his activities and the location of those activities, is difficult. The texts can be quite complex and intricate, "as were the implications of Switzerland in Black Slavery"<ref>Hans Fässler</ref>.
- David Louis Agassiz (1737–1807), uncle of the racist and glaciologist Louis Agassiz (1807–1873), was a financier who left Switzerland for France in 1747 with his friend Jacques Necker in order to work in the Parisian branch of the Thellusson et Vernet bank (investments in colonial companies, links with the slave trade). Until 1770, David Louis Agassiz cooperated with Pourtalès of Neuchâtel via the company «Joseph Lieutaud et Louis Agassiz». Necker was to become Louis XVI’s Minister of Finance, whereas David Louis Agassiz left for Britain where he acquired a considerable fortune and anglicised his name to Arthur David Lewis Agassiz. He was naturalised by a private Act of Parliament in 1766. Agassiz dealt in cotton, silk, sugar, cocoa, coffee, tobacco, and cochineal and had business relations with France, Spain, Portugal, Italy, Germany, Belgium, Denmark, the Netherlands, Sweden, Switzerland, Russia, North and South America and the East and West Indies. In 1776, Francis Anthony Rougemont (1713–1788) from a Neuchâtel family joined the partnership under the name of «Agassiz, Rougemont et Cie.», a company which had close ties with «MM Pourtalès et Cie.» from Neuchâtel (ownership of plantations on Grenada, indiennes industry, banking). Arthur David Lewis Agassiz’s son Arthur Agassiz (1771–1866), cousin of the racist Louis Agassiz, took over the family business, and later formed a company «Agassiz, Son & Company». In 1823, Arthur Agassiz was working in Port-au-Prince (Haiti) with «Jean Robert Bernard et Cie.».
Limitation of NER versus pattern recognition: The results of NER processing are not reliable for all tags. For person names, Stanford NER's performance is reliable and visual inspection shows good results. However, Stanford NER misses a lot of locations; most of them are either not recognized or miscategorized as organizations. Similarly, the dates aren't well recognized. These limitations of the tools made it worthwhile to use pattern matching and develop our own model. This required matching the author's style and makes our model sensitive to changes in authoring.
Dataset enrichment with geographical databases
The colonial locations aren't always at the same level of granularity; even using only the TOC, some are regions, countries or states. Our method introduces some artifacts linked to the model's decisions: for example, the East Indies definitely cover more than just Indonesia. To overcome these limitations, we would need to retrieve colonial locations with another method. We suggest that a list of countries and cities could be searched for in each text, with a default value assumed based on the TOC. For origin locations, many values have no geographical coordinates because they are very small towns (Saint-Aubin, Bournens, Bourmens); this could be fixed by using an additional dataset.
Data analysis and visualisation
It is worth noting that this is an observational dataset. We have no control over the constitution of the database, and it cannot be taken as representative of all transatlantic slavery implications. However, it definitely shows the numerous connections Switzerland had across the Atlantic.
Links
Github repository: Colonial-heritage-in-Switzerland
Primary source: caricom archives
Secondary sources: geonamescaches, uscapitals.