Projects

All the projects are pieces of a larger puzzle. The goal is to experiment with a new approach to knowledge production and negotiation, based on a platform halfway between Wikipedia and Twitter.

The platform is called ClioWire.


ClioWire: Platform management and development

This group will manage the experimental platform of the course. They will have to run the platform and develop additional features for processing and presenting the pulses. The initial code base is Mastodon.

The group will write bots for rewriting pulses and for progressively converging towards an articulation/datafication of the pulses.
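
As a rough illustration of such a bot (not the actual course code), the Mastodon.py client can be pointed at the ClioWire instance; the instance URL, access token and rewriting rule below are placeholders.

from mastodon import Mastodon

# Placeholders: replace with the real ClioWire instance URL and a bot access token.
api = Mastodon(access_token="BOT_ACCESS_TOKEN",
               api_base_url="https://cliowire.example.org")

# Read recent pulses from the public timeline and repost a normalised version.
for pulse in api.timeline_public(limit=20):
    text = pulse["content"]                       # HTML content of the pulse
    rewritten = text.replace("Venise", "Venice")  # toy rewriting rule
    if rewritten != text:
        api.status_post(rewritten, in_reply_to_id=pulse["id"])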

Knowledge required: Python, JavaScript, basic Linux administration.

Resp. Vincent and Orlin

Secondary sources

The goal is to extract, from a collection of 3000 scanned books about Venice, all the sentences containing at least two named entities and to transform them into pulses. This should constitute a de facto set of relevant information drawn from a large base of Venetian documents.
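
A minimal sketch of this extraction step, assuming the transcriptions are available as plain text and using spaCy for named entity recognition (the model name is a placeholder; Italian or French texts would need a different one):

import spacy

nlp = spacy.load("en_core_web_sm")  # placeholder model; pick one matching the books' language

def candidate_pulses(text):
    """Yield sentences that mention at least two named entities."""
    doc = nlp(text)
    for sent in doc.sents:
        if len(sent.ents) >= 2:
            yield sent.text.strip()

Each yielded sentence, together with a reference to the source book, could then be posted as a pulse.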

Resp. Giovanni

Primary sources

This group will look for named entities in digitized manuscripts and post pulses about these mentions. The group will use word-spotting methods.
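
Although the group will work in Java, the idea of query-by-example word spotting can be sketched in a few lines of Python with OpenCV template matching; real systems use more robust descriptors, and the file names and threshold below are placeholders.

import cv2
import numpy as np

page = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)         # digitized manuscript page
query = cv2.imread("query_word.png", cv2.IMREAD_GRAYSCALE)  # cropped image of the word to find

# Normalised cross-correlation of the query word against every position on the page.
scores = cv2.matchTemplate(page, query, cv2.TM_CCOEFF_NORMED)

# Positions above the threshold are candidate mentions to report as pulses.
ys, xs = np.where(scores >= 0.6)
print(len(xs), "candidate occurrences found")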

Supervisor: Sofia

Skills: Java

Image banks

The goal is to transform the OCRed metadata of the CINI image bank into pulses. One challenge is to deal with OCR errors and possible disambiguation.
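
For the OCR-error side, one simple option is fuzzy matching of the OCRed fields against an authority list; the list below is invented for illustration and the cutoff is a guess.

import difflib

# Hypothetical authority list; the real one would come from a reference catalogue.
known_names = ["Tintoretto", "Titian", "Canaletto", "Veronese"]

def normalise(ocr_value, cutoff=0.8):
    """Map an OCRed name to the closest known name, or keep it unchanged."""
    matches = difflib.get_close_matches(ocr_value, known_names, n=1, cutoff=cutoff)
    return matches[0] if matches else ocr_value

print(normalise("Tintoreto"))  # -> "Tintoretto" despite the missing letter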

Supervision: Lia

Newspaper, Wikipedia, Semantic Web

The goal is to find all the sentences in a large newspaper archive that contain at least two named entities. These sentences should be posted as pulses.

The named entity detection has already been done. The only challenge is to retrieve the corresponding sentences in the digitized transcriptions.
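
Assuming the existing NER output gives character offsets into each transcription (a guess about the data format), retrieving the sentences amounts to grouping entity offsets by sentence span, for example:

import re

def sentences_with_two_entities(text, entity_offsets):
    """entity_offsets: character positions where a detected named entity starts."""
    pulses = []
    start = 0
    # Naive sentence segmentation on ., ! or ? followed by whitespace or end of text.
    for boundary in re.finditer(r"[.!?](\s+|$)", text):
        end = boundary.end()
        if sum(start <= off < end for off in entity_offsets) >= 2:
            pulses.append(text[start:end].strip())
        start = end
    return pulses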

In addition, this group should look for ways to massively import elements of knowledge from other sources (DBpedia, RDF databases).
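
For the DBpedia part, the SPARQLWrapper library can query the public endpoint; the query below (doges of Venice and their birth dates) is only an example of the kind of statements that could be rewritten as pulses.

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbc: <http://dbpedia.org/resource/Category:>
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?doge ?birth WHERE {
        ?doge dct:subject dbc:Doges_of_Venice ;
              dbo:birthDate ?birth .
    } LIMIT 10
""")

for row in sparql.query().convert()["results"]["bindings"]:
    # Each binding could be turned into a pulse such as "<doge> was born on <birth>".
    print(row["doge"]["value"], row["birth"]["value"])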

Resp. Maud

Skills: Python or Java