WikiBio: Difference between revisions

From FDHwiki
Revision as of 13:43, 19 November 2020

Motivation

The motivation for our project was to explore the possibilities of natural-language generation in the context of biography generation. It is easy to obtain structured data from Wikidata pages, but not every Wikidata entry has a corresponding Wikipedia page. This project showcases how the structured data from Wikidata can be used to generate realistic biographies in the format of Wikipedia articles.

Project plan

Data sources

In this project, we make use of two different data sources:

  • Wikidata is used to gather structured information about people who lived in the Republic of Venice. Several pieces of information are extracted from their Wikidata entries, such as birth and death dates, professions and family names. This data is retrieved with a customizable SPARQL query issued against the official Wikidata SPARQL endpoint.
  • Wikipedia is used to match the Wikidata entries with the unstructured text of the article about each person. The "wikipedia" package for Python is used to find the matching pairs and then to extract the Wikipedia articles corresponding to the entries.
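As a minimal sketch of the Wikidata step, the following builds a customizable SPARQL query of the kind described above. The exact query used in the project is not reproduced here; the selected properties (P27 citizenship, P569/P570 birth and death dates, P106 occupation, P734 family name) and the item Q4948 for the Republic of Venice are assumptions about what such a query would contain.

```python
# Sketch of a customizable SPARQL query for Wikidata (assumed query shape;
# the project's actual query is not reproduced here).
# wd:Q4948 is the Wikidata item for the Republic of Venice.

def build_query(citizenship="wd:Q4948", limit=100):
    """Return a SPARQL query for people with the given country of citizenship,
    together with their birth/death dates, professions and family names."""
    return f"""
SELECT ?person ?personLabel ?birth ?death ?occupationLabel ?familyNameLabel
WHERE {{
  ?person wdt:P27 {citizenship} .                 # country of citizenship
  OPTIONAL {{ ?person wdt:P569 ?birth . }}        # date of birth
  OPTIONAL {{ ?person wdt:P570 ?death . }}        # date of death
  OPTIONAL {{ ?person wdt:P106 ?occupation . }}   # occupation
  OPTIONAL {{ ?person wdt:P734 ?familyName . }}   # family name
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en" . }}
}}
LIMIT {limit}
"""

query = build_query(limit=500)
# The query string would then be sent to the official endpoint at
# https://query.wikidata.org/sparql (e.g. with the requests library,
# params={"query": query, "format": "json"}).
```

The `build_query` helper is what makes the query "customizable": the citizenship filter and result limit can be swapped without editing the query text.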

The outputs of the two data sources are then prepared jointly in the following manner:

[[File:Marcopolo3-data.png|frame|600px|The schema of data acquisition step]]
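The joint preparation can be illustrated with a toy sketch. The field names below are illustrative, not the project's actual schema; the real pipeline follows the schema in the figure above. The idea is that each structured Wikidata record is paired with the plain text of the matching Wikipedia article, and entries without an article are dropped.

```python
# Toy sketch of pairing structured Wikidata records with Wikipedia article
# text (field names are illustrative, not the project's actual schema).

def make_pairs(wikidata_records, wikipedia_articles):
    """Join records and articles on the person's label/title, keeping only
    people for whom a Wikipedia article was found."""
    articles_by_title = {a["title"]: a["text"] for a in wikipedia_articles}
    pairs = []
    for record in wikidata_records:
        text = articles_by_title.get(record["label"])
        if text is not None:  # drop Wikidata entries without an article
            pairs.append({"facts": record, "text": text})
    return pairs

records = [
    {"label": "Marco Polo", "birth": "1254", "occupation": "merchant"},
    {"label": "Unknown Venetian", "birth": "1500"},
]
articles = [
    {"title": "Marco Polo", "text": "Marco Polo was a Venetian merchant..."},
]

pairs = make_pairs(records, articles)
# Only the Marco Polo record survives the join; the second record has no
# matching article, mirroring the fact that not every Wikidata entry has a
# corresponding Wikipedia page.
```

The resulting (facts, text) pairs are exactly the shape of supervision a data-to-text generation model needs: structured input on one side, reference biography text on the other.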

Generation methods

Evaluation

Automatic

Human

Evaluation schema

Deliverables