Universal Aesthetics (Multimodal Focus)

Introduction

Methods

Data

To study the convergence of language models, we need both plain texts and aesthetic texts. For simplicity, we reuse this text-image dataset, which is also used in Huh et al.'s paper, and add a separate poem dataset.

Plain Text

Poems

For poems, we use the Poems dataset from Kaggle. We find this dataset well suited to the project for the following reasons:

  • Since the plain-text dataset contains 1,024 entries, the poem dataset must supply a comparable number of poems, and this one is large enough to do so.
  • It categorizes the poems into 135 types based on their form (haiku, sonnet, etc.), which could facilitate our further studies.

However, the dataset still needs to be cleaned before use. We identify two problems with the raw data. First, some poems contain copyright notices at the end, which introduce noise into subsequent processing. Because the copyright information is clearly marked with the symbol ©, it can easily be removed through rule-based filtering. Second, although most poems are in English, a small portion is not. Since the plain-text dataset contains exclusively English texts, we also remove the non-English poems from this dataset.

Afterward is an unknown term in future
Before that we face the present,
Coming at well future depends on present;
Dismissing hazardous future
Endeavor best early at present.
Copyright © Muzahidul Reza | 29 November,2017

The text above shows an example of a poem with a copyright notice. We assume that the symbol © never appears within the poem body itself, so we remove all content from the first line containing this symbol onward.
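
Below is a minimal sketch of this rule-based filter (the helper name strip_copyright is ours):

    def strip_copyright(poem: str) -> str:
        # The dataset is assumed to use © only in copyright notices, so the
        # first line containing it marks where the notice begins.
        lines = poem.splitlines()
        for i, line in enumerate(lines):
            if "©" in line:
                return "\n".join(lines[:i]).rstrip()
        return poem

Applied to the example above, this keeps the five poem lines and drops the final Copyright line.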

To filter out non-English poems, we use the word frequency list as an auxiliary resource and construct an English lexicon by keeping only the words whose frequencies exceed a threshold of 10,000. For each poem, we compute the proportion of lemmatized words that appear in this lexicon and apply a threshold on that proportion to identify English poems. We initially experimented with this English word list, but it was overly inclusive and contained many non-English words such as bonjour, which allowed some non-English poems to match a large number of dictionary entries. We therefore adopted a frequency-based filter to exclude words that appear in comprehensive dictionaries but are borrowed from other languages and occur in English text only occasionally.
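
The sketch below illustrates this frequency-based filter. The exact file format of the frequency list is not specified on this page, so we assume a plain "word count" layout; the lemmatizer (NLTK's WordNetLemmatizer) and the default proportion threshold of 0.74 are likewise illustrative choices, the latter being one of the candidate values listed below.

    import re
    from nltk.stem import WordNetLemmatizer  # requires nltk.download('wordnet')

    def load_lexicon(freq_file: str, min_count: int = 10_000) -> set[str]:
        # Keep only words whose frequency exceeds the threshold, so that
        # rare borrowings (e.g. "bonjour") are excluded from the lexicon.
        lexicon = set()
        with open(freq_file, encoding="utf-8") as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[1].isdigit() and int(parts[1]) > min_count:
                    lexicon.add(parts[0].lower())
        return lexicon

    def english_proportion(poem: str, lexicon: set[str]) -> float:
        # Proportion of lemmatized tokens that appear in the English lexicon.
        lemmatizer = WordNetLemmatizer()
        tokens = re.findall(r"[a-z']+", poem.lower())
        if not tokens:
            return 0.0
        hits = sum(lemmatizer.lemmatize(t) in lexicon for t in tokens)
        return hits / len(tokens)

    def is_english(poem: str, lexicon: set[str], threshold: float = 0.74) -> bool:
        return english_proportion(poem, lexicon) >= threshold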


Proportion thresholds evaluated: 0.0, 0.4, 0.5, 0.6, 0.7, 0.74, 0.8, 0.9, 1.0.

References