Construction of the corpus

The corpus was constructed according to the following steps:

  1. creation of a seed list of random combinations of frequent Italian words (see the sketch after this list), taken from
    • "Vocabolario di base della lingua italiana" (VdB) by Tullio De Mauro
    • 50,000 tuples overall
  2. retrieval of URLs via a search engine and cleaning of the URL lists
    • using the BootCaT tools with Yahoo!
    • using Yahoo!'s option to select pages licensed under Creative Commons Attribution
    • removal of pages wrongly classified as Creative Commons by the search engine (based on manually created blacklists)
  3. download of web content for the URLs and creation of cleaned corpora
    • for general pages using the KrdWrd tools for web page retrieval and clean-up
    • for Wiki pages using the Wikipedia Extractor in combination with a script to separate out single documents
    • removal of empty, undersized and oversized files with in-house scripts
  4. linguistic annotation
  5. metadata
    • the source URL is attached to each document of the corpus
  6. indexing for use with OpenCWB/CQP using the cwb-encoding tools
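
The seed-list creation in step 1 can be pictured with a minimal Python sketch. The file names below are placeholders, and the sketch assumes the VdB vocabulary is available as a plain-text file with one word per line:

    import random

    # "vdb_wordlist.txt" is a placeholder name: the VdB list, one word per line.
    with open("vdb_wordlist.txt", encoding="utf-8") as f:
        vocabulary = [line.strip() for line in f if line.strip()]

    random.seed(0)  # fixed seed so the tuple list can be regenerated
    pairs = set()
    while len(pairs) < 50000:
        # draw two distinct frequent words and keep the combination as a query
        pairs.add(tuple(random.sample(vocabulary, 2)))

    with open("seed_tuples.txt", "w", encoding="utf-8") as out:
        for w1, w2 in pairs:
            out.write(w1 + " " + w2 + "\n")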

Collection of the documents

The PAISÀ documents were selected in two ways. Part of the corpus was constructed using a method inspired by the WaCky project. We created 50,000 word pairs by randomly combining terms from an Italian basic vocabulary list, and used the pairs as queries to the Yahoo! search engine in order to retrieve candidate pages. We limited hits to pages in Italian with a Creative Commons license of type CC-Attribution, CC-Attribution-Sharealike, CC-Attribution-Sharealike-Non-commercial, or CC-Attribution-Non-commercial. Pages that were wrongly tagged as CC-licensed were eliminated using a blacklist populated by manual inspection of earlier versions of the corpus. The retrieved pages were automatically cleaned using the KrdWrd system.
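The blacklist step amounts to a host-based filter over the candidate URLs. The following sketch assumes a hypothetical blacklist file with one host name per line; the actual blacklist format used in the project is not specified here:

    from urllib.parse import urlparse

    def load_blacklist(path):
        # one blacklisted host name per line (a hypothetical file layout)
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}

    def is_acceptable(url, blacklist):
        # keep a candidate page only if its host is not on the blacklist
        return urlparse(url).netloc.lower() not in blacklist

    candidate_urls = ["http://example.org/a", "http://bad.example.net/b"]
    blacklist = load_blacklist("cc_blacklist.txt")  # placeholder file name
    kept = [u for u in candidate_urls if is_acceptable(u, blacklist)]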

The remaining pages in the PAISÀ corpus come from the Italian versions of various Wikimedia Foundation projects, namely Wikipedia, Wikinews, Wikisource, Wikibooks, Wikiversity, and Wikivoyage. The official Wikimedia Foundation dumps were used, and the text was extracted with the Wikipedia Extractor.

Once all materials were downloaded, the collection was filtered to discard empty documents and documents containing fewer than 150 words.
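As an illustration, a minimal sketch of this filter (counting whitespace-separated tokens, which only approximates the actual tokenization):

    MIN_WORDS = 150

    def keep_document(text):
        # discard empty documents and documents with fewer than 150 words;
        # splitting on whitespace is a simplification of real tokenization
        return len(text.split()) >= MIN_WORDS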

The corpus contains approximately 380,000 documents from about 1,000 different websites, for a total of about 250 million words. Approximately 260,000 documents are from Wikipedia and approximately 5,600 from other Wikimedia Foundation projects. About 9,300 documents come from Indymedia, and we estimate that about 65,000 documents come from blog services.

Documents are marked in the corpus by an XML "text" tag with "id" and "url" attributes, the first corresponding to a unique numeric code assigned to each document, the second providing the original URL of the document.
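Schematically, a corpus document therefore looks as follows (the id and url values are placeholders):

    <text id="12345" url="http://example.org/some-page">
      ... document content ...
    </text>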

More on the construction process can be found in the section on construction steps, and more on the partners' contributions in the section on partnership. Online access to the corpus is available via a dedicated interface, and the corpus is also provided for download in different versions.


Data format

The distributed data adhere to the following format:

Field 1    ID        Token counter, starting at 1 for each new sentence
Field 2    FORM      Word form or punctuation symbol
Field 3    LEMMA     Lemma of the word form
Field 4    CPOSTAG   Coarse-grained part-of-speech tag
Field 5    POSTAG    Fine-grained part-of-speech tag
Field 6    FEATS     Morpho-syntactic features
Field 7    HEAD      Head of the current token: either a value of ID or zero ('0')
Field 8    DEPREL    Dependency relation linking the token to its head; 'ROOT' when the value of HEAD is zero (see the dependency tagset for more information)
Field 9              not used
Field 10             not used

The morpho-syntactic and dependency tagsets used were jointly developed by the Istituto di Linguistica Computazionale "Antonio Zampolli" (ILC-CNR) and the University of Pisa in the framework of the TANL (Text Analytics and Natural Language processing) project, and were used for the annotation of the ISST-TANL dependency-annotated corpus.

An annotation example follows:

ID   FORM          LEMMA         CPOSTAG   POSTAG   FEATS                       HEAD   DEPREL
1    Gli           il            R         RD       num=p|gen=m                 2      det
2    stati         stati         S         S        num=p|gen=m                 4      subj
3    membri        membro        S         S        num=p|gen=m                 2      mod
4    provvedono    provvedere    V         V        num=p|per=3|mod=i|ten=p     0      ROOT
5    affinché      affinché      C         CS       _                           4      mod
6    il            il            R         RD       num=s|gen=m                 7      det
7    gestore       gestore       S         S        num=s|gen=m                 9      subj_pass
8    sia           essere        V         VA       num=s|per=3|mod=c|ten=p     9      aux
9    obbligato     obbligare     V         V        num=s|mod=p|gen=m           5      sub
10   a             a             E         E        _                           9      arg
11   trasmettere   trasmettere   V         V        mod=f                       10     prep
12   all'          a             E         EA       num=s|gen=n                 11     comp_ind
13   autorità      autorità      S         S        num=n|gen=f                 12     prep
14   competente    competente    A         A        num=s|gen=n                 13     mod
15   una           una           R         RI       num=s|gen=f                 16     det
16   notifica      notifica      S         S        num=s|gen=f                 11     obj
17   entro         entro         E         E        _                           11     comp_temp
18   i             il            R         RD       num=p|gen=m                 20     det
19   seguenti      seguente      A         A        num=p|gen=n                 20     mod
20   termini       termine       S         S        num=p|gen=m                 17     prep
21   .             .             F         FS       _                           4      punc
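
For working with the released files, a minimal Python reader for this format might look as follows. It assumes the conventional CoNLL layout of one token per line with tab-separated fields and blank lines between sentences; the separator and the file name are assumptions, not something specified above:

    def read_sentences(path):
        # yield each sentence as a list of token rows, where a row is the
        # list of the ten fields described above (ID, FORM, LEMMA, CPOSTAG,
        # POSTAG, FEATS, HEAD, DEPREL, plus the two unused fields)
        sentence = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.rstrip("\n")
                if line:
                    sentence.append(line.split("\t"))
                elif sentence:          # blank line closes a sentence
                    yield sentence
                    sentence = []
        if sentence:                    # file may lack a final blank line
            yield sentence

    for sentence in read_sentences("paisa.conll"):  # placeholder file name
        for fields in sentence:
            print(fields[1], fields[7])             # FORM and DEPREL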