From TEI-XML to TXM, HTML and back

XSL Style Sheet – Strategy for an Import into TXM

Representations of the Other in the British, French and German Discourse on Europe: A Corpus-Based Contrastive Discursive Analysis

Naomi Truan (Université Paris-Sorbonne / Freie Universität Berlin)

The purpose of this article is to present a successful strategy for importing the corpus on which my PhD project is based into the software TXM. If you do not know TXM yet, click on the link! This software, developed at the École Normale Supérieure (Lyon, France), is open source, regularly updated, and offers a wide range of textometric queries (among them co-occurrences and concordances, and many more).

The corpus is delivered with the TEI-XML files and two style sheets, one for the import and one for the conversion/visualisation as an HTML file (like new shampoos: 2-in-1!). The complete version of this document (with figures and two appendixes) can also be found on the ORTOLANG server as a PDF document in my workspaces: French, English, German.

As a student in German Studies (focussing mainly on Literature, Philosophy, History, and Translation), I had no prior experience of the huge – but magnificent – world of Computational Linguistics. I discovered it through my PhD in Corpus Linguistics and thanks to the help of Laurent Romary and the TXM team (especially Serge Heiden, Alexey Lavrentev, and Bénédicte Pincemin). Thus, this article is not only here to show you how it works, but also to tell you: yes, you can! Even with no background in Computational Linguistics, you can acquire and develop your own strategies to make sense of these weird languages computer engineers work with.

My PhD thesis, entitled “Representations of the Other in the British, French and German Discourse on Europe: A Corpus-Based Contrastive Discursive Analysis”, relies on a qualitative and quantitative linguistic analysis of parliamentary debates in three European countries. You can find the corpora and their detailed description on ORTOLANG: French, English, German.

The corpus has been manually annotated according to the TEI Guidelines. If you wish to see how the corpus was annotated, please consult the document entitled “Corpus Annotation” on the ORTOLANG server (links above).
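For readers who have never seen TEI before, here is a purely hypothetical sketch of what an encoded utterance in a parliamentary corpus might look like (the content and attribute values are invented for illustration; the actual encoding choices are those documented in “Corpus Annotation”):

    <u who="#speaker-id">
      <seg>We must remain at the heart of Europe.</seg>
      <incident><desc>Interruption.</desc></incident>
    </u>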

I – Import into TXM

To import the XML-TEI files into TXM, follow the steps of an “Import TEI générique conservateur” (generic conservative TEI import) and select the XML/w +CSV import.

Put all the TEI-XML files into one folder and add a file named import.properties (with no extension such as .doc or .pdf), in which you write ignoredelements=note|bibl. This way, the corpus statistics in TXM will ignore the TEI-XML tags <note> and <bibl>.
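For reference, the whole import.properties file thus consists of this single line:

    ignoredelements=note|bibl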

Then click on “Sélectionner le répertoire des sources” (select the source directory) and select the corresponding folder. Please note that for your first import, the import.xml file, which is created during the import, will not yet be in the folder. If you later modify the TEI-XML files, the import.xml file will be recreated at each import.

In “Dossier des sources” (source folder), you can add information on the corpus. If you use the corpus I annotated for your own research project, I kindly ask you to refer to it in this section, for instance with the following mention: Naomi Truan 2016 – CC BY 4.0.

The “Police d’affichage” (display font) depends on your personal taste and does not affect the import at all.

In the section “Langue principale” (main language), do not forget to tick “en” for English, “de” for German, or “fr” for French if you wish to have the corpus annotated with parts of speech and lemmas by TreeTagger (the tutorial for installing TreeTagger on TXM is here).

The “Paramètres du segmenteur lexical” (lexical segmenter parameters) do not need to be changed.

For the “Feuille XSL d’entrée” (input XSL style sheet), please use the XSL Style Sheet provided along with the TEI-XML files, freely adapted from txm-filter-teip5-xmlw-preserve.xsl.
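If you are curious about what such an input style sheet typically does, the following minimal sketch shows the usual pattern, namely an identity transform plus a few exceptions. This is an illustration of the general technique only, not the style sheet distributed with the corpus:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:tei="http://www.tei-c.org/ns/1.0">

      <!-- Identity template: copy every node and attribute unchanged -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>

      <!-- Example exception: silently drop editorial notes -->
      <xsl:template match="tei:note"/>

    </xsl:stylesheet>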

“Editions” and “Commandes” do not need to be changed.

The import of the corpus can begin; you can now visualise the metadata of the corpus by clicking on the information icon.

Please note that the first “Propriétés des unités lexicales” (body, desc, incident, quote, seg) are not reliable; the given numbers do not correspond to anything in the corpus.

On the text level and on the utterance level, though, the information is fully accurate, so that the following metadata enable correct partitions of the corpus according to these variables: date, government, id, party, party-type, position, role, sex, who-party, who-party-type, who-position, who-role, who-sex.

II – HTML Visualisation of the Corpus with the XSL Style Sheet

I will now comment on the XSL Style Sheet, which can be used for the import into TXM (see Part I), but also to enable the visualisation of the corpus as a whole in HTML format. On the ORTOLANG server, you can find it under Content > UK TEI-XML Files, or here.

In the XSL Style Sheet, information in green such as <!-- Corpus of British Parliamentary Debates --> does not affect the transformation but simply provides information to guide the reader.

If you open the XSL Style Sheet and the XML file together in oXygen XML Editor and click, within the XML file, on the red button on your right, oXygen XML Editor will automatically run the Style Sheet and open the corpus in HTML format in your browser (like a webpage).
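If you prefer not to go through oXygen XML Editor, a standard alternative is to declare the style sheet at the top of the XML file through an xml-stylesheet processing instruction, and then open the XML file directly in a browser. The file name below is a placeholder; use the name of the style sheet delivered with the corpus, and note that some browsers restrict XSLT on local files:

    <?xml-stylesheet type="text/xsl" href="stylesheet-uk.xsl"?>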

If you have not made any changes to the corpus, you do not need to run the transformation at all: just double-click on the “HTML file UK – Style sheet (2)”, which will also open automatically in your browser. Running the transformation is only necessary if you wish to encode other tags or visualise them differently (for instance, if you wish to see the <quote> tags in orange rather than in red), or if you add new tags to the corpus (for instance, if you notice a missing <quote> tag in one of the TEI-XML files).
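As an illustration of such a change, turning the <quote> tags orange would amount to editing the template that renders them along these lines (a sketch only, assuming the Style Sheet renders quotes through a simple HTML span; the actual template in the distributed Style Sheet may be structured differently):

    <xsl:template match="tei:quote">
      <!-- change the colour value here, e.g. from red to orange -->
      <span style="color:orange;">
        <xsl:apply-templates/>
      </span>
    </xsl:template>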

You can then scroll through the corpus. This enables quick searches (for instance with Ctrl+F, for people not familiar with TXM, which offers many more queries in this regard) and quick visualisation (for instance, if you get a better sense of the length of an utterance by seeing it on a webpage – how many lines? – rather than by counting text units).

The corpus begins with general information, such as: Number of Incidents, Number of Turns, Number of Speakers, Number of Opposition Members, Number of Majority Members. In this regard, I strongly advise relying on the statistics provided by TXM rather than on the XSL Style Sheet, which is sometimes misleading. For instance, it counts seven Plaid Cymru Members but reports four names, which is inconsistent (there are, actually, four Plaid Cymru Members):

Plaid Cymru Members: DAFIS – LLWYD – THOMAS – WIGLEY
Number of Plaid Cymru Members: 7
Number of Turns of Plaid Cymru Members: 7

This is it! Normally, every time you adjust the corpus (correct a typo, make some minor changes, rename a speaker, etc.), the HTML version will follow, enabling you to visualise the latest version of the corpus very quickly and to search through it. At the same time, you can re-import the corpus into TXM by following the previous steps. If you do not rename the corpus, TXM will automatically ask you whether you wish to replace the existing corpus. By clicking “yes”, you will update the corpus.

You can now see how to visualise (i.e. make nice!) your corpus through XSL Style Sheets especially designed for the purposes of your own research. The Style Sheets can be adapted for every type of corpus following the TEI Guidelines. Thus, they should not be seen as a model, but rather as an example suitable for TXM.

Please feel free to contact me with any question regarding TXM, TEI, XML and HTML formats, but also Corpus Linguistics, Cognitive Linguistics, or Political Discourse! I cannot promise to be able to answer every technical concern, but I will do my best (and can also forward your question(s) to people who know more than I do): Naomi.Truan@paris-sorbonne.fr. Any comment will be much appreciated!

Scholarly work and Open Access

The whole idea of scholarship is oriented towards maximising the dissemination of research results. Carrying out a research activity is all about exploring territories, where knowing what others are doing, what their most recent advances are, and what projects are being undertaken is essential to make sure that one’s own research actually goes beyond the state of the art and can be situated within a larger corpus of discoveries. Communicating results is thus an essential activity in one’s academic life, all the more so as the assessment of such communications through peer-review mechanisms impacts the capacity to gain institutional recognition and thus the financial means to carry out further research.

This is the context in which research organisations should be designing their own Open Access (OA) policies in order to help researchers gain access to existing publications (traditionally through journal subscriptions), publish their own results to a wide audience (by means of publication repositories[1]), and manage associated research assets (laboratory notes, observations, primary sources, databases). This has been made particularly difficult in recent years by the dramatic increase in publication prices imposed by private publishers, which has been seen as contradicting the intuition that new technologies should simplify dissemination rather than increase operational publication costs. The situation has become all the more unbearable as most of the various processes bringing a research manuscript to publication are carried out for free by researchers themselves.

Even if the OA movement is quite recent, it is possible to outline some principles and possible lines of action. Among the various meetings which, in the early 2000s, contributed to stabilising the basic notions of Open Access, we can quote the core statement of the Berlin Declaration (2003), which requested a “free, irrevocable, worldwide, right of access to, and a license to copy, use, distribute, transmit and display the work publicly and to make and distribute derivative works, in any digital medium for any responsible purpose”. Not only does this statement reflect the feeling that the current landscape is inadequate to fulfil the communication needs of the research community, but it also outlines a possible ideal scheme where commercial constraints would carry less weight than issues related to the public good.

There are basically two ways to implement such a change in the scholarly publishing environment. The first one (also called green open access) consists in letting the journal publishing industry carry on its business as is, and deploying an infrastructure of publication repositories to freely disseminate authors’ versions online. Such an infrastructure may be deployed at the level of a research department or a university, or cover a wider geographical or institutional spectrum. In the case of France, a national publication archive infrastructure, HAL[2], has been deployed to cover the needs of most French academic organisations. It is now part of the official Open Access policy of the French Ministry of Research and Higher Education[3]. Such publication platforms are particularly important since they support the immediate dissemination of research papers from an early drafting stage to their final publication. It is also a way for research institutions to get a global picture of all research carried out under their auspices, since deposits are associated with precise descriptive information (for instance, affiliations) that can be curated by librarians in a coherent way[4].

There is also an increasing demand from scholars for trusted repository environments where they can deposit their research data with the guarantee that these will remain accessible and citable in the long term. Recent developments in the Netherlands with the NARCIS repository[5], for example, have shown that such environments can even be coupled with a traditional publication archive.

The other way to implement an OA policy (also called gold open access) is to replace the current subscription-based model with other business models that both allow “fair” publishing organisations to break even while maintaining a publication framework and design a barrier-free system for the access and reuse of research results. Even if the publishing industry has recently perverted this golden road by introducing an article-processing-charge system that merely reshapes what used to be subscription costs into an author-pays model, we will see in the rest of this post that there is room for other perspectives to reform the publication landscape.

A first strategy may be to think of alternative business models based on ethical support for scholarly publishing within an OA perspective. This is what has been proposed by the OpenEdition publishing infrastructure with its Freemium model. It is based on the principle that basic access to published material (for instance in HTML) should be free, but that additional services (PDF, ePub, cataloguing services) can be sold to libraries for a reasonable fee. The corresponding revenues are in turn given back to the journals so that they can cover their day-to-day costs. Indeed, as is the case for OpenEdition, the core infrastructure as well as the general editorial support is institutionally funded.

If we remain attached to the traditional journal editorial setting, we can observe that its core services, namely identification, certification, dissemination and long-term availability, can easily be implemented on the basis of an existing publication repository. Indeed, such a repository provides a submission environment which identifies authors and time-stamps documents, and offers a perfect online dissemination platform, backed by the long-term archiving facility of the hosting institution. In such a context, designing a certification mechanism whereby a paper deposited by an author is forwarded to an editorial committee for peer review is quite a straightforward endeavour. This is exactly what is now being experimented with in the Episciences[6] project on top of the HAL platform. Such a platform is also interesting in that it offers new possibilities for changing our perspective on the certification process: open submission, open peer review[7], updated versions of an article and community feedback are features that may dramatically change our views on scholarly publishing.

At specific stages of the research process, it is often less important to produce an in-depth scholarly publication than to provide short snapshots of the current developments of an experiment in the hard sciences, or of the analysis of a source in the humanities. In such situations, it is more appropriate for a scholar to write short reports in the form of blog entries and publicise them on various social networks. Blogs offer a first layer of scholarly publication, with both online availability and the possibility to comment on the actual scholarly content. They are also a simple way to establish primacy for a specific result or to gather observations step by step, for instance during an archaeological campaign. The ideal situation is when blogging occurs within a secure scholarly environment such as Hypotheses.org, where researchers benefit from editorial support as well as wide visibility.

The various possibilities outlined so far only make sense if research institutions invest time, political energy and budget to implement such models and make them part of the daily life of their researchers. A typical best-practice example is the recently published open access policy of Inria[8], which combines a deposit mandate for all publications in the HAL archive, a cautious assessment of any new models offered by the private publishing sector, and the funding of the Episciences platform.

We can observe that a less conservative vision of scholarly communication opens up a whole range of possibilities to improve the way scientific ideas can be seamlessly transmitted to a wide audience. Even more, we can see the outline of a new landscape where the management of virtual research environments comprising research data, various types of notes and commentaries, as well as draft documents linking these objects together could dramatically change the way scholarship is carried out in the future. In such environments, various levels of “peer review” are possible, from simple feedback from known colleagues to the possibility for any member of a research community to comment on the content. Traditional peer review is just one possible implementation of such a model, where the main objective should remain to improve quality and wide accessibility for science.

As a whole, I defend a vision of scholarly communication which is entrenched in the wider notion of research infrastructure and which, as such, must be considered part of the realm of public research institutions. We need to examine the consequences of such a vision in terms of budget shifts and investments in technological settings, but also in terms of changing the roles of research libraries so that they can provide the necessary editorial support for such environments. The change may be drastic, but I think this is the only way to optimise taxpayers’ money in the service of science.

 


[1] See Laurent Romary & Chris Armbruster, “Beyond Institutional Repositories”, International Journal of Digital Library Systems, 1(1), January 2010. http://hal.inria.fr/hal-00399881/

[2] hal.archives-ouvertes.fr

[3] See http://www.enseignementsup-recherche.gouv.fr/cid71277/partenariat-en-faveur-des-archives-ouvertes-plateforme-mutualisee-hal.html

[4] Note that the quality of such descriptive information is one of the difficulties I see with commercial platforms such as Academia.edu or ResearchGate.

[5] http://www.narcis.nl

[6] http://www.nature.com/news/mathematicians-aim-to-take-publishers-out-of-publishing-1.12243

[7] See Pöschl, U. (2010), “Interactive open access publishing and peer review: the effectiveness and perspectives of transparency and self-regulation in scientific communication and evaluation”, LIBER Quarterly, 19, 293–314.

[8] See http://tonyhey.net/2013/06/03/a-global-view-of-open-access-part-1/