10:30-11:30 (SR 11.11) Holly Anderson (Aston University ~ England) The Application of Linguistic Strategies of Deception Detection to Earnings Calls
Within the investment community, fraud has traditionally been detected by means of forensic accounting and fundamental equity analysis. More recently, however, linguistically informed approaches are becoming popular (Crawford Camiciottoli 2017; Larcker & Zakolyukina 2012). This paper reports on a project investigating linguistic strategies of deception detection in earnings calls, a form of financial disclosure provided by company management to the investment community. Detecting deception is challenging, with the results from previous
experiments and studies all suggesting varied and conflicting linguistic
correlates. I have developed a taxonomy of features and apply a hybrid top-down and bottom-up discourse analysis to non-fraudulent company earnings calls and to ones where the CEO and/or CFO management are known to have attempted to “cook the books” and portray a deceptive image of company performance to investors. I will present my preliminary findings and focus on the relevance of context and how deception manifests through different linguistic techniques at different levels of language description. I will also outline the genre of earnings calls and explain how it can play host to a variety of deceptive indicators within fraudulent disclosure. Finally, I will outline the challenges that I have encountered along the way regarding defining deception, ethical
considerations, and locating intention in apparently deceptive discourse.
10:30-11:30 (SR 11.12) Hannes Fromm (University of Graz ~ Austria) Crime and Masculinity in News Discourse
Crime is one of the most prevalent topics in news discourse. News texts not only shape our understanding of various crimes but also the ways we see the demographic groups we associate these crimes with. Some groups are more likely to be represented as victims, whereas others feature more frequently in the roles of suspect or perpetrator. A strong interdiscursive relationship in this context is that between crime and masculinity. In this paper I investigate semantic profiles of groups of men and their interdiscursive relationship to the field of crime in the US and the UK. Fairclough’s dialectical-relational model for Critical Discourse Analysis (Fairclough 1995; 2005), as well as more specific approaches to corpus-based discourse analysis (Mautner 2009; Baker & Levon 2015, 2016), serves as the theoretical framework for this investigation. The analysis draws on a corpus of news texts (900 million words) collected from six US and twelve UK daily news outlets between January 2014 and December 2016. Corpus data were collected from the databases LexisNexis and FACTIVA, and the concordancing software WordSmith was used for statistical analysis. The analytical focus lies on predication, semantic prosody, and transitivity, i.e. the role a member of a certain group is likely to fulfil in a criminal context.
10:30-11:30 (SR 11.13) Adamantia Karali (University of the Aegean ~ Greece) Proceedings at Criminal Courts in Greece: The Experience of the Court Reporter as Literacy Mediator
This paper examines how proceedings are put on record in Greek criminal trials. In Greek criminal courts, which are inquisitorial, the role of the court reporter involves assessing in real time what is worth writing down and how it is to be reported. Court reporters function as mediators a) by taking notes during witness and defendant statements, and b) by reporting them in the judgement text, embedding the questions in the answers. The conversion of oral speech (and at times extra-linguistic features) into written text, and of draft into official version of a judgement document, is described in literacy terms. Not only information but also practitioners’ stances and attitudes, which may influence their choices and strategies at work, are of interest. Think-aloud protocol analysis of the researcher’s own experience as a court reporter served as a starting point. Qualitative content analysis of two semi-structured focus-group interviews with court reporters at a Greek Court of Appeal (N=14) constitutes the main corpus of the research. Though practitioners have pointed out that their job is “mission impossible” under current conditions, they have presented various strategies for dealing with its challenges. They have reflected on the underestimated yet powerful role of the court reporter in “translating”, decoding and reconstructing meaning, and have suggested ways to improve the quality of both products (judgement documents) and practices in the Greek justice system.
11:30-12:30 (SR 11.11) Lennart Ham and Sue Blackwell (Vrije Universiteit Amsterdam ~ The Netherlands) Detecting “Plagiarism by Translation”: A Semantically Based Approach
An increasingly common form of plagiarism encountered in academic institutions and elsewhere is the use of material translated from another language without indicating the source. Since most automatic plagiarism detection systems (e.g. CopyCatch, SafeAssign, Turnitin) rely on identifying shared lexis between documents, texts which have been translated (or substantially paraphrased) are impossible to identify by conventional means. Possible approaches to this problem currently being considered are (1) machine translation of the suspected text into the language of the suspected source, followed by conventional intra-language comparison (Turnitin beta version), possibly augmented by linguistic analysis (Sousa-Silva 2014); (2) comparison of the citations within the texts (Gipp 2014); (3) pre-processing including lemmatisation followed by conversion into a language-independent form (Ceska et al. 2008). This paper suggests a potentially more elegant approach: directly comparing the semantic categories underlying the lexis, rather than the lexical items themselves, with minimal pre-processing. This is made possible by the Multilingual Semantic Tagger developed at Lancaster University (Piao et al. 2016). The UCREL Semantic Analysis System (USAS) tagset has been used to describe the “aboutness” of texts, and its applications in Forensic Linguistics include comparisons of genuine versus false suicide notes (Shapero 2011). We will describe experimental work testing the suitability of USAS for comparing texts in Dutch and English on the same topics, some of which are translations of each other. The corpus was semantically tagged using USAS, and the resulting tag-texts were compared using WCopyfind. Initial results are encouraging.
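As a rough illustration of the approach described above (not the authors' actual pipeline, which uses the USAS tagger and WCopyfind), the sketch below compares two texts via shared semantic tags rather than shared lexis, so that a translation with no common vocabulary can still be matched. The tiny Dutch/English lexicon and tag labels are invented for illustration only.

```python
# Illustrative sketch only: compare texts via shared *semantic tags*
# rather than shared lexis. The lexicon and tag labels below are
# invented; a real system would use the USAS multilingual tagger.

TAG_LEXICON = {
    # English                # Dutch
    "money": "I1",    "geld": "I1",
    "crime": "G2.1",  "misdaad": "G2.1",
    "judge": "G2.1",  "rechter": "G2.1",
    "weather": "W4",  "weer": "W4",
    "rain": "W4",     "regen": "W4",
}

def tag_text(words):
    """Map each word to its semantic tag, skipping unknown words."""
    return [TAG_LEXICON[w] for w in words if w in TAG_LEXICON]

def tag_overlap(words_a, words_b):
    """Jaccard similarity over the sets of semantic tags in two texts."""
    a, b = set(tag_text(words_a)), set(tag_text(words_b))
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

english = ["the", "judge", "heard", "about", "the", "money", "crime"]
dutch = ["de", "rechter", "hoorde", "over", "het", "geld", "misdaad"]
unrelated = ["rain", "and", "weather", "today"]

print(tag_overlap(english, dutch))      # 1.0: same tags, no shared lexis
print(tag_overlap(english, unrelated))  # 0.0: disjoint tag sets
```

Because the Dutch translation maps to the same tag set as the English original, the overlap score is maximal even though the two texts share no words, while the unrelated text scores zero.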
11:30-12:30 (SR 11.13) Martha Karrebaek, Marta Kirilova, and Paulina Bala (University of Copenhagen ~ Denmark) Interpreting Encounters: Sociolinguistic Perspectives on Communicative Challenges in the Danish Public Sector
Contemporary societies are characterized by various types of linguistic and cultural diversity. One consequence is the increasing need for interpreting. From a sociolinguistic perspective, the social encounters in which interpreting services are used provide a window into important social and linguistic processes. Interpreting encounters are therefore vital to explore. In Denmark interpreting in the public sector has recently received considerable attention, both because of the significant costs for the legal system, and because of the lack of transparency of the interpreter’s work for the institutional representatives. As a result, the right to manage and ensure the quality of interpreting in the legal system has been given to a private company. This has led to uproar and insecurity among interpreters and legal professionals. In this presentation we discuss results from a project on interpreting in the public sector in Denmark. We focus on the ideologies of interpreting and language, and on the participants’ (interpreters, interpreter users and citizens) different perspectives. The Danish situation is compared to the international sociolinguistic literature on interpreting (Angermeyer 2015, Berk-Seligson 1990, Hale 2007, Maryns 2014). Data comprise interviews and questionnaires among legal professionals, including police officers, and recordings of interpreted encounters in constitutional hearings.
11:30-12:30 (SR 11.12) Ljubica Leone (University of Salerno ~ Italy) Multi-word Verb/Simplex Pairs in Late Modern English (1750-1850): The 'Legal-Lay' Discourse in Focus
The present study analyses the synonymous relationship between multi-word verbs (hereafter MWVs), namely phrasal verbs (PVs), prepositional verbs and phrasal-prepositional verbs, and simplexes in legal-lay texts dating from the years 1750-1850. Many studies have highlighted that one of the most important features of MWVs is that there are synonymous simple verbs for most instances (Claridge 2000:221). Nevertheless, even when MWVs can be set in paradigmatic relation with simple forms, ‘a semantic difference’ in the MWV/simplex pair can be detected, no matter how ‘large or slight it may turn out to be’ (Claridge 2000:228). This means that the occurrence of one or the other verb form can produce different semantic and pragmatic effects, a hypothesis which is of particular interest when legal discourse is considered: legal-lay discourse is, in fact, characterized by an extensive use of discourse strategies (Heffer 2005:32), which suggests that lexical choices may be driven by particular motivations and intentions. This study is a corpus-based investigation conducted on the Late Modern English-Old Bailey Corpus (LModE-OBC), a corpus compiled by selecting texts from the Proceedings of the Old Bailey, London's central criminal court. The analysis makes clear that the use of MWVs may be functionally interpreted: while PVs and phrasal-prepositional verbs exhibit a more specific meaning, prepositional verbs are the preferred lexemes when the context reveals the negative perception of the situation as experienced by speakers.
14:30-15:30 (SR 11.11) Cheima Bouchrara (University of Surrey ~ England) Persuasion in Courtroom Discourse: Uncovering Discursive and Linguistic Patterns in Closing Arguments in US Criminal Trials
In a courtroom setting, language is a crucial aspect of the trial process. It is a major tool to present and summarise the case, question witnesses, and persuade the judge or jury of the defendant’s guilt or innocence of the alleged charges. Although courtroom discourse has been subjected to linguistic and discursive analyses, most research, such as Drew (1992), Matoesian (1993) and Cotterill (2004), has focused on witness examination, and very few studies have examined the language of lawyers in closing arguments (Rosulek, 2009). At the same time, the adversarial nature of the Anglo-American criminal justice system highlights the significance of linguistic skill during closing arguments, a trial phase in which the opposing lawyers speak directly to the jury. Indeed, in their closing statements, lawyers aim to persuade the jurors that their version of events is more plausible than that of the opposing counsel in order for them to return a favourable verdict. This presentation is drawn from a study that seeks to offer a systematic account of the linguistic cues of persuasion in closing arguments. Based on a combination of corpus linguistic and discourse analytical approaches, it will focus on selected lexical cues and how they contribute to building the lawyers’ persuasive strategies. Examples will be drawn from a data set of 100 American criminal trials, looking at both prosecution and defence closing arguments.
14:30-15:30 (SR 11.13) Karoline Marko (University of Graz ~ Austria) Approaching Recent Challenges of Digital Authorship Analysis
Quickly evolving technologies have tremendously influenced the way we communicate. This has also had an impact on traditional authorship analysis methods applied to digital data. Thus, this project sets out to address three points of interest with respect to authorship analysis in digital technologies: it will investigate the impact of artificial intelligence (particularly machine translation) on texts; follow and analyze writing styles across different digital genres; and address the potential use of emojis for authorship analysis. This presentation will mainly focus on the latter aspect: the use of emojis for authorship analysis. A corpus of emojis has been constructed, based on public Instagram profiles from a wide range of different people. A preliminary analysis of this corpus has shown that, although several emojis are common to all individuals studied, individuals exhibit patterns of usage that might, in the future, be used as an additional feature in authorship analysis.
14:30-15:30 (SR 11.13) Susan Blackwell (Vrije Universiteit Amsterdam ~ The Netherlands) Language and the Quasi-Law: The IHRA's ‘Working Definition of Antisemitism’
The ‘Working Definition of Antisemitism’ of the International Holocaust Remembrance Alliance (IHRA) was adopted by the British government in 2016; it has also been adopted by the European Parliament and numerous national and local bodies worldwide. Although the document describes itself as a “non-legally binding working definition”, it has been characterised as a “quasi-law, in which capacity it exercises the de facto authority of the law, without having acquired legal legitimacy” (Gould 2018). The implications for curtailment of free speech are potentially serious. While Gould approaches the IHRA definition from the perspective of Critical Legal Studies, this paper will examine it through the lens of Critical Discourse Analysis. I argue that the IHRA definition, like its predecessor from the European Monitoring Centre on Racism and Xenophobia (EUMC), fits Stevenson’s (1938, 1944) concept of a ‘persuasive definition’. Its authors had a political agenda, and the legal applications they had in mind for it can be described as ‘lawfare’: the use of the law for political ends. The controversy over the IHRA definition is a prime example of language being both the “site of, and stake in, struggles for power” (Fairclough 1989:15). Lawyers and linguists should approach this purportedly helpful text with due caution and criticism.
15:30-16:30 (POSTER) Marko Drobnjak (University of Ljubljana ~ Slovenia) Implementation of New Forensic Linguistics Methods: From Theory and Experiments to Case Law
In this paper, we discuss the process of introducing forensic linguistics into case law. We focus primarily on key historical events that facilitated the breakthrough of knowledge from subfields of linguistics into a form of forensic expertise in court proceedings. Scientific papers usually describe the introduction of new applied methods for forensic linguistics, while insight into the actual work of forensic experts in the field reveals a different reality. We are interested in cases in which courts in criminal proceedings recognize new theoretical methods as credible. Such cases, which enable theoretical starting points to break through into case law, represent landmark moments that affect future judicial decisions. We examine the dichotomy between theory and practice in the field of forensic linguistics, and we also discuss the judicial discourse that has facilitated the implementation of new forensic methods and, consequently, the transition of theoretical-experimental approaches from forensic linguistics into courtrooms.
15:30-16:30 (POSTER) Victoria Eibinger (University of Graz ~ Austria) Too Good to Be True? Attributing Authorship to Student Papers
University ESL students sometimes prefer to delegate the task of writing their (under)graduate thesis to a more capable friend or colleague rather than risk an unsatisfactory grade. Provided that this acquaintance avoids plagiarizing, the text will easily pass standard plagiarism checkers, the only technological barrier which many universities currently use to prevent students from passing off somebody else’s work as their own. This paper explores ways of determining with a relatively high degree of certainty whether an academic paper was written by a particular ESL student. I use the open-source authorship attribution program JGAAP (Java Graphical Authorship Attribution Program) to assess the degree of similarity between a student’s previous texts and a new paper/thesis they have supposedly written – taking into account that students’ language skills typically improve over time. JGAAP makes it possible to measure a variety of stylistic features, such as average word length and use of prepositions in a text. In a pilot study, I compare texts by three students with a corpus of academic papers written by five distractor authors. I work with the hypothesis that the degree of similarity across different, relatively independent features is unlikely to be consistently high if the authors are non-identical. My presentation will provide insight into which sets of features are suitable for verifying authorship in student papers and reducing false acceptance errors. The ultimate goal is that these results might one day be integrated into university plagiarism detection programs.
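The kind of comparison described above can be sketched as follows. This is not JGAAP itself but a minimal, hypothetical illustration of the idea: extract two of the stylistic features mentioned (average word length and preposition rate) and attribute a questioned text to the known author whose feature vector lies nearest. All texts and author names are invented.

```python
# Minimal stylometric sketch in the spirit of (but independent of) JGAAP:
# represent each text as a small feature vector and attribute a
# questioned text to the nearest known author. All texts are invented.

import math

PREPOSITIONS = {"in", "on", "at", "of", "to", "with", "by", "for", "from"}

def features(text):
    """Return (average word length, preposition rate) for a text."""
    words = text.lower().split()
    avg_len = sum(len(w) for w in words) / len(words)
    prep_rate = sum(w in PREPOSITIONS for w in words) / len(words)
    return (avg_len, prep_rate)

def nearest_author(questioned, known):
    """Return the author whose known text is closest in feature space."""
    q = features(questioned)
    return min(known, key=lambda a: math.dist(q, features(known[a])))

known = {
    "student_A": "the results of the study are shown in the table of data",
    "student_B": "remarkable conclusions emerged notwithstanding considerable methodological disagreement",
}
questioned = "the aim of the paper is to report on the data in the study"
print(nearest_author(questioned, known))  # student_A (closest in both features)
```

In practice one would combine many such features and, as the abstract notes, check whether similarity is consistently high across relatively independent features before accepting an attribution.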
15:30-16:30 (POSTER) Anouschka Foltz (University of Graz ~ Austria) The Art of the Lie: Detecting Deception in Donald Trump’s Statements
The concepts of lying and deception have recently come to the forefront in American politics, with terms such as ‘pathological liar’, ‘post-truth’, and ‘fake news’ dominating political discourse. This project analyzes statements made by Donald Trump during the 2016 presidential debates, which were independently fact-checked and rated as true, mostly true, half true, mostly false or false, to explore whether Trump’s speech and/or para-linguistic behavior could provide cues as to whether he was engaging in deception. We analyzed dysfluencies and hesitations, blink rate, gaze, gestures, and smiles. A blind coding protocol was implemented: the coder did not know which statements were true or false when coding them. The results suggest that only blink rate and smiles showed promise in identifying whether or not Trump was lying. Specifically, Trump’s blink rate range increased when he made false or mostly false statements compared to true, mostly true and half true statements. Furthermore, when lying, Trump more frequently smiled with no involvement of the muscles around the eyes (which would indicate actual enjoyment) than when telling the truth. Overall, neither Trump’s speech nor his body language provided cues to deception; instead, some subtle changes in his facial dynamics did. Analyses are currently underway to explore whether response latency and speech rate may be additional cues to deception in Trump’s speech.
15:30-16:30 (POSTER) Daniel Leisser and Klara Kager (University of Vienna ~ Austria) The Participatory Gap and Criminal Procedure Law: The Layperson in the Austrian Mandatsverfahren (Penal-Order Procedure)
“You have the right to respond to the accusation made against you or to remain silent. Your statement may serve your defence, but it may also be used as evidence against you” (summons of a suspect in preliminary proceedings). This contribution aims to present a legal-linguistic perspective on the linguistic construction of the layperson as the accused, and to reflect on it in view of the assumed remoteness of criminal procedure law from everyday life. These aspects are examined through the Austrian Mandatsverfahren, a type of accelerated procedure which, since 1 January 2015, has been based exclusively on the case file (§ 491 öStPO). Once the accused has been questioned by the criminal police, has accepted criminal liability, and has waived the otherwise prescribed main hearing, the public prosecutor may apply to the competent court for the issue of a penal order. The court may grant this application after reviewing the results of the investigation and weighing the rights and interests of the victim. An objection by the accused must be submitted to the court in writing within four weeks; otherwise the penal order is equivalent to a final judgment and is to be enforced (§ 491(9) öStPO). Dispensing with the oral main hearing in this way can create a “participatory gap” (Leisser 2018) in criminal proceedings for the legally untrained layperson and accused. Although the written word is the most important medium of legal discourse (Spitzmüller & Warnke 2011), the oral main hearing fulfils a variety of functions precisely in criminal procedure, serving among other things to realise the right to a fair trial (Art. 6 ECHR) and to balance power asymmetries between laypersons and prosecuting authorities.
This contribution therefore also puts forward for discussion new approaches to the “entextualisation”, “decontextualisation” and “recontextualisation” (Blommaert 2005) of criminal-law discourses.
15:30-16:30 (POSTER) Alesia Locker (Danish Defence College ~ Denmark) Authorship Attribution Software: Should We Let the Data Speak for Itself?
Authorship Analysis (AA) is a discipline under the umbrella of forensic linguistics in which writing style is analysed as a means of identification. Due to advances in natural language processing and machine learning in recent years, interest in computational methods of AA is gaining ground over traditional stylistic analysis by human experts. The computational approach has received a lot of attention due to the PAN@CLEF evaluations, a conference on computational AA where attendance is possible after successfully completing several AA tasks. Attempts have been made to use AA technology in court, and there are ongoing attempts to implement it in security settings. But can we trust its verdict? The existing computational methods of AA receive a lot of critique in the scientific literature for their lack of theoretical motivation, black-box methodologies and controversial results; ultimately, many argue that they are unable to deliver viable forensic evidence. One of these black-box methods is the so-called “bag-of-words” (word distribution) approach, commonly used in AA models. This study subjects a BoW model to scrutiny in order to evaluate its decision-making process. It tests the model on ground-truth data, examines the parameters on which the algorithm bases its conclusions, and offers detailed linguistic explanations for the statistical results that word distributions as stylistic discriminators produce, in order to alert FL practitioners to potential pitfalls of the method. By building on the theory of Systemic Functional Linguistics and Variationist Sociolinguistics, the study takes steps toward solving the existing problem of the theoretical validity of computational AA.
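To make the scrutiny of a "black-box" bag-of-words model concrete, here is a toy attribution model whose decision process can be inspected directly: it scores a questioned text against each candidate author by the summed log-probability of its word counts under that author's (smoothed) word distribution, and reports which word contributed most to each score. The texts, smoothing constant, and scoring scheme are illustrative assumptions, not the study's model.

```python
# Toy bag-of-words attribution model with an inspectable decision:
# score = sum over questioned words of count * log P(word | author),
# with add-one smoothing. All texts are invented for illustration.

import math
from collections import Counter

def word_probs(text, vocab, alpha=1.0):
    """Smoothed word distribution for one author's known text."""
    counts = Counter(text.lower().split())
    total = sum(counts.values()) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def score(questioned, candidate_text, vocab):
    """Total log-probability plus per-word contributions (the 'why')."""
    probs = word_probs(candidate_text, vocab)
    contributions = {w: c * math.log(probs[w])
                     for w, c in Counter(questioned.lower().split()).items()
                     if w in vocab}
    return sum(contributions.values()), contributions

candidates = {
    "author_1": "i think that the evidence shows that the claim is true",
    "author_2": "one cannot merely assume such propositions without proof",
}
questioned = "i think that the proof shows the claim"

vocab = set()
for t in list(candidates.values()) + [questioned]:
    vocab |= set(t.lower().split())

scores = {}
for author, text in candidates.items():
    total, contrib = score(questioned, text, vocab)
    scores[author] = total
    top = max(contrib, key=contrib.get)  # least-penalised (most expected) word
    print(author, round(total, 2), "most expected word:", top)

best = max(scores, key=scores.get)
print("attributed to:", best)
```

Exposing the per-word contributions is the point: a practitioner can then ask whether the words driving the decision are plausible stylistic discriminators or mere topic artefacts.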
15:30-16:30 (POSTER) Sweta Sinha (Indian Institute of Technology Patna ~ India) Dialect Classification: An Acoustic-Phonetic Study of Indian English Varieties
India is a multilingual country with its own regional varieties of English. Each such Indian English (IE) variety (Lawler, 2005) appears perceptually different to listeners. The IE varieties show heavy mother-tongue influence, which can be acoustically analyzed to yield phonetic cues for dialect classification. Such a study involves the task of estimating the region in which a speaker has spent most of their life before the onset of adulthood. Vowel space characteristics (Chen & Sun, 2010) and vowel duration (Ahuja & Vyas, 2016) have proved to be vital indicators in this regard. Ten IE speakers of two distinct mother-tongue backgrounds, Bangla and Magahi (henceforth IEB and IEM respectively), were selected to read a phonetically balanced passage in a natural speaking voice, and the recordings were analyzed using a combination of waveform and spectrographic analyses. The results of the study show the existence of distinct but partially overlapping vowel spaces for the two sets of speakers. The long vowels /a:/, /i:/ and /u:/ showed a lower mean duration for IEB than for IEM speakers. The research argues that while auditory analysis is indispensable in forensic speaker classification, acoustic analysis can provide important additional information. The findings reported are pertinent to forensic phonetics, enhancing the diagnostic power of naïve and expert listeners’ claims about suspect speakers’ voices.
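The duration comparison reported above can be sketched as follows, using invented measurements rather than the study's data: given (group, vowel, duration) triples, compute the mean duration per group for the long vowels /a:/, /i:/ and /u:/ and check which group is shorter.

```python
# Hedged sketch of a group-wise vowel-duration comparison.
# The measurements below are synthetic illustrative values (ms),
# not data from the study.

from statistics import mean
from collections import defaultdict

measurements = [
    ("IEB", "a:", 148), ("IEB", "a:", 152), ("IEB", "i:", 120),
    ("IEB", "i:", 124), ("IEB", "u:", 130), ("IEB", "u:", 134),
    ("IEM", "a:", 168), ("IEM", "a:", 172), ("IEM", "i:", 140),
    ("IEM", "i:", 138), ("IEM", "u:", 150), ("IEM", "u:", 154),
]

def mean_durations(data):
    """Mean vowel duration per (group, vowel) pair."""
    buckets = defaultdict(list)
    for group, vowel, dur in data:
        buckets[(group, vowel)].append(dur)
    return {key: mean(vals) for key, vals in buckets.items()}

means = mean_durations(measurements)
for vowel in ("a:", "i:", "u:"):
    shorter = "IEB" if means[("IEB", vowel)] < means[("IEM", vowel)] else "IEM"
    print(f"/{vowel}/ shorter in {shorter}")
```

A real analysis would of course measure durations from the speech signal (e.g. from annotated spectrograms) and test the group difference statistically rather than by inspection.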
16:30-17:30 (SR 11.11) Rositsa Zhekov (University of Bonn ~ Germany) The Impact of Offensive Language on Social Media: A Case Study of a Secret Facebook Group
In the digital era, verbal violence on social media seems to have become habitual for many users, even in communities where it might not be expected. In hopes of shedding light on this phenomenon, an increasing number of FL experts have investigated violent language and online harassment (e.g. Hardaker 2010, Clarke 2018). It seems, however, that little attention has been paid to social media channels other than Twitter, and most of this FL research has been done in and on English. The present study focuses on a secret Facebook group with Bulgarian-German bilingual female members. The study had two objectives. The first was to investigate what kind of offensive language the participants used most (e.g. insults, mocking, put-downs, etc.). The second was to determine the topical contexts in which this offensive language was used. After obtaining user permission and anonymising member data, an ethnographic mixed-methods approach was used for the analysis. The results showed that insults were most frequently used to target members of other Facebook groups, whereas mocking was most often used to target members within the group. This presentation will elaborate upon these results, offer a theory to explain this ingroup-outgroup difference, and suggest topics for future related research.
16:30-17:30 (SR 11.12) Mashael AlAmr and Alison Johnson (University of Leeds ~ England) Authorship Attribution and Idiolectal Style in a Specialized Corpus of Najdi Arabic Tweets
Authorship attribution research tends to divide into two camps: forensic statistical/stylistic and computational approaches (e.g. Abbasi & Chen, 2005; Juola, 2008; Koppel et al., 2009; 2010; 2012; Ebrahimpour et al., 2013; Seroussi et al., 2014; Rocha et al., 2017), with the first approach tending to focus on English, while there is a small number of computational studies relating to Arabic (e.g. Abbasi & Chen, 2005; Garcia-Barrero et al., 2012; Ouamour & Sayoud, 2012; 2013; Al-Ayyoub et al., 2017; Al-Takrori et al., 2019). There are no existing forensic linguistic authorship studies that focus on Arabic. Using a purpose-built specialized corpus of Arabic tweets, this study investigates the “idiolectal style” (Turell, 2010) of six Najdi Arabic speakers (all male) to identify distinctive dialectal usage. Najdi Arabic is a dialect spoken in the central region of Saudi Arabia. At a time when cybercrime and cyberthreats are on the rise, especially on social media platforms, the study asks whether, keeping gender and genre constant, users of a common dialect can be distinguished from each other through individual patterns of choices in vocabulary and grammar. Using quantitative and qualitative approaches, the analysis examines both dialectal and idiolectal features. This corpus-based study has practical implications for authorship attribution in cases where anonymous aggressive or hateful tweets are sent. Initial findings reveal that Najdi Arabic does contain salient stylistic markers (distal-personal deixis, pronouns, interrogatives, and negatives) that can help identify not only the sociocultural background of a suspected author but also aspects of their online identity.
16:30-17:30 (SR 11.13) Fleur van der Houwen (Vrije Universiteit Amsterdam ~ The Netherlands) Emergency Calls: Strategies Used by Emergency Call Takers to Handle Calls Made by Children
In this exploratory study I use conversation analysis to investigate the strategies call takers use to handle emergency calls made by children aged between 3 and 8 years old. The data consist of 12 emergency calls made to 999 and 911 which were posted on the internet. In all 12 calls the child is the caller and requests help for a parent who has collapsed. All calls were successful in getting help to the person in need, which was possibly the reason they were posted on the internet. These calls hence appear to be instances of “good practice”. While there are various studies on emergency calls that examine aspects such as their sequential structure, gatekeeping, emotions or opening sentences, calls made by children have received little attention. The aim of this study is to examine what strategies used by the call takers led to a resolution of the emergency request. Initial findings suggest that call takers employ strategies such as adjusting their tone of voice, complimenting the caller, and keeping callers engaged when they get distracted.
17:30-18:30 (SR 11.11) Marie Bojsen Møller (University of Copenhagen ~ Denmark) ‘I could kill them, I said. I didn’t say I would.’ Threats on Trial: The Role of Intent in Cases Involving Threatening Communications
Threats constitute what may be termed an illicit genre, since they are socially and sometimes legally proscribed (Fraser 1998; Gales 2010; Muschalik 2018). The (il)legality of a threat is dependent on legislation (Solan & Tiersma 2005), and, notably, on the emphasis legislation and precedent have placed on threateners’ intent. However, being ultimately a psychological state, intent is notoriously difficult to assess (e.g. Hurt & Grant 2018), and in court, defendants may claim that they never intended to threaten. Furthermore, they can use more or less persuasive linguistic strategies to distance themselves from the language crime they are accused of committing, particularly if the wording of the threat was indirect. Indirect threats are particularly difficult to prosecute and penalize, since reasonable doubt may be raised regarding their intended meaning, possibly allowing the sender a recourse to ‘plausible deniability’ (Solan & Tiersma 2005). This paper takes its starting point in a comparison of the role of ‘intent’ (mens rea) in Danish, UK and US legislation or case law on threats (Danish Criminal Code, § 266; British Offences Against the Person Act 1861, Section 16; US ‘true threat’ case law: e.g. Watts v. United States 1969; Elonis v. United States 2015; Perez v. Florida 2017). I then move to an examination of Danish court cases involving threatening messages, focusing on appeals to defendants’ intent as argued by prosecutor, defense lawyer, defendant and judge.
17:30-18:30 (SR 11.12) Gaby Axer (University of Wuppertal ~ Germany) Exploring Options to Automatize Qualitative Authorship Analysis
In the area of authorship analysis, multiple methods are currently applied, within both qualitative and quantitative approaches. An earlier qualitative blind study of authorship analysis in the instant messaging of German native speakers, covering their German and English writing, yielded promising results both intra- and cross-linguistically. In order to further test the reliability of the German markers of authorship which emerged as discriminatory in the closed set, a system to automatize the consistency analysis is being developed. Moreover, the frequency of usage of different variants is integrated into the weighting of the consistency coding in order to consider intra-author variation in more detail. Further, the use and pragmatic meaning of emojis needs to be analysed in more depth in order to evaluate their relevance and reliability as markers of authorship, e.g. variation between different smiling, happy emojis or skin-tone modifications of gesture emojis. This paper will focus on the methodological issues and theoretical considerations involved in identifying and weighting markers of authorship in a reliable and practical manner.
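A hypothetical sketch of such a consistency analysis: for a candidate marker realised by competing variants (here the invented pair "thx"/"thanks"), measure how consistently an author uses the majority variant, and record how often the choice arises so that the marker can be weighted by frequency.

```python
# Hypothetical consistency-coding sketch (not the study's system):
# for one author and one set of competing variants, return the share of
# occurrences using the majority variant, plus the occurrence count
# (usable as a frequency weight). Messages below are invented.

from collections import Counter

def consistency(messages, variants):
    """Return (majority-variant share, total occurrences) for one author."""
    counts = Counter()
    for msg in messages:
        for token in msg.lower().split():
            if token in variants:
                counts[token] += 1
    total = sum(counts.values())
    if total == 0:
        return 0.0, 0
    return max(counts.values()) / total, total

author_msgs = [
    "thx for the info", "ok thx see you", "thanks a lot", "thx again",
]
score, weight = consistency(author_msgs, {"thx", "thanks"})
print(score, weight)  # 0.75 consistency over 4 occurrences
```

Weighting by the occurrence count keeps rare markers, whose apparent consistency may be accidental, from dominating the comparison.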
17:30-18:30 (SR 11.13) Timothy Habick (Reasoning, Inc. ~ United States) Cooperative Communication, Underspecification, and Equivocation
Linguists tasked with determining the communicative adequacy of high-stakes documents naturally appeal to Grice’s (1975, 1989) theory of cooperative communication, which often leads to a discussion of whether the text includes negligent underspecifications [as discussed by Horn (2018), Atlas (1989, 2005), and Borg (2004, 2012)] or contextually puzzling and disorienting overspecifications. The essential meaning of the term cooperative communication, based as it is on Grice’s several clear explanations, is not a serious issue of debate in the relevant linguistic and pragmatic literature. Nonetheless, a revised interpretation of cooperative communication was proposed as a key argument in a recent forensic linguistic debate. This paper reviews cases where the notions of cooperative communication, underspecification, and by extension overspecification have played important roles in resolving forensic issues of communicative adequacy or deficiency. I argue that a certain linguistic structure incorrectly identified as negligently underspecified was communicatively adequate and appropriate in context because readers have no legitimate right to isolate a piece of text from its overall context and then require a literal interpretation of the extracted material. I argue that key concepts in linguistics and pragmatics such as cooperative communication and underspecification cannot without justification be revised for specific forensic purposes. Such revisionist tactics appear to depend on the logical fallacy of equivocation and thus serve to disqualify the purported authoritative stature of the individuals or documents that attempt to use them.