Topic models are common in digital studies of large text collections and are used extensively within the digital humanities. In this post we discuss the possibilities and limitations of this method.
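For readers who have not encountered topic models before, a minimal sketch of the idea looks something like the following. This is an illustrative example using scikit-learn's LatentDirichletAllocation on a toy corpus, not the specific tooling or data discussed in the post:

```python
# Minimal topic-model sketch: fit LDA on a toy corpus and
# print the top words per topic. Corpus and parameters are
# illustrative assumptions, not taken from the post.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "the parliament debated the new tax law",
    "the court ruled on the tax case",
    "the team won the football match",
    "the striker scored in the final match",
]

# Bag-of-words counts: documents x vocabulary.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(corpus)

# Fit an LDA model with two latent topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Inspect each topic via its highest-weighted words.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```

On real corpora the interesting (and difficult) questions begin after this step: how many topics to choose, and how to interpret what the word distributions actually mean, which is where the limitations discussed in the post come in.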
In this post we talk about the large text collections that form the basis for knowledge extraction, and about what kinds of questions can be answered with the help of large digital text collections.
This blog post is an opinion piece in which I sketch the process of developing NLP-based applications for second language learning, viewed through the lens of typical (mis)conceptions and challenges as I have experienced them. Are we over-trusting the potential of NLP? Are teachers by definition reluctant to use NLP-based solutions in the classroom?
In a world where AI takes up an ever larger place, data-driven research has become the word on everyone's lips. In this blog post I talk about what it means to do research with the help of large amounts of text data, primarily within the humanities. This post is the first in a series on the different parts of a data-intensive research methodology.
Recently, we have seen a surge of methods that claim to embed meaning from textual corpora. But is that possible? Can text really reveal meaning, and if so, can current NLP methods detect it? Can our methods, as they sometimes claim, understand?
A comment authors often receive from reviewers of manuscripts submitted to scientific conferences and journals concerns the representativeness of the sample under scrutiny: are there solid arguments for accepting that the population characteristics, and particularly the features extracted from the empirical data acquired from such a population (e.g.
What if you could find all the arguments in a text without having to read it? Or what if you could search a database for a controversial topic and immediately get arguments for and against it, gathered from text all around the internet? Or imagine that, while writing an essay, you automatically got an estimate of how persuasive your arguments are.