Recently, we have seen a surge of methods that claim to embed meaning from textual corpora. But is that possible? Can text really reveal meaning, and if so, can current NLP methods detect it? Can our methods, as they sometimes claim, understand?
A comment often received from reviewers of manuscripts submitted to scientific conferences and journals concerns the representativeness of the sample under scrutiny and whether there are solid arguments for accepting that the population characteristics, and particularly the features extracted from the empirical data acquired from such a population (e.g.
What if you could find all the arguments in a text without having to read it? Or what if you could search a database for a controversial topic and immediately get arguments for and against it, gathered from text all over the internet? Or imagine that, while writing an essay, you automatically received an estimate of how persuasive your arguments are.
The Swedish Parliament (Riksdagen) continuously releases open data on its website, including documents approved and used during parliamentary sessions as well as how each member of parliament votes in each roll call (voting session).
In recent years, neural-network-based approaches (i.e. deep learning) have been the main models behind state-of-the-art systems in natural language processing, whether in machine translation, natural language inference, language modeling, or sentiment analysis.
In our research group, we are exploring ways of analysing language to find early signs of possible cognitive impairment, which may develop into dementia.