Tokenizes text into segments based on blank lines, using NLTK's RegexpTokenizer.
The documentation for NLTK's RegexpTokenizer is available here.
Bird, Steven, Edward Loper and Ewan Klein (2009), Natural Language Processing with Python. O’Reilly Media Inc.
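To illustrate what blank-line tokenization does, here is a minimal sketch using only Python's standard library. It mimics the behavior of NLTK's blank-line splitting (a RegexpTokenizer configured to split on runs of whitespace that contain at least two newlines); the function name `blankline_tokenize` is an illustrative choice, not part of Sparv's or NLTK's API.

```python
import re


def blankline_tokenize(text: str) -> list[str]:
    """Split text into segments wherever a blank line occurs.

    A "blank line" is any run of whitespace containing two or more
    newlines, mirroring how NLTK's blank-line tokenizer splits text.
    """
    segments = re.split(r"\s*\n\s*\n\s*", text.strip())
    # Drop any empty segments left over from leading/trailing blanks.
    return [segment for segment in segments if segment]


print(blankline_tokenize("Det här är en korpus.\n\nEn andra rad."))
```

Each paragraph separated by one or more blank lines becomes a single segment.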
This analysis is used with Sparv. Check out Sparv's quick start guide to get started!
To use this analysis, add the following line under export.annotations in the Sparv corpus configuration file:
    - segment.token  # Token segments
To use this particular tokenizer, also add the following setting to your Sparv corpus configuration file:
    segment:
        token_segmenter: blanklines
For more info on how to use Sparv, check out the Sparv documentation.
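Putting the two settings above together, a minimal corpus configuration fragment might look as follows. Only the keys shown in this document (export.annotations, segment.token_segmenter) are taken from the source; the YAML nesting follows Sparv's configuration format.

```yaml
export:
    annotations:
        - segment.token  # Token segments
segment:
    token_segmenter: blanklines
```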
Example output:
<token>Det</token>
<token>här</token>
<token>är</token>
<token>en</token>
<token>korpus</token>
<token>.</token>