Tokenizes text by splitting on blank lines, using the RegexpTokenizer from NLTK.
The documentation for NLTK's RegexpTokenizer can be found here.
Bird, Steven, Edward Loper and Ewan Klein (2009), Natural Language Processing with Python. O’Reilly Media Inc.
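For reference, the blank-line splitting can be sketched with RegexpTokenizer directly; the sample text and pattern below are illustrative, not the exact code used by this analysis:

```python
from nltk.tokenize import RegexpTokenizer

# With gaps=True the pattern matches the material BETWEEN tokens,
# so each blank-line-separated block becomes one token.
tokenizer = RegexpTokenizer(r"\s*\n\s*\n\s*", gaps=True)

text = "Det här är en korpus.\n\nDen har två stycken."
print(tokenizer.tokenize(text))
# → ['Det här är en korpus.', 'Den har två stycken.']
```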
This analysis is used with Sparv. Check out Sparv's quick start guide to get started!
To use this analysis, add the following line under export.annotations
in the Sparv corpus configuration file:
- segment.token # Token segments
To use this tokenizer, you also need to add the following setting to your Sparv corpus configuration file:
segment:
    token_segmenter: blanklines
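Putting the two settings together, a minimal corpus configuration might look like the sketch below; the metadata id is an illustrative placeholder, not something prescribed by this analysis:

```yaml
metadata:
    id: mycorpus               # illustrative corpus id
export:
    annotations:
        - segment.token        # Token segments
segment:
    token_segmenter: blanklines
```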
For more info on how to use Sparv, check out the Sparv documentation.
Example output:
<token>Det</token>
<token>här</token>
<token>är</token>
<token>en</token>
<token>korpus</token>
<token>.</token>