Segments text into sentences at punctuation marks using NLTK's RegexpTokenizer
A very simple sentence tokenizer that splits sentences at every occurrence of ., ! or ?, regardless of context. The sentence segmenter is based on NLTK's RegexpTokenizer.
Bird, Steven, Edward Loper and Ewan Klein (2009). Natural Language Processing with Python. O'Reilly Media Inc.
This analysis is used with Sparv. Check out Sparv's quick start guide to get started!
To use this analysis, add the following line under export.annotations in the Sparv corpus configuration file:
- segment.sentence # Sentence segments
In order to use this sentence segmenter, you also need to add the following setting to your Sparv corpus configuration file:
segment:
    sentence_segmenter: punctuation
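Putting the two settings together, the relevant parts of the corpus configuration file would look like this (only the options shown in this section are included; a real configuration will contain further keys):

```yaml
export:
    annotations:
        - segment.sentence  # Sentence segments
segment:
    sentence_segmenter: punctuation
```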
For more info on how to use Sparv, check out the Sparv documentation.
Example output:
<sentence>
  <token>Det</token>
  <token>här</token>
  <token>är</token>
  <token>en</token>
  <token>korpus</token>
  <token>.</token>
</sentence>
<sentence>
  <token>Den</token>
  <token>har</token>
  <token>flera</token>
  <token>meningar</token>
  <token>.</token>
</sentence>
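The splitting behaviour shown above can be sketched with a plain regular expression. This is only an illustration of the idea of context-free punctuation splitting, not Sparv's actual implementation, and the function name is made up for the example:

```python
import re

def punctuation_sentences(text):
    # Split after every run of ., ! or ?, no matter the context
    # (so abbreviations like "t.ex." would also end a sentence).
    # The second alternative catches trailing text without final punctuation.
    parts = re.findall(r"[^.!?]*[.!?]+|[^.!?]+", text)
    return [p.strip() for p in parts if p.strip()]

print(punctuation_sentences("Det här är en korpus. Den har flera meningar."))
# → ['Det här är en korpus.', 'Den har flera meningar.']
```

Because the split is purely punctuation-based, a text such as "Se t.ex. kapitel 3." would be broken into several "sentences", which is exactly the trade-off this simple segmenter makes.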