Segments text into sentences at whitespace characters using the RegexpTokenizer from NLTK.
The documentation for NLTK's RegexpTokenizer can be found in the NLTK API documentation.
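As a rough illustration, a whitespace split with RegexpTokenizer might look like the minimal Python sketch below. The exact pattern and options Sparv passes internally are assumptions here; see the Sparv source for the authoritative setup.

from nltk.tokenize import RegexpTokenizer

# gaps=True makes the pattern match the separators between segments
# rather than the segments themselves; the \s+ pattern is an assumption.
segmenter = RegexpTokenizer(r"\s+", gaps=True)

text = "Det här är en korpus. Den har flera meningar."
print(segmenter.tokenize(text))
# ['Det', 'här', 'är', 'en', 'korpus.', 'Den', 'har', 'flera', 'meningar.']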
Bird, Steven, Edward Loper, and Ewan Klein (2009). Natural Language Processing with Python. O’Reilly Media Inc.
This analysis is used with Sparv. Check out Sparv's quick start guide to get started!
To use this analysis, add the following line under export.annotations in the Sparv corpus configuration file:
- segment.sentence  # Sentence segments
To use this sentence segmenter, add the following setting to your Sparv corpus configuration file:
segment:
  sentence_segmenter: whitespace
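Putting both settings together, a minimal corpus configuration might look like the sketch below. The metadata section is a hypothetical placeholder, and a real configuration will typically need more (for example an importer), so treat this as an outline rather than a complete file:

metadata:
  id: my-corpus            # hypothetical corpus id
segment:
  sentence_segmenter: whitespace
export:
  annotations:
    - segment.sentence     # Sentence segments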
For more info on how to use Sparv, check out the Sparv documentation.
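With the configuration in place, the corpus is annotated by running the sparv run command from the corpus directory, assuming Sparv is installed and the rest of the configuration is complete.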
Example output:
<sentence>
  <token>Det</token>
  <token>här</token>
  <token>är</token>
  <token>en</token>
  <token>korpus</token>
  <token>.</token>
</sentence>
<sentence>
  <token>Den</token>
  <token>har</token>
  <token>flera</token>
  <token>meningar</token>
  <token>.</token>
</sentence>