
tokenization-sparv-whitespace

Citation Information

Språkbanken Text (2021). tokenization-sparv-whitespace (updated: 2021-05-07). [Analysis]. Språkbanken Text.

Standard Reference Information

Bird, Steven, Edward Loper and Ewan Klein (2009), Natural Language Processing with Python. O’Reilly Media Inc.

Tokenizes text by whitespace using the RegexpTokenizer from NLTK.

See the NLTK API documentation for details on RegexpTokenizer.
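As a rough sketch of what whitespace tokenization does, the snippet below uses only Python's standard library `re`; it mirrors the behavior of NLTK's `RegexpTokenizer(r"\s+", gaps=True)` (which is how NLTK defines its whitespace tokenizer), splitting on runs of whitespace:

```python
import re

def whitespace_tokenize(text):
    """Split text into tokens on runs of whitespace.

    Mirrors NLTK's RegexpTokenizer(r"\s+", gaps=True), which treats
    each whitespace run as a gap between tokens. Note that punctuation
    is NOT split off unless it is already surrounded by whitespace.
    """
    return [tok for tok in re.split(r"\s+", text) if tok]

print(whitespace_tokenize("Det här är en korpus ."))
# ['Det', 'här', 'är', 'en', 'korpus', '.']
```

Note that a trailing period attached to a word (e.g. `korpus.`) stays part of that token, since only whitespace delimits tokens.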

Example

This analysis is used with Sparv. Check out Sparv's quick start guide to get started!

To use this analysis, add the following line under export.annotations in the Sparv corpus configuration file:

- segment.token  # Token segments

To use this tokenizer, you also need to add the following setting to your Sparv corpus configuration file:

segment:
  token_segmenter: whitespace
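Taken together, the two settings above might appear in a corpus configuration file as follows. This is a minimal sketch, not a complete configuration; the `metadata` section and its `id` value are hypothetical placeholders, and a real corpus will typically declare further settings (source format, additional annotations, etc.):

```yaml
metadata:
  id: mycorpus        # hypothetical corpus id
  language: swe

export:
  annotations:
    - segment.token   # Token segments

segment:
  token_segmenter: whitespace
```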

For more info on how to use Sparv, check out the Sparv documentation.

Example output:

<token>Det</token>
<token>här</token>
<token>är</token>
<token>en</token>
<token>korpus</token>
<token>.</token>

Type

  • Analysis

Task

  • tokenization

Unit

  • token

Tool

NLTK

Created

2010-12-15

Updated

2021-05-07

Contact

Språkbanken Text
sb-info@svenska.gu.se