Tokenization Using Spacy

Last Updated : 12 Apr, 2025

Before we get into tokenization, let's take a quick look at what spaCy is. spaCy is a popular open-source library for Natural Language Processing (NLP). It is an object-oriented library for processing and analyzing text: we can use it to clean and prepare text, split it into sentences and words, and extract useful information with its various tools and functions. This makes spaCy well suited for tasks like tokenization, part-of-speech tagging and named entity recognition.

What is Tokenization?

Tokenization is the process of splitting a text or a sentence into segments called tokens. Depending on the tokenization method used, these tokens can be individual words, phrases or characters. Tokenization is the first step of text preprocessing, and its output serves as input for subsequent steps such as text classification, lemmatization and part-of-speech tagging. It converts unstructured text into a structured form that can be processed further for tasks such as sentiment analysis, named entity recognition and translation.

Example of Tokenization

Consider the sentence: "I love natural language processing!"

After tokenization: ["I", "love", "natural", "language", "processing", "!"]

Each token here represents a word or punctuation mark, making it easier for algorithms to process and analyze the text.

Implementation of Tokenization Using the spaCy Library

Python
import spacy

# Create a blank English pipeline (tokenizer only),
# then tokenize the words of the sentence
nlp = spacy.blank("en")

doc = nlp("GeeksforGeeks is a one stop learning destination for geeks.")

for token in doc:
    print(token)

Output:

GeeksforGeeks
is
a
one
stop
learning
destination
for
geeks
.
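
Even with a blank pipeline, each token already exposes rule-based lexical attributes that the tokenizer computes on its own, without any trained model. Here is a minimal sketch (the example sentence is arbitrary):

Python
import spacy

nlp = spacy.blank("en")
doc = nlp("GeeksforGeeks was launched in 2009!")

# Attributes such as is_alpha, is_punct and like_num are
# rule-based, so no trained pipeline component is needed
for token in doc:
    print(token.text, token.is_alpha, token.is_punct, token.like_num)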

We can also extend what tokens can tell us by loading a pretrained pipeline with spacy.load(), which adds further processing components beyond the tokenizer:

Python
import spacy

nlp = spacy.load("en_core_web_sm")
print(nlp.pipe_names)

Output:

['tok2vec', 'tagger', 'parser', 'attribute_ruler', 'lemmatizer', 'ner']
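
Each of these components adds processing time. If only some of them are needed, spaCy lets us disable components at load time via the disable argument; which components you disable depends on your use case, so the following is just a sketch:

Python
import spacy

# Disable components we do not need; disabled components are
# skipped during processing and do not appear in pipe_names
nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])
print(nlp.pipe_names)
# e.g. ['tok2vec', 'tagger', 'attribute_ruler', 'lemmatizer']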

Here is an example of the additional functionality these pipeline components enable.

Python
import spacy

# Load a pretrained pipeline (adds components such as
# the tagger and lemmatizer)
nlp = spacy.load("en_core_web_sm")

# Initialise a Doc object with a sentence
doc = nlp("If you want to be an excellent programmer, "
          "be consistent to practice daily on GFG.")

# Use token attributes: part of speech and lemma
for token in doc:
    print(token, " | ",
          spacy.explain(token.pos_),
          " | ", token.lemma_)

Output:

If  |  subordinating conjunction  |  if
you  |  pronoun  |  you
want  |  verb  |  want
to  |  particle  |  to
be  |  auxiliary  |  be
an  |  determiner  |  an
excellent  |  adjective  |  excellent
programmer  |  noun  |  programmer
,  |  punctuation  |  ,
be  |  auxiliary  |  be
consistent  |  adjective  |  consistent
to  |  particle  |  to
practice  |  verb  |  practice
daily  |  adverb  |  daily
on  |  adposition  |  on
GFG  |  proper noun  |  GFG
.  |  punctuation  |  .

In the example above, we used part-of-speech (POS) tagging and lemmatization through spaCy's pipeline components. This let us obtain the POS tag for each word and reduce each token to its base form (its lemma). Without loading en_core_web_sm we would not have had access to this functionality: a blank pipeline only tokenizes, while the en_core_web_sm model supplies the trained components for POS tagging, lemmatization and named entity recognition that enable these capabilities.
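
The pipeline we loaded also includes the ner component listed earlier, so the same Doc object exposes named entities. A minimal sketch follows; the example sentence and the entities the model detects are illustrative and may vary by model version:

Python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Google was founded by Larry Page and Sergey Brin in 1998.")

# Named entities recognised by the 'ner' component
for ent in doc.ents:
    print(ent.text, " | ", ent.label_, " | ", spacy.explain(ent.label_))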

Read More:

  • Sentiment Analysis using VADER
  • Text Generation using Recurrent Long Short Term Memory Network
  • Text Preprocessing in Python
