Transfer Learning with Fine-Tuning in NLP

Last Updated : 20 May, 2025

Natural Language Processing (NLP) has been transformed by models like BERT, which understand language context deeply by looking at the words both before and after a target word. Because BERT is pre-trained on vast amounts of general text, adapting it to a specific task such as sentiment analysis requires fine-tuning. This process customizes BERT's knowledge to perform well on domain-specific data while saving time and computational effort compared to training a model from scratch.

Using Hugging Face's transformers library, we will fine-tune a pre-trained BERT model for binary sentiment classification using transfer learning.

Why Fine-Tune a Model Like BERT?

Fine-tuning reuses BERT's pre-trained knowledge and adapts it to a target task by retraining on a smaller, labeled dataset (a minimal sketch contrasting this with a frozen-encoder setup follows the list below). This process:

  • Saves computational resources compared to training a model from scratch.
  • Improves model performance by making it more task-specific.
  • Enhances the model’s ability to generalize to unseen data.
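
As a rough illustration of that trade-off, the sketch below contrasts the full fine-tuning used in this tutorial with a lighter "feature extraction" setup in which BERT's encoder is frozen and only the classification head is trained. It is an optional aside, not part of the pipeline built below, and the variable name demo_model is chosen here purely for illustration.

Python
from transformers import BertForSequenceClassification

# Same model class used later in this tutorial: BERT with a 2-label classification head
demo_model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Option A: full fine-tuning -- leave every parameter trainable (the approach this tutorial takes).

# Option B: feature extraction -- freeze the pre-trained encoder and train only the classifier head.
# This is cheaper but usually somewhat less accurate than full fine-tuning.
for param in demo_model.bert.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in demo_model.parameters() if p.requires_grad)
print(f"Trainable parameters with a frozen encoder: {trainable}")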

Fine-Tuning a BERT Model for Sentiment Analysis Using Transfer Learning

1. Installing and Importing Required Libraries

First, we install Hugging Face's transformers library, which provides the pre-trained models and tokenizers used throughout this tutorial.

!pip install transformers

We are importing PyTorch for tensor operations and model training.

  • DataLoader and TensorDataset help load and batch data efficiently during training.
  • torch.nn.functional provides functions like softmax for calculating prediction probabilities.
  • AdamW is an optimizer well suited for transformer models.
  • BertTokenizer converts text into tokens that BERT can understand.
  • BertForSequenceClassification loads the BERT model adapted for classification tasks.
Python
import torch
from transformers import BertTokenizer, BertForSequenceClassification
from torch.optim import AdamW
from torch.utils.data import DataLoader, TensorDataset
import torch.nn.functional as F

2. Loading the Pre-Trained BERT Model and Tokenizer

We load the bert-base-uncased model and its tokenizer. The tokenizer converts raw text into input IDs and attention masks, which BERT requires.

  • BertTokenizer.from_pretrained(): Prepares text for BERT input.
  • BertForSequenceClassification.from_pretrained(): Loads BERT configured for classification with two output labels (positive and negative).
Python
pretrained_model_name = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(pretrained_model_name)
model = BertForSequenceClassification.from_pretrained(pretrained_model_name,
                                                      num_labels=2)

Move the model to GPU if available:

Python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

3. Preparing the Training Dataset

We create a small labeled dataset for sentiment analysis. Here:

  • 1 represents positive sentiment
  • 0 represents negative sentiment.
Python
train_texts = [
    "I love this product, it's amazing!",  # Positive
    "Absolutely fantastic experience, will buy again!",  # Positive
    "Worst purchase ever. Completely useless.",  # Negative
    "I hate this item, it doesn't work!",  # Negative
    "The quality is top-notch, highly recommend!",  # Positive
    "Terrible service, never coming back.",  # Negative
    "This is the best thing I've ever bought!",  # Positive
    "Very disappointing. Waste of money.",  # Negative
    "Superb! Exceeded all my expectations.",  # Positive
    "Not worth the price at all.",  # Negative
]
train_labels = torch.tensor([1, 1, 0, 0, 1, 0, 1, 0, 1, 0]).to(device)

4. Tokenizing the Dataset

The tokenizer converts the texts into equal-length token sequences, adding padding and truncating as needed.

  • padding=True: Ensures all input sequences have the same length.
  • truncation=True: Truncates sequences longer than max_length=128 tokens.
Python
encoded_train = tokenizer(train_texts,
                          padding=True,
                          truncation=True,
                          max_length=128,
                          return_tensors='pt')
train_input_ids = encoded_train['input_ids'].to(device)
train_attention_masks = encoded_train['attention_mask'].to(device)
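
To see what the tokenizer produced, you can inspect the tensor shapes and decode one row back into tokens. This is just an optional sanity check on the variables defined above; the exact sequence length depends on the longest sentence in the batch.

Python
# Both tensors have shape (number_of_sentences, longest_sequence_in_batch)
print(train_input_ids.shape)
print(train_attention_masks.shape)

# Decode the first sentence back to tokens; the [CLS], [SEP] and [PAD] markers become visible
print(tokenizer.convert_ids_to_tokens(train_input_ids[0].tolist()))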

5. Creating a DataLoader for Efficient Training

Data is wrapped in a TensorDataset and loaded into a DataLoader to enable mini-batch training, which improves training efficiency and stability.

  • TensorDataset(): Combines input IDs, attention masks and labels into a dataset.
  • DataLoader(): Loads data in mini-batches to improve efficiency.
Python
train_dataset = TensorDataset(train_input_ids, train_attention_masks, train_labels)
train_loader = DataLoader(train_dataset, batch_size=2, shuffle=True)
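
If you want to confirm that batching works as expected, you can pull a single mini-batch out of the loader. This is purely an optional check; with batch_size=2, each batch holds two padded sequences, their attention masks and their labels.

Python
# Peek at one shuffled mini-batch without affecting the loader used for training
batch_input_ids, batch_attention_masks, batch_labels = next(iter(train_loader))
print(batch_input_ids.shape)        # (2, sequence_length)
print(batch_attention_masks.shape)  # (2, sequence_length)
print(batch_labels)                 # e.g. tensor([1, 0]), depending on the shuffle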

6. Defining the Optimizer

We use the AdamW optimizer, which works well with transformer models.

Python
optimizer = AdamW(model.parameters(), lr=2e-5) 
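
On larger datasets it is common to pair AdamW with a learning-rate schedule that warms up and then decays linearly. The optional snippet below uses get_linear_schedule_with_warmup from transformers; the epochs value mirrors the training loop in the next step, and if you adopt it you would call scheduler.step() right after optimizer.step() inside that loop.

Python
from transformers import get_linear_schedule_with_warmup

epochs = 5  # same value as in the training loop below
num_training_steps = epochs * len(train_loader)

# Warm up over roughly the first 10% of steps, then decay the learning rate linearly to zero
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=num_training_steps // 10,
                                            num_training_steps=num_training_steps)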

7. Training the Model

The training loop iterates over batches, computing loss and gradients, updating model weights and tracking accuracy.

  • optimizer.zero_grad(): Clears gradients before each batch.
  • model(...): Runs forward pass and calculates loss.
  • loss.backward(): Backpropagates the error.
  • optimizer.step(): Updates model weights based on gradients.
  • torch.argmax(F.softmax(...)): Determines predicted class.
Python
epochs = 5
model.train()

for epoch in range(epochs):
    total_loss = 0
    correct = 0
    total = 0

    for batch in train_loader:
        batch_input_ids, batch_attention_masks, batch_labels = batch

        optimizer.zero_grad()
        outputs = model(input_ids=batch_input_ids,
                        attention_mask=batch_attention_masks,
                        labels=batch_labels)

        loss = outputs.loss
        logits = outputs.logits

        total_loss += loss.item()
        loss.backward()
        optimizer.step()

        preds = torch.argmax(F.softmax(logits, dim=1), dim=1)
        correct += (preds == batch_labels).sum().item()
        total += batch_labels.size(0)

    avg_loss = total_loss / len(train_loader)
    accuracy = correct / total * 100
    print(f"Epoch {epoch+1} - Loss: {avg_loss:.4f}, Accuracy: {accuracy:.2f}%")

Output:

[Training output: loss and accuracy printed for each of the 5 epochs]

8. Saving and Loading the Fine-Tuned Model

We can save the model using the torch.save() function. The model's state dictionary is saved and can be reloaded later for inference or further training.

Saving the model:

Python
torch.save(model.state_dict(), "fine_tuned_bert.pth") 

Loading the fine-tuned model:

Python
model.load_state_dict(torch.load("fine_tuned_bert.pth"))
model.to(device)
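
Saving the state dictionary works, but reloading it requires rebuilding the model class first. An alternative, not used in the rest of this tutorial, is Hugging Face's save_pretrained / from_pretrained pair, which stores the config and weights together and can also save the tokenizer; the directory name below is just an example.

Python
# Save weights, config and tokenizer to a directory (the name is an arbitrary example)
model.save_pretrained("fine_tuned_bert")
tokenizer.save_pretrained("fine_tuned_bert")

# Reload both later straight from that directory
model = BertForSequenceClassification.from_pretrained("fine_tuned_bert").to(device)
tokenizer = BertTokenizer.from_pretrained("fine_tuned_bert")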

9. Creating Test Data for Evaluation

We prepare a test dataset and run the fine-tuned model to measure accuracy and make predictions on new text samples.

  • torch.tensor([...]).to(device) converts the label list into a tensor and moves it to the computing device like CPU or GPU.
  • tokenizer(test_texts, padding=True, truncation=True, max_length=128, return_tensors='pt') tokenizes the test texts.
  • encoded_test['input_ids'] extracts token IDs representing each word or subword.
  • encoded_test['attention_mask'] extracts attention masks indicating which tokens should be attended to (1) and which are padding (0).
Python
test_texts = [
    "This is a great product, I love it!",  # Positive
    "Horrible experience, I want a refund!",  # Negative
    "Highly recommended! Five stars.",  # Positive
    "Not worth it. I regret buying this.",  # Negative
]
test_labels = torch.tensor([1, 0, 1, 0]).to(device)

encoded_test = tokenizer(test_texts,
                         padding=True,
                         truncation=True,
                         max_length=128,
                         return_tensors='pt')
test_input_ids = encoded_test['input_ids'].to(device)
test_attention_masks = encoded_test['attention_mask'].to(device)

10. Making Predictions and Evaluating Performance

We set the model to evaluation mode using model.eval() to disable training-specific layers like dropout. Accuracy is calculated by comparing predicted labels to true labels and computing the percentage correct. Each test text and its predicted label are printed in a loop for review.

  • torch.no_grad() disables gradient calculations for faster and more memory-efficient inference.
  • The model processes inputs with model(input_ids=..., attention_mask=...) to produce output logits.
  • torch.argmax(outputs.logits, dim=1) selects the class with the highest score as the prediction.
Python
model.eval()
with torch.no_grad():
    outputs = model(input_ids=test_input_ids,
                    attention_mask=test_attention_masks)
    predicted_labels = torch.argmax(outputs.logits, dim=1)

test_accuracy = (predicted_labels == test_labels).sum().item() / len(test_labels) * 100
print(f"\nTest Accuracy: {test_accuracy:.2f}%")

for text, label in zip(test_texts, predicted_labels):
    print(f'Text: {text}\nPredicted Label: {label.item()}\n')

Output:

[Prediction output: test accuracy followed by each test text and its predicted label]

Here we can see that the fine-tuned model classifies the test sentences correctly.
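
To classify new text outside the prepared test set, the same tokenize-and-predict steps can be wrapped in a small helper. This function is not part of the original walkthrough; it simply reuses the tokenizer, model and device defined above, and the example sentence is made up.

Python
def predict_sentiment(text):
    """Return 1 (positive) or 0 (negative) for a single input string."""
    encoded = tokenizer(text,
                        padding=True,
                        truncation=True,
                        max_length=128,
                        return_tensors='pt').to(device)
    model.eval()
    with torch.no_grad():
        logits = model(**encoded).logits
    return torch.argmax(logits, dim=1).item()

print(predict_sentiment("The packaging was damaged but the product itself is great."))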

You can download the source code from here: Transfer Learning with Fine-Tuning in NLP.

