
Classification on Imbalanced data using Tensorflow

Last Updated : 07 Apr, 2025

In the modern days of machine learning, imbalanced datasets are like a curse that degrades the overall model performance in classification tasks. In this article, we will implement a Deep learning model using TensorFlow for classification on a highly imbalanced dataset.


What is Imbalanced Data?

Most real-world datasets are imbalanced. A dataset is called imbalanced when the class distribution of its target variable is highly skewed, i.e. one class (the minority class) occurs far less often than the majority class. Such skew degrades the learning process and biases the model's predictions toward the majority class, leading to suboptimal results. Consider, for example, a medical diagnosis model for a rare disease, where positive cases are sparse compared to negatives: the skewed distribution can compromise the model's ability to generalize and make accurate predictions. It is therefore important to handle imbalanced datasets carefully to achieve good model performance.
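To make the danger concrete, here is a small sketch (with hypothetical labels, not the dataset used below): a degenerate classifier that always predicts the majority class still scores high accuracy, while its recall on the minority class is zero.

```python
import numpy as np

# Hypothetical imbalanced labels: roughly 10% positives (minority class)
rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.1).astype(int)

# A degenerate "model" that always predicts the majority class (0)
y_pred = np.zeros_like(y_true)

accuracy = (y_true == y_pred).mean()
recall = y_pred[y_true == 1].mean()  # fraction of positives actually caught

print(f"Accuracy: {accuracy:.2f}")  # high, despite the model being useless
print(f"Recall:   {recall:.2f}")    # 0.00 - no minority case is ever found
```

This is exactly why the sections below go beyond plain accuracy when evaluating the model.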

Step-by-Step Code Implementations

Importing required libraries

First, we import the required Python libraries: NumPy, Pandas, Matplotlib, Seaborn, scikit-learn and TensorFlow.

Python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler, LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

Dataset loading

Before loading the dataset, we fix the random seeds so that results are reproducible. We then load the dataset and perform simple outlier removal so that the final results are not biased by extreme values.

Python
# Set random seeds for reproducibility
seed = 42
np.random.seed(seed)
tf.random.set_seed(seed)

# Disable the GPU so results stay deterministic across runs
tf.config.experimental.set_visible_devices([], 'GPU')

# Load the dataset
# https://www.kaggle.com/datasets/uciml/breast-cancer-wisconsin-data/data?select=data.csv
df = pd.read_csv("data.csv")
df.drop(columns=['id', 'Unnamed: 32'], inplace=True)

# Outlier removal: keep rows within 3 standard deviations of the mean
numerical_columns = df.select_dtypes(exclude=object).columns.tolist()
for col in numerical_columns:
    upper_limit = df[col].mean() + 3 * df[col].std()
    lower_limit = df[col].mean() - 3 * df[col].std()
    df = df[(df[col] <= upper_limit) & (df[col] >= lower_limit)]

Data pre-processing and splitting

Now we scale the numerical columns of the dataset with MinMaxScaler and encode the target column with LabelEncoder. The dataset is then split into training and testing sets (80:20).

Python
# Scale the numerical columns with MinMaxScaler
scaler = MinMaxScaler()
df[numerical_columns] = scaler.fit_transform(df[numerical_columns])

# Encode the target column with LabelEncoder (B -> 0, M -> 1)
encoder = LabelEncoder()
df['diagnosis'] = encoder.fit_transform(df['diagnosis'])

# Train/test split (80:20)
X = df.drop(columns=['diagnosis'])
y = df['diagnosis']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Check for imbalance

Since we are classifying an imbalanced dataset, we first need to see how the target column is distributed. This quick exploratory analysis reveals the degree of imbalance in the dataset.

Python
# Visualize the distribution of the target variable as a pie chart
plt.figure(figsize=(6, 6))
df['diagnosis'].value_counts().plot(kind='pie', autopct='%1.1f%%',
                                    colors=['skyblue', 'lightcoral'])
plt.title('Distribution of Diagnosis (Malignant vs Benign)')
plt.ylabel('')
plt.show()

Output:

Distribution of the target column


The pie chart confirms that the dataset is highly imbalanced: the majority 'Benign' class occurs roughly three times as often as the minority 'Malignant' class. With that established, we can proceed.

Class weights calculation

Because we are classifying an imbalanced dataset, we compute class weights for the majority and minority classes. These weights are stored in a dictionary and fed directly to the model during training.

Python
# Calculate class weights
total = len(y_train)
pos = np.sum(y_train == 1)
neg = np.sum(y_train == 0)

weight_for_0 = (1 / neg) * (total / 2.0)
weight_for_1 = (1 / pos) * (total / 2.0)

class_weight = {0: weight_for_0, 1: weight_for_1}

print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))

Output:

Weight for class 0: 0.68
Weight for class 1: 1.87

Defining the Deep Learning model

Now we define a simple three-layer Deep Learning model. We use 'binary_crossentropy' as the loss function, which is standard for binary classification, and 'adam' as the optimizer.

Python
model = Sequential([
    Dense(16, activation='relu', input_dim=30),
    Dense(8, activation='relu'),
    Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()

Output:

Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 dense (Dense)               (None, 16)                496
 dense_1 (Dense)             (None, 8)                 136
 dense_2 (Dense)             (None, 1)                 9
=================================================================
Total params: 641 (2.50 KB)
Trainable params: 641 (2.50 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
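As a sanity check, the parameter counts in the summary can be reproduced by hand: a Dense layer has inputs × units weights plus one bias per unit.

```python
# Dense layer parameters = inputs * units (weights) + units (biases)
layer_shapes = [(30, 16), (16, 8), (8, 1)]  # (inputs, units) per layer

params = [n_in * n_units + n_units for n_in, n_units in layer_shapes]
print(params)       # [496, 136, 9]
print(sum(params))  # 641
```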

Model training

We are now set up for model training and will train for 80 epochs (training for more epochs may improve the results further). We also pass the class_weight dictionary computed earlier to model.fit. The weights follow the formula C_w = N / (C_n * C_s), where C_w is the class weight, N is the total number of samples, C_n is the number of classes and C_s is the number of samples in the corresponding class.
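As a quick numerical check of this formula, we can recompute the weights for hypothetical class counts (the exact counts of the real split are not printed in this article, so the numbers below are illustrative, chosen only to mirror the weights shown earlier):

```python
import numpy as np

# Hypothetical training labels: 200 negatives and 73 positives
y_train_demo = np.array([0] * 200 + [1] * 73)

n_total = len(y_train_demo)         # N
n_classes = 2                       # C_n
counts = np.bincount(y_train_demo)  # C_s for each class

# C_w = N / (C_n * C_s), the same as (1 / C_s) * (total / 2.0) used above
for c in range(n_classes):
    w = n_total / (n_classes * counts[c])
    print(f"Weight for class {c}: {w:.2f}")
```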

Python
history = model.fit(X_train, y_train, epochs=80, validation_data=(X_test, y_test), class_weight=class_weight) 

Output:

Epoch 1/80
11/11 [==============================] - 1s 25ms/step - loss: 0.6723 - accuracy: 0.6686 - val_loss: 0.6508 - val_accuracy: 0.7442
.......................................
.......................................
Epoch 80/80
11/11 [==============================] - 0s 5ms/step - loss: 0.0729 - accuracy: 0.9677 - val_loss: 0.1415 - val_accuracy: 0.9419

Since the full training log is long, only the first and last epochs are shown.

Visualizing the training process

To better understand how the model learns and progresses at each epoch, we plot the loss and accuracy curves.

Python
# Plot Loss vs. Accuracy
plt.figure(figsize=(10, 4))

# Plot training vs. validation loss
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training vs. Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

# Plot training vs. validation accuracy
plt.subplot(1, 2, 2)
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Training vs. Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.tight_layout()
plt.show()

Output:

Loss vs. Accuracy curve

The curves show that both training and validation metrics keep improving as training proceeds, which suggests that training for more epochs could yield further gains.

Setting the best bias (decision threshold)

To convert the model's sigmoid outputs into class labels, we need to set a bias factor: the decision threshold above which a prediction is assigned to the positive class. We could set it by assumption, but it is better to tune it. Below we pick the value that maximizes the F1-score and use it for the rest of the evaluation.

Python
y_pred = model.predict(X_test)

# Candidate bias (threshold) values: 0.40 to 0.85 in steps of 0.05
biasing = np.arange(0.4, 0.9, 0.05)
f1_scores = []

# Pick the bias that gives the best F1-score
for bias in biasing:
    y_pred_binary = (y_pred > bias).astype(int)
    f1 = f1_score(y_test, y_pred_binary)
    f1_scores.append(f1)

best_bias = biasing[np.argmax(f1_scores)]
# Re-binarize the predictions with the best bias for evaluation
y_pred_binary = (y_pred > best_bias).astype(int)
print("Best Bias:", best_bias)

Output:

 3/3 [==============================] - 0s 3ms/step
Best Bias: 0.4

Model Evaluation

For binary classification on an imbalanced dataset, accuracy alone is not a sufficient evaluation metric. Alongside accuracy, we evaluate the model in terms of precision, recall and F1-score, using the predictions binarized at the tuned bias.

Python
from sklearn.metrics import classification_report

print(classification_report(y_test, y_pred_binary))

Output:

              precision    recall  f1-score   support

           0       0.97      0.92      0.94        64
           1       0.80      0.91      0.85        22

    accuracy                           0.92        86
   macro avg       0.88      0.92      0.90        86
weighted avg       0.92      0.92      0.92        86
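The individual metric functions imported at the start (accuracy_score, precision_score, recall_score, f1_score) produce the same kinds of numbers; here is a small sketch on hypothetical arrays, since the real values depend on the trained model:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical ground truth and thresholded predictions
y_test = np.array([0, 0, 0, 0, 1, 1, 1, 0])
y_pred_binary = np.array([0, 0, 0, 1, 1, 1, 0, 0])

print("Accuracy :", accuracy_score(y_test, y_pred_binary))   # 0.75
print("Precision:", precision_score(y_test, y_pred_binary))  # TP / (TP + FP) = 2/3
print("Recall   :", recall_score(y_test, y_pred_binary))     # TP / (TP + FN) = 2/3
print("F1-Score :", f1_score(y_test, y_pred_binary))
```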

Confusion matrix

Finally, let us visualize the confusion matrix. It helps us compare the predictions against the actual labels.

Python
from sklearn.metrics import confusion_matrix

# Create the confusion matrix
cm = confusion_matrix(y_test, y_pred_binary)

# Plot the confusion matrix as a heatmap
plt.figure(figsize=(4, 3))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=['Benign', 'Malignant'],
            yticklabels=['Benign', 'Malignant'])
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()

Output:

Confusion matrix of predictions vs. actual labels

Conclusion

Classification on imbalanced data is a crucial task, since many real-world datasets are imbalanced by nature. By combining class weighting with bias (threshold) tuning, our model reached about 92% test accuracy with a macro-averaged F1-score of 0.90, which indicates that it performs well on both the majority and the minority class.


Author: susmit_sekhar_bhakta