Save API data into CSV format using Python
Last Updated : 28 Apr, 2025
In this article, we will see how to fetch data from an API and save it as a CSV file, which we can then use for tasks such as data analysis or training a machine learning model. Because the data comes straight from the API, the CSV always reflects the latest records, so a model trained on it works with up-to-date data. Here we use the requests library in Python to fetch the data from our API.
Fetching Data from an API using the Requests Library
Step 1: Importing necessary libraries
Here we require two libraries: requests to make the API calls and Pandas to build the DataFrame. Since this example runs in Google Colab, we also import files so that we can download the finished CSV.
Python3

import pandas as pd
import requests
from google.colab import files
Step 2: Call the API using the requests library
In this step we call the TMDB API using the requests library and receive a response from it. The line of code below makes a GET request to the TMDB API endpoint for top-rated movies. The response is a JSON object containing information about the top-rated movies, such as the movie title, overview, release date, popularity, vote average, and vote count. The response object also carries other information, such as the status code and the headers of the response.
Python3

response = requests.get(
    'https://api.themoviedb.org/3/movie/top_rated?api_key=aaa7de53dcab3a19afed86880f364e54&language=en-US&page=1')
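As an aside, requests can build the query string for us through its params argument, which keeps the API key and page number out of a long hard-coded URL. The sketch below assumes the same top_rated endpoint; the API_KEY value is a placeholder that you would replace with your own key.

Python3

# API_KEY is a placeholder; substitute your own TMDB key
API_KEY = 'YOUR_TMDB_API_KEY'

# requests encodes the query string from the params dictionary
response = requests.get(
    'https://api.themoviedb.org/3/movie/top_rated',
    params={'api_key': API_KEY, 'language': 'en-US', 'page': 1})

print(response.status_code)          # 200 on success
print(list(response.json().keys()))  # e.g. ['page', 'results', ...]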
Step 3: Creating a new DataFrame
Here we create a new, empty DataFrame using Pandas, in which we will store the results fetched from the API.
Python3

# Creating a DataFrame
df = pd.DataFrame()
Step 4: Putting the results fetched from our API into the DataFrame
In this step we use the requests library to make GET requests to The Movie Database (TMDB) API and retrieve the top-rated movies. The code first checks whether the initial request returned status code 200 (a successful response). If it did, it enters a loop that runs 399 times, fetching pages 1 through 399 of the top-rated movies. In each iteration it requests the next page from the API and puts the relevant fields (movie id, title, overview, release date, popularity, vote average, and vote count) into a DataFrame called "temp_df", which is then concatenated onto the main DataFrame "df" with pd.concat() (the DataFrame.append() method used in older versions of pandas was removed in pandas 2.0). If the initial request returns any status code other than 200, an error message with the status code is printed.
Python3

if response.status_code == 200:
    for i in range(1, 400):
        response = requests.get(
            'https://api.themoviedb.org/3/movie/top_rated?api_key=aaa7de53dcab3a19afed86880f364e54&language=en-US&page={}'.format(i))
        temp_df = pd.DataFrame(response.json()['results'])[
            ['id', 'title', 'overview', 'release_date',
             'popularity', 'vote_average', 'vote_count']]
        df = pd.concat([df, temp_df], ignore_index=True)
else:
    print('Error', response.status_code)
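Growing a DataFrame inside a loop copies the accumulated data on every iteration. A common alternative, sketched here under the same assumptions (the same endpoint and the example API key from above), is to collect each page's DataFrame in a Python list and concatenate once at the end:

Python3

pages = []
for i in range(1, 400):
    response = requests.get(
        'https://api.themoviedb.org/3/movie/top_rated?api_key=aaa7de53dcab3a19afed86880f364e54&language=en-US&page={}'.format(i))
    if response.status_code != 200:
        print('Error', response.status_code)
        break
    pages.append(pd.DataFrame(response.json()['results'])[
        ['id', 'title', 'overview', 'release_date',
         'popularity', 'vote_average', 'vote_count']])

# Concatenate every page in a single step
df = pd.concat(pages, ignore_index=True)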
Step 5: Printing the first five rows of our DataFrame
The code below prints the shape of our dataset, i.e. how many rows and columns are present in the DataFrame, and then prints its first five rows.
Python3

# Shape of the DataFrame (rows, columns)
print(df.shape)

# First five rows
print(df.head(5))
Output:
Step 6: Converting our DataFrame into a CSV file and storing it
We save the DataFrame df to a CSV file named 'movie_example1.csv' and then download it to our computer.
Python3

# Save the DataFrame as a CSV file
df.to_csv('movie_example1.csv', index=False)

# Download the CSV file to your local machine
files.download('movie_example1.csv')
Output:
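Note that files.download() only works inside Google Colab; in a local Python environment, df.to_csv() alone writes the file to disk. A quick way to check that the CSV was written correctly is to read it back with pandas:

Python3

# Read the saved file back to verify its contents
check_df = pd.read_csv('movie_example1.csv')
print(check_df.shape)
print(check_df.head())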
Complete Code
Python3

from google.colab import files
import pandas as pd
import requests

response = requests.get(
    'https://api.themoviedb.org/3/movie/top_rated?api_key=aaa7de53dcab3a19afed86880f364e54&language=en-US&page=1')

# Creating a DataFrame
df = pd.DataFrame()

if response.status_code == 200:
    for i in range(1, 400):
        response = requests.get(
            'https://api.themoviedb.org/3/movie/top_rated?api_key=aaa7de53dcab3a19afed86880f364e54&language=en-US&page={}'.format(i))
        temp_df = pd.DataFrame(response.json()['results'])[
            ['id', 'title', 'overview', 'release_date',
             'popularity', 'vote_average', 'vote_count']]
        df = pd.concat([df, temp_df], ignore_index=True)
else:
    print('Error', response.status_code)

print(df.shape)
print(df.head(5))

# Save the DataFrame as a CSV file
df.to_csv('movie_example1.csv', index=False)

# Download the CSV file to your local machine
files.download('movie_example1.csv')
Output:
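Real requests can also fail with timeouts or connection errors rather than a clean non-200 status code. The following sketch shows one way to fetch a single page defensively using only standard requests features; the fetch_page helper is illustrative and not part of the original example.

Python3

import requests


def fetch_page(page):
    """Fetch one page of top-rated movies; return None on any failure."""
    url = ('https://api.themoviedb.org/3/movie/top_rated'
           '?api_key=aaa7de53dcab3a19afed86880f364e54'
           '&language=en-US&page={}'.format(page))
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()  # raises HTTPError for 4xx/5xx responses
        return resp.json()['results']
    except requests.exceptions.RequestException as exc:
        print('Request failed for page', page, ':', exc)
        return None


results = fetch_page(1)
if results:
    print(len(results), 'movies fetched from page 1')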
Fetching Data from an API using the urllib Library
This version imports the pandas, urllib.request, and json libraries and initialises an empty DataFrame called df. A for loop walks through pages 1 to 399 of the TMDb API's top-rated movies endpoint. On each iteration, the code builds a URL containing the API key, the language, and the page number, sends a GET request with urllib.request.urlopen(), and parses the response into a dictionary called data using the json library. A temporary DataFrame temp_df is then created from the 'results' key of that dictionary, keeping the columns 'id', 'title', 'overview', 'release_date', 'popularity', 'vote_average', and 'vote_count', and is concatenated onto the final DataFrame df with pd.concat(). After the loop finishes, the code prints the shape of df and its first five rows, saves the DataFrame as a CSV file, and uses files.download() to download the CSV file to the local machine.
Note: The API key used in this code is an example and might not work. To use this code, you will need to obtain a valid API key from TMDb and use that in the URL.
Python3

# Importing required libraries
from google.colab import files
import pandas as pd
import urllib.request
import json

# Creating an empty DataFrame to store movie data
df = pd.DataFrame()

# Looping through pages of movie data
for i in range(1, 400):

    # Constructing the API url with the page number
    url = ('https://api.themoviedb.org/3/movie/top_rated'
           '?api_key=aaa7de53dcab3a19afed86880f364e54&language=en-US&page={}'.format(i))

    # Making a request to the API
    response = urllib.request.urlopen(url)

    # Loading the API response into a dictionary
    data = json.loads(response.read().decode())

    # Creating a DataFrame from the 'results' key in the API response
    temp_df = pd.DataFrame(data['results'])[
        ['id', 'title', 'overview', 'release_date',
         'popularity', 'vote_average', 'vote_count']]

    # Appending the temporary DataFrame to the main DataFrame
    df = pd.concat([df, temp_df], ignore_index=True)

# Printing the shape of the final DataFrame
print(df.shape)

# Printing the first five rows of the final DataFrame
print(df.head(5))

# Saving the final DataFrame as a CSV file
df.to_csv('movie_example2.csv', index=False)

# Downloading the final CSV file to the local machine
files.download('movie_example2.csv')
Output:
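Unlike requests, urllib raises an exception for HTTP error statuses instead of returning a response object you can inspect, so it is worth knowing how to catch those errors. Below is a brief sketch for a single page, under the same assumptions (endpoint and example API key) as the code above.

Python3

import urllib.request
import urllib.error
import json

url = ('https://api.themoviedb.org/3/movie/top_rated'
       '?api_key=aaa7de53dcab3a19afed86880f364e54&language=en-US&page=1')

try:
    response = urllib.request.urlopen(url, timeout=10)
    data = json.loads(response.read().decode())
    print(len(data['results']), 'movies on page 1')
except urllib.error.HTTPError as e:
    print('HTTP error:', e.code)  # e.g. 401 for an invalid API key
except urllib.error.URLError as e:
    print('Could not reach the server:', e.reason)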