Jessica Garson for TwitterDev

Originally published at developer.twitter.com

5 ways to convert a JSON object to a CSV

While working with APIs, you may need to convert a JSON object to a CSV. CSVs are a widely used, versatile format that can easily be imported into an Excel spreadsheet, a database, or data visualization software. Turning a JSON payload into a CSV is also a great way to explore an API for the first time, since you can see each field individually. This blog post walks you through five strategies for converting a JSON response generated from a request made using the recent search endpoint from v2 of the Twitter API.

What you need to get started

To get started, you will need to have an approved developer account, and have activated the new developer portal experience. Access to the Twitter API is available with active keys and tokens for a developer App. As a reminder, to create an App to use v2 of the Twitter API, you must first create a Project in the developer portal, and then you will be prompted to create your App. You will also need a bearer token from your App, which can be retrieved in the Twitter developer portal.

1. From the command line using csvkit

Using the command line to turn a JSON object into a CSV can come in handy if you want to explore the payload of an API for the first time. It allows you to quickly make a request and see the results in a tabular structure.

First, set an environment variable in your terminal to store your bearer token so that it can be used in your requests. Replace your-bearer-token with your own bearer token, and be sure to keep the single quotes around it.

export BEARER_TOKEN='your-bearer-token'

Now, install csvkit, a collection of tools that allows you to work with CSVs from the command line. You need to have Python and pip installed on your machine to complete this step.

pip3 install csvkit

Next, use curl to make an HTTP request and save the output as a JSON file.

curl 'https://api.twitter.com/2/tweets/search/recent?query=from:TwitterDev' -H "Authorization: Bearer $BEARER_TOKEN" > response.json
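
If you open response.json, the payload looks roughly like this (an illustrative sketch with made-up IDs and text; the exact fields can vary):

{
  "data": [
    { "id": "1100000000000000001", "text": "Example Tweet text" },
    { "id": "1100000000000000002", "text": "Another example Tweet" }
  ],
  "meta": { "newest_id": "1100000000000000002", "oldest_id": "1100000000000000001", "result_count": 2 }
}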

The JSON object returned from the Twitter API stores the Tweets under the data key, so pass data to in2csv using the -k flag and redirect the output to a CSV. This converts the data object of the payload, which contains the Tweet IDs and text.

in2csv -k data response.json > response_csvkit.csv
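
To sanity-check the result without leaving the terminal, csvlook (which ships with csvkit) can render the CSV as a readable table:

csvlook response_csvkit.csv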

2. Using the web

Using the web is a good method to get started for newer developers. It also works well if you need to create a CSV quickly. However, this method can fall short if you need to do additional data cleaning and manipulation.

After you set up your bearer token as an environment variable, as outlined in the last section, make a request using curl. This is the same request as the previous step, only without redirecting the output to a file.

curl 'https://api.twitter.com/2/tweets/search/recent?query=from:TwitterDev' -H "Authorization: Bearer $BEARER_TOKEN"

Copy and paste the JSON response into Eric Mill's JSON-to-CSV converter at konklone.io/json, which automatically converts your payload into a CSV.
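
If you're on macOS, an optional shortcut is to pipe the curl output straight to your clipboard with pbcopy instead of selecting it in the terminal (Linux users would need a different tool, such as xclip):

curl 'https://api.twitter.com/2/tweets/search/recent?query=from:TwitterDev' -H "Authorization: Bearer $BEARER_TOKEN" | pbcopy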

3. Using requests and pandas in Python

Using Python and pandas sets you up well to perform any additional data analysis, data cleaning, and manipulation, or to combine Twitter data with another data source.

First, pip install the packages requests and pandas, which you'll use to make your request and to work with the data from the JSON object. From here, import the packages you'll be using. Since os is part of the standard library, you do not need to install it separately.
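
If you don't have them yet, both packages can be installed in one go, mirroring the pip3 usage from the csvkit step:

pip3 install requests pandas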

import requests
import pandas as pd
import os

Next, use the same method outlined previously to set your bearer token as an environment variable. From here, format your bearer token into a dictionary to create the headers needed to authenticate to the Twitter API.

bearer_token = os.environ.get('BEARER_TOKEN')
headers = {"Authorization": "Bearer {}".format(bearer_token)}

Now, create a variable to save the URL you are looking to hit and make a request to the endpoint using the requests package. Make sure to call the .json() method at the end of your request so the JSON body is parsed into a Python dictionary.

url = "https://api.twitter.com/2/tweets/search/recent?query=from:TwitterDev"
response = requests.request("GET", url, headers=headers).json()
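
If you'd like a clearer failure when something goes wrong (for example, an expired bearer token), an optional variation is to check the status code before parsing; raise_for_status is part of requests and raises an HTTPError for 4xx and 5xx responses:

# optional variation: surface HTTP errors before parsing the JSON body
r = requests.get(url, headers=headers)
r.raise_for_status()
response = r.json()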

To create a CSV, create a data frame for the data object, which contains the ID and text of the Tweets, and send that data frame to a CSV.

df = pd.DataFrame(response['data'])
df.to_csv('response_python.csv')
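
By default, to_csv also writes the data frame's numeric index as the first column. If you'd rather not have that extra column in your file, an optional tweak is to pass index=False:

df.to_csv('response_python.csv', index=False)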

4. In R using httr

Similar to Python, R gives you the control to quickly and easily manipulate your data, making this a great method for data analysis, data cleaning, or combining datasets.

In the console of your R environment, you first want to set your environment variable. I use RStudio as my main environment for R.

Sys.setenv(BEARER_TOKEN = "your-bearer-token")

To make HTTP requests, use the package httr. After installing this package, make sure you add a require statement as follows.

require(httr)
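
If the require call fails because httr isn't installed yet, a one-time install from the R console takes care of it (install.packages is base R, not something specific to this tutorial):

install.packages("httr")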

Next, grab the bearer token you set and pass it into your headers for authentication. You also need to define the query you are adding to your request as a parameter to pass in when you make the request.

bearer_token <- Sys.getenv("BEARER_TOKEN")
headers <- c(`Authorization` = sprintf('Bearer %s', bearer_token))
params <- list(`query` = 'from:TwitterDev')

You are now ready to make your request, passing your headers and query parameters into it.

response <-
  httr::GET(url = 'https://api.twitter.com/2/tweets/search/recent',
            httr::add_headers(.headers = headers),
            query = params)
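
Optionally, you can confirm the request succeeded before parsing it; stop_for_status is part of httr and raises an error for any non-2xx response:

httr::stop_for_status(response)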

At this point, create a data frame for the results from recent search and send the data frame to a CSV.

recent_search_body <-
  content(
    response,
    as = 'parsed',
    type = 'application/json',
    simplifyDataFrame = TRUE
  )
write.csv(recent_search_body, 'response_r.csv')
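
The parsed body typically contains a meta element alongside data. If you only want the Tweet IDs and text in your file, an optional variation is to write just the data element and drop R's row names:

write.csv(recent_search_body$data, 'response_r.csv', row.names = FALSE)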

5. In Node.js using axios and json2csv

Using Node.js gives you the control to work with your data in a robust way. Similar to csvkit, json2csv can also be used as a command-line module, giving you optionality. This method works particularly well if you are looking to build the backend of a website that offers the option of downloading a dataset as a CSV.

To achieve this same task in JavaScript, use axios to connect to the Twitter API, json2csv to convert your JSON payload to a CSV, and fs to write the file. First, install the required packages using npm and require them as follows. Similar to os in Python, fs is part of the standard library, so you do not need to install it first.
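
For example, from your project's directory (assuming you've already run npm init or have a package.json in place), the install looks like this:

npm install axios json2csv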

const axios = require('axios');
const { parse } = require('json2csv');
const fs = require('fs');

After you set your environment variables on the command line, as you did with csvkit, create a variable for your bearer token and configure your request. Once you have the right parameters, pass your config variable to axios so you are ready to make an HTTP request to the right endpoint.

const bearer_token = process.env.BEARER_TOKEN;
const config = {
  method: 'get',
  url: 'https://api.twitter.com/2/tweets/search/recent?query=from:TwitterDev',
  headers: {
    'Authorization': `Bearer ${bearer_token}`
  }
};
axios(config)

Now, chain a .then handler onto the promise returned by axios(config) in the previous snippet. It saves the response body in a variable called res, sets the fields you want for the CSV you are writing, and parses the data object of the response into CSV format. From there, use fs to write the CSV file. Print out the csv object and log any errors that come up to help you troubleshoot if needed.

.then(function (response) {
  const res = response.data;
  const fields = ['id', 'text'];
  const opts = { fields };
  const csv = parse(res.data, opts);
  fs.writeFile('response_nodejs.csv', csv, { flag: 'a+' }, err => {
    if (err) console.error(err);
  });
  console.log(csv);
})
.catch(function (error) {
  console.log(error);
});
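
Assuming you've put the snippets above together in a file called index.js (a placeholder name, use whatever you named yours), run the script and response_nodejs.csv should appear in the same directory:

node index.js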

Conclusion

Please reply in the comments with your favorite method for converting JSON to CSV, if it isn't included. The sample code provided above can be found on our GitHub repository. Be sure to let us know on the forums or at @TwitterDev if you run into any troubles along the way or if this list inspires you to create anything new.
