Darian Vance

Posted on • Originally published at wp.me

Solved: Exporting Linear Issues to CSV for Custom Reporting

🚀 Executive Summary

TL;DR: This guide provides a Python script to automate the tedious and error-prone process of manually exporting Linear issues for custom reporting. By leveraging the Linear GraphQL API, users can programmatically fetch issue data and output it into a clean CSV file, saving significant time and improving data accuracy.

🎯 Key Takeaways

  • The solution utilizes Python 3, the requests library for Linear GraphQL API interaction, python-dotenv for secure credential management, and the csv module for file output.
  • A GraphQL query is constructed to fetch specific issue fields (ID, title, state, assignee, labels, createdAt) for a given Linear Team ID, with a config.env file securing the API key and team ID.
  • The script can be automated using cron jobs for scheduled exports, but users must address common pitfalls like credential mismatches, GraphQL syntax errors, and critically, implementing pagination for datasets exceeding the initial 100-issue limit.

Exporting Linear Issues to CSV for Custom Reporting

Hey there, Darian here. Let’s talk about reporting. For a long time, my Monday mornings involved manually pulling data from Linear into a spreadsheet for our weekly engineering sync. I’d click through pages of issues, copy-pasting titles and statuses. It was tedious, error-prone, and easily burned an hour of valuable time. I finally decided to automate it, and it was a complete game-changer. This guide will show you the exact Python script I use to pull that data into a clean CSV, so you can spend your time analyzing insights, not wrestling with data entry.

Prerequisites

Before we dive in, make sure you have the following ready:

  • Python 3 installed on your machine.
  • A Linear account with permissions to access the API.
  • Your personal Linear API Key. You can generate one under Settings > API.
  • Your Team ID from Linear. You can find this in the URL when viewing your team’s board (e.g., linear.app/techresolve/team/**TRT**/all). That TRT part is the ID.

The Step-by-Step Guide

Step 1: Setting Up Your Project Environment

I’ll skip the standard virtual environment setup (venv and so on), as you probably have a preferred workflow for managing your Python projects. Let’s get straight to the code.

You’ll need two main Python libraries: requests to communicate with the Linear API and python-dotenv to manage our credentials securely. You can install them in your environment using pip:

pip install requests python-dotenv

We use python-dotenv so we can keep our API key out of the source code, which is a critical security practice.

Step 2: Storing Your Credentials

Create a file in your project directory named config.env. This is where we’ll store our secrets. Add your API key and Team ID to this file like so:

LINEAR_API_KEY="your_api_key_here"
LINEAR_TEAM_ID="your_team_id_here"

Make sure to add config.env to your .gitignore file. We never commit credentials to version control.
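If you want a one-liner for that (assuming you run it from the project root), appending from the shell works:

```shell
# Append config.env to .gitignore so the credentials file is never committed
echo "config.env" >> .gitignore
```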

Step 3: The Python Script

Now for the main event. Create a Python file, let’s call it export_linear_issues.py. We’ll build this script in three parts: fetching the data from Linear, processing it, and writing it to a CSV.

Here’s the complete script. I’ll break down what each part does below.

import os
import requests
import csv
from dotenv import load_dotenv

def fetch_linear_issues(api_key, team_id):
    """Fetches issues from the Linear API using a GraphQL query."""
    api_url = "https://api.linear.app/graphql"

    headers = {
        "Authorization": api_key,
        "Content-Type": "application/json"
    }

    # This GraphQL query fetches the first 100 issues for a specific team.
    # You can customize the fields you need.
    query = """
    query GetTeamIssues($teamId: String!) {
      team(id: $teamId) {
        issues(first: 100, orderBy: createdAt) {
          nodes {
            id
            title
            identifier
            createdAt
            state {
              name
            }
            assignee {
              name
            }
            labels {
              nodes {
                name
              }
            }
          }
        }
      }
    }
    """

    variables = {"teamId": team_id}

    try:
        response = requests.post(api_url, json={"query": query, "variables": variables}, headers=headers)
        response.raise_for_status() # Raises an HTTPError for bad responses (4xx or 5xx)
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"API request failed: {e}")
        return None

def write_issues_to_csv(data, filename="linear_issues.csv"):
    """Processes the API response and writes it to a CSV file."""
    if not data or not data.get('data', {}).get('team'):
        print("No data found or invalid response structure.")
        return

    issues = data['data']['team']['issues']['nodes']

    # Define the columns for our CSV file
    fieldnames = ['ID', 'Identifier', 'Title', 'State', 'Assignee', 'Labels', 'Created At']

    with open(filename, 'w', newline='', encoding='utf-8') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()

        for issue in issues:
            # Flatten nested data for cleaner CSV output
            assignee_name = issue.get('assignee', {}).get('name') if issue.get('assignee') else 'Unassigned'
            state_name = issue.get('state', {}).get('name') if issue.get('state') else 'No Status'

            # Combine multiple labels into a single string
            label_nodes = issue.get('labels', {}).get('nodes', [])
            label_names = ', '.join([label['name'] for label in label_nodes]) if label_nodes else ''

            writer.writerow({
                'ID': issue.get('id'),
                'Identifier': issue.get('identifier'),
                'Title': issue.get('title'),
                'State': state_name,
                'Assignee': assignee_name,
                'Labels': label_names,
                'Created At': issue.get('createdAt')
            })
    print(f"Successfully exported {len(issues)} issues to {filename}")

if __name__ == "__main__":
    load_dotenv('config.env')
    api_key = os.getenv("LINEAR_API_KEY")
    team_id = os.getenv("LINEAR_TEAM_ID")

    if not api_key or not team_id:
        print("Error: LINEAR_API_KEY and LINEAR_TEAM_ID must be set in config.env")
    else:
        issue_data = fetch_linear_issues(api_key, team_id)
        if issue_data:
            write_issues_to_csv(issue_data)

Breaking down the logic:

  • fetch_linear_issues: This function is the heart of the operation. It constructs a GraphQL query asking for specific fields (id, title, state, etc.) on the team’s first 100 issues, then sends that query to Linear’s API endpoint with your API key in the Authorization header.
  • write_issues_to_csv: This takes the raw JSON data from the API call and tidies it up. Notice how it handles potentially missing data (like an unassigned issue) and joins multiple labels into a single, readable string. It then writes everything row by row into linear_issues.csv.
  • if __name__ == "__main__":: This is the entry point. It loads our credentials from the config.env file, calls the fetch function, and if successful, passes the data to the write function. Basic error checking is included to ensure the keys were loaded correctly.

Pro Tip: Before you put a query into your script, I highly recommend testing it in Linear’s built-in GraphQL playground (found under Settings > API). It gives you instant feedback and auto-completion, which saves a ton of debugging time when you’re trying to figure out which fields you need.

Step 4: Running and Automating the Export

To run the script, just navigate to your project directory in the terminal and execute it:

python3 export_linear_issues.py

If everything is configured correctly, you’ll see a new file, linear_issues.csv, appear in your directory.
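Once the file exists, you can start slicing it right away. As a quick taste of the “custom reporting” this unlocks, here’s a small stdlib-only snippet (no pandas required) that tallies issues per workflow state from the exported CSV; the filename matches the script’s default:

```python
import csv
from collections import Counter

def count_by_state(filename="linear_issues.csv"):
    """Tally exported issues per workflow state (the 'State' CSV column)."""
    with open(filename, newline="", encoding="utf-8") as f:
        return Counter(row["State"] for row in csv.DictReader(f))
```

The resulting Counter drops straight into your Monday sync notes, or into whatever BI tool your team prefers.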

For true automation, I use cron on my Linux servers. You can set up a cron job to run this script on a schedule. For example, to run it every Monday at 2 AM, you’d set up a job like this:

0 2 * * 1 python3 export_linear_issues.py

This way, the report is always fresh and waiting for you first thing Monday morning.
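One caveat: cron runs with a minimal environment and from a different working directory, so a relative path to config.env can silently break. A more robust crontab entry pins both the directory and the interpreter, and captures output for debugging (the paths here are hypothetical placeholders for your own):

```shell
0 2 * * 1 cd /home/you/linear-export && /usr/bin/python3 export_linear_issues.py >> export.log 2>&1
```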

Common Pitfalls (Where I Usually Mess Up)

  • Credentials Mismatch: The classic one: my script fails because the config.env file is missing, or I mistyped a variable name. Always double-check that LINEAR_API_KEY in the file matches what os.getenv() is looking for in the script.
  • GraphQL Syntax Errors: GraphQL is powerful but very picky. A single misplaced curly brace will cause the entire request to fail. As I said before, use the API playground. It’s your best friend for crafting these queries correctly the first time.
  • Forgetting Pagination: My script here fetches the first 100 issues for simplicity. In a real-world production setup with thousands of issues, you absolutely need to handle pagination. The Linear API response includes a pageInfo object with an endCursor. You’d need to modify the script to loop and make subsequent requests, passing the endCursor as an after argument to get the next “page” of results.
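To make that pagination point concrete, here’s a sketch of the loop shape I’d use. The Linear-specific request (adding after: $after to the issues(...) arguments and requesting pageInfo { endCursor hasNextPage }) is factored behind a fetch_page callable so the loop itself stays small; the three-“page” fake backend at the bottom is purely illustrative:

```python
def paginate(fetch_page):
    """Collect nodes across all pages of a cursor-based connection.

    fetch_page(cursor) must return (nodes, end_cursor, has_next_page),
    mirroring Linear's pageInfo { endCursor, hasNextPage } shape. The
    returned end_cursor is passed back on the next call as the cursor
    (i.e. the GraphQL `after` argument).
    """
    all_nodes, cursor = [], None
    while True:
        nodes, cursor, has_next = fetch_page(cursor)
        all_nodes.extend(nodes)
        if not has_next:
            return all_nodes

# Illustrative fake backend: three "pages" keyed by cursor value.
pages = {
    None: (["TRT-1", "TRT-2"], "c1", True),
    "c1": (["TRT-3"], "c2", True),
    "c2": (["TRT-4"], None, False),
}
print(paginate(lambda cursor: pages[cursor]))  # ['TRT-1', 'TRT-2', 'TRT-3', 'TRT-4']
```

In the real script, fetch_page would wrap the requests.post call from fetch_linear_issues and pull endCursor and hasNextPage out of the response.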

Conclusion

And that’s it. You now have a robust, automated pipeline for exporting your Linear data. This simple script opens the door to building custom dashboards in Google Sheets, Tableau, or any other BI tool your team uses. You’ve successfully reclaimed your time from manual data pulling and can focus on what we do best: building and maintaining solid infrastructure. Hope this helps you streamline your workflow!


Darian Vance

👉 Read the original article on TechResolve.blog


Support my work

If this article helped you, you can buy me a coffee:

👉 https://buymeacoffee.com/darianvance
