Milena Emmert

Extracting Pipefy API data into Power BI with M language

Have you ever tried making a web request to https://api.pipefy.com/graphql in Power BI and it didn’t work?

Pipefy uses a GraphQL API that requires a Bearer token in the Authorization header to access the data (you can generate one inside your Pipefy account under Account preferences > Personal access token).

After several failed attempts, I realized that Power Query doesn't handle the Authorization header very well and kept demanding anonymous authentication. I made a few adjustments, and more errors appeared:

  • Error 404 (and I couldn't figure out why)

  • "The Authorization header is only supported during anonymous connection"

  • And then, more errors with the anonymous connection itself

I had a short deadline, and the solution I found was using M Language.
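
Before pasting the full script, you can sanity-check your token with a tiny query in a blank query. This is just a sketch: it uses the me { name } query that Pipefy's documentation shows as a hello-world, and YOUR_TOKEN_HERE is a placeholder.

let
    response = Web.Contents(
        "https://api.pipefy.com/graphql",
        [
            Headers = [
                #"Authorization" = "Bearer YOUR_TOKEN_HERE",
                #"Content-Type" = "application/json"
            ],
            // Sending a Content body makes Web.Contents issue a POST request
            Content = Text.ToBinary("{ ""query"": ""{ me { name } }"" }")
        ]
    )
in
    Json.Document(response)

If the token is valid, you should get back a record with your user data. Remember to choose Anonymous when Power BI asks for credentials, since the header already carries the authentication.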

In Power Query:

Open a Blank Query > right-click > Advanced Editor.

Try the script below:

let
    url = "https://api.pipefy.com/graphql",
    token = "Bearer YOUR_TOKEN_HERE",

    // Function to make the GraphQL request
    FetchPage = (cursor as nullable text) as record =>
        let
            queryBody = if cursor = null then
                "{ ""query"": ""query { allCards(pipeId: \""PIPE_ID_HERE\"", first: 40) { nodes { title updated_at due_date current_phase { name } fields { name value } } pageInfo { hasNextPage endCursor } } }"" }"
            else
                "{ ""query"": ""query { allCards(pipeId: \""PIPE_ID_HERE\"", first: 40, after: \""" & cursor & "\"") { nodes { title updated_at due_date current_phase { name } fields { name value } } pageInfo { hasNextPage endCursor } } }"" }",

            response = Web.Contents(
                url,
                [
                    Headers = [
                        #"Authorization" = token,
                        #"Content-Type" = "application/json"
                    ],
                    Content = Text.ToBinary(queryBody)
                ]
            ),

            jsonResponse = Json.Document(response),
            data = jsonResponse[data][allCards]
        in
            data,

    // Recursive function to iterate through all pages
    GetAllPages = (cursor as nullable text, accumulatedCards as list) as list =>
        let
            currentPage = FetchPage(cursor),
            currentCards = List.Transform(currentPage[nodes], each 
                [
                    title = _[title],
                    updated_at = _[updated_at],
                    due_date = _[due_date],
                    current_phase = _[current_phase][name],
                    fields = _[fields] 
                ]
            ),
            newCards = List.Combine({accumulatedCards, currentCards}),
            hasNext = currentPage[pageInfo][hasNextPage],
            nextCursor = currentPage[pageInfo][endCursor]
        in
            if hasNext then
                @GetAllPages(nextCursor, newCards)
            else
                newCards,

    // Calling GetAllPages function to retrieve all pages
    allCards = GetAllPages(null, {}),

    // Converting the list of records into a table
    resultTable = Table.FromRecords(allCards),

    // Extracting unique custom field names
    allFieldNames = List.Distinct(List.Combine(List.Transform(resultTable[fields], each List.Transform(_, each _[name])))),

    // Adding columns for each custom field
    resultWithFields = Table.TransformColumns(resultTable, {"fields", each
        let
            // Build a record with one entry per custom field name
            fieldRecord = Record.FromList(
                List.Transform(allFieldNames, (fieldName) =>
                    let
                        // Look for this field among the card's fields
                        selectedField = List.First(List.Select(_, (field) => field[name] = fieldName), null)
                    in
                        // Return the value, or null if the card doesn't have this field
                        if selectedField <> null then selectedField[value] else null
                ),
                allFieldNames
            )
        in
            fieldRecord
    }),

    // Expanding custom fields into separate columns
    finalTable = Table.ExpandRecordColumn(resultWithFields, "fields", allFieldNames)
in
    finalTable

Considerations:

FetchPage

Makes the HTTP request to the API; uses pagination with the cursor to fetch data in parts (40 records per page).

cursor as nullable text

Used to handle pagination, indicating the position of the next page to be fetched.
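
To make that concrete, here's a rough sketch of how the cursor travels between calls (FetchPage is the function from the script above; the cursor itself is just an opaque string returned by the API):

    // Sketch (steps inside a let): how the cursor is passed from one call to the next
    firstPage = FetchPage(null),                 // first page: no cursor yet
    nextCursor = firstPage[pageInfo][endCursor], // opaque cursor string returned by the API
    secondPage = FetchPage(nextCursor)           // next page starts right after that cursor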

first: 40

Defines the number of records retrieved per request (you can adjust this value as needed).

→ I chose 40 because it worked best for my request size and Pipe structure. Check out Limits and Best Practices in the documentation to decide what works best for you.
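
If you'd like to keep these knobs easy to change, one option (a sketch with names I made up, not part of the original script) is to pull the pipe ID and page size out as their own steps and build the body from them:

let
    PipeId = "PIPE_ID_HERE",
    PageSize = 40,   // tune this per Pipefy's "Limits and Best Practices"
    // The nodes selection is trimmed to title just to keep the line short
    firstPageBody =
        "{ ""query"": ""query { allCards(pipeId: \""" & PipeId & "\"", first: "
        & Text.From(PageSize)
        & ") { nodes { title } pageInfo { hasNextPage endCursor } } }"" }"
in
    firstPageBody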

GetAllPages

Recursively calls FetchPage to fetch all pages until hasNextPage is false.
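
If you'd rather avoid recursion, the same loop can be written with List.Generate, a common M pattern for cursor pagination. The sketch below would replace the GetAllPages / allCards steps in the script above; it returns the raw nodes, so the per-card reshaping that GetAllPages does would still need to happen afterwards.

    // Sketch: cursor pagination with List.Generate instead of recursion.
    // Assumes FetchPage is defined exactly as in the script above.
    pages = List.Generate(
        () => [page = FetchPage(null), more = true],            // fetch the first page
        each [more],                                            // stop once the previous page said there are no more
        (state) => [
            page = FetchPage(state[page][pageInfo][endCursor]), // next page, starting after the previous cursor
            more = state[page][pageInfo][hasNextPage]
        ],
        each [page][nodes]                                      // keep only the card nodes
    ),
    allNodes = List.Combine(pages)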

title, updated_at, due_date, current_phase...

These are some of the fields you can select and retrieve with the help of the Pipefy API documentation.

→ The documentation lets you build your GraphQL queries by selecting exactly the information you need. Once your query is ready, you can ask ChatGPT to convert it into the JSON string format used in the script.
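
A side note, not part of the original script: if escaping all those quotes by hand gets annoying, M can build the JSON body for you with Json.FromValue, which returns binary you can pass straight to the Content option of Web.Contents. A minimal sketch:

let
    // Plain GraphQL text; only the doubled quotes required by M itself
    graphqlQuery = "query { allCards(pipeId: ""PIPE_ID_HERE"", first: 40) { nodes { title } pageInfo { hasNextPage endCursor } } }",
    // Json.FromValue encodes the record as JSON binary: {"query":"query { ... }"}
    body = Json.FromValue([query = graphqlQuery])
in
    body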

Final Result

The script converts all the data into a neat table in Power BI (you can add or remove fields in the query as needed, and the custom-field columns adjust dynamically).
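
One optional finishing touch (a sketch that assumes the finalTable step and column names from the script above): updated_at and due_date come back as ISO 8601 text, so you may want to set proper types after the expansion.

    // Sketch: typing the date columns after the expansion
    typedTable = Table.TransformColumnTypes(
        finalTable,
        {{"updated_at", type datetimezone}, {"due_date", type datetimezone}}
    )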

If you have another solution, feel free to share it with me!


Top comments (1)

hans

That was exactly what I needed