Introduction
Every morning as a developer, I was doing the same thing: opening GitHub to check notifications, switching to Gmail to scan for anything urgent, trying to mentally piece together what needed my attention and in what order. It was not a big problem, but it was a constant one. Small context switches that added up.
I wanted something simple: a tool that pulls all of that together, tells me what I have, flags what is urgent, and suggests what to tackle first. No dashboard to maintain, no browser tab to open. Just run it and get your day's context.
That is what I built: a command-line dev assistant that connects to GitHub and Gmail, fetches real data from both, and uses an LLM to return a plain-text summary with priorities and a suggested daily plan.
What I Built: An AI Dev Assistant
Dev Assistant is a command-line tool I built to reduce the mental overhead of context switching. Instead of jumping between GitHub and Gmail trying to piece together what needs my attention, I run one command and get everything surfaced in order of priority with a plain-text summary of what to act on first.
The honest version: it runs locally, so I have to run it manually whenever I need context. But that is actually fine for how I use it. Before I start my day, before I pick up a new task, or when I want to clear my head, I run it, get my briefing, and get back to work. No background process, no notifications, no distraction. Just information on demand.
Here is what it does under the hood:
- Fetches unread GitHub notifications (PRs, CI failures, review requests, mentions) paginated and filtered by a lookback window
- Fetches Gmail inbox messages with sender, subject, date, and snippet
- Scores both by priority using custom logic before anything touches the LLM
- Passes the cleaned, prioritized data to Groq's `llama-3.3-70b-versatile` model
- Returns a plain-text summary with urgent items flagged and a suggested order for the day
Stack: Node.js, TypeScript, Nango, OpenAI SDK pointed at Groq, dotenv.
What You’ll Get (Example Output)
Before we dive into the build, here’s what the assistant actually produces when you run it:
=============================
ASSISTANT
=============================
QUICK SUMMARY
- 2 urgent GitHub items need immediate review, including a failed CI workflow on main
- 3 new Gmail messages require attention, including a security alert and an interview update
GITHUB (ACT ON FIRST)
- Review PR in your-repo — changes are blocking deployment and require approval
- Investigate failed CI workflow in your-repo — deployment pipeline is currently broken
GMAIL (ACT ON FIRST)
- Respond to security alert from Google — suspicious login attempt detected
- Reply to interview email — time-sensitive scheduling required
TODAY'S PLAN
- Start with GitHub blockers affecting deployment
- Handle urgent emails next
- Then move to lower-priority updates
This is the entire goal of the tool: one command, one clear plan for your day.
How It Works (Architecture Overview)
Here’s a high-level view of how the system flows from data collection to output:
- GitHub and Gmail act as data sources
- Nango handles authentication and API access
- The system processes and scores incoming data
- Groq (LLM) converts it into a structured daily plan
- The CLI outputs a clean, actionable summary
Why Nango
To build this, I needed to connect to two APIs that both use OAuth: GitHub and Gmail. I could have written the OAuth flow myself. It is doable. But token storage, refresh logic, scope management across two providers — that overhead adds up fast and it is not the interesting part of the build.
I had been looking at Nango already for a separate reason, so it was already on my radar. I decided to use it here. Once I set it up, the auth layer essentially disappeared. I connected both integrations through the Nango dashboard, got a providerConfigKey and connectionId for each, and from that point every API call looked like this:
```typescript
const response = await nango.get<GmailMessageListResponse>({
  endpoint: '/gmail/v1/users/me/messages?maxResults=5',
  providerConfigKey: gmailProviderConfigKey,
  connectionId,
});
```
No token handling. No refresh logic. Nango injects the credentials, manages token expiry, and returns the response. The generic type parameter `<GmailMessageListResponse>` is just TypeScript telling the compiler what shape to expect back from `response.data`. The same pattern works for GitHub: different `providerConfigKey`, different `connectionId`, same method.
That consistency is what makes adding a third provider later a small task instead of a big one.
Prerequisites
Before you start, make sure you have the following:
- Node.js v18 or higher (this project was built on v24)
- A Nango account (free to sign up)
- A Groq account (free API key available)
- A GitHub account with notifications enabled
- A Gmail account
Project Setup
- Create the project folder and initialize it.

```bash
mkdir dev-assistant && cd dev-assistant
npm init -y
```

- Install the required dependencies.

```bash
npm install @nangohq/node openai dotenv
npm install -D typescript ts-node @types/node
```

- Create a `tsconfig.json` in the root of your project.

```json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "rootDir": "src",
    "outDir": "dist",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true
  }
}
```

- Create the project structure.

```bash
mkdir src && touch src/index.ts src/github.ts src/gmail.ts src/summarize.ts src/types.ts src/utils.ts
```

Here is what each file is responsible for:

```
src/
  index.ts     — entry point, wires everything together
  github.ts    — GitHub types, priority scoring, fetch function
  gmail.ts     — Gmail types, priority scoring, fetch function
  summarize.ts — Groq call, prompt, input preparation
  types.ts     — shared types: DigestState, DigestDelta
  utils.ts     — requireEnv, clip, formatAssistantResponse, getSafeErrorMessage, printSection
```

- Create a `.env` file in the root of your project.

```
NANGO_SECRET_KEY=
GROQ_API_KEY=
NANGO_GITHUB_CONNECTION_ID=
NANGO_GMAIL_CONNECTION_ID=
NANGO_GMAIL_PROVIDER_CONFIG_KEY=
NANGO_GITHUB_PROVIDER_CONFIG_KEY=
DEBUG=false
GITHUB_NOTIFICATIONS_LOOKBACK_DAYS=30
```

You will fill these in as you set up Nango and Groq in the next section. Make sure to add `.env` to your `.gitignore` so you do not accidentally commit your API keys.
Connecting GitHub and Gmail via Nango
Before writing any code, you need to set up your integrations on the Nango dashboard and get a test connection for each. This is where your providerConfigKey, connectionId, and NANGO_SECRET_KEY come from.
Setting up Nango
Go to app.nango.dev and sign up for a free account.
Head to Environment Settings and copy your `NANGO_SECRET_KEY`. Add it to your `.env` file.
Setting up the GitHub integration
In the sidebar click Integrations, then Add New Integration. Search for GitHub and select GitHub OAuth.
Click Custom Developer App. By default Nango provides a test app you can use to get started quickly, but for this build we are using a custom developer app.
Go to github.com/settings/developers and click New OAuth App.
* Fill in the application name. Anything works, for example `dev-assistant`
* Set the **Homepage URL** to `http://localhost:3000`
* Set the **Authorization callback URL** to `https://api.nango.dev/oauth/callback`. Do not change this value
Click Register application. On the next page, copy your Client ID.
Click Generate a new client secret and copy the secret immediately. GitHub only shows it once.
- Back in the Nango dashboard, paste your client ID and client secret into the custom developer app fields. For scopes, add:

```
notifications
read:user
```

- Give your integration an ID. This becomes your `NANGO_GITHUB_PROVIDER_CONFIG_KEY`. Something like `github-dev-assistant` works. Add it to your `.env`.
- Go to the Connections tab and click Add Test Connection. Select your GitHub integration, click Authorize, and log in with your GitHub account. The ID you assigned to that connection is your `NANGO_GITHUB_CONNECTION_ID`. Add it to your `.env`.
Setting up the Gmail integration
Gmail requires a Google Cloud project with the Gmail API enabled and OAuth credentials configured.
Go to console.cloud.google.com and sign in. Click the project selector at the top of the page, select New Project, give it a name (for example `dev-assistant`) and click Create.

With your new project selected, go to APIs & Services → Library. Search for Gmail API, click on it, and click Enable.
Go to APIs & Services → OAuth consent screen. Select External as the user type and click Create. Fill in the required fields: app name, support email, and developer contact email. Click Save and Continue through the remaining steps.
- Go to the Audience tab, scroll to Test users, click Add users, and add the Gmail address you want to use with the tool.
Note: Your app will be in Testing mode by default. Only users you explicitly add as test users can authorize the app. If you skip this step, the authorization will fail when you try to connect in Nango.
- Go to Clients in the left sidebar and click Create Client. Select Web application as the Application type and give it a name.
- Under Authorized redirect URIs, click Add URI and enter:

```
https://api.nango.dev/oauth/callback
```

- Click Create. Google will display your Client ID and Client Secret. Copy both immediately. Google only shows the client secret once.
- Back in the Nango dashboard, go to Integrations → Add New Integration → Google Mail → Custom Developer App. Paste your client ID and client secret. For the scope add:

```
https://www.googleapis.com/auth/gmail.readonly
```

- Give the integration an ID. This becomes your `NANGO_GMAIL_PROVIDER_CONFIG_KEY`. Add it to your `.env`.
- Go to Connections → Add Test Connection, select your Gmail integration, and authorize with the Gmail account you added as a test user. The connection ID you set becomes your `NANGO_GMAIL_CONNECTION_ID`. Add it to your `.env`.
Getting your Groq API key
Go to console.groq.com and sign in. In the left sidebar click API Keys.
If you already have a key, copy it and add it to your `.env`. If not, click Create API Key, give it a name (for example `dev-assistant`) and click Submit. Copy the key immediately. Groq only shows it once.

Add the key to `GROQ_API_KEY` in your `.env`. Your `.env` file should now look like this:

```
NANGO_SECRET_KEY=your_secret_key
GROQ_API_KEY=your_groq_api_key
NANGO_GITHUB_CONNECTION_ID=your_github_connection_id
NANGO_GMAIL_CONNECTION_ID=your_gmail_connection_id
NANGO_GMAIL_PROVIDER_CONFIG_KEY=your_gmail_integration_id
NANGO_GITHUB_PROVIDER_CONFIG_KEY=your_github_integration_id
DEBUG=false
GITHUB_NOTIFICATIONS_LOOKBACK_DAYS=30
```
Fetching Data from the GitHub API
Create `src/github.ts`. This file handles everything GitHub-related: the types, the priority scoring logic, and the fetch function.
- Add the imports and types at the top of `github.ts`.

```typescript
import { Nango } from '@nangohq/node';
import { requireEnv } from './utils';

export type GithubNotification = {
  id: string;
  unread: boolean;
  reason: string;
  updated_at: string;
  last_read_at?: string;
  subject?: {
    title?: string;
    url?: string | null;
    latest_comment_url?: string | null;
    type?: string;
  };
  repository?: {
    full_name?: string;
    html_url?: string;
  };
};

export type CleanGithubNotification = {
  id: string;
  unread: boolean;
  reason: string;
  updatedAt: string;
  lastReadAt: string | null;
  title: string;
  subjectType: string;
  repository: string;
  url: string | null;
  priority: number;
};
```

`GithubNotification` maps directly to what GitHub's API returns. `CleanGithubNotification` is the flattened version we actually work with. It has sensible defaults for missing fields, and a `priority` score we calculate ourselves before anything goes to Groq.
- Add the priority scoring function below the types.

```typescript
export function getGithubNotificationPriority(notification: GithubNotification): number {
  const reasonPriority: Record<string, number> = {
    review_requested: 100,
    mention: 95,
    author: 90,
    comment: 85,
    ci_activity: 80,
    state_change: 70,
    assign: 65,
    subscribed: 40,
    manual: 30,
    security_alert: 100,
  };

  const basePriority = reasonPriority[notification.reason] || 50;
  const unreadBoost = notification.unread ? 10 : 0;
  const subjectTypeBoost = notification.subject?.type === 'PullRequest' ? 5 : 0;

  const title = (notification.subject?.title || '').toLowerCase();
  const titleKeywordBoost = /failed|security|vulnerability|incident|urgent/.test(title) ? 15 : 0;
  const securityReasonBoost = notification.reason === 'security_alert' ? 20 : 0;

  return (
    basePriority + unreadBoost + subjectTypeBoost + titleKeywordBoost + securityReasonBoost
  );
}
```

The score starts with a base value tied to the notification `reason`. GitHub tells you why you were notified, and that reason carries a lot of signal. A `review_requested` scores 100 because someone is actively waiting on you. A `subscribed` notification scores 40 because you opted in but nothing is demanding your attention.

Four boosts can push a notification higher:
* **Unread** adds 10. If you haven't seen it yet, it ranks higher
* **PullRequest** subject type adds 5. PRs tend to be more time-sensitive than issues
* **Title keywords** like `failed`, `security`, or `urgent` add 15. The title is a strong signal
* **Security alerts** get an extra 20 on top of their already high base score
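To make the scoring concrete, here is the arithmetic for a hypothetical notification — an unread review request on a PR whose title contains "failed". The values below just restate the rules above; they are not new logic:

```typescript
// Hypothetical example: unread review request on a PR titled "CI failed on main".
const basePriority = 100;     // reason: review_requested
const unreadBoost = 10;       // notification is unread
const subjectTypeBoost = 5;   // subject type is PullRequest
const titleKeywordBoost = 15; // title matches /failed|security|vulnerability|incident|urgent/

const score = basePriority + unreadBoost + subjectTypeBoost + titleKeywordBoost;
console.log(score); // 130
```

Anything scoring 80 or higher counts as urgent in the summary counts, so this notification would be flagged.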
- Add the main fetch function.

```typescript
export async function getGithubNotifications(
  nango: Nango,
  githubProviderConfigKey: string,
  githubNotificationsLookbackDays: number,
  githubNotificationsPerPage: number,
  githubNotificationsMaxPages: number
) {
  const connectionId = requireEnv('NANGO_GITHUB_CONNECTION_ID');

  const since = new Date(
    Date.now() - githubNotificationsLookbackDays * 24 * 60 * 60 * 1000
  ).toISOString();

  const notifications: GithubNotification[] = [];

  for (let page = 1; page <= githubNotificationsMaxPages; page += 1) {
    const response = await nango.get<GithubNotification[]>({
      endpoint: `/notifications?all=true&participating=false&since=${encodeURIComponent(
        since
      )}&per_page=${githubNotificationsPerPage}&page=${page}`,
      providerConfigKey: githubProviderConfigKey,
      connectionId,
    });

    const pageItems = response.data || [];
    notifications.push(...pageItems);

    if (pageItems.length < githubNotificationsPerPage) {
      break;
    }
  }

  const cleanedNotifications: CleanGithubNotification[] = notifications
    .map((notification) => ({
      id: notification.id,
      unread: notification.unread,
      reason: notification.reason,
      updatedAt: notification.updated_at,
      lastReadAt: notification.last_read_at || null,
      title: notification.subject?.title || '(No title)',
      subjectType: notification.subject?.type || 'Unknown',
      repository: notification.repository?.full_name || 'Unknown repository',
      url:
        notification.subject?.url ||
        notification.subject?.latest_comment_url ||
        notification.repository?.html_url ||
        null,
      priority: getGithubNotificationPriority(notification),
    }))
    .sort((left, right) => {
      if (right.priority !== left.priority) {
        return right.priority - left.priority;
      }
      return (
        new Date(right.updatedAt).getTime() - new Date(left.updatedAt).getTime()
      );
    });

  return {
    lookbackDays: githubNotificationsLookbackDays,
    totalCount: cleanedNotifications.length,
    unreadCount: cleanedNotifications.filter((n) => n.unread).length,
    urgentCount: cleanedNotifications.filter((n) => n.priority >= 80).length,
    notifications: cleanedNotifications,
  };
}
```

A few things worth noting here:
* **The Nango call** is the simplest part. One `nango.get()` with the endpoint, `providerConfigKey`, and `connectionId`. Nango handles the token. You get data back.
* **The lookback window** filters notifications to the last 30 days by default, controlled by `GITHUB_NOTIFICATIONS_LOOKBACK_DAYS` in your `.env`. Without this, GitHub returns everything going back potentially months — stale context you don't need the LLM reasoning about.
* **The pagination loop** fetches up to 5 pages of 50 notifications each. If a page returns fewer items than the page size, we've reached the end and break early. This prevents silently dropping notifications beyond the first page, which would defeat the whole point of the tool.
* **The sort** orders by priority first, then recency as a tiebreaker. The highest-urgency, most recent notifications surface at the top.
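The two-level sort is easy to verify in isolation. Here is a small sketch with made-up items (the `Item` type exists only for this example):

```typescript
// Sort by priority descending, then by recency descending as a tiebreaker —
// the same comparator shape used in getGithubNotifications.
type Item = { priority: number; updatedAt: string };

const items: Item[] = [
  { priority: 80, updatedAt: '2026-04-15T00:00:00Z' },
  { priority: 100, updatedAt: '2026-04-10T00:00:00Z' },
  { priority: 80, updatedAt: '2026-04-16T00:00:00Z' },
];

const sorted = [...items].sort((left, right) =>
  right.priority !== left.priority
    ? right.priority - left.priority
    : new Date(right.updatedAt).getTime() - new Date(left.updatedAt).getTime()
);

console.log(sorted.map((item) => item.priority)); // [ 100, 80, 80 ]
```

The older high-priority item still outranks newer low-priority ones; recency only decides ties.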
Fetching Data from the Gmail API
Create `src/gmail.ts`. Same pattern as `github.ts`: types, priority scoring, fetch function.
- Add the imports and types at the top of `gmail.ts`.

```typescript
import { Nango } from '@nangohq/node';
import { requireEnv } from './utils';

export type GmailMessageListResponse = {
  messages?: Array<{
    id: string;
    threadId: string;
  }>;
};

export type GmailMessageDetailResponse = {
  id: string;
  threadId: string;
  snippet?: string;
  internalDate?: string;
  labelIds?: string[];
  payload?: {
    headers?: Array<{
      name?: string;
      value?: string;
    }>;
  };
};

export type CleanGmailMessage = {
  id: string;
  threadId: string;
  from: string;
  subject: string;
  date: string | null;
  snippet: string;
  labelIds: string[];
  priority: number;
};
```

Two response types here instead of one. `GmailMessageListResponse` handles the initial list of message IDs, and `GmailMessageDetailResponse` handles the full message data. That split exists because Gmail's API works in two steps, which we get to in the fetch function.
- Add the priority scoring function.

```typescript
export function getGmailMessagePriority(message: {
  subject: string;
  snippet: string;
  labelIds: string[];
  from: string;
}): number {
  const labels = new Set(message.labelIds.map((label) => label.toUpperCase()));

  let score = 20;
  if (labels.has('UNREAD')) score += 10;
  if (labels.has('IMPORTANT')) score += 15;

  const text = `${message.subject} ${message.snippet}`.toLowerCase();
  const from = message.from.toLowerCase();

  if (/login|password|security|verify|verification|suspicious|alert/.test(text)) score += 35;
  if (/failed|down|error|incident/.test(text)) score += 25;
  if (/deadline|interview|offer|application|action required/.test(text)) score += 20;
  if (/invoice|payment|receipt|due/.test(text)) score += 18;
  if (/digest|newsletter|promotions|weekly|updates/.test(text)) score -= 10;
  if (/no-reply|noreply/.test(from)) score -= 5;

  return score;
}
```

Gmail doesn't have a `reason` field like GitHub does, so the scoring relies on signals from the message itself: labels, subject line, snippet, and sender. Security and authentication keywords score highest. Newsletters and no-reply senders get penalized because they rarely need action. Gmail's own `IMPORTANT` label adds weight. It's not perfect, but it's a useful signal.
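Here is the same kind of worked example for Gmail — a hypothetical unread message labeled IMPORTANT whose subject mentions a suspicious login. Again, this just restates the rules above as arithmetic:

```typescript
// Hypothetical example: unread + IMPORTANT + security-keyword subject.
let score = 20;  // base score every message starts with
score += 10;     // UNREAD label
score += 15;     // IMPORTANT label
score += 35;     // text matches /login|password|security|...|suspicious|alert/

console.log(score); // 80
```

Anything at 55 or above is counted as urgent, so this message would land in the ACT ON FIRST section.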
- Add the main fetch function.

```typescript
export async function getGmailMessages(
  nango: Nango,
  gmailProviderConfigKey: string
) {
  const connectionId = requireEnv('NANGO_GMAIL_CONNECTION_ID');

  // Step 1: Get a list of message IDs
  const response = await nango.get<GmailMessageListResponse>({
    endpoint: '/gmail/v1/users/me/messages?maxResults=5',
    providerConfigKey: gmailProviderConfigKey,
    connectionId,
  });

  const messages = response.data.messages || [];

  // Step 2: Fetch details for each message
  const detailedMessages: CleanGmailMessage[] = await Promise.all(
    messages.map(async ({ id }) => {
      const detailResponse = await nango.get<GmailMessageDetailResponse>({
        endpoint: `/gmail/v1/users/me/messages/${id}?format=metadata&metadataHeaders=From&metadataHeaders=Subject&metadataHeaders=Date`,
        providerConfigKey: gmailProviderConfigKey,
        connectionId,
      });

      const headers = detailResponse.data.payload?.headers || [];
      const getHeader = (name: string) =>
        headers.find(
          (header) => header.name?.toLowerCase() === name.toLowerCase()
        )?.value;

      const labelIds = detailResponse.data.labelIds || [];
      const subject = getHeader('Subject') || '(No subject)';
      const snippet = detailResponse.data.snippet || '';
      const from = getHeader('From') || 'Unknown sender';

      return {
        id: detailResponse.data.id,
        threadId: detailResponse.data.threadId,
        from,
        subject,
        date: getHeader('Date') || null,
        snippet,
        labelIds,
        priority: getGmailMessagePriority({ subject, snippet, labelIds, from }),
      };
    })
  );

  const sortedMessages = detailedMessages.sort((left, right) => {
    if (right.priority !== left.priority) {
      return right.priority - left.priority;
    }
    const rightDate = right.date ? new Date(right.date).getTime() : 0;
    const leftDate = left.date ? new Date(left.date).getTime() : 0;
    return rightDate - leftDate;
  });

  return {
    resultSizeEstimate: response.data.messages?.length || 0,
    unreadCount: sortedMessages.filter((m) =>
      m.labelIds.map((l) => l.toUpperCase()).includes('UNREAD')
    ).length,
    urgentCount: sortedMessages.filter((m) => m.priority >= 55).length,
    messages: sortedMessages,
  };
}
```

The Gmail fetch works in two round trips by design. The list endpoint returns message IDs only, with no subject, no sender, and no content. To get the actual message data you need a second request per message. This is Gmail's API design, not a Nango limitation. We use `Promise.all` to run all the detail fetches in parallel so it stays fast.

The `format=metadata` parameter tells Gmail to return only headers rather than the full message body. We only need sender, subject, date, and snippet. Pulling the full body would be wasteful and would hit token limits faster when passing data to Groq.

Nango handles auth for both calls, the list fetch and every detail fetch, using the same `providerConfigKey` and `connectionId`. You write the same `nango.get()` pattern twice and Nango takes care of the rest.
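The list-then-detail shape is easy to see in isolation. In this sketch, `listIds` and `getDetail` are made-up stand-ins for the two Gmail calls; the point is that `Promise.all` runs every detail fetch concurrently instead of one at a time:

```typescript
// Stand-in for the list endpoint: returns IDs only, no content.
async function listIds(): Promise<string[]> {
  return ['m1', 'm2', 'm3'];
}

// Stand-in for the per-message detail endpoint.
async function getDetail(id: string): Promise<{ id: string; subject: string }> {
  return { id, subject: `subject for ${id}` };
}

async function fetchAll() {
  const ids = await listIds();                        // round trip 1: IDs
  return Promise.all(ids.map((id) => getDetail(id))); // round trip 2: details, in parallel
}

fetchAll().then((messages) => console.log(messages.length)); // 3
```

With real network latency, the parallel version takes roughly one detail-request's worth of time instead of one per message.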
Summarizing Data with an LLM (Groq + OpenAI SDK)
Create `src/summarize.ts`. This file handles three things: preparing the data before it goes to the LLM, making the Groq API call, and defining the prompt that shapes the output.
- Add the input preparation function.

```typescript
import OpenAI from 'openai';
import { DigestDelta } from './types';
import { clip } from './utils';

export function prepareAssistantInput(data: {
  notifications: unknown;
  emails: unknown;
  digestDelta: DigestDelta;
}): string {
  const notifications = clip(JSON.stringify(data.notifications, null, 2), 10000);
  const emails = clip(JSON.stringify(data.emails, null, 2), 6000);
  const digest = JSON.stringify(data.digestDelta, null, 2);

  return [
    'GitHub notifications data:',
    notifications,
    '',
    'Gmail messages data:',
    emails,
    '',
    'Digest delta (new items since previous run):',
    digest,
  ].join('\n');
}
```

Before anything goes to Groq, `prepareAssistantInput` serializes the GitHub and Gmail data into a single string. The `clip()` utility truncates each block if it exceeds a character limit: 10,000 for GitHub notifications and 6,000 for Gmail messages. Without this, large inboxes or notification backlogs could push the input past the model's context window and cause the request to fail.
- Add the Groq call function.

```typescript
export async function askAssistant(groq: OpenAI, data: string, question: string) {
  const response = await groq.chat.completions.create({
    model: 'llama-3.3-70b-versatile',
    messages: [
      {
        role: 'system',
        content:
          'You are a friendly and sharp personal developer assistant. Analyze both the GitHub notifications and Gmail data provided. Respond in plain text only, no markdown. Use this exact structure and headings: QUICK SUMMARY, GITHUB (ACT ON FIRST), GITHUB (CAN WAIT), GMAIL (ACT ON FIRST), GMAIL (CAN WAIT), TODAY\'S PLAN. Put each item on its own line starting with "- ". Keep each bullet specific and concrete (about 12 to 28 words), mentioning exact repo names, PR/workflow titles, senders, and subjects where relevant. Avoid generic wording like "check this" or "review that". Use clear action language and include why each urgent item matters now. Keep the tone warm, practical, and supportive without sounding robotic. Prioritize items flagged as new since last run and never repeat the same item in multiple sections.',
      },
      {
        role: 'user',
        content: `${data}\n\nQuestion: ${question}`,
      },
    ],
  });

  return response.choices[0].message.content;
}
```

A few deliberate decisions here worth explaining:
* **Plain text only, no markdown.** The output is printed directly to the terminal. Markdown formatting like asterisks and hashes renders as literal characters in a CLI context. Telling the model to avoid markdown keeps the output clean.
* **A fixed structure.** The system prompt defines six required sections that appear in the same order every run. Without this, LLMs tend to produce free-form responses that vary in structure. A fixed structure makes the output predictable and easy to scan.
* **Specific and concrete bullets.** The prompt discourages vague language and asks the model to mention exact repo names, PR titles, senders, and subjects. Vague summaries are not useful when you are trying to decide what to do next.
* **Groq via OpenAI SDK.** The `groq` client is an OpenAI instance pointed at Groq's base URL. From the SDK's perspective nothing changes. Same method, same response shape. The model string `llama-3.3-70b-versatile` is the only Groq-specific detail.
Wiring It Together
This section covers the remaining three files: `types.ts`, `utils.ts`, and `index.ts`.
- Add the shared types to `src/types.ts`.

```typescript
export type DigestState = {
  lastRunAt: string;
  githubNotificationIds: string[];
  gmailMessageIds: string[];
};

export type DigestDelta = {
  hasPreviousRun: boolean;
  previousRunAt: string | null;
  newGithubNotifications: number;
  newGmailMessages: number;
  newGithubIds: string[];
  newGmailIds: string[];
};
```

`DigestState` is what gets saved to disk after each run: a timestamp and the IDs of everything that was seen. `DigestDelta` is what gets computed at runtime by comparing the current fetch against the previous state.
- Add the shared utilities to `src/utils.ts`. Note that `index.ts` also imports `formatAssistantResponse` and `getSafeErrorMessage` from this file, so minimal versions of both are included here.

```typescript
import * as dotenv from 'dotenv';

dotenv.config();

export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

export function clip(value: string, maxLength: number): string {
  if (value.length <= maxLength) {
    return value;
  }
  return `${value.slice(0, maxLength)}\n...truncated...`;
}

// Minimal version: trims stray whitespace and collapses runs of blank lines
// so the LLM's answer prints cleanly in the terminal.
export function formatAssistantResponse(answer: string): string {
  return answer.trim().replace(/\n{3,}/g, '\n\n');
}

// Minimal version: extracts a readable message from an unknown error
// instead of dumping a raw object or stack trace to the console.
export function getSafeErrorMessage(error: unknown): string {
  if (error instanceof Error) {
    return error.message;
  }
  return String(error);
}

export function printSection(title: string, content: string) {
  console.log('\n=============================');
  console.log(` ${title}`);
  console.log('=============================');
  console.log(content);
}
```
* `requireEnv()` throws immediately at startup if a required environment variable is missing. You find out before any API calls are made, not halfway through a fetch.
* `clip()` truncates a string to a maximum length and appends a truncation notice. Used in `summarize.ts` to keep the LLM input within safe bounds.
* `printSection()` formats terminal output with a consistent header style. Every section of the CLI output goes through this.
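For example, `clip` leaves short strings alone and truncates long ones. The function is repeated here from `utils.ts` so the snippet runs on its own:

```typescript
// Same clip() as in utils.ts, repeated so this example is self-contained.
function clip(value: string, maxLength: number): string {
  if (value.length <= maxLength) {
    return value;
  }
  return `${value.slice(0, maxLength)}\n...truncated...`;
}

console.log(clip('short', 10));        // "short" — under the limit, unchanged
console.log(clip('a'.repeat(20), 10)); // first 10 chars plus "\n...truncated..."
```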
- Add the entry point to `src/index.ts`.

```typescript
import { Nango } from '@nangohq/node';
import OpenAI from 'openai';
import { readFile, writeFile } from 'fs/promises';
import { join } from 'path';
import {
  requireEnv,
  formatAssistantResponse,
  printSection,
  getSafeErrorMessage,
} from './utils';
import { getGithubNotifications, CleanGithubNotification } from './github';
import { getGmailMessages, CleanGmailMessage } from './gmail';
import { prepareAssistantInput, askAssistant } from './summarize';
import { DigestState, DigestDelta } from './types';

const nango = new Nango({ secretKey: requireEnv('NANGO_SECRET_KEY') });

const groq = new OpenAI({
  apiKey: requireEnv('GROQ_API_KEY'),
  baseURL: 'https://api.groq.com/openai/v1',
});

const githubProviderConfigKey = requireEnv('NANGO_GITHUB_PROVIDER_CONFIG_KEY');
const gmailProviderConfigKey = requireEnv('NANGO_GMAIL_PROVIDER_CONFIG_KEY');

const githubNotificationsLookbackDays = Number(
  process.env.GITHUB_NOTIFICATIONS_LOOKBACK_DAYS || '30'
);
const githubNotificationsPerPage = 50;
const githubNotificationsMaxPages = 5;

const digestStateFilePath = join(process.cwd(), '.digest-state.json');
```

The top of `index.ts` initializes the two clients (`nango` and `groq`) and reads all config values from the environment. Nothing runs yet. This is just setup.
- Add the digest state functions below the config.

```typescript
async function loadDigestState(): Promise<DigestState | null> {
  try {
    const raw = await readFile(digestStateFilePath, 'utf-8');
    const parsed = JSON.parse(raw) as Partial<DigestState>;

    if (
      typeof parsed.lastRunAt === 'string' &&
      Array.isArray(parsed.githubNotificationIds) &&
      Array.isArray(parsed.gmailMessageIds)
    ) {
      return {
        lastRunAt: parsed.lastRunAt,
        githubNotificationIds: parsed.githubNotificationIds,
        gmailMessageIds: parsed.gmailMessageIds,
      };
    }

    return null;
  } catch (error) {
    const maybeNodeError = error as NodeJS.ErrnoException;
    if (maybeNodeError.code === 'ENOENT') return null;
    throw error;
  }
}

async function saveDigestState(state: DigestState): Promise<void> {
  await writeFile(digestStateFilePath, JSON.stringify(state, null, 2), 'utf-8');
}

function getDigestDelta(
  previousState: DigestState | null,
  githubNotifications: CleanGithubNotification[],
  gmailMessages: CleanGmailMessage[]
): DigestDelta {
  if (!previousState) {
    return {
      hasPreviousRun: false,
      previousRunAt: null,
      newGithubNotifications: githubNotifications.length,
      newGmailMessages: gmailMessages.length,
      newGithubIds: githubNotifications.map((item) => item.id),
      newGmailIds: gmailMessages.map((item) => item.id),
    };
  }

  const previousGithub = new Set(previousState.githubNotificationIds);
  const previousGmail = new Set(previousState.gmailMessageIds);

  const newGithubIds = githubNotifications
    .map((item) => item.id)
    .filter((id) => !previousGithub.has(id));
  const newGmailIds = gmailMessages
    .map((item) => item.id)
    .filter((id) => !previousGmail.has(id));

  return {
    hasPreviousRun: true,
    previousRunAt: previousState.lastRunAt,
    newGithubNotifications: newGithubIds.length,
    newGmailMessages: newGmailIds.length,
    newGithubIds,
    newGmailIds,
  };
}
```

`loadDigestState` reads the previous run's state from `.digest-state.json`. If the file doesn't exist yet it returns `null` gracefully. `saveDigestState` writes the current run's state to disk after everything completes. `getDigestDelta` compares the two to figure out what is new since the last run.
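The heart of the delta computation is a set difference between the IDs fetched now and the IDs saved last time. A minimal sketch with made-up IDs:

```typescript
// IDs saved after the previous run vs. IDs fetched on this run.
const previousIds = new Set(['gh-1', 'gh-2']);
const currentIds = ['gh-1', 'gh-2', 'gh-3', 'gh-4'];

// Anything not in the previous set is "new since last run".
const newIds = currentIds.filter((id) => !previousIds.has(id));

console.log(newIds); // [ 'gh-3', 'gh-4' ]
```

Only these IDs get flagged as new, which is exactly what the prompt tells the LLM to prioritize.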
- Add the `main` function.

```typescript
async function main() {
  const previousDigestState = await loadDigestState();

  console.log('Fetching GitHub notifications...');
  const notifications = await getGithubNotifications(
    nango,
    githubProviderConfigKey,
    githubNotificationsLookbackDays,
    githubNotificationsPerPage,
    githubNotificationsMaxPages
  );

  console.log('Fetching Gmail messages...');
  const emails = await getGmailMessages(nango, gmailProviderConfigKey);

  const digestDelta = getDigestDelta(
    previousDigestState,
    notifications.notifications,
    emails.messages
  );

  const combinedData = prepareAssistantInput({ notifications, emails, digestDelta });

  const answer = await askAssistant(
    groq,
    combinedData,
    'Give me a clear and friendly update. Prioritize what is new since my previous run, explain what needs attention first and why it matters, then give a short plan for today.'
  );

  if (process.env.DEBUG === 'true') {
    printSection('GITHUB NOTIFICATIONS', JSON.stringify(notifications, null, 2));
    printSection('GMAIL MESSAGES', JSON.stringify(emails, null, 2));
  }

  const formattedAnswer = formatAssistantResponse(answer ?? '');
  printSection('ASSISTANT', formattedAnswer);

  await saveDigestState({
    lastRunAt: new Date().toISOString(),
    githubNotificationIds: notifications.notifications.map((item) => item.id),
    gmailMessageIds: emails.messages.map((item) => item.id),
  });
}

main().catch((error) => {
  console.error(getSafeErrorMessage(error));
  process.exitCode = 1;
});
```

The flow is linear and easy to follow:
- Load the previous digest state from disk
- Fetch GitHub notifications and Gmail messages
- Compute what is new since the last run
- Prepare and send everything to Groq
- Print the formatted summary
- Save the current state to disk for next time
- Run the tool.

```bash
npx ts-node src/index.ts
```

On the first run there is no previous state, so everything is treated as new. On subsequent runs the tool compares against the saved state and the LLM focuses on what has changed since you last checked.
Sample Output
Here is what the tool prints when you run it:
```
Fetching GitHub notifications...
Fetching Gmail messages...
=============================
DIGEST SNAPSHOT
=============================
GH total=3 unread=1 urgent=1 new=0
MAIL total=5 unread=5 urgent=2 new=5
Compared with previous run at 2026-04-16T21:31:13.959Z
Top GitHub now:
- [GH 1] your-username/your-repo | ci_activity | Deploy workflow run failed for main branch
- [GH 2] your-username/your-repo | state_change | Add new feature to portfolio section
Top Gmail now:
- [MAIL 1] Learning Platform <hello@platform.com> | Course ready: Foundations of Cybersecurity
- [MAIL 2] Financial Service <updates@finance.com> | Market Update: Average Yield Falls 3bps
=============================
ASSISTANT
=============================
QUICK SUMMARY
- There are no new GitHub notifications since the last run, but there are unread items to review
- There are 5 new Gmail messages, with 2 marked as urgent, requiring attention
GITHUB (ACT ON FIRST)
- Review the unread PR in your-username/your-repo — a teammate is waiting and it has been open
  since yesterday
GITHUB (CAN WAIT)
- Check the failed Deploy workflow run on main branch in your-username/your-repo to prevent
  future failures
- Look at the state change notification in your-username/your-repo for potential updates
GMAIL (ACT ON FIRST)
- Respond to the course email from Learning Platform — it contains a time-sensitive offer
- Read the market update from Financial Service — it contains important information requiring
  a decision
GMAIL (CAN WAIT)
- Browse the design inspiration email from your newsletter for later
- Read the founder strategy article from The AI Journal for learning
- Review the weekly stock recommendation for investment insights
TODAY'S PLAN
- First, review the unread GitHub item and respond to the two urgent Gmail messages
- Then prioritize preventing the workflow failure on main — a broken pipeline blocks future work
- Finally, allocate time to the non-urgent Gmail messages for learning and market awareness
```
A few things to notice in this output:
- The digest snapshot comes first. Before the LLM summary, you get a quick count of total, unread, urgent, and new items since the last run. This gives you the shape of your day in seconds without reading anything else.
- "New since last run" is the key signal. In this run, GitHub shows `new=0`, meaning nothing has changed since the previous run. Gmail shows `new=5`, meaning five emails arrived since you last checked. The LLM picks this up and leads with the Gmail items.
- The structure is consistent every run. Six sections, always in the same order. You know where to look without reading everything else.
- The bullets are specific. The LLM mentions exact repositories, workflow names, senders, and subjects rather than generic phrases. That specificity comes directly from the prompt instructions.
Bonus: What Else Is In the Code
The core of this build is the Nango integration, the priority scoring, and the Groq summarization. But there are three additional pieces in the codebase worth knowing about.
The digest state system
After every run, the tool saves a `.digest-state.json` file to the project root:

```json
{
  "lastRunAt": "2026-04-16T21:31:13.959Z",
  "githubNotificationIds": ["abc123", "def456"],
  "gmailMessageIds": ["msg001", "msg002", "msg003"]
}
```
On the next run it loads this file, compares the current fetch against the saved IDs, and flags anything new. The system prompt explicitly tells the LLM to prioritize those new items, which is why the summary leads with Gmail when five new messages arrive but no new GitHub notifications.
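Reading and writing that file takes very little code. A minimal sketch of what `loadDigestState` and `saveDigestState` could look like, assuming the file lives in the working directory and that a missing file simply means "first run":

```typescript
import { readFile, writeFile } from 'fs/promises';
import { join } from 'path';

interface DigestState {
  lastRunAt: string;
  githubNotificationIds: string[];
  gmailMessageIds: string[];
}

// Assumed location: project root, i.e. wherever the tool is run from.
const STATE_PATH = join(process.cwd(), '.digest-state.json');

// Returns null on the first run, when the state file does not exist yet.
async function loadDigestState(): Promise<DigestState | null> {
  try {
    const raw = await readFile(STATE_PATH, 'utf8');
    return JSON.parse(raw) as DigestState;
  } catch {
    return null;
  }
}

async function saveDigestState(state: DigestState): Promise<void> {
  await writeFile(STATE_PATH, JSON.stringify(state, null, 2), 'utf8');
}
```

Treating any read failure as "no previous state" keeps the tool resilient: a corrupted or deleted state file just means the next run behaves like a first run.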
Debug mode
Setting `DEBUG=true` in your `.env` dumps the full raw JSON from both fetches before the assistant summary. Useful when the LLM output looks off and you want to see exactly what data it received.
Output formatting
The formatAssistantResponse() function in utils.ts normalizes the LLM output and deduplicates bullets. LLMs sometimes return the same item in multiple sections despite the prompt telling them not to. The function tracks seen bullet content and silently drops duplicates. It also wraps long lines at 96 characters so the output stays readable regardless of terminal width.
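The two behaviors described above, bullet deduplication and wrapping at 96 characters, can be sketched as follows. This is not the actual `formatAssistantResponse` from `utils.ts`, just a minimal illustration of the same idea:

```typescript
// Sketch: drop bullets the LLM repeated across sections, wrap long lines at 96 chars.
function formatAssistantResponse(raw: string, width = 96): string {
  const seenBullets = new Set<string>();
  const out: string[] = [];

  for (const line of raw.split('\n')) {
    const trimmed = line.trim();
    // Silently skip a bullet we have already printed, regardless of section.
    if (trimmed.startsWith('- ')) {
      const key = trimmed.toLowerCase();
      if (seenBullets.has(key)) continue;
      seenBullets.add(key);
    }
    out.push(...wrapLine(trimmed, width));
  }
  return out.join('\n');
}

// Greedy word wrap: split a line into chunks no longer than `width` characters.
function wrapLine(line: string, width: number): string[] {
  if (line.length <= width) return [line];
  const words = line.split(' ');
  const lines: string[] = [];
  let current = '';
  for (const word of words) {
    if (current && (current + ' ' + word).length > width) {
      lines.push(current);
      current = word;
    } else {
      current = current ? current + ' ' + word : word;
    }
  }
  if (current) lines.push(current);
  return lines;
}
```

Comparing bullets case-insensitively catches near-duplicates like "- Review the PR" and "- review the PR", which LLMs produce surprisingly often.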
What I Learned
Nango is genuinely easy to set up. I came into this build with a specific reason for using Nango that had nothing to do with evaluating it as a tool. But once I was inside it, the setup surprised me. All I needed was my client ID and client secret for each integration, and the dashboard walked me through the rest. No custom OAuth logic, no token management code, no refresh handling. It was smooth in a way I did not expect.
Prompt engineering is where the real work is. The Nango integration and the API fetching came together relatively quickly. The part that took the most iteration was fine-tuning the prompt and shaping the data before it reached the LLM. Getting the model to be specific rather than generic, to lead with what was new, to avoid repeating items across sections — that required deliberate prompt design and several passes at the input structure. If you are building something similar, budget more time for this than you think you need.
The tool taught me what I actually want next. Building something you use yourself is a good way to find out what is missing. A few things became clear while using it:
- The priority scoring is static. It does not learn from what I actually pay attention to. A smarter version would be more sensitive to context and avoid surfacing things I have already dismissed
- Being able to perform actions from the same place, like replying to an email or marking a notification as read, would close the loop the tool currently leaves open
- A do-not-disturb mode where you can mute certain types of notifications for a set period would make it more respectful of focus time
- Running inside the IDE rather than a separate terminal would fit the developer workflow better
- Real-time data fetching rather than on-demand runs would make it more useful throughout the day
None of these are blockers for the current version. But they are the natural next layer for anyone who wants to take this further.
What's Next
This version is a working CLI tool that fetches real data, scores it, and returns a useful summary. But there are clear directions it could grow in.
A UI layer. The CLI output works but it is text in a terminal. A simple web UI would let you see your GitHub notifications and Gmail messages as cards, with action buttons attached: mark as read, archive, flag for follow-up. That closes the loop the current version leaves open.
Action execution via Nango. The same nango.get() pattern that fetches data works in reverse for writing. nango.patch() and nango.post() can mark GitHub notifications as read, archive Gmail messages, or reply to threads, all without touching the OAuth layer. Adding actions is a natural extension of what is already there.
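To make that concrete, here is a sketch of what the write-side requests could look like. The endpoint paths are the standard GitHub and Gmail REST routes, but the provider config keys and connection ID below are placeholders, and the exact request shape Nango's proxy expects should be checked against its docs:

```typescript
// Assumed shape of a Nango proxy request (what nango.patch()/nango.post() would receive).
interface ProxyRequest {
  endpoint: string;
  providerConfigKey: string;
  connectionId: string;
  data?: unknown;
}

// Mark a GitHub notification thread as read: PATCH /notifications/threads/{id}
function markThreadReadRequest(threadId: string): ProxyRequest {
  return {
    endpoint: `/notifications/threads/${threadId}`,
    providerConfigKey: 'github',         // assumed config key
    connectionId: 'my-connection-id',    // assumed connection ID
  };
}

// Archive a Gmail message by removing the INBOX label.
function archiveEmailRequest(messageId: string): ProxyRequest {
  return {
    endpoint: `/gmail/v1/users/me/messages/${messageId}/modify`,
    providerConfigKey: 'google-mail',    // assumed config key
    connectionId: 'my-connection-id',
    data: { removeLabelIds: ['INBOX'] },
  };
}

// Usage (not executed here):
//   await nango.patch(markThreadReadRequest('12345'));
//   await nango.post(archiveEmailRequest('msg001'));
```

The appeal is the symmetry: reads and writes go through the same proxy, so adding actions reuses the OAuth setup that already exists.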
More integrations. Nango supports 700+ APIs. Adding Slack, Linear, or Jira would follow the same pattern as GitHub and Gmail: a new provider config key, a new connection ID, and a fetch function that fits into the existing flow. The architecture is already set up for it.
An IDE extension. Developers live in their editors. A VS Code or Cursor extension that surfaces the same digest inside the IDE without switching context would be a better fit for the workflow this tool is trying to support.
A do-not-disturb mode. Sometimes you need to focus without any interruptions. A configurable mute window, similar to DND on your phone, would let you suppress certain notification types for a set period. The tool should be useful without being another source of noise.
The full source is available at github.com/techsplot/dev-assistant.



