
Smart Doctor: AI-Powered Medical Assistant with Human-in-the-Loop Access Control using Permit.io - Permissions Redefined

This is a submission for the Permit.io Authorization Challenge: AI Access Control

What I Built

Smart Doctor is an AI-powered medical assistant that lets patients enter their symptoms and receive an instant diagnosis and treatment suggestions generated with OpenAI (ChatGPT). But we didn’t stop there: the app is designed to mirror real-world workflows with built-in authorization controls, ensuring AI suggestions are always reviewed by a licensed doctor before they reach the patient.

It’s more than a demo: it’s a realistic, scalable solution that demonstrates responsible AI integration with fine-grained role-based (RBAC) and attribute-based (ABAC) access control using Permit.io.

Demo

🧪 Live Frontend: https://rainbow-parfait-febebb.netlify.app/auth/login
⚙️ Live Backend API: https://smart-doctor-backend.onrender.com/api/

Test Credentials:

| Role    | Username | Password         |
| ------- | -------- | ---------------- |
| Admin   | admin    | 2025DEVChallenge |
| Doctor  | doctor   | 2025DEVChallenge |
| Patient | patient  | 2025DEVChallenge |

Project Repo

🔗 GitHub: https://github.com/sumankalia/smart-doctor

🛡️ Feature Access Table

Feature Access control

🛡️ Authorization for AI Applications with Permit.io

In Smart Doctor, authorization is more than just frontend role checks — it's enforced through centralized policies with Permit.io, tightly integrated into our backend system. Here's how we leveraged Permit to secure our AI-driven healthcare platform:

✅ Real-Time User Sync with Permit

As soon as a user is created in MongoDB, we sync them with Permit in real time using:

await permitInstance.api.syncUser({
  key: user._id.toString(),
  first_name: user.firstName,
  last_name: user.lastName || ".",
  email: user.email,
});

This ensures every user exists in the Permit dashboard and can be managed through policies.
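
For context, `permitInstance` itself isn't shown in the post; here is a minimal sketch of how it might be created with the `permitio` Node SDK. The environment variable names are assumptions, not taken from the project.

```javascript
// Minimal sketch: creating the Permit client used above as permitInstance.
// PERMIT_API_KEY and PERMIT_PDP_URL are assumed environment variable names.
const { Permit } = require("permitio");

const permitInstance = new Permit({
  // Address of the Policy Decision Point (Permit's cloud PDP or a local PDP container)
  pdp: process.env.PERMIT_PDP_URL || "https://cloudpdp.api.permit.io",
  // API key generated from the Permit.io dashboard
  token: process.env.PERMIT_API_KEY,
});

module.exports = permitInstance;
```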

🔐 Automatic Role Assignment
After syncing, we assign roles dynamically via:

await permitInstance.api.assignRole({
  role: roleMap.Doctor, // or Admin, Patient
  tenant: "default",
  user: user._id.toString(),
});

This lets us externalize access logic entirely — users can only see or perform actions based on their role and the resource state (e.g., AI diagnosis approved or not).
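
The `roleMap` referenced above isn't shown in the post either; a plausible shape, assuming the role keys in the Permit dashboard are lower-case, would be:

```javascript
// Assumed mapping from app role names to Permit role keys.
// The actual keys depend on how the roles were named in the Permit dashboard.
const roleMap = {
  Admin: "admin",
  Doctor: "doctor",
  Patient: "patient",
};

module.exports = roleMap;
```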

Walkthrough

Here’s a step-by-step walkthrough of how Smart Doctor works, showcasing the full user flow and fine-grained access control in action.

1. Patient: Submit a New Case
A logged-in patient starts by submitting symptoms using a simple form.

  • The frontend sends this data to the backend API.
  • The backend calls the OpenAI API to generate a diagnosis and treatment plan.
  • The case is then assigned to the doctor with the lowest workload (see the sketch below).
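
The assignment query itself isn't shown in the post, but a minimal sketch of "lowest workload" with Mongoose might look like the following; the model names, the `assignedDoctor` field, and the open-case statuses are all assumptions.

```javascript
// Sketch: pick the doctor with the fewest open cases.
// User, Case, assignedDoctor, and the status values are assumed names.
async function findLeastBusyDoctor(User, Case) {
  const doctors = await User.find({ role: "doctor" }).select("_id").lean();

  // Count open cases per doctor in a single aggregation pass.
  const counts = await Case.aggregate([
    { $match: { status: { $in: ["pending", "in_review"] } } },
    { $group: { _id: "$assignedDoctor", openCases: { $sum: 1 } } },
  ]);
  const openByDoctor = new Map(counts.map((c) => [String(c._id), c.openCases]));

  // Doctors with no open cases default to 0, so they are picked first.
  doctors.sort(
    (a, b) =>
      (openByDoctor.get(String(a._id)) || 0) -
      (openByDoctor.get(String(b._id)) || 0)
  );

  return doctors.length ? doctors[0]._id : null;
}
```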

Patient Preview

2. AI: Generate Diagnosis + Treatment
Using the submitted symptoms, the backend triggers a call to OpenAI (ChatGPT model) and returns a recommended diagnosis and treatment.

// Build the prompt from the patient's submission
const userQuery = `
  Symptoms: ${symptoms}
  Case Description: ${caseDescription}
  Additional Information: ${additionalInfo}

  Please provide your response in two sections:
  1. Possible Diagnosis
  2. Suggested Treatment Plan
`;

// Ask the ChatGPT model for a diagnosis and treatment plan
const chatCompletion = await openai.chat.completions.create({
  model: process.env.OPENAI_MODEL,
  messages: [
    {
      role: "system",
      content:
        "You are a medical assistant. Analyze the patient's case and provide a professional medical assessment.",
    },
    {
      role: "user",
      content: userQuery,
    },
  ],
  temperature: 0.7,
  max_tokens: 1000,
});

// The raw AI suggestion, stored for doctor review
const aiResponse = chatCompletion.choices[0].message.content;


AI Response

3. Doctor: Review + Approve/Override
Doctors are notified of new cases assigned to them. They can:

  • View AI-generated results
  • Approve the diagnosis/treatment
  • Override and edit them
  • Add additional notes

Permit.io enforces that only doctors assigned to the case have edit permissions on that record.
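
Here is a hedged sketch of what that check can look like with the Permit SDK; the `assigned_doctor_id` and `status` attributes are assumptions about how the `medical_cases` resource could be modelled in the Permit policy.

```javascript
// Sketch: ask Permit whether this doctor may update this specific case.
// The attribute names are assumed; the ABAC policy in Permit would compare
// assigned_doctor_id against the requesting user's key.
async function canDoctorEditCase(permitInstance, doctorId, medicalCase) {
  return permitInstance.check(doctorId.toString(), "update", {
    type: "medical_cases",
    attributes: {
      assigned_doctor_id: medicalCase.assignedDoctor.toString(),
      status: medicalCase.status,
    },
  });
}
```

If the check returns false, the backend responds with a 403, matching the safeguard described later in the walkthrough.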

Dashboard

AI suggested Diagnosis

AI suggested Treatment

4. Admin: Manage Ecosystem (Without Overreach)
Admins can:

  • View all users and cases
  • Reassign doctors
  • Update case status

BUT: Admins cannot modify diagnosis or treatment fields. This safeguard is enforced using ABAC policies in Permit.io.
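
One hedged way to back that policy up on the API layer is to reject any admin update that touches the protected fields; the field names below are assumptions about the case schema.

```javascript
// Sketch: defence-in-depth guard next to the Permit ABAC policy.
// The diagnosis and treatment field names are assumed.
const PROTECTED_AI_FIELDS = ["diagnosis", "treatment"];

function assertAdminUpdateAllowed(role, updatePayload) {
  if (role !== "admin") return;

  const touched = PROTECTED_AI_FIELDS.filter((field) => field in updatePayload);
  if (touched.length > 0) {
    const err = new Error(`Admins cannot modify: ${touched.join(", ")}`);
    err.status = 403; // surfaced as a 403 Forbidden by the error handler
    throw err;
  }
}
```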

Admin dashboard

Admin medical case access

5. Role-Based UI Access
Each user role has a distinct UI:

  • Patients: Can only view their own cases and AI response.
  • Doctors: Can edit AI output only for cases assigned to them.
  • Admins: Full visibility but restricted write access.

Permit.io checks are run before every sensitive action using custom middleware.

// checkPermission(action, resource) returns an Express middleware that asks
// Permit before the route handler runs. "decoded" is the verified JWT payload
// (assumed to be attached to the request by the protect middleware).
const checkPermission = (action, resource) => async (req, res, next) => {
  const decoded = req.user;
  let permitted;

  if (resource === roleMap.Patient || resource === roleMap.Doctor) {
    // Role-scoped check against the "users" resource (ABAC on the role attribute)
    permitted = await permitInstance.check(decoded._id.toString(), action, {
      type: "users",
      attributes: {
        role: resource,
      },
    });
  } else {
    // Plain resource check, e.g. "medical_cases"
    permitted = await permitInstance.check(
      decoded._id.toString(),
      action,
      resource
    );
  }

  if (!permitted) {
    // Unauthorized access triggers a 403 Forbidden from the backend
    return res.status(403).json({ message: "Forbidden" });
  }

  next();
};

// Usage on a route:
checkPermission("create", "medical_test")

6. Edge Cases & Safeguards

  • If the AI call fails, the doctor is prompted to diagnose manually (see the sketch after this list).
  • Patients cannot re-edit submitted cases.
  • Unauthorized access triggers a 403 Forbidden from the backend.
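
A minimal sketch of that AI-failure fallback; the `aiFailed` flag and the error handling are assumptions, not code from the project.

```javascript
// Sketch: if the OpenAI call throws, flag the case for manual diagnosis.
// The aiFailed flag is an assumed field on the case document.
async function generateAiAssessment(openai, userQuery) {
  try {
    const chatCompletion = await openai.chat.completions.create({
      model: process.env.OPENAI_MODEL,
      messages: [{ role: "user", content: userQuery }],
      temperature: 0.7,
      max_tokens: 1000,
    });
    return { aiResponse: chatCompletion.choices[0].message.content, aiFailed: false };
  } catch (err) {
    // The assigned doctor is then prompted to diagnose manually.
    console.error("OpenAI call failed:", err.message);
    return { aiResponse: null, aiFailed: true };
  }
}
```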

AI response

My Journey

The idea was sparked by the growing use of AI in health tech and the need to design systems where AI doesn’t overstep its boundaries. I wanted to explore:

  • How can AI suggest treatment, but still leave the final word to humans?
  • How can we govern AI responses using access policies?
  • How can a real app enforce that patients can’t view unapproved AI content, and admins can’t tamper with diagnoses?

I found Permit.io to be the perfect ABAC solution: it allowed me to define policies externally while keeping the code clean and scalable. The biggest challenge was getting Permit and the AI logic to talk to each other smoothly, but once I modularized permissions into the protect and checkPermission middleware, it all clicked.

Authorization for AI Applications with Permit.io

This project is a practical demonstration of AI Access Control in healthcare:

  • Patients can submit symptoms and view only doctor-approved AI results.
  • Doctors can review, approve, or override AI-generated diagnosis and treatment.
  • Admins can manage users and assignments but can’t modify AI-generated diagnoses or treatments, a deliberate, governance-first design.

How Permit.io Helped

  • Used Permit.io’s ABAC model with resource-level permissions (users, medical_cases).
  • Defined roles (patient, doctor, admin) and enforced actions like read, update, approve, etc.
  • Added checkPermission("update", "medical_case") middleware to secure AI-related endpoints (wired up roughly as sketched below).
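
As a usage sketch, the `protect` and `checkPermission` middleware could be attached to the AI-related routes roughly as below; the route paths, module paths, and handler names are assumptions.

```javascript
// Sketch: Express route wiring for the AI-related endpoints.
// Module paths, route paths, and controller names are assumed.
const express = require("express");
const router = express.Router();

const { protect, checkPermission } = require("../middleware/auth"); // assumed path
const {
  createCase,
  updateCase,
  getCase,
} = require("../controllers/medicalCaseController"); // assumed path

// protect verifies the JWT; checkPermission asks Permit before the handler runs.
router.post("/medical-cases", protect, checkPermission("create", "medical_cases"), createCase);
router.put("/medical-cases/:id", protect, checkPermission("update", "medical_cases"), updateCase);
router.get("/medical-cases/:id", protect, checkPermission("read", "medical_cases"), getCase);

module.exports = router;
```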

By separating AI generation from user access, and wrapping it in Permit.io-driven access gates, the app shows how AI can remain useful without being unchecked.

Special Thanks

Huge thanks to Permit.io for building a developer-friendly access control system and for organizing this hackathon!

Built with ❤️ by Suman Kumar

Top comments (2)

Abhishek Pawar

check the auth, login is not working

Suman Kumar

Thank you for pointing it out, it’s working fine now