The Goal: I wanted an AI assistant that doesn't save my logs to the cloud and that makes use of my own hardware.
It's quite slow for now, but I believe that with further optimization and advances in local silicon, edge AI will inevitably become the next big thing in the near future.
The Tech: I used Flask, TinyLlama, and python-dotenv (to keep secrets out of the code). I also leaned on Claude and Copilot wherever the code was repetitive or beyond my current grasp of Python.
I am a college freshman (about to become a sophomore), but I want merciless feedback. Please give me your honest opinion of this project.
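To illustrate the python-dotenv piece: a minimal sketch of what a password gate could look like when the secret lives in the environment rather than the source. The variable name `APP_PASSWORD` and the helper are illustrative assumptions, not the project's actual code.

```python
# Sketch: the login password comes from an environment variable, which
# python-dotenv can populate from a .env file at startup (load_dotenv()).
# APP_PASSWORD is an assumed name, not the project's own configuration.
import hmac
import os

def check_password(submitted: str) -> bool:
    expected = os.environ.get("APP_PASSWORD", "")
    # hmac.compare_digest performs a constant-time comparison, which
    # avoids leaking hints about the secret through response timing
    return bool(expected) and hmac.compare_digest(submitted, expected)
```

Keeping the secret in `.env` (and `.env` in `.gitignore`) means the repository never contains the password itself.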
A personalized Small Language Model that runs locally on your device's hardware without having to access the internet or the cloud
Y Assistant
A privacy-focused local AI chatbot powered by TinyLlama
Y is a lightweight web-based AI assistant designed to run entirely on your local machine. It features a custom architecture demo, a secure login system, and per-session memory, ensuring that your conversations stay private and contextual. Created by S (my pseudonym).
Features
- Fully Local: No data leaves your machine; inference runs on-device with TinyLlama 1.1B.
- Contextual Memory: Remembers the last 10 messages for a natural conversation flow.
- Secure Access: Protected by a customizable password system.
- Privacy-First: No permanent logs are stored; session data is cleared on request.
- Architecture Demo: Includes a raw GPT-2 initialization script to show how LLMs are structured.
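The contextual-memory feature above can be sketched as a fixed-size window over the conversation. This is a sketch under assumed names (`SessionMemory`, `as_prompt`), not the project's actual implementation.

```python
# Minimal sketch of per-session memory: keep only the last 10 messages.
from collections import deque

class SessionMemory:
    def __init__(self, max_messages: int = 10):
        # deque(maxlen=N) silently drops the oldest entry once full
        self.messages = deque(maxlen=max_messages)

    def add(self, role: str, text: str) -> None:
        self.messages.append({"role": role, "text": text})

    def as_prompt(self) -> str:
        # Flatten the window into a prompt prefix for the model
        return "\n".join(f"{m['role']}: {m['text']}" for m in self.messages)

    def clear(self) -> None:
        # "No permanent logs": wiping the window ends the context
        self.messages.clear()
```

A bounded window like this keeps prompts short enough for a 1.1B model while still giving the conversation continuity.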
Installation
Clone the repository and install the Python dependencies with pip:
```bash
# Clone the repository
git clone https://github.com/sounak1410/Web-Based-Edge-AI-.git

# Enter the directory
cd Web-Based-Edge-AI-

# Install dependencies
pip install -r requirements.txt

# Run the application
python Edge.py
```
Chatting with Y
- Open your browser to http://127.0.0.1:5000.
- Enter the…