Turning an old smartphone into a smart security guard using Kotlin, CameraX, Gemini AI and Telegram
Hello, dev.to!
Before I begin telling you about my project, I want to express my enormous gratitude to all those countless programmers, developers, and enthusiasts who have shared their code, knowledge, and experience on the internet over many years. Your open projects, Stack Overflow answers, articles, tutorials, and forum discussions became the very nurturing environment on which large language models (LLMs) were trained, including the one that helped me. Without your contribution to the common cause, such powerful AI assistants that today so greatly facilitate and accelerate development would simply be impossible. Thank you for making the development world more open and accessible!
The source code of the "Storog" project is fully open and available on GitHub: github.com/premananda108/storog.git
I invite everyone interested to take a look, fork it, and perhaps suggest improvements!
The Idea: Turn a Smartphone into a Smart Guard
It all started with a simple thought: many people have old or unused Android smartphones lying around at home. Why not turn one of them into a smart "guard"? Imagine: you leave home and want to stay informed if something changes in the room — a door opens, a pet appears in a forbidden zone, or, God forbid, someone unauthorized shows up.
Main features I wanted to implement:
- Visual change detection: The app monitors the camera feed and compares it with a reference image
- Configurable sensitivity: The user decides how significant a change should be considered "alarming"
- Intelligent analysis: If a change is detected, AI (Gemini 1.5 Flash) steps in to analyze the image based on a user-defined prompt
- Telegram notifications: A photo of the moment of change and the AI's verdict are sent to a specified Telegram chat
- Ease of use: Clear interface with camera preview and "Start/Stop" buttons
AI Assistant in Action: From "Hello Android" to Working Prototype
The most interesting part of this story is that all the code was written by an AI assistant (Gemini 2.5 Pro Preview in Trae IDE). I acted as a product manager, formulating requirements and testing the results.
The development process looked like this:
1. Project Start
Created an empty project in Android Studio (minSDK 26, Kotlin, Jetpack Compose) and let the AI study the structure.
2. First Feature — Camera Display
My request: "Need to display camera image in MainActivity.
Let it take up the upper third of the activity.
Just implement camera work in a separate file"
The AI suggested creating `CameraView.kt` and integrating it into `MainActivity`.
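For reference, a minimal CameraX preview wrapped for Compose looks roughly like this. This is a sketch of what `CameraView.kt` might contain, not the exact code from the repo:

```kotlin
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.platform.LocalLifecycleOwner
import androidx.compose.ui.viewinterop.AndroidView
import androidx.core.content.ContextCompat

// Sketch: a CameraX PreviewView embedded in Compose via AndroidView.
@Composable
fun CameraView(modifier: Modifier = Modifier) {
    val lifecycleOwner = LocalLifecycleOwner.current
    AndroidView(
        modifier = modifier,
        factory = { context ->
            PreviewView(context).also { previewView ->
                val providerFuture = ProcessCameraProvider.getInstance(context)
                providerFuture.addListener({
                    val cameraProvider = providerFuture.get()
                    // Route camera frames to the on-screen preview surface
                    val preview = Preview.Builder().build().apply {
                        setSurfaceProvider(previewView.surfaceProvider)
                    }
                    cameraProvider.unbindAll()
                    cameraProvider.bindToLifecycle(
                        lifecycleOwner,
                        CameraSelector.DEFAULT_BACK_CAMERA,
                        preview
                    )
                }, ContextCompat.getMainExecutor(context))
            }
        }
    )
}
```

The composable can then be placed in `MainActivity` with a modifier that constrains it to the upper third of the screen.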
3. Permission Debugging
Of course, not everything went smoothly on the first try:
- 🐛 Problem: No camera image
- 🔍 AI Query: "No camera image"
- ✅ Solution: the AI found the issue — a missing permission in `AndroidManifest.xml` — and added `<uses-permission android:name="android.permission.CAMERA" />`
- 🐛 Next problem: the image appeared only after an app restart
- 🔍 Query: "Why didn't the camera image display immediately after enabling access?"
- ✅ Solution: the AI suggested using `mutableStateOf` at the `MainActivity` class level and updating it in the `registerForActivityResult` callback
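That fix can be sketched as follows: a Compose state flag flips as soon as the user grants the permission, which triggers recomposition immediately instead of requiring a restart. Names like `PermissionRationale()` are placeholders of my own, not the repo's exact code:

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.activity.result.contract.ActivityResultContracts
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.setValue
import androidx.core.content.ContextCompat

class MainActivity : ComponentActivity() {
    // Class-level Compose state: UI recomposes whenever this changes
    private var hasCameraPermission by mutableStateOf(false)

    private val permissionLauncher =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            hasCameraPermission = granted // preview appears immediately, no restart
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        hasCameraPermission = ContextCompat.checkSelfPermission(
            this, Manifest.permission.CAMERA
        ) == PackageManager.PERMISSION_GRANTED

        setContent {
            if (hasCameraPermission) CameraView() else PermissionRationale()
        }

        if (!hasCameraPermission) {
            permissionLauncher.launch(Manifest.permission.CAMERA)
        }
    }
}
```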
4. Adding Image Comparison Logic
Request: "Add a new file with a function to compare two camera images. This function should return the difference magnitude in percentage."

This is how `ImageComparator.kt` appeared, with its `calculateDifferencePercentage()` function:
```kotlin
import android.graphics.Bitmap
import android.graphics.Color
import android.util.Log
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import kotlin.math.abs

// Note: the sections elided in the original listing are filled in here
// with a straightforward per-pixel diff, as a plausible reconstruction.
suspend fun calculateDifferencePercentage(
    bitmap1: Bitmap,
    bitmap2: Bitmap,
    resizeWidth: Int? = 100, // reduce for speed
    convertToGrayscale: Boolean = true
): Double = withContext(Dispatchers.Default) {
    try {
        // Bring both frames to the same (small) size before comparing
        val width = resizeWidth ?: bitmap1.width
        val aspectRatio = bitmap1.height.toDouble() / bitmap1.width
        val height = (width * aspectRatio).toInt().coerceAtLeast(1)
        val b1 = Bitmap.createScaledBitmap(bitmap1, width, height, true)
        val b2 = Bitmap.createScaledBitmap(bitmap2, width, height, true)

        // Accumulate per-pixel differences
        var diffSum = 0L
        for (y in 0 until height) {
            for (x in 0 until width) {
                val p1 = b1.getPixel(x, y)
                val p2 = b2.getPixel(x, y)
                diffSum += if (convertToGrayscale) {
                    val g1 = (Color.red(p1) + Color.green(p1) + Color.blue(p1)) / 3
                    val g2 = (Color.red(p2) + Color.green(p2) + Color.blue(p2)) / 3
                    abs(g1 - g2).toLong()
                } else {
                    (abs(Color.red(p1) - Color.red(p2)) +
                     abs(Color.green(p1) - Color.green(p2)) +
                     abs(Color.blue(p1) - Color.blue(p2))).toLong()
                }
            }
        }
        // Normalize to a 0–100 percentage of the maximum possible difference
        val channels = if (convertToGrayscale) 1 else 3
        diffSum * 100.0 / (width.toLong() * height * channels * 255)
    } catch (e: Exception) {
        Log.e("ImageComparator", "Error comparing images", e)
        0.0
    }
}
```
5. Monitoring Implementation
Request: "Add a 'Start' button to MainActivity, when pressed it should
remember the initial image and then every 5 seconds get a new image
and compare it with the initial one"
The AI generated the corresponding code using coroutines (`lifecycleScope.launch`) and `delay`.
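The monitoring loop from this step can be sketched like so. This lives inside `MainActivity`; the function and parameter names here are illustrative, not necessarily those in the Storog source:

```kotlin
import android.graphics.Bitmap
import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.Job
import kotlinx.coroutines.delay
import kotlinx.coroutines.isActive
import kotlinx.coroutines.launch

// Inside MainActivity (sketch)
private var monitoringJob: Job? = null

fun startMonitoring(
    captureFrame: suspend () -> Bitmap, // grabs the current camera frame
    onDifference: (Double) -> Unit      // update UI / trigger AI analysis
) {
    monitoringJob = lifecycleScope.launch {
        val reference = captureFrame()  // remember the initial image
        while (isActive) {
            delay(5_000)                // every 5 seconds
            val current = captureFrame()
            onDifference(calculateDifferencePercentage(reference, current))
        }
    }
}

fun stopMonitoring() {
    monitoringJob?.cancel()             // the "Stop" button simply cancels the job
}
```

Cancelling the coroutine's `Job` is what makes the "Stop" button trivial to implement.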
The process continued iteratively: I described the needed functionality (settings, Gemini prompt input, Telegram sending), the AI proposed code, I integrated it, tested it, and if something went wrong — I turned to the AI again with a problem description or error logs.
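For the Gemini analysis step mentioned above, a call with the official Kotlin SDK (`com.google.ai.client.generativeai`) looks roughly like this. The model name and where the API key and prompt come from are my assumptions, not confirmed details of the repo:

```kotlin
import android.graphics.Bitmap
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content

// Sketch: send the captured frame plus the user's prompt to Gemini
// and return its textual verdict.
suspend fun analyzeFrame(frame: Bitmap, userPrompt: String, apiKey: String): String {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash",
        apiKey = apiKey
    )
    val response = model.generateContent(
        content {
            image(frame)      // the frame that triggered the difference threshold
            text(userPrompt)  // e.g. "Is there a person in the room? Answer Yes or No first."
        }
    )
    return response.text.orEmpty()
}
```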
What We Got: Key Features of Storog
Currently, the app can:
✅ Display real-time camera preview
✅ Start and stop monitoring with one button
✅ Capture reference frame when monitoring starts
✅ Compare current frame with reference and show difference percentage
✅ Configure trigger threshold (sensitivity from 0% to 100%)
✅ Analyze images through Gemini AI based on user prompt
✅ Send Telegram notifications with photo and analysis result
✅ Filter false positives (doesn't send if AI response starts with "No")
✅ Store settings between app launches
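The sensitivity threshold and the "starts with No" false-positive filter from the list above boil down to a small pure function. This is a sketch with names of my own choosing, not necessarily those used in the Storog source:

```kotlin
// Decide whether an alert should be sent to Telegram.
fun shouldAlert(
    differencePercent: Double, // result of the frame comparison
    thresholdPercent: Double,  // user-configured sensitivity (0–100)
    aiVerdict: String          // Gemini's answer to the user's prompt
): Boolean {
    // Change too small: don't analyse or notify.
    if (differencePercent < thresholdPercent) return false
    // Filter false positives: a verdict starting with "No" suppresses the alert.
    return !aiVerdict.trim().startsWith("No")
}
```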
Tech Stack
- Language: Kotlin
- UI: Jetpack Compose
- Camera: CameraX
- Async: Kotlin Coroutines
- AI: Google Gemini API (`com.google.ai.client.generativeai`)
- Network: Ktor Client (for Telegram API)
- Settings Storage: SharedPreferences
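On the networking side, sending the alert photo through the Telegram Bot API's `sendPhoto` method with Ktor can be sketched like this. The helper names and multipart details are my assumptions; the real bot token and chat id come from the app settings:

```kotlin
import io.ktor.client.HttpClient
import io.ktor.client.request.forms.formData
import io.ktor.client.request.forms.submitFormWithBinaryData
import io.ktor.http.Headers
import io.ktor.http.HttpHeaders

// Telegram Bot API endpoint for sending a photo.
fun buildSendPhotoUrl(botToken: String): String =
    "https://api.telegram.org/bot$botToken/sendPhoto"

// Sketch: upload the JPEG snapshot with the AI verdict as the caption.
suspend fun sendTelegramPhoto(
    client: HttpClient,
    botToken: String,
    chatId: String,
    caption: String,
    jpegBytes: ByteArray
) {
    client.submitFormWithBinaryData(
        url = buildSendPhotoUrl(botToken),
        formData = formData {
            append("chat_id", chatId)
            append("caption", caption)
            append("photo", jpegBytes, Headers.build {
                append(HttpHeaders.ContentType, "image/jpeg")
                append(HttpHeaders.ContentDisposition, "filename=\"snapshot.jpg\"")
            })
        }
    )
}
```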
Working with AI Assistant: Pros and Pitfalls
🚀 Pros:
Speed: Code generation for standard operations happens very quickly. What used to take hours of documentation study now gets solved in minutes.
Learning: AI often suggests solutions I might not know about or forget. It's a great way to learn new approaches and libraries.
Reduced Routine: Less time spent writing boilerplate — permissions, CameraX setup, SharedPreferences work, etc.
Debugging Help: AI handles error message analysis pretty well and suggests specific fixes.
⚠️ Cons and Considerations:
Context: AI doesn't always perfectly understand the overall project context. You have to break down tasks and give very clear, unambiguous requests.
Not a Replacement for Understanding: AI is a tool. To use it effectively and fix its mistakes, you need to understand the technologies yourself.
Iterative Nature: AI rarely produces the perfect solution on the first try. It's usually a dialogue: request → code → testing → clarification → corrected code.
Practical Tips for Working with AI in Development
1. Be specific in requests: instead of "make camera", say "add camera preview to the upper third of the screen using CameraX"
2. Break down tasks: five small requests work better than one big one
3. Always test the code: AI can make mistakes, especially in API details
4. Use context: tell the AI about your project structure
5. Don't hesitate to ask again: if something's unclear, ask for an explanation
Current Status and Development Plans
"Storog" is still a prototype, but already quite functional. Plans include:
- Background operation via Service
- Extended monitoring scheduling (time intervals)
Conclusion
Creating "Storog" with an AI assistant was an interesting and productive experience. It's not a magic "do everything for me" button, but rather a very powerful "pair programmer" that can handle routine tasks and help move toward the goal faster.
Main takeaway: AI significantly lowers the barrier to entry in development and allows you to focus on the idea rather than technical implementation details.
If you have an old Android smartphone and the desire to experiment, give Storog a try! The source code is fully open: github.com/premananda108/storog.git
I'd appreciate your stars ⭐, forks 🍴, and improvement suggestions!
What do you think about development with AI assistants? Share your experience in the comments!