Priya Negi

How to Turn Old Android Phones into AI Agents (Complete Guide)

What if that old device in your drawer could be transformed into a functional AI agent — capable of executing tasks autonomously? Thanks to emerging open-source tools like OpenClaw, Droidrun, and cloud services like Mobilerun.ai, this idea is now real. You can turn a forgotten Android phone into an intelligent automation node that executes workflows, interacts with apps, scrapes data, and acts as a personal AI assistant.

What It Means to Turn a Phone into an AI Agent

An AI agent, in this context, is an LLM-powered system that can understand natural language commands, interact with a real mobile device, and complete multi-step tasks autonomously.

Traditional automation scripts rely on brittle selectors or hard-coded input sequences. Modern AI agents, on the other hand, can see the device screen, reason about the UI, and decide on the next action based on intent and context.

Old mobile phones are a natural target for this: they are inexpensive, have decent CPUs, and run Android versions that support remote-control interfaces.

Instead of collecting dust, these phones can become always-on assistants, running tasks around the clock.

Skip the API and use the screen

Most automation tools assume the app you want to automate has an API. Most apps don't.

For example, your Uber app? No public API. WhatsApp? The business API costs money, requires approval, and can't do half of what the actual app does. Your regional food delivery service? Your banking app? Your transit app? None of them has an API.

But here's the thing: every single one of these apps has a screen. Buttons you can tap, text you can read, fields you can type into. That screen is a universal interface, and it's the exact same interface a human uses.

The idea behind Droidrun is simple: instead of calling an API, the agent looks at the screen and decides what to do, how to scroll, and where to tap, exactly like you would.

Sounds simple, right? But making it actually work is where it gets interesting.

How it works: accessibility trees + LLMs

Android has a built-in accessibility system designed to help screen readers describe what's on the screen. This accessibility tree contains every UI element (like buttons, text fields, labels, checkboxes) in a structured format.
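To get a feel for what the agent sees, you can inspect this tree yourself: on a device connected over ADB, `adb shell uiautomator dump` writes the current UI hierarchy as XML. A minimal sketch of flattening such a dump into compact lines an LLM can read (the sample XML below is illustrative, not from a real dump):

```python
# Flatten a uiautomator-style UI dump into one line per interesting element.
import xml.etree.ElementTree as ET

SAMPLE_DUMP = """
<hierarchy>
  <node class="android.widget.FrameLayout" bounds="[0,0][1080,1920]">
    <node class="android.widget.Button" text="Search" clickable="true"
          bounds="[900,50][1050,150]"/>
    <node class="android.widget.EditText" text="" clickable="true"
          bounds="[40,50][880,150]"/>
    <node class="android.widget.TextView" text="lofi hip hop radio"
          clickable="false" bounds="[40,200][1040,300]"/>
  </node>
</hierarchy>
"""

def summarize(xml_text):
    """Return one line per visible/interactive node: class, text, bounds."""
    root = ET.fromstring(xml_text)
    lines = []
    for i, node in enumerate(root.iter("node")):
        text = node.get("text", "")
        clickable = node.get("clickable") == "true"
        if text or clickable:  # skip invisible layout containers
            lines.append(f"[{i}] {node.get('class')} "
                         f"text={text!r} clickable={clickable} "
                         f"bounds={node.get('bounds')}")
    return lines

for line in summarize(SAMPLE_DUMP):
    print(line)
```

A compact listing like this, rather than raw XML, is what you'd paste into the LLM prompt: it keeps the token count low while preserving everything the model needs to pick a target element.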

The Droidrun agent reads this tree, sends it to an LLM (GPT, Claude, or Gemini), and asks: "Here's what's on the screen. What should I do next?" The LLM responds with a thought process and an action. For example:

think: i'm on the home screen. launching youtube.
action: launch (842ms)

--- step 2/30 ---
think: youtube is open. tapping the search icon.
action: tap (623ms)

--- step 3/30 ---
think: search field focused.
action: type "lofi hip hop" (501ms)
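Conceptually, the loop behind a trace like this is observe, think, act, repeat. A stubbed sketch of that loop (`get_tree`, `ask_llm`, and the scripted decisions are illustrative stand-ins, not Droidrun's actual API):

```python
# A stubbed observe-think-act loop in the style of the trace above.
# get_tree / ask_llm are illustrative stand-ins, not real Droidrun APIs.

MAX_STEPS = 30

def get_tree(state):
    return state["screen"]  # in reality: read the accessibility tree

def ask_llm(tree, goal):
    """Stand-in for the LLM call: map (screen, goal) to the next action."""
    script = {
        "home":    {"thought": "launching youtube", "action": ("launch", "youtube"), "next": "youtube"},
        "youtube": {"thought": "tapping search",    "action": ("tap", "search_icon"), "next": "search"},
        "search":  {"thought": "typing query",      "action": ("type", "lofi hip hop"), "next": "done"},
    }
    return script[tree]

def run_agent(goal):
    state = {"screen": "home"}
    actions = []
    for step in range(1, MAX_STEPS + 1):
        tree = get_tree(state)
        if tree == "done":
            break
        decision = ask_llm(tree, goal)
        print(f"--- step {step}/{MAX_STEPS} ---")
        print("think:", decision["thought"])
        print("action:", decision["action"])
        actions.append(decision["action"])
        state["screen"] = decision["next"]  # in reality: the device changes
    return actions

run_agent("play lofi hip hop on youtube")
```

The key property is that nothing here is keyed to pixel coordinates or view IDs: every step re-reads the current state before deciding, which is what lets a real agent survive layout changes.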

The agent is not a brittle script that breaks when an app updates its layout. It reasons about what it sees in real time: if a button moved or a modal bottom sheet popped up, the agent adapts, because it's reading the current state rather than following a hardcoded script.

When the accessibility tree is empty (games, WebViews, screens with custom views), the agent falls back to vision, analyzing screenshots instead.
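The fallback decision itself can be simple: if the tree has nothing useful, capture a screenshot and hand the pixels to a multimodal model instead. A sketch under that assumption (the screenshot bytes and the downstream model call are placeholders):

```python
# Choose between accessibility-tree mode and vision mode.
import base64

def observe(tree_nodes, screenshot_png: bytes):
    """Prefer the structured tree; otherwise return a base64 screenshot
    in the data-URL form many multimodal LLM APIs accept as image input."""
    if tree_nodes:  # structured UI available
        return {"mode": "accessibility", "payload": tree_nodes}
    # Empty tree (game, WebView, custom views): send pixels instead.
    encoded = base64.b64encode(screenshot_png).decode("ascii")
    return {"mode": "vision", "payload": f"data:image/png;base64,{encoded}"}

print(observe(["Button 'Search'"], b"")["mode"])   # structured path
print(observe([], b"fake png bytes")["mode"])      # vision fallback
```

Vision mode is slower and costs more tokens, which is why the structured tree is preferred whenever it's populated.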

Why old phones are perfect for this

First, they're already paid for, so running them costs you nothing extra. Second, nobody is using them, so an agent can have the device to itself around the clock.

Plug one into a charger, connect it to Wi-Fi, and you've got an always-on device running automated tasks 24/7.
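An always-on phone invites scheduled jobs: check prices every morning, summarize messages every evening. A tiny stdlib-only helper for that kind of daily schedule (the times are examples):

```python
# Sleep-until-next-run helper for daily jobs on an always-on device.
from datetime import datetime, timedelta

def seconds_until(hour, minute, now):
    """Seconds from `now` until the next occurrence of hour:minute."""
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # already passed today: run tomorrow
    return (target - now).total_seconds()

now = datetime(2024, 6, 1, 9, 30)
print(seconds_until(10, 0, now))  # 1800.0 — half an hour away
print(seconds_until(9, 0, now))   # 84600.0 — tomorrow morning
```

In a real deployment you'd wrap this in a loop: sleep until the next slot, fire a task at the device, repeat.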

The complete setup guide

Option 1: Droidrun Framework (open-source, self-hosted)

If you're a developer comfortable connecting a device over ADB, try the Droidrun framework. Droidrun is an open-source mobile AI agent framework with 7.8K+ GitHub stars, built for developers who want full control.

For the detailed getting-started process, see the Droidrun docs: https://droidrun.ai/

Option 2: Mobilerun Cloud (managed, no ADB needed)

No terminal required. Works with your personal phone.

  1. Create account at cloud.mobilerun.ai
  2. Install Portal APK on your phone (droidrun.ai/portal)
  3. Enable Accessibility permission and notifications
  4. Get API key from cloud.mobilerun.ai/api-keys (starts with dr_sk_)
  5. Long-press "Connect to Mobilerun" in Portal, paste key
  6. Run tasks from the Playground or API:
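A sketch of what an API call might look like. Note: the endpoint path and payload fields below are assumptions for illustration; check the Mobilerun docs for the real schema.

```python
# Sketch of submitting a task to a device-automation cloud API.
# The endpoint path and payload shape are assumptions for illustration;
# consult the Mobilerun API docs for the real schema.
import json
import urllib.request

API_KEY = "dr_sk_your_key_here"  # from cloud.mobilerun.ai/api-keys

def build_request(task: str):
    body = json.dumps({"task": task}).encode("utf-8")
    return urllib.request.Request(
        "https://cloud.mobilerun.ai/api/tasks",  # hypothetical endpoint
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("open Settings and tell me the Android version")
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req) would actually submit the task.
```

The request is only constructed here, not sent, so you can verify the auth header and payload before pointing it at a live account.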

Watch full setup video: Watch Here

Option 3: Mobilerun + OpenClaw (AI agent with skills)

The most powerful setup: chain phone control with web search, calendars, and messaging. Here's how to set it up.

Install Mobilerun skills in OpenClaw

Get Your Mobilerun API Key

Go to https://cloud.mobilerun.ai/api-keys

Click New Key, name it, and copy it — keys start with dr_sk_

Configure the API key in the OpenClaw config file, or export it in your shell config

Connect your Android Device

Run OpenClaw with a task to automate something on your device.

Watch full setup video: Niels video link here

Now you can say: "open Instagram and like the latest post from @natgeo" or "check my WhatsApp messages and summarize them"

Getting started today

Whichever option you choose, getting started takes just a few steps.

Find your old phone (check that drawer)

Charge it up, connect to Wi-Fi

Pick your approach:

Developer → Droidrun (free, open-source)

Managed → Mobilerun Cloud (Portal app, no ADB)

AI assistant → OpenClaw + Mobilerun skill

Start simple: "open Settings and tell me the Android version"

Build up: chain tasks, schedule automations, go wild

That phone in your drawer has been waiting. Give it something to do.
