Hands-on experience building a ‘Tool’ for watsonx Orchestrate
TL;DR — What is watsonx Orchestrate?
There is plenty of official and unofficial documentation about watsonx Orchestrate on the net, and I'm not going to extract excerpts from each of them. All the necessary official IBM watsonx Orchestrate documentation, including the core product guides and the developer kit, is consolidated in the Links section at the end for direct access.
IBM watsonx Orchestrate is an intelligent, AI-powered automation platform designed to streamline and simplify complex business operations. It acts as a central hub for building, deploying, and managing autonomous AI agents that can seamlessly interact with a wide array of enterprise applications and data sources. Leveraging generative AI and large language models, the platform allows users — from business analysts to professional developers — to define, connect, and orchestrate these agents to handle repetitive, multi-step tasks across domains like HR, finance, and customer service, ultimately driving operational efficiency and accelerating business growth. It can be adapted easily to different businesses. One can choose the foundation models and automation tools they prefer, orchestrate multiple AI agents in a governed environment, and scale seamlessly across hybrid cloud and on-prem systems without rip-and-replace. Business users can create agents through an intuitive no-code experience, while developers extend capabilities with a full Agent Development Kit (ADK). Both work in the same trusted environment with real-time observability and guardrails.
Having said all the above… you can use the out-of-the-box tools and the GUI, and build your agents with simple clicks…
Or, if you are a bit more into coding and want to build your own specific tools, you can develop ad-hoc solutions using the developer kit!
The Agent Development Kit (ADK) gives you a set of developer-focused tools to build, test, and manage agents in watsonx Orchestrate. With the ADK, you take full control of agent design using a lightweight framework and a simple CLI. Define agents in clear YAML or JSON files, create custom Python tools, and manage the entire agent lifecycle with just a few commands. This guide walks you through installing the ADK, setting up your local development environment, and deploying your first agent to a watsonx Orchestrate SaaS instance. Start building flexible, reusable AI agents right away.
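To make "clear YAML files" concrete, here is a minimal sketch of what a native agent definition can look like. The exact schema evolves between ADK releases, and the agent name, tool name, and model ID below are placeholders of my own, so treat this as illustrative and check the ADK documentation for the authoritative fields:

```yaml
# my_agent.yaml — illustrative sketch of a native agent definition
spec_version: v1
kind: native
name: my_first_agent          # placeholder name
description: >
  A simple agent that answers questions and can call custom Python tools.
llm: watsonx/meta-llama/llama-3-2-90b-vision-instruct   # placeholder model ID
instructions: >
  You are a helpful assistant. Use the available tools when the user
  asks for information they provide access to.
tools:
  - my_custom_tool            # placeholder: a tool imported via the CLI
```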
OK, now let’s jump 🪂 into something more precise and concise…
Sample Tool and Motivation
In a recent implementation leveraging IBM watsonx Orchestrate, a core requirement was enabling an AI agent to interface directly with the client's Google Drive (GDrive) environment. This integration can be achieved efficiently using the platform's standard Tool Catalog, which provides pre-built APIs and connectors for the comprehensive Google Suite. This streamlined approach allows the agent(s) to utilize native Google Drive functionalities seamlessly within the orchestrated workflow.
However, if there are specific tasks to be done, we can easily develop a tool and integrate it into the platform thanks to the ADK.
We can follow the sample from the ADK to inspire our code (once all requirements are set).
# test_tool.py
from ibm_watsonx_orchestrate.agent_builder.tools import tool

@tool()
def my_tool(input: str) -> str:
    """Executes the tool's action based on the provided input.

    Args:
        input (str): The input of the tool.

    Returns:
        str: The action of the tool.
    """
    # functionality of the tool
    return f"Hello, {input}"
And then:
orchestrate tools import -k python -f test_tool.py -r requirements.txt -a app1 -a app2
Now let's look at the case of a simple application which reads the contents of a Google Drive (in this case, my personal drive is used). The first step is to prepare, in advance, the required setup on Google Cloud Platform (GCP) to be able to access an account.
🔑 TL;DR — Google Cloud Console Setup for Drive API Access
If you already know this part, skip it. I had to discover the process myself!
This section outlines the mandatory steps to configure your Google Cloud Project to obtain the Client ID and Client Secret necessary for the Python script to access your Google Drive via the OAuth 2.0 flow.
- Create a New Project & Enable API: The first step is to ensure you have a Google Cloud Project and that the Google Drive API is enabled for it.
- Configure the OAuth Consent Screen
Basic App Information
- Navigate to APIs & Services > OAuth consent screen.
- Select “External” for the User Type (unless you are using a Google Workspace account, in which case you can choose “Internal”).
- Click “Create”.
- Fill in the required fields under the App registration step:
- App name: (e.g., Drive Inventory Script)
- User support email: (Your email address)
- Developer contact information: (Your email address)
Define Scopes (Permissions)
- In the Scopes section, click “Add or Remove Scopes”.
- Search for and select the following scope:
- .../auth/drive.readonly (Allows the script to read file metadata and content, but not to modify or delete anything).
Set Publishing Status
- On the Summary page, check the Publishing status. It will likely be “Testing”.
- If the status is “Testing”, you must add the Google account you used to run the script as a Test User:
- Go to the Test users section.
- Click “Add Users” and enter the exact email address you used to log in when running the Python script.
Create Credentials (Client ID & Secret - OAuth Client ID)
- Navigate to APIs & Services > Credentials.
- Click “Create Credentials” and select “OAuth client ID”.
- Application type: Select Web application. (This is required because the Python library opens a local web server for the OAuth flow).
- Name: (e.g., Local Python Client)
Register Redirect URIs (Crucial Step!)
This setting tells Google where to send the authentication code after the user logs in. Since the Python script uses a dynamic port, you must register the generic local callback URIs.
- Under Authorized redirect URIs, click “Add URI” and enter the following three addresses:
- http://localhost
- http://127.0.0.1
- http://localhost:8080 (A common fallback port)
- Click “Create”.
Download Credentials
- A pop-up window will display your Client ID and Client Secret.
- Copy and save these values.
- More importantly, click the “Download JSON” button and save the resulting file (it will be named something like client_secret_....json).
- Rename this file to credentials.json and place it in the same directory as your Python script.
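Before wiring this file into any script, it can save time to sanity-check that the download has the shape the Google client libraries expect: a top-level "installed" or "web" key containing a client_id and a client_secret. A small helper (my own, not part of any Google library):

```python
import json

def looks_like_client_config(cfg: dict) -> bool:
    """Rough structural check for an OAuth client-secrets config."""
    section = cfg.get("installed") or cfg.get("web")
    if not isinstance(section, dict):
        return False
    return "client_id" in section and "client_secret" in section

def check_credentials_file(path: str = "credentials.json") -> bool:
    """Returns True if the file exists and looks like a client-secrets file."""
    try:
        with open(path) as f:
            return looks_like_client_config(json.load(f))
    except (OSError, json.JSONDecodeError):
        return False
```

If `check_credentials_file()` returns False, re-download the JSON from the Credentials page rather than debugging the OAuth flow itself.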
First step (for me) — Generic code to access Google Drive
Below are the steps to build a simple script which can read files on Google Drive and create a Markdown output.
- Prepare the environment 🌳
# requirements.txt
ibm-watsonx-orchestrate
google-api-python-client
google-auth
google-auth-oauthlib
- And the code 👇
import os
import io
import math
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
# Configuration and Constants
SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]  # Read-only scope is sufficient for this inventory script
FOLDER_MIME_TYPE = "application/vnd.google-apps.folder"
ROOT_FOLDER_ID = 'root'
OUTPUT_DIR = "./output"
OUTPUT_FILE = os.path.join(OUTPUT_DIR, "drive_inventory.md")
def get_human_readable_type(mime_type):
    """Converts the technical MIME type into a simple, readable format."""
    if mime_type == FOLDER_MIME_TYPE:
        return "Folder"
    if mime_type == "application/vnd.google-apps.document":
        return "Google Doc"
    if mime_type == "application/vnd.google-apps.spreadsheet":
        return "Google Sheet"
    if mime_type == "application/vnd.google-apps.presentation":
        return "Google Slides"
    if mime_type.startswith("image/"):
        return "Image"
    if mime_type == "application/pdf":
        return "PDF"
    return "Other File"

def format_file_size(size_bytes):
    """Converts bytes to a human-readable string (KB, MB, GB)."""
    if size_bytes is None or size_bytes == 0:
        return "N/A"
    size_bytes = int(size_bytes)
    if size_bytes < 1024:
        return f"{size_bytes} B"
    for unit in ["KB", "MB", "GB", "TB"]:
        size_bytes /= 1024.0
        if size_bytes < 1024:
            return f"{size_bytes:.2f} {unit}"
    return f"{size_bytes:.2f} TB"
def format_access_permissions(permissions):
    """Processes the permissions list into a human-readable string."""
    if not permissions:
        return "Private (Owner Only)"
    access_summary = []
    for p in permissions:
        if p.get("role") == "owner":
            continue
        role = p.get("role", "viewer").capitalize()  # e.g., 'writer' -> 'Writer'
        p_type = p.get("type")
        if p_type == "user" or p_type == "group":
            identifier = p.get("emailAddress", "Unknown User/Group")
        elif p_type == "domain":
            identifier = f"Domain ({p.get('domain', 'N/A')})"
        elif p_type == "anyone":
            identifier = "Public Link"
        else:
            identifier = p_type.capitalize()
        access_summary.append(f"{role}: {identifier}")
    if not access_summary:
        return "Private (Owner Only)"
    return "; ".join(access_summary)
def authenticate():
    """Handles the OAuth2 authentication flow and returns the credentials object."""
    creds = None
    if os.path.exists("token.json"):
        creds = Credentials.from_authorized_user_file("token.json", SCOPES)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            print("Refreshing credentials...")
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                "credentials.json", SCOPES
            )
            print("Starting new authorization flow...")
            creds = flow.run_local_server(port=0)
        with open("token.json", "w") as token:
            token.write(creds.to_json())
        print("Authorization complete. Credentials saved to token.json.")
    return creds
# --- Recursive Listing and Markdown Generation ---
def list_folder_recursively(service, folder_id, inventory_list, depth=0):
    """
    Recursively lists the contents of a folder, requests permissions,
    and appends the data to the inventory_list.
    """
    query = f"'{folder_id}' in parents and trashed=false"
    try:
        response = (
            service.files()
            .list(
                q=query,
                fields="files(id, name, mimeType, size, modifiedTime, webViewLink, permissions)",
                pageSize=100,
            )
            .execute()
        )
        items = response.get("files", [])
        for item in items:
            name = item["name"]
            mime_type = item["mimeType"]
            is_folder = (mime_type == FOLDER_MIME_TYPE)
            # Format data for the table
            link_url = item.get("webViewLink")
            file_type = get_human_readable_type(mime_type)
            file_size_formatted = format_file_size(item.get("size"))
            modified_time = item.get("modifiedTime", "N/A").split('.')[0].replace('T', ' ')
            access_string = format_access_permissions(item.get("permissions"))
            inventory_list.append({
                "depth": depth,
                "name": name,
                "type": file_type,
                "size": file_size_formatted,
                "modified": modified_time,
                "is_folder": is_folder,
                "url": link_url,
                "access_string": access_string  # New field
            })
            if is_folder:
                list_folder_recursively(service, item["id"], inventory_list, depth + 1)
    except HttpError as error:
        print(f"An API error occurred in folder {folder_id}: {error}")
def create_inventory_table(inventory_list):
    """Generates the Markdown table string with clickable links and access column."""
    markdown_output = "# Google Drive Inventory Report\n\n"
    markdown_output += f"**Generated On:** {os.path.basename(OUTPUT_FILE)}\n\n"
    headers = ["File/Folder Name", "Type", "Size", "Last Updated (UTC)", "Access/Shared With"]
    markdown_output += "| " + " | ".join(headers) + " |\n"
    markdown_output += "|---" * len(headers) + "|\n"
    for item in inventory_list:
        indent = " " * (item['depth'] * 4)
        icon = "📁" if item['is_folder'] else "📄"
        # Create the Markdown link: [Text](URL)
        if item['url']:
            file_name_with_link = f"[{item['name']}]({item['url']})"
        else:
            file_name_with_link = item['name']
        file_name_cell = f"{indent}{icon} {file_name_with_link}"
        row_data = [
            file_name_cell,
            item['type'],
            item['size'],
            item['modified'],
            item['access_string']
        ]
        markdown_output += "| " + " | ".join(row_data) + " |\n"
    return markdown_output
def main_drive_list():
    """Main function to start the recursive listing and output generation."""
    if not os.path.exists(OUTPUT_DIR):
        os.makedirs(OUTPUT_DIR)
        print(f"Created output directory: {OUTPUT_DIR}")
    creds = authenticate()
    inventory_list = []  # List to store all file/folder data
    try:
        service = build("drive", "v3", credentials=creds)
        print("\n--- Starting Comprehensive Recursive Drive Scan ---")
        print("Scanning from root folder...")
        list_folder_recursively(service, ROOT_FOLDER_ID, inventory_list)
        print("--- Recursive Scan Complete ---")
        markdown_content = create_inventory_table(inventory_list)
        with open(OUTPUT_FILE, "w", encoding="utf-8") as f:
            f.write(markdown_content)
        print(f"✅ Inventory successfully saved to: {OUTPUT_FILE}")
        print("\n--- Console Output (First 5 Items) ---")
        print("Note: Console output is condensed; Markdown file contains full table.")
        for item in inventory_list[:5]:
            indent = " " * item['depth']
            icon = "📁" if item['is_folder'] else "📄"
            print(f"{indent}{icon} {item['name']} ({item['type']} | Access: {item['access_string']})")
        print("--- End of Console Sample ---")
    except HttpError as error:
        print(f"A general API error occurred: {error}")

if __name__ == "__main__":
    main_drive_list()
| File/Folder Name | Type | Size | Last Updated (UTC) | Access/Shared With |
| ---------------- | ---- | ---- | ------------------ | ------------------ |
| 📄 [Medium-Articles](https://docs.google.com/spreadsheets/xxx) | Google Sheet | 36.62 KB | 2025-11-27 13:20:07 | Private (Owner Only) |
| 📄 [Calendrier-2025](https://docs.google.com/spreadsheets/xxx) | Google Sheet | 11.40 KB | 2025-11-08 18:02:41 | Private (Owner Only) |
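The Size column above is produced by format_file_size. As a quick standalone sanity check of the conversion logic (the function is copied verbatim from the script):

```python
def format_file_size(size_bytes):
    """Converts bytes to a human-readable string (KB, MB, GB)."""
    if size_bytes is None or size_bytes == 0:
        return "N/A"
    size_bytes = int(size_bytes)
    if size_bytes < 1024:
        return f"{size_bytes} B"
    for unit in ["KB", "MB", "GB", "TB"]:
        size_bytes /= 1024.0
        if size_bytes < 1024:
            return f"{size_bytes:.2f} {unit}"
    return f"{size_bytes:.2f} TB"

print(format_file_size(37499))      # 36.62 KB — like the first sheet in the table
print(format_file_size(5_000_000))  # 4.77 MB
print(format_file_size(0))          # N/A (Drive returns no size for native Google files)
```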
2nd Step — Build a Tool 🛠️
We're going to slightly rework the simple application above into a "Tool" format.
import os
import io
import math
from ibm_watsonx_orchestrate.agent_builder.tools import tool
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
SCOPES = ["https://www.googleapis.com/auth/drive"] # Full read/write access (recommended for interactive links)
FOLDER_MIME_TYPE = "application/vnd.google-apps.folder"
ROOT_FOLDER_ID = 'root'
OUTPUT_FILE_NAME = "drive_inventory.md"
# --- Helper Functions (Encapsulated or kept outside for clean code) ---
def get_human_readable_type(mime_type):
    if mime_type == FOLDER_MIME_TYPE:
        return "Folder"
    if mime_type == "application/vnd.google-apps.document":
        return "Google Doc"
    if mime_type == "application/vnd.google-apps.spreadsheet":
        return "Google Sheet"
    if mime_type == "application/vnd.google-apps.presentation":
        return "Google Slides"
    if mime_type.startswith("image/"):
        return "Image"
    if mime_type == "application/pdf":
        return "PDF"
    return "Other File"
def format_file_size(size_bytes):
    if size_bytes is None or size_bytes == 0:
        return "N/A"
    size_bytes = int(size_bytes)
    if size_bytes < 1024:
        return f"{size_bytes} B"
    for unit in ["KB", "MB", "GB", "TB"]:
        size_bytes /= 1024.0
        if size_bytes < 1024:
            return f"{size_bytes:.2f} {unit}"
    return f"{size_bytes:.2f} TB"
def format_access_permissions(permissions):
    if not permissions:
        return "Private (Owner Only)"
    access_summary = []
    for p in permissions:
        if p.get("role") == "owner":
            continue
        role = p.get("role", "viewer").capitalize()
        p_type = p.get("type")
        if p_type == "user" or p_type == "group":
            identifier = p.get("emailAddress", "Unknown User/Group")
        elif p_type == "anyone":
            identifier = "Public Link"
        else:
            identifier = p_type.capitalize()
        access_summary.append(f"{role}: {identifier}")
    return "; ".join(access_summary) if access_summary else "Private (Owner Only)"
def authenticate_tool():
    """Handles OAuth2 authentication flow for the tool."""
    creds = None
    if os.path.exists("token.json"):
        creds = Credentials.from_authorized_user_file("token.json", SCOPES)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            # NOTE: For deployment, ensure 'credentials.json' is accessible
            flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
            creds = flow.run_local_server(port=0)  # This opens the browser for the initial run
        with open("token.json", "w") as token:
            token.write(creds.to_json())
    return creds
def list_folder_recursively(service, folder_id, inventory_list, depth=0):
    """Recursively lists folder contents and appends data to inventory_list."""
    query = f"'{folder_id}' in parents and trashed=false"
    try:
        response = (
            service.files()
            .list(
                q=query,
                fields="files(id, name, mimeType, size, modifiedTime, webViewLink, permissions)",
                pageSize=100,
            )
            .execute()
        )
        items = response.get("files", [])
        for item in items:
            name = item["name"]
            mime_type = item["mimeType"]
            is_folder = (mime_type == FOLDER_MIME_TYPE)
            # Format data for the table
            link_url = item.get("webViewLink")
            file_type = get_human_readable_type(mime_type)
            file_size_formatted = format_file_size(item.get("size"))
            modified_time = item.get("modifiedTime", "N/A").split('.')[0].replace('T', ' ')
            access_string = format_access_permissions(item.get("permissions"))
            # Create a dictionary for the row
            inventory_list.append({
                "depth": depth,
                "name": name,
                "type": file_type,
                "size": file_size_formatted,
                "modified": modified_time,
                "is_folder": is_folder,
                "url": link_url,
                "access_string": access_string
            })
            # Recurse if it is a folder
            if is_folder:
                list_folder_recursively(service, item["id"], inventory_list, depth + 1)
    except HttpError as error:
        # NOTE: Re-raising so the tool function can report the error message
        raise Exception(f"Google Drive API Error during scan: {error}")
def create_inventory_table(inventory_list, output_file_path):
    """Generates the Markdown table string and writes it to the specified path."""
    markdown_output = "# Google Drive Inventory Report\n\n"
    markdown_output += f"**Generated On:** {os.path.basename(output_file_path)}\n\n"
    headers = ["File/Folder Name", "Type", "Size", "Last Updated (UTC)", "Access/Shared With"]
    markdown_output += "| " + " | ".join(headers) + " |\n"
    markdown_output += "|---" * len(headers) + "|\n"
    for item in inventory_list:
        indent = " " * (item['depth'] * 4)
        icon = "📁" if item['is_folder'] else "📄"
        if item['url']:
            file_name_with_link = f"[{item['name']}]({item['url']})"
        else:
            file_name_with_link = item['name']
        file_name_cell = f"{indent}{icon} {file_name_with_link}"
        row_data = [
            file_name_cell,
            item['type'],
            item['size'],
            item['modified'],
            item['access_string']
        ]
        markdown_output += "| " + " | ".join(row_data) + " |\n"
    # Write the content to the file
    with open(output_file_path, "w", encoding="utf-8") as f:
        f.write(markdown_output)
# --- WATSONX ORCHESTRATE TOOL FUNCTION ---
@tool()
def generate_drive_inventory(output_directory: str = "./output") -> str:
    """
    Scans the entire user's Google Drive recursively, collecting file and
    folder metadata including access permissions, size, and modification time.
    It generates a detailed, clickable Markdown inventory report file.

    Args:
        output_directory (str): The folder path where the 'drive_inventory.md'
            file should be saved (e.g., './output'). Default is './output'.

    Returns:
        str: A message confirming the status and the full path to the generated
            Markdown inventory file.
    """
    # 1. Setup paths and directories
    output_dir = output_directory
    output_file = os.path.join(output_dir, OUTPUT_FILE_NAME)
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)
    # 2. Authenticate
    try:
        creds = authenticate_tool()
    except Exception as e:
        return f"Authentication failed. Please ensure 'credentials.json' and 'token.json' (if refreshing) are configured correctly. Error: {e}"
    inventory_list = []
    # 3. Build Service and Scan
    try:
        service = build("drive", "v3", credentials=creds)
        list_folder_recursively(service, ROOT_FOLDER_ID, inventory_list)
        # 4. Generate and Save Report
        create_inventory_table(inventory_list, output_file)
        return f"✅ Google Drive inventory successfully generated and saved to: {os.path.abspath(output_file)}"
    except HttpError as error:
        return f"Google Drive API execution failed. Error: {error}"
    except Exception as e:
        return f"An unexpected error occurred during inventory generation: {e}"
- Add the tool
orchestrate tools import -k python -f gdrive-read-folders-tool-v1.py -r requirements.txt -a gdrive-read-folders
Success ⛳
Conclusion - The Power of the Agent Development Kit (ADK)
Ultimately, the combination of watsonx Orchestrate's robust Tool Catalog and the extensive capabilities offered by the Agent Development Kit (ADK) provides a comprehensive, future-proof solution. While the Tool Catalog ensures immediate, zero-code integration with popular platforms like GDrive, Gmail, Outlook, OneDrive, SharePoint, Workday…, the ADK unlocks the full customization potential for developers. It empowers teams to rapidly build, test, and deploy custom agents and tools tailored precisely to highly specific or proprietary business logic. This duality means we can meet standard requirements instantly and complex, unique operational demands with complete agility and precision, solidifying watsonx Orchestrate as a truly adaptable and scalable agentic AI (and agent orchestration) platform.
Thanks for reading!
Links
- IBM watsonx Orchestrate: https://www.ibm.com/products/watsonx-orchestrate
- watsonx Orchestrate Documentation: https://www.ibm.com/docs/en/watsonx/watson-orchestrate/base
- Agent Development Kit (ibm-watsonx-orchestrate-adk): https://github.com/ibm/ibm-watsonx-orchestrate-adk/
- Agent Development Kit Documentation and How-to Guide(s): https://developer.watson-orchestrate.ibm.com/
- Installing and Configuring IBM watsonx Orchestrate ADK: https://www.youtube.com/watch?v=zz3kNjJrUMg
- What Are Orchestrator Agents? AI Tools Working Smarter Together: https://www.youtube.com/watch?v=X3XJeTApVMM&t=20s
- Orchestrating Complex AI Workflows with AI Agents & LLMs: https://www.youtube.com/watch?v=OFq_CvRCpA0&t=10s
- watsonx Orchestrate SaaS offering on IBM Cloud: https://cloud.ibm.com/catalog/services/watsonx-orchestrate
- watsonx Orchestrate SaaS offering on AWS: https://aws.amazon.com/marketplace/pp/prodview-ua5rm53wrx7hm#:~:text=IBM%C2%AE%20watsonx%20Orchestrate
- Getting User Input in watsonx Orchestrate Agents: https://heidloff.net/article/watsonx-orchestrate-user-input/ (by Niklas Heidloff)
- Accessing local Models in watsonx Orchestrate via Ollama: https://heidloff.net/article/watsonx-orchestrate-ollama/ (by Niklas Heidloff)