Building a TUI with Pieces SDK

Building a Pieces Copilot TUI - Part 1: Getting Started with PiecesOS SDK

Note: This tutorial is part of the Pieces CLI project. We welcome contributions! Feel free to open issues, submit PRs, or suggest improvements.

Introduction

In this two-part tutorial series, we'll build a fully functional Terminal User Interface (TUI) for Pieces Copilot from scratch. In Part 1, we'll explore the PiecesOS SDK and learn how to interact with PiecesOS from code. In Part 2, we'll create a beautiful TUI using Textual.

What we'll build:

  • A chat interface with streaming responses
  • Chat management (create, view, delete)
  • Long-Term Memory (LTM) support
  • Real-time UI updates

Prerequisites:

  • Python 3.8+ (check with python --version)
  • PiecesOS installed and running (Download here)

Step 1: Setting Up Your Environment

Open or create a folder for this project. You can name it whatever you'd like. Inside the folder, follow the steps below:

Create a Virtual Environment

First, let's create an isolated Python environment for our project. Open a terminal and run the appropriate commands below one by one:


# Create a virtual environment
python -m venv venv

# Activate the virtual environment
## On macOS/Linux:
source venv/bin/activate

## On Windows:
venv\Scripts\activate

Install Dependencies

Still inside your project folder, create a file called requirements.txt:

# PiecesOS Python SDK for interacting with PiecesOS
pieces-os-client>=3.0.0

# Textual TUI framework for building terminal user interfaces
textual[syntax]>=5.3.0

Back in your terminal, install the dependencies:

pip install -r requirements.txt
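
To confirm the SDK installed correctly, you can check its metadata:

pip show pieces-os-client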

Step 2: Connecting to PiecesOS

Initialize the Pieces Client

The PiecesClient is your gateway to PiecesOS. Let's create a simple script to connect. Create a new file called test_connection.py:

# test_connection.py
from pieces_os_client.wrapper import PiecesClient

# Initialize the client
client = PiecesClient()

# Check if PiecesOS is running
if client.is_pieces_running():
    print("βœ… Connected to PiecesOS!")
    print(f"Version: {client.version}")
else:
    print("❌ PiecesOS is not running. Please start it first.")

Go ahead and test it:

python test_connection.py
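
If everything is wired up, you should see something like this (the version string will vary with your install):

✅ Connected to PiecesOS!
Version: <your PiecesOS version>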

What's happening here?

  • PiecesClient() automatically discovers your PiecesOS instance port
  • is_pieces_running() checks if PiecesOS is accessible
  • The client handles port scanning and WebSocket connections
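
Since every snippet from here on assumes a live connection, it can help to wrap this check in a small fail-fast helper. Here's a minimal sketch using only the calls shown above (get_client is our own name):

import sys

from pieces_os_client.wrapper import PiecesClient

def get_client() -> PiecesClient:
    """Return a connected PiecesClient, or exit with a helpful message."""
    client = PiecesClient()
    if not client.is_pieces_running():
        print("❌ PiecesOS is not running. Please start it first.")
        client.close()  # free the client's resources before exiting
        sys.exit(1)
    return client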

Step 3: Working with Chats

Now, let's start creating and retrieving chats! First we'll list all of your existing chats, then create a new chat, and finally load its messages:

List All Chats

# Add this inside the 'if client.is_pieces_running():' block above,
# since all of it should only run when Pieces is connected.

# Get all chats
chats = client.copilot.chats()

print(f"Found {len(chats)} chats:")
for chat in chats:
    print(f"  - {chat.name}: {chat.summary}")
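
Once you can list chats, a natural next step is looking one up by name and making it the active chat. Here's a small sketch; find_chat is our own helper, and we're assuming client.copilot.chat can be assigned an existing chat object the same way it can be cleared with None (as in the next snippet):

def find_chat(client, name: str):
    """Return the first chat whose name matches, or None."""
    for chat in client.copilot.chats():
        if chat.name == name:
            return chat
    return None

# Switch the copilot to an existing chat before asking a question
# (assumption: copilot.chat accepts a chat object, just as it accepts None)
existing = find_chat(client, "My awesome chat")
if existing:
    client.copilot.chat = existing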

Create a New Chat

# Place this right below the code you just added above

# chats are created automatically when you ask a question
# without setting an active chat

# Clear current chat to create a new one when we call the stream_question method
client.copilot.chat = None

# You can also do this if you want to create one on the spot
# client.copilot.create_chat("My awesome chat")


# Ask a question - this will create a new chat
client.copilot.stream_question("What is Python?")

Load Chat Messages

To display the conversation, we print the raw_content of each message that comes back from PiecesOS:

# Add this after 'client.copilot.stream_question("What is Python?")'

# The chat created by stream_question becomes the active chat.
# (In a plain script the stream may still be in flight, so the
# answer might not appear until the response has completed.)
chat = client.copilot.chat

# Get all messages in the chat
messages = chat.messages()

for msg in messages:
    print(f"{msg.role}: {msg.raw_content}")

Step 4: Asking Questions with Streaming

One of the most powerful features is streaming responses. Instead of waiting for the entire response, you get chunks as they're generated, like you'd see in ChatGPT.

Basic Streaming Example

from pieces_os_client.wrapper import PiecesClient

client = PiecesClient()

def handle_stream(response):
    """Callback function for streaming responses."""
    status = response.status

    if status == "INITIALIZED":
        print("πŸ€” Thinking...")

    elif status == "IN-PROGRESS":
        # Get the text chunks
        if response.question and response.question.answers:
            for answer in response.question.answers.iterable:
                if answer.text:
                    print(answer.text, end='', flush=True)

    elif status == "COMPLETED":
        print("\nβœ… Done!")

    elif status == "FAILED":
        print(f"\n❌ Error: {response.error_message}")

# Register the callback
if client.copilot.ask_stream_ws:
    client.copilot.ask_stream_ws.on_message_callback = handle_stream

# Ask a question
client.copilot.stream_question("Explain Python decorators")

⚠️ Error Handling Tip: In production, always handle WebSocket disconnections gracefully. The SDK will attempt to reconnect, but you should surface the connection status to your users.

Understanding the Streaming Flow

The streaming response goes through several states:

  1. INITIALIZED: Copilot is preparing to respond
  2. IN-PROGRESS: Streaming text chunks
  3. COMPLETED: Response is complete
  4. FAILED/STOPPED/CANCELED: Something went wrong
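
A handy consequence of this lifecycle: everything except INITIALIZED and IN-PROGRESS is terminal. Here's a minimal sketch of a check you can reuse (the state strings come from the callbacks above; is_finished is our own helper):

# Terminal states: once we see one of these, the stream is over
TERMINAL_STATES = {"COMPLETED", "FAILED", "STOPPED", "CANCELED"}

def is_finished(response) -> bool:
    return response.status in TERMINAL_STATES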

Step 5: Handling Streaming Properly

Let's create a more robust streaming handler:

# streaming_handler.py
from pieces_os_client.wrapper import PiecesClient
from typing import Callable, Optional

class StreamingHandler:
    """Handles streaming responses from Pieces Copilot."""
    def __init__(
        self,
        pieces_client: PiecesClient,
        on_thinking_started: Callable[[], None],
        on_text_chunk: Callable[[str], None],
        on_completed: Callable[[], None],
        on_error: Callable[[str], None],
    ):
        self.pieces_client = pieces_client
        self.on_thinking_started = on_thinking_started
        self.on_text_chunk = on_text_chunk
        self.on_completed = on_completed
        self.on_error = on_error

        self._current_response = ""

        # Register callback
        if self.pieces_client.copilot.ask_stream_ws:
            self.pieces_client.copilot.ask_stream_ws.on_message_callback = (
                self._handle_stream
            )

    def ask_question(self, query: str):
        """Ask a question and handle streaming."""
        self._current_response = ""
        self.on_thinking_started()
        self.pieces_client.copilot.stream_question(query)

    def _handle_stream(self, response):
        """Internal stream handler."""
        try:
            status = response.status

            if status == "IN-PROGRESS":
                if response.question and response.question.answers:
                    for answer in response.question.answers.iterable:
                        if answer.text:
                            # Accumulate the full response as chunks arrive
                            self._current_response += answer.text
                            self.on_text_chunk(answer.text)

            elif status == "COMPLETED":
                self.on_completed()
                self._current_response = ""

            elif status in ["FAILED", "STOPPED", "CANCELED"]:
                error_msg = getattr(response, "error_message", "Unknown error")
                self.on_error(error_msg)
                self._current_response = ""

        except (AttributeError, ConnectionError, ValueError) as e:
            # Handle specific streaming errors without crashing the callback
            self.on_error(str(e))
            self._current_response = ""

💡 Best Practice: Use specific exception types rather than broad except Exception handlers. This makes debugging easier and prevents masking unexpected errors.
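
As a quick illustration (send_question here is a hypothetical stand-in for any call that talks to PiecesOS):

def send_question():
    # Hypothetical stand-in that simulates a dropped connection
    raise ConnectionError("simulated drop")

# Too broad: 'except Exception' would also swallow typos and None-attribute bugs.
# Instead, catch only what you can meaningfully react to:
try:
    send_question()
except ConnectionError as e:
    print(f"❌ Lost connection to PiecesOS: {e}")
except ValueError as e:
    print(f"❌ Bad response payload: {e}")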

Using the StreamingHandler

# Example usage
client = PiecesClient()

def on_thinking():
    print("🤔 Thinking...")

def on_chunk(text):
    print(f"{text}", end='', flush=True)

def on_done():
    print("\n✅ Done!")
    client.close()

def on_error(error):
    print(f"\n❌ Error: {error}")

handler = StreamingHandler(
    pieces_client=client,
    on_thinking_started=on_thinking,
    on_text_chunk=on_chunk,
    on_completed=on_done,
    on_error=on_error,
)

# Ask a question
handler.ask_question("What are Python generators?")
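
A note on the design: because StreamingHandler only talks to the outside world through callbacks, it has no opinion about how output is rendered. That's exactly what we want for Part 2, where we can reuse this pattern and swap these print() calls for Textual widget updates.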

Step 6: Working with Long-Term Memory (LTM)

LTM allows Pieces to remember context across chats, making responses more personalized.

Check LTM Status

# Check if the LTM system is running - place after ensure_initialization()
if client.copilot.ltm.is_enabled():
    print("✅ LTM system is running")
else:
    print("❌ LTM system is not available")
    # client.copilot.ltm.enable()

# Check if the current chat has LTM enabled - place after the LTM system check
if client.copilot.ltm.is_chat_ltm_enabled:
    print("✅ Chat LTM is enabled")
else:
    print("❌ Chat LTM is disabled")



Toggle LTM for Chat

# Enable LTM for the current chat - place after the LTM checks
if client.copilot.ltm.is_enabled():
    client.copilot.ltm.enable()
    print("✅ Chat LTM enabled")
else:
    print("❌ LTM system must be running first")

# Disable LTM for the current chat - place after enabling LTM
client.copilot.chat_disable_ltm()
print("✅ Chat LTM disabled")

Step 7: Managing Chats

Delete a Chat

# Get a chat to delete - place after getting all chats
chat_to_delete = chats[0]

# Delete a chat - place after getting chat to delete
print(f"Deleting: {chat_to_delete.name}")
chat_to_delete.delete()

Rename a Chat

# Get a chat to rename - place after getting all chats
chat = chats[0]

# Rename a chat - place after getting chat
chat.name = "My New Chat Name"
print(f"βœ… Renamed to: {chat.name}")
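
If you do this often, the same calls fold neatly into a reusable helper. Here's a minimal sketch built only on the chats(), .name, and .delete() usage shown above (rename_chat is our own name):

def rename_chat(client, old_name: str, new_name: str) -> bool:
    """Rename the first chat whose name matches old_name."""
    for chat in client.copilot.chats():
        if chat.name == old_name:
            chat.name = new_name
            print(f"✅ Renamed to: {chat.name}")
            return True
    print(f"❌ No chat named {old_name!r}")
    return False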

Common Issues & Troubleshooting

  1. ### "Connection refused" Error
  • βœ… Ensure PiecesOS is running
  • βœ… Check if port 39300 is available
  1. ### "Module not found" Error
  • βœ… Activate your virtual environment
  • βœ… Reinstall dependencies: pip install -r requirements.txt
  • βœ… Check Python version: python --version (needs 3.8+)
  1. ### Streaming Not Working
  • βœ… Ensure callback is registered before stream_question()
  • βœ… Check that PiecesOS is connected and responsive

Recap

In Part 1, we learned:

  1. ✅ Setting up a Python virtual environment
  2. ✅ Installing the PiecesOS SDK
  3. ✅ Connecting to PiecesOS
  4. ✅ Working with chats
  5. ✅ Handling streaming responses
  6. ✅ Managing Long-Term Memory
  7. ✅ Creating a robust streaming handler

Next Steps

In Part 2, we'll use everything we learned to build a beautiful Terminal User Interface (TUI) with:

  • Split-pane layout for chats and messages
  • Real-time streaming chat interface
  • Interactive widgets and keyboard shortcuts
  • Proper state management and UI updates

Useful Resources


Ready for Part 2? Let's build the TUI! 🚀
