My original post:
https://baxin.netlify.app/how-to-run-samurai-on-google-colab/
What is Samurai?
SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory
Requirements
- Google account for Google Colab
- Hugging Face account to download data
How to Run Samurai on Google Colab
Step 0. Get Hugging Face token and add it to your environment variable
We need to access Hugging Face to download the data.
If you don't know how to get a Hugging Face token, please refer to this page.
Also, if you don't know how to add a Hugging Face token to your environment variables, please check this post.
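For example, if you have saved the token in Colab's Secrets panel (the key icon in the left sidebar), a minimal sketch to expose it to this session looks like the following. The secret name HF_TOKEN is my assumption here; adjust it to match whatever name you used.
import os
from google.colab import userdata

# Assumes a Colab secret named "HF_TOKEN" (hypothetical name; use yours).
# huggingface_hub picks up the HF_TOKEN environment variable automatically.
os.environ["HF_TOKEN"] = userdata.get("HF_TOKEN")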
Step 1. Change the default runtime
To run Samurai on Google Colab, we need to change the default runtime to a GPU.
We will use the T4 (the free-tier GPU).
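After switching (Runtime → Change runtime type → T4 GPU), you can sanity-check that the GPU is visible. PyTorch comes preinstalled on Colab, so this runs as-is:
import torch

# Confirm the GPU runtime is active; on the free tier this should
# print True followed by a Tesla T4 device name.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))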
Step 2. Install packages
!pip install matplotlib==3.7 tikzplotlib jpeg4py opencv-python lmdb pandas scipy loguru
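Optionally, a quick import check confirms the packages installed cleanly (a sanity check of mine, not something the Samurai repo requires):
# Import the just-installed packages to confirm the environment is ready
import matplotlib
import jpeg4py
import cv2
import lmdb
import pandas
import scipy
import loguru

print("matplotlib", matplotlib.__version__)  # expect a 3.7.x version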
Step 3. Clone the Samurai repository
!git clone https://github.com/yangchris11/samurai.git
Step 4. Install SAM 2
%cd samurai/sam2
!pip install -e .
!pip install -e ".[notebooks]"
Step 5. Download checkpoints
%cd /content/samurai/sam2/checkpoints
!./download_ckpts.sh
%cd ..
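To confirm the download worked, you can list the checkpoint directory. This is a small sanity check of mine; the exact file names depend on what download_ckpts.sh fetches:
import os

# List the downloaded checkpoint files with their sizes
ckpt_dir = "/content/samurai/sam2/checkpoints"
for entry in sorted(os.scandir(ckpt_dir), key=lambda e: e.name):
    if entry.is_file():
        size_mb = entry.stat().st_size / (1024 * 1024)
        print(f"{entry.name}: {size_mb:.1f} MB")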
Step 6. Download data from Hugging Face
In this part, we will use a Python script to set up the data described in the Samurai repo's data preparation section.
https://github.com/yangchris11/samurai?tab=readme-ov-file#data-preparation
The data we will use is l-lt/LaSOT.
In this case, we will download the cat dataset; if you want to try other datasets, change the code accordingly.
import os

# Define the data directory
data_directory = '/content/samurai/data/LaSOT'

# Create the data directory if it does not exist
try:
    os.makedirs(data_directory, exist_ok=True)
    print(f"Directory '{data_directory}' created successfully or already exists.")
except OSError as error:
    print(f"Error creating directory '{data_directory}': {error}")

# Define the content to be written to the file
content = '''cat-1
cat-20'''

# Define the file path
file_path = os.path.join(data_directory, 'testing_set.txt')

# Write the content to the file
try:
    with open(file_path, 'w') as f:
        f.write(content)
    print(f"Content written to file '{file_path}' successfully.")
except IOError as error:
    print(f"Error writing to file '{file_path}': {error}")

# Print the file path
print(f'File path: {file_path}')
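To double-check the file we just wrote, you can read it back:
# Read back testing_set.txt to confirm the sequence names were written
with open('/content/samurai/data/LaSOT/testing_set.txt') as f:
    print(f.read())
# Expected output:
# cat-1
# cat-20
With the testing list in place, the next script downloads and extracts the cat archive from Hugging Face.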
import os
import zipfile

from huggingface_hub import hf_hub_download


def download_and_extract(base_dir="/content/samurai/data"):
    try:
        # Create LaSOT and cat directories
        lasot_dir = os.path.join(base_dir, "LaSOT")
        cat_dir = os.path.join(lasot_dir, "cat")
        os.makedirs(cat_dir, exist_ok=True)

        # Create directory to save the ZIP file
        zip_dir = os.path.join(base_dir, "zips")
        os.makedirs(zip_dir, exist_ok=True)

        print("Downloading dataset...")
        zip_path = hf_hub_download(
            repo_id="l-lt/LaSOT",
            filename="cat.zip",
            repo_type="dataset",
            local_dir=zip_dir
        )
        print(f"Downloaded to: {zip_path}")

        # Extract ZIP file to cat directory
        print("Extracting ZIP file to cat directory...")
        with zipfile.ZipFile(zip_path, 'r') as zip_ref:
            zip_ref.extractall(cat_dir)

        print("\nCreated directory structure:")
        print("LaSOT/")
        print("└── cat/")
        # Display the first few cat folders
        for item in sorted(os.listdir(cat_dir))[:6]:
            print(f"    ├── {item}/")
        print("    └── ...")

        return lasot_dir
    except Exception as e:
        print(f"An error occurred: {str(e)}")
        return None


if __name__ == "__main__":
    extract_path = download_and_extract()
    if extract_path:
        print("\nDownload and extraction completed successfully!")
    else:
        print("\nDownload and extraction failed.")
Step 7. Inference
The last step is to run Samurai inference.
Inference will take a while.
%cd /content/samurai
!python scripts/main_inference.py
If everything goes well, you should see the inference logs printed in the cell output.
All the code is available on this GitHub repository.
If you like this post, please give it a star on GitHub.