If you’ve ever needed to quickly validate a few user flows, or run a sanity check against a handful of URLs — without spinning up a full testing framework — you already know the pain:
- you want Selenium-level fidelity,
- you want clean output you can review later,
- and you want it to be simple enough to run from a terminal.
That’s the gap Weesitor Console is meant to fill: a console-first Selenium runner for authorized QA checks, demo flows, and lightweight monitoring, focused on reducing setup friction and producing reviewable artifacts (logs, screenshots, summaries).
Authorized Use Only
Use this tool only on websites you own, administer, or have explicit permission to test. Respect site terms, robots policies, and local laws.
What Weesitor Console is (in plain language)
Weesitor Console runs Chrome (headless or visible) via Selenium, visits one or more URLs, and writes out:
- JSONL logs (stream-friendly, parseable)
- screenshots on error (so you can see what happened)
- a summary report (counts, timings, failures)
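Under the hood this is the classic Selenium loop. A minimal sketch of the pattern (plain Selenium 4, not Weesitor's actual code; the URL and filename are placeholders):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# The core idea in miniature: open headless Chrome, visit a URL,
# capture a screenshot if anything goes wrong, and always clean up.
options = Options()
options.add_argument("--headless=new")  # Chrome's current headless mode
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")
    print(driver.title)
except Exception:
    driver.save_screenshot("error.png")  # the "what happened" artifact
    raise
finally:
    driver.quit()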
It’s designed to be:
- console-first (readable output, predictable commands)
- reproducible (config-based runs you can share)
- debuggable (artifacts you can actually inspect)
Repository: mebularts/weesitor-console
About
- Developer: @mebularts
- Focus: fast setup, clean structure, real-world usability
- Output-first: every run produces structured logs and failure artifacts you can actually debug
Why Weesitor?
- Console-first UX: banner, tables, progress, live logs
- Reproducible runs: config-based workflow (shareable JSON)
- Session isolation: fresh browser sessions with per-session User-Agent
- Actionable artifacts: JSONL logs + screenshots on error + summary report
- Conservative defaults: low concurrency and built-in delays to prevent accidental misuse
Where this fits (good use-cases)
This tool is a strong fit when you want quick signal, not a full-blown test platform:
- Authorized QA smoke checks after deployments
- Demo / walkthrough flows before a client meeting
- Lightweight monitoring for a few key pages (availability, basic load, error detection)
- CI sanity runs where you want minimal moving parts
- Investigating flaky behavior with consistent logging & screenshots
If you’re a solo dev or a small team, Weesitor is the kind of “run it now” tool that often saves you from building a bigger system too early.
Where this does NOT fit (honest limitations)
Weesitor Console is intentionally conservative and simple. It is not:
- a web crawler
- a scraping framework
- a load-testing tool
- a replacement for structured E2E suites (Playwright/Cypress + assertions + fixtures)
If you need complex assertions, a full page-object model, or deep reporting dashboards, you’ll likely outgrow this and should move to a dedicated testing stack.
Key features
- Single URL, multi-URL, or file-based URL list runs
- Headless / non-headless mode
- Timeout controls and robust cleanup
- Optional proxy support (with or without auth)
- Session isolation + per-session User-Agent
- Structured output folders: logs, screenshots, summary
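The session-isolation item deserves a quick illustration. In plain Selenium terms (a sketch of the general pattern, not Weesitor's implementation), each run gets a brand-new driver with its own User-Agent:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def fresh_session(user_agent: str) -> webdriver.Chrome:
    """Start an isolated Chrome session with a per-session User-Agent."""
    options = Options()
    options.add_argument("--headless=new")
    options.add_argument(f"--user-agent={user_agent}")  # per-session UA
    # A new driver means a new browser profile: no cookies, cache, or
    # local storage shared between sessions.
    return webdriver.Chrome(options=options)

Because nothing persists between drivers, every check starts from a clean slate, which is exactly what you want for reproducible runs.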
Quick start
Requirements
- Python 3.8+
- Google Chrome installed
Install
Windows (PowerShell)
python -m venv .venv
.\.venv\Scripts\activate
pip install -r requirements.txt
Linux / macOS
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
1) Environment check
python main.py doctor
2) Run a single URL
python main.py run --url https://example.com --duration 30 --iterations 1 --headless
3) Run multiple URLs
python main.py run \
  --url https://example.com \
  --url https://example.org \
  --duration 20 \
  --iterations 2 \
  --concurrency 2
4) Run from a URL list file
python main.py run --urls-file urls.txt --duration 15 --iterations 1
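The format of urls.txt isn't documented above; my assumption (worth verifying against the repo) is the conventional one URL per line:

https://example.com
https://example.org
https://example.net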
Reproducible runs: config workflow
If you want a run you can commit to a repo, share with a teammate, or reuse in CI:
python main.py init-config --out config.json
python main.py run --config config.json
Example config:
{
  "urls": ["https://example.com"],
  "duration": 20,
  "iterations": 1,
  "concurrency": 1,
  "headless": true,
  "timeout": 20,
  "out_dir": "./output"
}
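If you commit configs to a repo or CI, it can be worth sanity-checking them before a run. A minimal sketch (treating every key from the example above as required is my assumption; adjust to the real schema):

import json

# Keys taken from the example config above.
REQUIRED = {"urls", "duration", "iterations", "concurrency",
            "headless", "timeout", "out_dir"}

with open("config.json") as f:
    config = json.load(f)

missing = REQUIRED - config.keys()
if missing:
    raise SystemExit(f"config.json is missing keys: {sorted(missing)}")
print("config looks structurally OK")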
Output structure (what you get after each run)
By default, artifacts are written under ./output:
output/
  logs/
    run_YYYYmmdd_HHMMSS.jsonl
  screenshots/
    error_YYYYmmdd_HHMMSS.png
  summary/
    summary_YYYYmmdd_HHMMSS.json
Why this matters:
- JSONL logs are easy to grep, parse, or ingest into your own tooling.
- Screenshots on failure turn “it broke” into “here’s what broke.”
- The summary gives you a fast overview (counts, timings, failures).
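To make "easy to parse" concrete, here's a small post-processing sketch. The field names ("url", "status") are assumptions for illustration; inspect an actual run_*.jsonl to see what Weesitor really emits:

import json
from pathlib import Path

# Pick the newest run log (timestamped names sort chronologically).
log_path = sorted(Path("output/logs").glob("run_*.jsonl"))[-1]

failures = []
for line in log_path.read_text().splitlines():
    record = json.loads(line)  # JSONL: one JSON object per line
    if record.get("status") != "ok":  # assumed field, see note above
        failures.append(record.get("url"))

print(f"{len(failures)} failing URL(s) in {log_path.name}: {failures}")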
Responsible use: benefits and risks (the part most posts skip)
Tools like this are powerful, and that comes with responsibility.
Benefits
- You can catch obvious breakages quickly (timeouts, redirects, error pages).
- You get reproducible runs that are easy to share and review.
- You reduce “works on my machine” issues by standardizing a simple run flow.
Risks / potential harms
- Unapproved automation can violate terms of service or local law.
- High concurrency or aggressive loops can stress servers (even unintentionally).
- Running against sites you don’t control can create privacy and data-handling issues.
Practical guardrails (recommended)
- Only test where you have explicit permission.
- Keep concurrency low; treat rate limits as a signal to stop and review.
- Avoid using this as a crawler/scraper. That’s not the goal, and it’s easy to misuse.
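If you build your own wrappers around runs like this, bake the same conservatism in. A generic throttling pattern (not Weesitor's internals):

import time
from concurrent.futures import ThreadPoolExecutor

URLS = ["https://example.com", "https://example.org"]  # sites you may test

def check(url: str) -> None:
    time.sleep(2)  # a fixed delay keeps the request rate polite
    print(f"checking {url}")  # stand-in for a real driver.get(url)

# max_workers=2 mirrors the tool's low-concurrency defaults.
with ThreadPoolExecutor(max_workers=2) as pool:
    list(pool.map(check, URLS))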
Operational tips (what I’d do in real projects)
- Start with --headless in CI, but use non-headless mode when debugging locally.
- Keep --timeout conservative (especially on slower environments).
- Keep artifacts in a predictable folder and archive them for failed builds (see the sketch below).
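For that last tip, archiving is nearly a one-liner in Python. A sketch assuming the default ./output layout and a hypothetical exit_code variable from your runner:

import shutil

exit_code = 1  # hypothetical: whatever your test step reported
if exit_code != 0:
    # Zip the whole artifacts folder so CI can store it with the build.
    archive = shutil.make_archive("weesitor-artifacts", "zip", "output")
    print(f"archived failing-run artifacts to {archive}")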
Roadmap ideas (if people want it)
- Scenario DSL (JSON-defined steps: navigate / wait / scroll / click_css / type_css), sketched below
- Run-level HTML report export
- GitHub Actions: lint + basic smoke test (doctor)
- Docker image for deterministic environments
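None of this exists yet, but to make the Scenario DSL idea tangible, a hypothetical step file might look like:

{
  "steps": [
    { "action": "navigate", "url": "https://example.com" },
    { "action": "wait", "seconds": 2 },
    { "action": "click_css", "selector": "#login" },
    { "action": "type_css", "selector": "#email", "text": "qa@example.com" }
  ]
}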
License
Weesitor Console is published under the MIT License.
TL;DR
Weesitor Console is a console-first Selenium runner for authorized QA checks and lightweight monitoring, with an “output-first” mindset: logs, screenshots, and summary artifacts you can actually review.
If you try it and you have feedback, issues are welcome.