DEV Community

Andrey Petrikovich
Brief Technical Specification: Browser Automation for Link Visibility and Navigation Testing

We would like to implement this task for an internal product, but our own specialists have not been able to solve it. I am therefore reaching out to the broader community of experts: I am confident there are people who can either offer valuable guidance or take on this project for compensation.

Objective

Automate a user navigation scenario on a target website in order to reliably reproduce and diagnose the issue of missing or delayed rendering of links, and to verify the correctness of link navigation. The solution is to be used exclusively for QA/diagnostics with explicit permission from the site owner or in a staging environment.

Key Tasks

  • Open target page(s) in a real (headful) browser with full JavaScript support.
  • Wait for dynamic elements (links) to become visible, handling lazy loading, deferred scripts, and intersection observers correctly.
  • Scroll through the page, navigate via links, handle new tabs/pages, and support back/forward navigation.
  • Support geo-specific test scenarios via proxies (for localization/personalization testing), selected from a file, with one proxy per run.
  • Parameterize links by injecting one variable parameter (from a list/seed) before clicking.
  • Pass the required Referer header (and other headers) within the site's permitted policies and rules.
  • Use realistic interaction timings (randomized reasonable delays) for UX evaluation, without attempting to bypass protections.
  • Logging for diagnostics: HAR files, console messages, network errors, step screenshots, and optionally, video recording.
  • Result verification: after each click, confirm successful page load (status/anchor selector/page title/URL pattern).
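Several of the tasks above (link parameterization, randomized delays, post-click verification) reduce to small pure helpers that the browser driver can call. A minimal TypeScript sketch with hypothetical names; the query key `variant` and the delay bounds are assumptions, not part of the spec:

```typescript
// Inject one variable parameter (from a list/seed) into a link before clicking.
// "variant" is a placeholder query key; adjust to the real parameter name.
export function buildParameterizedUrl(base: string, path: string, value: string): string {
  const url = new URL(path, base);
  url.searchParams.set("variant", value);
  return url.toString();
}

// Realistic, randomized interaction delay within a bounded range (milliseconds).
export function randomDelayMs(min = 500, max = 2000): number {
  return Math.floor(min + Math.random() * (max - min));
}

// Post-click verification: the final URL must match an expected pattern.
export function urlMatches(finalUrl: string, pattern: RegExp): boolean {
  return pattern.test(finalUrl);
}
```

Keeping these helpers free of Playwright/Puppeteer types lets them be unit-tested without launching a browser.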

Constraints & Compliance

  • Prohibited: techniques to bypass security, emulation of “human-like” patterns for the purpose of deceiving anti-bot systems, generation of “untrusted” events, or interference with security mechanisms.
  • Must comply with site ToS, robots.txt, and rate limits; preferably executed on staging, whitelisted, or with test accounts provided.
  • Proxies must be from legal providers, used only for localization/geo-testing purposes.
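The robots.txt requirement can be enforced with a pre-flight check before any route is visited. A deliberately simplified sketch for the `*` user-agent only; it handles plain `Disallow` prefixes and ignores wildcards, `Allow` precedence, and group-merging rules, so it is a conservative gate, not a full parser:

```typescript
// Simplified robots.txt pre-check: a path is treated as disallowed if it
// starts with any Disallow prefix listed under "User-agent: *".
export function isPathAllowed(robotsTxt: string, path: string): boolean {
  const disallows: string[] = [];
  let applies = false; // true while inside a "User-agent: *" group
  for (const raw of robotsTxt.split(/\r?\n/)) {
    const line = raw.split("#")[0].trim(); // strip comments
    const [key, ...rest] = line.split(":");
    const value = rest.join(":").trim();
    if (/^user-agent$/i.test(key.trim())) {
      applies = value === "*";
    } else if (applies && /^disallow$/i.test(key.trim()) && value) {
      disallows.push(value);
    }
  }
  return !disallows.some((prefix) => path.startsWith(prefix));
}
```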

Technical Requirements

  • Stack: Playwright (TypeScript/Node.js) or Puppeteer + Chrome DevTools Protocol; run in a real Chrome instance (headful) with WebGL/audio enabled and standard system fonts.
  • Configuration from .env/config.json: baseUrl, list of link paths/selectors, referer, list of link parameter values, proxy file, viewport, timeouts.
  • Element handling: prefer robust selectors (role/ARIA/data-testid), fallback strategies, check element clickability (overlaps, fixed layers, modals, cookie banners).
  • Waiting: waitForSelector/waitForFunction for business-critical anchors, wait for completion of relevant XHR/fetch calls, scroll into view (IntersectionObserver-friendly), support SPA routing.
  • Reporting: JSON/CSV with link visibility metrics (time to appear, retries, click result) and an HTML report with step-by-step screenshots.
  • Execution modes: local and Docker (Dockerfile + docker-compose.yml).
  • CI integration: pipeline job with artifacts (video/screenshots/HAR).
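The fallback-selector requirement above can be expressed as a small driver-agnostic helper: each candidate is an async probe (for example, a role/ARIA lookup first, then a data-testid, then a CSS fallback, each wrapping a waitForSelector call). A sketch with hypothetical names:

```typescript
// Try a list of async probes in order, retrying each up to `retries`
// additional times before moving to the next. Returns the first
// successful result, or rethrows the last error if all probes fail.
export async function firstSuccessful<T>(
  probes: Array<() => Promise<T>>,
  retries = 2,
): Promise<T> {
  let lastError: unknown;
  for (const probe of probes) {
    for (let attempt = 0; attempt <= retries; attempt++) {
      try {
        return await probe();
      } catch (err) {
        lastError = err;
      }
    }
  }
  throw lastError ?? new Error("no probes supplied");
}
```

In a Playwright runner, a probe might be `() => page.waitForSelector('[data-testid="cta"]', { timeout: 5000 })`; the helper itself stays framework-neutral and unit-testable.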

Validation & Acceptance Criteria
On the provided list of routes/links, scenarios complete successfully in ≥ 95% of runs within the agreed test environment.
For each link, record: time to visibility, click result, final URL/title, key selector on the destination page.
If a link does not appear: save complete logs (HAR + console), final-state screenshots, and a DOM dump of the container element.
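The per-link record required above maps naturally onto a small result type plus CSV serialization for the report. A sketch with hypothetical field names mirroring the acceptance criteria:

```typescript
// Per-link outcome: time to visibility, click result, final URL/title,
// and the key selector found on the destination page.
export interface LinkResult {
  link: string;
  timeToVisibleMs: number | null; // null if the link never appeared
  clickOk: boolean;
  finalUrl: string;
  finalTitle: string;
  anchorSelector: string;
}

// One CSV line with naive quoting: every field wrapped in double quotes,
// embedded quotes doubled. Null becomes an empty field.
export function toCsvLine(r: LinkResult): string {
  const esc = (v: string | number | boolean | null) =>
    `"${String(v ?? "").replace(/"/g, '""')}"`;
  return [r.link, r.timeToVisibleMs, r.clickOk, r.finalUrl, r.finalTitle, r.anchorSelector]
    .map(esc)
    .join(",");
}
```

The same `LinkResult` array can be dumped as JSON for the machine-readable report and fed to the HTML report generator.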
