Anderson Yu-Hong Cai

PR-06 at Hacktoberfest: Applying E2E Testing Lessons to Chart.js + Alpha Vantage

Hacktoberfest: Contribution Chronicles

Introduction

In my previous post, I documented how seven rounds of feedback reshaped my HERE Maps E2E tests. This time, I put those lessons straight to work on a new page—Chart.js backed by Alpha Vantage—aiming to get it right on the first try.

Background

  • Project: Hackathon Starter
  • Page: Chart.js with Alpha Vantage data
  • Goal: Add a lean, reliable Playwright E2E test suite
  • Challenge: Avoid the “write everything, then refactor” trap

The Temptation I Skipped

Without the HERE Maps experience, I would have:

  • Loaded the page for every test (8 tests → 8 loads)
  • Checked static HTML (titles, buttons, icons)
  • Verified <script> tags and versions
  • Simulated flaky error scenarios

…which is exactly the bucket of things maintainers asked me to remove last time.

Applying the Lessons — From the Start

Here’s what I did instead, upfront:

1) Shared Page Pattern

Create the page once in beforeAll and reuse it across tests.

Effect: 8 potential page loads → 1 (≈ −87%).
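Here is a minimal sketch of the pattern, assuming a Playwright setup with baseURL configured; the /api/chart route, the canvas selector, and the test names are assumptions rather than the exact code in the PR:

```typescript
import { test, expect, Page } from '@playwright/test';

test.describe('Chart.js + Alpha Vantage', () => {
  let page: Page;

  // Create the page once for the whole suite instead of once per test.
  test.beforeAll(async ({ browser }) => {
    page = await browser.newPage();
    await page.goto('/api/chart');        // relative URL assumes baseURL in playwright.config
    await page.waitForSelector('canvas'); // wait until Chart.js has a canvas to draw on
  });

  test.afterAll(async () => {
    await page.close();
  });

  test('page loads and the chart canvas is visible', async () => {
    await expect(page.locator('canvas')).toBeVisible();
  });
});
```

The sketches in the next sections reuse this shared page variable inside the same describe block.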

2) Focus on Core Functionality

Don't test: titles, button labels, Font Awesome icons, <script> tags, descriptive copy.

Do test: the chart renders with data, the label/point counts are correct, and other observable behavior.
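As a sketch of what "observable behavior" can mean here (reusing the shared page from the block above), read the live Chart.js instance instead of static markup. Chart.getChart assumes Chart.js v3+, and the "myChart" canvas id is an assumption about the page:

```typescript
test('chart is populated with Alpha Vantage data', async () => {
  const { labelCount, pointCount } = await page.evaluate(() => {
    // Chart.getChart('<canvas id>') is the Chart.js v3+ registry lookup.
    const chart = (window as any).Chart.getChart('myChart');
    return {
      labelCount: chart.data.labels.length,
      pointCount: chart.data.datasets[0].data.length,
    };
  });

  expect(labelCount).toBeGreaterThan(0); // the chart actually received data
  expect(pointCount).toBe(labelCount);   // every label maps to exactly one point
});
```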

3) Precise Data Assertions

Where possible, assert exact values (not broad ranges). If data varies, assert deterministic transforms (e.g., label counts, date ordering, first/last point mapping).
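One hedged example of such a deterministic assertion, again reading the shared page's chart instance: check label ordering without pinning any market value. That the labels are dates in ascending order is an assumption about how the page maps Alpha Vantage's time series onto the chart.

```typescript
test('labels are parseable dates in chronological order', async () => {
  const labels: string[] = await page.evaluate(
    () => (window as any).Chart.getChart('myChart').data.labels
  );

  const timestamps = labels.map((label) => new Date(label).getTime());
  const sortedAscending = [...timestamps].sort((a, b) => a - b);

  expect(timestamps.every((t) => !Number.isNaN(t))).toBe(true); // every label parses as a date
  expect(timestamps).toEqual(sortedAscending);                  // oldest first, newest last
});
```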

4) API Key Strategy

Instead of testing fallbacks, I let the test fail fast when the Alpha Vantage API key is invalid. That surfaces configuration issues immediately and keeps the suite honest.
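One way to encode that fail-fast stance is a guard that runs before any page is opened; ALPHA_VANTAGE_API_KEY is an assumed variable name, not necessarily what Hackathon Starter uses:

```typescript
import { test } from '@playwright/test';

// Abort the suite immediately on configuration problems instead of testing fallbacks.
test.beforeAll(() => {
  const key = process.env.ALPHA_VANTAGE_API_KEY; // assumed env var name
  if (!key || key === 'demo') {
    throw new Error(
      'ALPHA_VANTAGE_API_KEY is missing or still the "demo" key; fix the config before running the suite'
    );
  }
});
```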

Final Test Structure

  • Test 1: Page loads once and initializes the chart
  • Test 2: Data → chart pipeline works (labels/points count, x–y mapping)
  • Test 3: Integrity checks (e.g., newest label aligns with latest data; legend present; canvas is drawn; see the sketch below)

(Deliberately no static DOM/script/version checks, and no unreliable error simulations.)
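As one concrete sketch of Test 3's integrity checks (still using the shared page): compare the rendered canvas against a blank canvas of the same size, since Chart.js paints the chart, legend included, directly onto the canvas.

```typescript
test('chart canvas has actually been painted', async () => {
  const painted = await page.evaluate(() => {
    const canvas = document.querySelector('canvas') as HTMLCanvasElement | null;
    if (!canvas) return false;
    const blank = document.createElement('canvas');
    blank.width = canvas.width;
    blank.height = canvas.height;
    // A drawn chart produces a different data URL than an empty canvas of the same size.
    return canvas.toDataURL() !== blank.toDataURL();
  });

  expect(painted).toBe(true);
});
```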

Performance & Scope — If vs. Actual

Metric        | Old Habit (Hypothetical) | Actual PR | Improvement
Test count    | 8                        | 3         | −62%
Page loads    | 8                        | 1         | −87%
LoC           | ~230                     | ~110      | −53%
Runtime       | ~15s                     | ~8.4s     | −44%
Review rounds | 2–3                      | Likely 1  |

PR Communication (What I Wrote)

Sorry for the late PR! After your feedback on the HERE Maps tests (#1457), I wanted to apply those optimizations here before submitting:

  • Using a shared page to reduce page loads
  • Removing static element/script checks
  • Focusing on core functionality and precise assertions

Happy to adjust anything further if needed.

Why this works: it explains the delay, shows growth, lists concrete improvements, and stays collaborative.

Side-by-Side: Two PRs, Different Approach

  • HERE Maps PR (#1457): 7 tests → 7 feedback points → heavy refactor → final 3 tests.
  • Chart.js PR (#1489): Started with the optimized pattern → 3 focused tests → expecting minimal feedback.

Key Takeaways

  • Transfer learning matters. Yesterday’s feedback is today’s checklist.
  • Preventive design beats cleanup. Do less, better, earlier.
  • Test what users see, not implementation details.
  • Fail fast on config (API keys). It saves everyone time.
  • Communicate like a teammate. Own trade-offs and invite discussion.
