<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rich</title>
    <description>The latest articles on DEV Community by Rich (@yerac).</description>
    <link>https://dev.to/yerac</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F173308%2Fe512014e-2351-4a86-9f36-c81ec40181cc.png</url>
      <title>DEV Community: Rich</title>
      <link>https://dev.to/yerac</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yerac"/>
    <language>en</language>
    <item>
      <title>From Acceptance Criteria to Playwright Tests with MCP</title>
      <dc:creator>Rich</dc:creator>
      <pubDate>Fri, 23 Jan 2026 09:56:39 +0000</pubDate>
      <link>https://dev.to/yerac/from-acceptance-criteria-to-playwright-tests-with-mcp-4ka6</link>
      <guid>https://dev.to/yerac/from-acceptance-criteria-to-playwright-tests-with-mcp-4ka6</guid>
<description>&lt;p&gt;Modern UI test tooling has quietly raised the bar for who can participate. Playwright is powerful, but it assumes comfort with TypeScript, selectors, repo structure, and terminal use. That gap often collapses testing back onto developers, creating pressure for them to validate their own work. This proof of concept explores a low-code split of responsibility using Playwright MCP: acceptance criteria stay in plain English, owned by the test team, and Playwright MCP is used as an execution layer to explore the UI and construct real Playwright tests from those criteria. The outcome is not “&lt;em&gt;AI-written tests&lt;/em&gt;”, but executable checks that preserve independent validation without requiring the test team to learn Playwright or its mechanics; they stay focused on acceptance criteria.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with UI testing
&lt;/h2&gt;

&lt;p&gt;The core problem is not test capability, it is familiarity with coded tests. Testers are hired to reason about product behaviour, risk, and intent, yet modern UI testing assumes knowledge of tooling like Cypress/Playwright, and code-first test structures in JavaScript/TypeScript. While non-coded testing tools exist, they tend to be brittle, opaque, or tied to expensive platforms. The result is a gradual drift of test ownership back to developers, reintroducing the “&lt;strong&gt;&lt;em&gt;marking your own homework&lt;/em&gt;&lt;/strong&gt;” pressure that independent testing is meant to avoid.&lt;/p&gt;

&lt;p&gt;Yes, developers &lt;em&gt;could&lt;/em&gt; write the tests, but &lt;em&gt;should&lt;/em&gt; they? Yes, the testers &lt;em&gt;could&lt;/em&gt; learn JavaScript and the Playwright framework, but all this takes time, and what if a new hire comes from different tooling or frameworks?&lt;/p&gt;

&lt;p&gt;When we write Features, Stories, or Tasks into our ticketing system at work, they are usually accompanied by requirements and acceptance criteria, which is to say that, in practice, we already have plain-English specifications. With minor adjustments, those acceptance criteria can become explicit UX interaction specs, simply by being a little more deliberate and verbose in the detail.&lt;/p&gt;

&lt;p&gt;Aim:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leverage modern, free test frameworks without forcing the test team to become framework specialists.&lt;/li&gt;
&lt;li&gt;Use automation to translate intent into executable tests, rather than asking humans to translate intent into code.&lt;/li&gt;
&lt;li&gt;Preserve separation between development and validation, avoiding the slow drift toward developers testing their own assumptions.&lt;/li&gt;
&lt;li&gt;Reduce onboarding friction when testers come from different tooling backgrounds, without lowering the quality or rigour of automated tests.&lt;/li&gt;
&lt;li&gt;But mostly: &lt;strong&gt;Allow acceptance criteria to remain the primary artefact, written in plain English and owned by test&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fyer.ac%2Fblog%2Fwp-content%2Fuploads%2F2026%2F01%2FMCPOverview.png%3Fresize%3D1053%252C592" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fyer.ac%2Fblog%2Fwp-content%2Fuploads%2F2026%2F01%2FMCPOverview.png%3Fresize%3D1053%252C592" alt="MCP Overview. &amp;lt;br&amp;gt;
File goes to LLM, MCP tools called, specifically playwright that has tools for creating, debugging and running tests" width="1053" height="592"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The idea is simple. We can use VS Code Copilot to take a plain-text test definition, let the LLM do the reasoning, allow it to call Playwright tooling via MCP, and then output a shiny, validated, runnable TypeScript test at the end.&lt;/p&gt;
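&lt;p&gt;To make the translation concrete, here is a toy sketch of the kind of mapping involved, from numbered plain-English steps to structured actions. The &lt;code&gt;Action&lt;/code&gt; type and step phrasings are invented for illustration; in reality the LLM does this reasoning and Playwright MCP executes each step against the live page.&lt;/p&gt;

```typescript
// Toy illustration only: the real translation is done by the LLM plus MCP.
// This just shows the shape of "plain English in, structured steps out".
type Action =
  | { kind: "navigate"; url: string }
  | { kind: "assertVisible"; target: string }
  | { kind: "type"; target: string; text: string };

function parseScenario(lines: string[]): Action[] {
  const actions: Action[] = [];
  for (const line of lines) {
    const step = line.replace(/^\d+\.\s*/, ""); // strip the "1. " numbering
    let m: RegExpMatchArray | null;
    if ((m = step.match(/^navigate to (.+)/i))) {
      actions.push({ kind: "navigate", url: m[1] });
    } else if ((m = step.match(/^validate (.+) is visible/i))) {
      actions.push({ kind: "assertVisible", target: m[1] });
    } else if ((m = step.match(/^type "(.+)" into (.+)/i))) {
      actions.push({ kind: "type", target: m[2], text: m[1] });
    }
  }
  return actions;
}

const actions = parseScenario([
  "1. Navigate to www.google.com",
  "2. Validate the search input is visible",
]);
console.log(actions[0]); // a navigate action targeting www.google.com
```

The interesting part is that the agent does not guess the third column (selectors, waits, locators); it discovers those by calling the MCP tools against the running page.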
&lt;h3&gt;
  
  
  Getting set up
&lt;/h3&gt;

&lt;p&gt;We need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Visual Studio Code&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Copilot&lt;/strong&gt;, ideally with a stronger model such as the latest GPT or Claude Sonnet. The better and more &lt;em&gt;appropriate&lt;/em&gt; the model, the better the outcome will typically be.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optional but recommended:&lt;/strong&gt; the VS Code extension &lt;em&gt;Playwright Test for VS Code&lt;/em&gt;, which adds native support for Playwright tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What will not be covered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Any Playwright specifics, such as best practices or common Playwright pitfalls; this will focus purely on the automatic test generation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create a new folder and open it in VS Code. Run the following command to scaffold a new Playwright project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm init playwright@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will install Playwright if it is not already present.&lt;/p&gt;

&lt;p&gt;During setup, the tooling will ask for permission to install the &lt;code&gt;create-playwright&lt;/code&gt; package and prompt for a few configuration choices, such as language (JavaScript vs TypeScript), test folder name (“tests” is fine), and whether to install browser dependencies. The defaults are generally sensible for this proof of concept.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhv8i1h6n22xjne8dazzp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhv8i1h6n22xjne8dazzp.png" alt="VS Code terminal showing the setup steps described above" width="543" height="90"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  MCP setup
&lt;/h4&gt;

&lt;p&gt;At this point we have a standard Playwright project. Next, we need to install the Playwright MCP tooling.&lt;/p&gt;

&lt;p&gt;This can be done by visiting the Playwright MCP repository on GitHub and clicking &lt;strong&gt;Install&lt;/strong&gt; for VS Code:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/microsoft/playwright-mcp" rel="noopener noreferrer"&gt;https://github.com/microsoft/playwright-mcp&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once installed, ensure the MCP server is running. Open the VS Code command palette and search for &lt;strong&gt;“MCP: List Servers”&lt;/strong&gt;. You should see &lt;strong&gt;Playwright&lt;/strong&gt; listed.&lt;/p&gt;

&lt;p&gt;Selecting it will show options to start, stop, and configure the server. You can start it from here if it is not already running, or choose &lt;strong&gt;Show Configuration&lt;/strong&gt; to open the &lt;code&gt;mcp.json&lt;/code&gt; file and inspect or adjust the setup directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9tefxisxo6b9vn6ogif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9tefxisxo6b9vn6ogif.png" alt="Listing and interacting with MCP in VS Code as described above." width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Custom agent instructions and test prep
&lt;/h3&gt;

&lt;p&gt;Like all LLM interaction, this works best when it is provided with appropriate context. We will do two things here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a custom context that explains how the agent should utilise MCP and Playwright, and how it should behave as a tester.&lt;/li&gt;
&lt;li&gt;Create a custom instruction set that references this context where appropriate, along with some quality-of-life instructions to help keep inputs and outputs aligned.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, create two folders at the repository root:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;prompts&lt;/strong&gt;
This is where our plain-language tests will live.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;contexts&lt;/strong&gt;
This is where we place our custom context files. We can, and later will, utilise the &lt;code&gt;.github/agents/instructions&lt;/code&gt; directory, but it will become clear shortly why this is kept separate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Inside the &lt;strong&gt;contexts&lt;/strong&gt; folder, create a new markdown file named &lt;code&gt;playwright-tester-agent.md&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Agent Context
You are a Playwright test generator agent.
You're given a natural language scenario describing what to test in a web application.
Your task is to generate a valid Playwright test using @playwright/test in TypeScript.

# Important:
- DO NOT generate the full test code immediately based on the scenario alone.
- DO gather context by executing steps one at a time using the Playwright MCP Tools. These steps may include, but are not limited to:
    - Inspecting DOM structure.
    - Fetching selectors.
    - Validating element visibility.

# Process
1. Parse the scenario and break it down into actionable steps.
2. For each step:
    - Use MCP to fetch the page context
    - Validate element presence, interaction type (click, type, wait, etc.)
3. Once all steps are validated and the context is collected,
    - Emit a final Playwright test using @playwright/test syntax in TypeScript.
    - Include appropriate waits, locators, and assertions based on message history.
4. Save the generated `.spec.ts` file into the `/tests` directory.
5. Execute the tests using the Playwright test runner.
6. If the test fails, re-evaluate using MCP context and regenerate until it passes.

# Notes/Guidance
- Use plain readable locators
- Avoid hardcoding unless required.
- Follow Playwright best practices for stability and retries.
- You may be testing Single Page Apps with initial load/waits. 

# GOAL
To generate reliable, maintainable, and context-aware Playwright tests using AI and MCP.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, the context file exists, but unless it is explicitly assigned in the chat window, it will not be used.&lt;/p&gt;

&lt;p&gt;Earlier I mentioned that this file could be placed in the native instructions folder. While that does work, it causes the context to be injected into &lt;strong&gt;every&lt;/strong&gt; GitHub Copilot chat session for the project. That is not what we want. The goal is to apply this behaviour only when we are explicitly asking the agent to create or evolve tests.&lt;/p&gt;

&lt;p&gt;To achieve this, create a new global instruction set, named something like &lt;code&gt;agent-pw.instructions.md&lt;/code&gt;. We can do this by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the Copilot panel in Visual Studio Code&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;Settings → Chat Instructions&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;New Instruction File&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3bdxjiq9ld19fh3wt8r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3bdxjiq9ld19fh3wt8r.png" alt="Chat instruction setup in VS Code" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This instruction file can selectively reference the &lt;code&gt;playwright-tester-agent.md&lt;/code&gt; context only when test generation is requested, keeping normal Copilot usage unaffected while still giving us a highly opinionated, test-focused agent when we need it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
applyTo: '**'
---
If you are being asked to generate tests for a piece of code, and acting in agent mode, follow these instructions:
- Read the instructions in the contexts/playwright-tester-agent.md file carefully.
- Follow the step-by-step process outlined in that file to gather context and generate reliable Playwright tests.

On completion, if you used a `.md` file under the `/prompts` folder to generate the tests, update the prompt file to comment the corresponding test file path at the top, i.e. `# File: tests/**/login.spec.ts`. This will help track which prompts generated which tests for updates. NO OTHER UPDATES can be made to the prompt file.

When generating tests the following rules should be followed:
- When creating new data, always use unique values to avoid conflicts with existing data.
- All tests that create data must also clean up that data at the end of the test to avoid polluting the test environment.
- You will NEVER delete data that you did not create within the test itself.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point we are essentially saying:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read the agent context for being a tester if you are being asked to generate tests.&lt;/li&gt;
&lt;li&gt;Update the input markdown file with the test path so we can link the two, which is useful when asking it to update a test.&lt;/li&gt;
&lt;li&gt;Hygiene guardrails for creating and deleting data – which &lt;em&gt;seemed&lt;/em&gt; to work.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Running our first test
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Manually
&lt;/h3&gt;

&lt;p&gt;First, just to get a handle on how this works, go to the chat window, ensure it is in agent mode, and ask it something like (&lt;em&gt;note that because we set up the custom instructions above, we do not need to add the context manually&lt;/em&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Generate a Test for this scenario:

1. Navigate to www.google.com
2. Validate the search input is visible
3. Validate the "Google Search" button is visible.
4. Type "Sausages" into the search box, and press the search button.
5. Confirm we have been redirected to a search page, and the page contains information on Sausages.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: Using sites like Google is a poor real-world example, as headless browsers are often blocked by CAPTCHA. This is purely illustrative.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The agent will explore the page, call the Playwright MCP tools, and construct a Playwright test that satisfies the scenario.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8i0xjuc5a0ucobfinaj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8i0xjuc5a0ucobfinaj.png" alt="Agent constructing a test in VS Code CoPilot as described above." width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you run the generated test (placed under the &lt;code&gt;/tests&lt;/code&gt; folder), either via the &lt;strong&gt;Tests&lt;/strong&gt; tab in VS Code if you installed the Playwright extension, or via the CLI, you should see it pass successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknh8job2yy965rw5evn1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknh8job2yy965rw5evn1.png" alt="Test runner window in VS Code" width="441" height="99"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Persistent prompt files
&lt;/h3&gt;

&lt;p&gt;The issue with doing this manually in chat is that none of it is durable. Once the chat window is closed, the intent behind a test is lost, updating it becomes harder than it should be, and you no longer have a clean record of what the test was actually trying to prove. You end up with code that “&lt;em&gt;works&lt;/em&gt;”, but without the plain-English specification that explains &lt;em&gt;why&lt;/em&gt; it exists.&lt;/p&gt;

&lt;p&gt;Under our &lt;code&gt;prompts&lt;/code&gt; directory, we can create a new markdown file (or a logical folder structure of &lt;code&gt;.md&lt;/code&gt; files) for our tests, where the content is the same as the manual scenario. We can now go to the chat window with this test in context and ask it to “Make tests” (or “Update tests” later).&lt;/p&gt;

&lt;p&gt;This will create a new test file under &lt;code&gt;/tests&lt;/code&gt;, the same as the manual scenario above.&lt;/p&gt;

&lt;p&gt;Because our instruction set includes a rule to update the prompt file with the corresponding generated test file path, these files are automatically kept in sync, creating a 1:1 mapping between the prompt and the test. This means that if we later add another requirement and ask the agent to regenerate, it updates the existing test rather than creating a new one.&lt;/p&gt;
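&lt;p&gt;As a sketch of what that bookkeeping enables, the comment written to the top of a prompt file can be parsed back out to find the linked spec. The helper and file name below are invented for illustration (the agent does this via its instructions, not via code like this), but the &lt;code&gt;# File:&lt;/code&gt; header format matches the instruction file shown earlier.&lt;/p&gt;

```typescript
// Illustrative helper (invented for this post): map a prompt file back to the
// spec the agent generated from it, using the "# File: ..." header comment.
function linkedTestPath(promptContents: string): string | null {
  const firstLine = promptContents.split("\n", 1)[0];
  const match = firstLine.match(/^#\s*File:\s*(\S+)/);
  return match ? match[1] : null;
}

const prompt = "# File: tests/google-search.spec.ts\n# Google : Basic Search";
console.log(linkedTestPath(prompt)); // "tests/google-search.spec.ts"
```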

&lt;p&gt;At this point we have a framework in place for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Placing markdown files into a structured &lt;code&gt;prompts&lt;/code&gt; folder&lt;/li&gt;
&lt;li&gt;Generating and updating tests without needing to understand the underlying test tooling&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftco2892mv8tzss0j95hn.png" alt="⚠" width="72" height="72"&gt;Test your tests!
&lt;/h2&gt;

&lt;p&gt;The auto-healing loop is powerful, but it needs clear boundaries. An agent that is allowed to regenerate tests until they pass can easily optimise for success rather than correctness, for example by weakening assertions or validating incidental behaviour.&lt;/p&gt;

&lt;p&gt;Make sure you debug (as in, watch or record using Playwright) each test execution whenever the test changes, to ensure it’s doing what &lt;em&gt;you think&lt;/em&gt; it should be doing.&lt;/p&gt;
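&lt;p&gt;A contrived example of what “optimising for success” can look like. Both checks below go green on a healthy run, but only one proves the behaviour; a regenerate-until-pass loop can quietly drift from the strong form to the weak one. (Names and logic here are invented for illustration, not taken from any generated test.)&lt;/p&gt;

```typescript
// Invented illustration: a strong assertion vs a weakened one that still "passes".
function strongResultsCheck(url: string, bodyText: string): boolean {
  // Asserts what the scenario actually cares about: we reached a results
  // page, and it reflects the query.
  if (!url.includes("/search")) return false;
  return bodyText.toLowerCase().includes("sausages");
}

function weakenedResultsCheck(url: string, bodyText: string): boolean {
  // "Passes" on any page that rendered anything at all, including an error page.
  return bodyText.length > 0;
}

console.log(strongResultsCheck("https://example.com/oops", "404 Not Found")); // false
console.log(weakenedResultsCheck("https://example.com/oops", "404 Not Found")); // true
```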

&lt;h2&gt;
  
  
  Full Autonomy as an option
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Agents and why we deliberately did not use all of them
&lt;/h3&gt;

&lt;p&gt;Playwright MCP ships with multiple &lt;a href="https://playwright.dev/docs/test-agents" rel="noopener noreferrer"&gt;agent&lt;/a&gt; profiles, typically oriented around planning, execution, and validation. On paper, this looks appealing: a fully autonomous loop that can explore an application, decide what to test, and generate the tests itself. These can optionally be added to the solution by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx playwright init-agents --loop=vscode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;…and honestly, it’s &lt;strong&gt;&lt;em&gt;impressive&lt;/em&gt;&lt;/strong&gt;. In practice, however, this is exactly the boundary we chose not to cross for this proof of concept.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The issue is not capability, it is authority&lt;/strong&gt;. When the same agent is allowed to discover behaviour, define expectations, and then validate those expectations, you collapse intent and verification into a single feedback loop. That produces coverage, but it does not produce confidence. To contrast with the opening problem, in this case you are not just marking your own homework, you are writing the subject and the test as well. This could be even worse if AI also &lt;a href="https://dev.to/yerac/vibing-in-kiro-to-create-a-self-serve-portainer-wrapper-28ee"&gt;wrote the code&lt;/a&gt;!&lt;/p&gt;

&lt;h3&gt;
  
  
  But you should check them out anyway…
&lt;/h3&gt;

&lt;p&gt;Once the tooling is installed, provided you are running VS Code v1.105 (released October 9, 2025) or later, you will see three new agents in the chat window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0065w9txrb3gajpwiftu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0065w9txrb3gajpwiftu.png" alt="Agents list in Co-Pilot showing the 3 Playwright agents" width="535" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The agents are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Planner&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;This agent explores a URL and generates verbose test plans based on what it discovers, without requiring explicit direction, although guidance can be provided. &lt;strong&gt;I did find value in asking it to explore pages I had already written tests for&lt;/strong&gt;, as it occasionally surfaced additional edge cases. The output is a Markdown test plan that can be reviewed, edited, or passed on to another agent.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Generator&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;This agent takes a test plan, optionally produced by the Planner, and generates the required Playwright tests.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Healer&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;This agent works against failing tests and focuses on repairing them. &lt;strong&gt;It is particularly useful for stabilising brittle UI tests&lt;/strong&gt; where selectors or timing assumptions have drifted.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;I won’t go into detail on these, as the official docs have plenty of more verbose notes.&lt;/p&gt;

&lt;p&gt;I did actually add these into my project, but only used them for focussed activities rather than in a fully autonomous mode.&lt;/p&gt;

&lt;h2&gt;
  
  
  What isn’t covered here / Assumed knowledge and scope
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Testers still need to think like someone writing &lt;em&gt;automated&lt;/em&gt; tests.
&lt;/h3&gt;

&lt;p&gt;This post does not attempt to cover every aspect of building robust automated UI tests. Topics such as environment setup and teardown, test data seeding, authentication flows, and isolation between tests are all still critical to long-term stability and are deliberately out of scope here.&lt;/p&gt;

&lt;p&gt;While this approach removes the need for testers to work directly in Playwright or TypeScript (as in, understand the frameworks), it does not remove the need for good testing judgement. Understanding how well-structured automated tests should behave, how to avoid hidden coupling between tests, and how to reason about state, data lifecycle, and failure modes remains essential.&lt;/p&gt;

&lt;p&gt;In other words, this lowers the barrier to expressing tests in code, but it &lt;strong&gt;does not eliminate the need to think like someone who writes good automated tests&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why markdown structure matters, aka. My examples are bad…
&lt;/h3&gt;

&lt;p&gt;I chose to use a basic numbered list for my test example. Whilst this &lt;em&gt;is&lt;/em&gt; valid and may be suitable for basic tests, real prompt files should be structured like lightweight test specs in markdown, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;Before Each Test&lt;/strong&gt; section for navigation and preconditions&lt;/li&gt;
&lt;li&gt;Optional &lt;strong&gt;After Each Test&lt;/strong&gt; for clean-up&lt;/li&gt;
&lt;li&gt;A numbered set of &lt;strong&gt;scenarios&lt;/strong&gt;, each with a clear name and deterministic assertions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This keeps prompts durable, reviewable, and easy to evolve over time, and it helps the generated Playwright code stay aligned with the intent rather than becoming a pile of incidental checks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Google : Basic Search

# Before Each Test
- Navigate to https://www.google.com
- If a consent dialog is shown, accept it (only if present) so the page is usable.

# 1. Search

## 1.1 Can load the homepage and see the core controls
- Navigate to https://www.google.com
- Search input is visible.
- "Google Search" button is visible.

## 1.2 Can perform a search and reach results
- Navigate to https://www.google.com
- Search input is visible.
- Type "Sausages" into the search input.
- Click "Google Search" (or submit the search form).
- URL indicates we are on a results page (typically contains `/search`).
- Results page contains content related to "Sausages" (at minimum, the query appears on the page).

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Final considerations
&lt;/h2&gt;

&lt;p&gt;This is not about replacing testers with AI. While Playwright MCP can be configured to operate in a fully autonomous loop, the more interesting value is in using it as a bridge between human intent and executable tests.&lt;/p&gt;

&lt;p&gt;By keeping acceptance criteria explicit and human-owned, and constraining the agent to translate rather than invent intent, you preserve independent validation while removing the need for testers to work directly in code. The result is not fewer testers, but better leverage of their time on behaviour, risk, and edge cases rather than tooling mechanics.&lt;/p&gt;

&lt;p&gt;Full autonomy remains a useful option in specific scenarios, such as discovery, baseline coverage, or large UI changes. For validating that a system behaves as intended, however, a constrained, intent-driven approach tends to produce clearer tests and greater confidence.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://yer.ac/blog/2026/01/23/from-acceptance-criteria-to-playwright-tests-with-mcp/" rel="noopener noreferrer"&gt;From Acceptance Criteria to Playwright Tests with MCP&lt;/a&gt; appeared first on &lt;a href="https://yer.ac/blog" rel="noopener noreferrer"&gt;yer.ac | Adventures of a developer, and other things.&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>testing</category>
      <category>playwright</category>
      <category>automation</category>
    </item>
    <item>
      <title>Vibing in Kiro to create a self-serve Portainer wrapper.</title>
      <dc:creator>Rich</dc:creator>
      <pubDate>Mon, 18 Aug 2025 10:15:11 +0000</pubDate>
      <link>https://dev.to/yerac/vibing-in-kiro-to-create-a-self-serve-portainer-wrapper-28ee</link>
      <guid>https://dev.to/yerac/vibing-in-kiro-to-create-a-self-serve-portainer-wrapper-28ee</guid>
<description>&lt;p&gt;Original post: &lt;a href="https://yer.ac/blog/2025/08/18/vibing-in-kiro-to-create-a-self-serve-portainer-wrapper/" rel="noopener noreferrer"&gt;Vibing in Kiro to create a self-serve Portainer wrapper. - yer.ac | Adventures of a developer, and other things.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I recently (finally) got my invite to &lt;a href="http://Kiro.Dev" rel="noopener noreferrer"&gt;Kiro&lt;/a&gt;, Amazon’s new agentic IDE. I’d tinkered with it, but never had a real use case outside of the usual “make me a to-do app” test.&lt;/p&gt;

&lt;p&gt;Then Friday afternoon rolled around. With 30 minutes to go, I was asked to start parts of our Docker infrastructure. It’s a simple click in Portainer, which got me thinking… &lt;em&gt;we should automate this&lt;/em&gt;. Developers can sign in and toggle services on or off, but that still blocks non-technical users from self-serving. Why isn’t there a &lt;em&gt;simple&lt;/em&gt; dashboard for stopping (scale to zero), starting (scale to 1), or restarting a Portainer service? At least, not one that doesn’t require infra changes or digging through old repos – and again, this is Friday afternoon.&lt;/p&gt;

&lt;p&gt;As an aside, our internal environments have a 1:1 mapping between service and container, so service-level actions are the quickest path. Your mileage may vary. This post is also more “mental notes” than tutorial.&lt;/p&gt;

&lt;p&gt;I wrote this before the recent Kiro announcement, see the final concluding thoughts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Validating with PowerShell first
&lt;/h2&gt;

&lt;p&gt;First I wanted to confirm that Portainer exposes an API I could hit to scale services. If that worked, I could feed it into Kiro later. I used ChatGPT to outline the Portainer API boundaries, then had it generate a small PowerShell harness to prove the calls and auth. Straightforward enough: API key (Portainer → Account → Keys), your Environment ID, and a service name.&lt;/p&gt;

&lt;p&gt;The gist is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find the service by name&lt;/li&gt;
&lt;li&gt;Pull its spec/version&lt;/li&gt;
&lt;li&gt;Change the replica count (0 = stop, 1 = start)&lt;/li&gt;
&lt;li&gt;Update the service&lt;/li&gt;
&lt;li&gt;Verify&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With that working — and being able to scale to 0 or 1 easily — I dusted off my Kiro preview access.&lt;/p&gt;

&lt;p&gt;Code below for reference:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$BASE = 'http://portainer:9000/api'
$HEADERS = @{ 'X-API-Key' = 'your-key' }
$envId = 8 # change this to the environment id
$svcName = 'PRODUCT-SIT' # change this to the service name under the environment

# 1) Find the service by name
$filters = @{ name = @{ $svcName = $true } } | ConvertTo-Json -Compress
$enc = [uri]::EscapeDataString($filters)
$svcList = Invoke-RestMethod -Headers $HEADERS -Uri "$BASE/endpoints/$envId/docker/services?filters=$enc"

if (-not $svcList) { throw "Service '$svcName' not found on endpoint $envId." }
$svcId = $svcList[0].ID

# 2) Get full spec and version
$svc = Invoke-RestMethod -Headers $HEADERS -Uri "$BASE/endpoints/$envId/docker/services/$svcId"

# 3) Set replicas to 0 (stop); replicated mode only
if ($null -eq $svc.Spec.Mode.Replicated) { throw "Service is not replicated." }
Write-Host $svc
$svc.Spec.Mode.Replicated.Replicas = 0

# 4) Update the service with the new spec
$ver = $svc.Version.Index
$body = $svc.Spec | ConvertTo-Json -Depth 100

Write-Host $body

Invoke-RestMethod -Method Post -Headers $HEADERS -ContentType 'application/json' `
  -Uri "$BASE/endpoints/$envId/docker/services/$svcId/update?version=$ver" -Body $body

# 5) Optional verify
$verify = Invoke-RestMethod -Headers $HEADERS -Uri "$BASE/endpoints/$envId/docker/services/$svcId"
$verify.Spec.Mode.Replicated.Replicas
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Spec driven Vibe Coding with Kiro
&lt;/h2&gt;

&lt;p&gt;Kiro is another fork of VS Code with agentic AI built in, similar to Cursor. Where Kiro differs is its &lt;strong&gt;Spec Mode&lt;/strong&gt;. We can write a compact prompt that defines data, actions, states, and constraints, and the agent treats it like a contract. For my prompt, I kept it rough. I just pasted my PowerShell as the API reference and wrote:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I can turn off/on my docker instances by altering the scale set to 0 or 1 in portainer.
I have a script to do this like this:
{{ powershell code }}
I would like system where I have a JSOn config file which holds the URL, APIKEY, and a list of service names.
Then I want a UI for this which will:
- Show all the services
- Indicator to say if the svc is running or not (based on the scale &amp;gt;0)
- An option to turn off (scale 0), turn on (scale 1), or restart (scale 0, wait, scale 1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
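&lt;p&gt;For reference, the JSON config file the prompt asks for only needs to hold the connection details and a service list. A representative shape (illustrative placeholder values, not my real setup, and assuming the environment ID is also needed as in the PowerShell above) might be:&lt;/p&gt;

```json
{
  "portainerUrl": "http://portainer:9000",
  "apiKey": "your-api-key",
  "environmentId": 8,
  "services": ["PRODUCT-SIT", "PRODUCT-UAT"]
}
```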



&lt;p&gt;First, Kiro converts the prompt into system requirements. Each requirement gets its own acceptance criteria, written in a BDD syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzon1hsg9qqr5dbqqzr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzon1hsg9qqr5dbqqzr5.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we are happy with the requirements and have validated they capture what we want, we can move to the &lt;strong&gt;Design&lt;/strong&gt; phase. The design phase settles the technology choices for the frontend and backend, designs and maps out the interactions between components with some sample code, and finally adds detail on the testing strategy it will use to validate itself as it goes. Here we can make any amendments (I didn’t) before moving on. Interestingly, in every test I ran it defaulted to Node/React/TypeScript.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fck1cgvjzeublqaqdqrq0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fck1cgvjzeublqaqdqrq0.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final phase is the &lt;strong&gt;Task List&lt;/strong&gt;, where we can see all the steps the agent will execute to achieve the end goal. Each step has sub-tasks and a link back to the requirement for self-validation and guidance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rew70u1wz8k903wnjy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rew70u1wz8k903wnjy9.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whilst the screenshot above shows me post-execution, it’s simply a case of selecting “Start Task” next to an uncompleted item to kick the process off.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjfumopcb63u2036eauf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjfumopcb63u2036eauf7.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kiro also supports &lt;strong&gt;steering rules&lt;/strong&gt; for guardrails and native &lt;strong&gt;MCP&lt;/strong&gt; integration to plug in tools, APIs, and docs (although I didn’t make use of this).&lt;/p&gt;

&lt;h2&gt;
  
  
  Let the magic happen…
&lt;/h2&gt;

&lt;p&gt;I kicked things off by running the tasks in order — starting with the basic scaffolding and React setup. As it progressed, Kiro validated its own work with unit tests and checks along the way.&lt;/p&gt;

&lt;p&gt;Like most agentic IDEs, Kiro can’t run commands without approval. Whenever it needed to spin up a server, run tests, or interact with the system, it prompted me first. You can add commands to a trust list, either as one-offs or using wildcards — for example, trusting all &lt;code&gt;npm *&lt;/code&gt; commands versus just &lt;code&gt;npm start&lt;/code&gt;. That means over time it can run more autonomously on repeat runs.&lt;/p&gt;

&lt;p&gt;There were a few hiccups though. A common one was getting stuck on steps like “Validating that Express server starts”, since it doesn’t seem to realise the terminal is tied up if the command keeps running. Similarly, Jest sometimes hung after running tests, waiting for me to manually stop it, which left Kiro stuck as well.&lt;/p&gt;

&lt;p&gt;When I intervened (Ctrl+C to the rescue), it usually recovered and checked the terminal output to confirm whether tests had passed. But a couple of times it jumped to the wrong conclusion and marked everything as fine despite failures – I assume because there was no proper exit code or error thrown.&lt;/p&gt;

&lt;p&gt;In those moments, the chat window proved useful. For example, when it couldn’t confirm if the server was running (because the terminal was still tied up with &lt;code&gt;npm start&lt;/code&gt;), I told it to try &lt;code&gt;curl&lt;/code&gt; instead. It did, and even remembered that approach for later runs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5j1ui2z090kij3fmi8gz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5j1ui2z090kij3fmi8gz.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interestingly, when I asked “&lt;em&gt;Are you stuck?&lt;/em&gt;“, the agent responded quite sarcastically, telling me “&lt;em&gt;I’m not stuck thank you, I am just busy!!&lt;/em&gt;“.&lt;/p&gt;

&lt;h2&gt;
  
  
  The end result
&lt;/h2&gt;

&lt;p&gt;The end result was quite impressive, really, given my &lt;em&gt;very basic&lt;/em&gt; prompt. I updated my config with a few of our environments and ran the start command. This is way closer to a working system than I got from other agents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eyxqme25q38hs9am3y0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eyxqme25q38hs9am3y0.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It was 99% of the way there; the only thing that didn’t work well was the restart, which was sending a malformed model to the API, resulting in an HTTP 400. I simply switched to Kiro chat, told it there was a problem in a specific file, gave it the API info again, and it corrected itself. I don’t know if I can blame Kiro for that though, as I was hesitant to give it my real API key for testing until the very end, in a human-controlled test – I didn’t want it going rogue and messing up our environments!&lt;/p&gt;
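&lt;p&gt;For what it’s worth, the restart logic itself is just two scale operations with a wait in between. A minimal sketch in Node (the generated stack), with a &lt;em&gt;hypothetical&lt;/em&gt; client object standing in for the Portainer calls from the PowerShell harness:&lt;/p&gt;

```javascript
// Restart = scale to 0, wait for the scale-down to take effect, scale to 1.
// "client" is a hypothetical wrapper around the Portainer service-update
// API; setReplicas/getReplicas are illustrative names, not a real SDK.
async function restartService(client, serviceName) {
  await client.setReplicas(serviceName, 0);
  // Poll until the service actually reports zero replicas.
  while ((await client.getReplicas(serviceName)) !== 0) {
    await new Promise(function (resolve) { setTimeout(resolve, 500); });
  }
  await client.setReplicas(serviceName, 1);
}

module.exports = { restartService };
```

&lt;p&gt;Getting the update payload right is the fiddly part – as in the PowerShell above, Portainer expects the full service spec plus the current version index, which is presumably where Kiro’s generated model went wrong.&lt;/p&gt;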

&lt;h2&gt;
  
  
  Final thoughts on the output
&lt;/h2&gt;

&lt;p&gt;I mean, it did what I asked. In under half an hour on a Friday, with nothing more from me than a rough couple of hundred characters and the occasional tap on “Trust command,” it built a system that actually worked and would be enough for my team to self-service our test environments.&lt;/p&gt;

&lt;p&gt;Would I rely on this for anything beyond quick POCs? Probably not. It’s a fun tool/toy, and for an internal use case like this it got about 80% of the way there with almost zero effort from me. But if this were destined for production, I’d need to review every file to be confident, and that’s where I think these agents shine more as “peer coders” than full replacements (for example, I quite like Copilot).&lt;/p&gt;

&lt;p&gt;There’s also the risk of people shipping AI-generated slop without understanding it. Honestly, if I’d been a younger dev, I might have looked at this output and thought it was ready to ship even though I wouldn’t have had the React experience (I still don’t!) to properly validate it.&lt;/p&gt;

&lt;h2&gt;
  
  
  More importantly, Kiro has priced itself out.
&lt;/h2&gt;

&lt;p&gt;I got &lt;em&gt;very lucky&lt;/em&gt; with my timing here as literally &lt;strong&gt;that evening&lt;/strong&gt; Kiro officially announced the &lt;a href="https://kiro.dev/changelog/paid-tiers-and-waitlist-codes/" rel="noopener noreferrer"&gt;tiering and free-usage limits&lt;/a&gt;, as well as the &lt;a href="https://kiro.dev/blog/pricing-plans-are-live/" rel="noopener noreferrer"&gt;pricing plans.&lt;/a&gt; It looks like free users don’t get any spec requests at all (other than the 100 trial ones that expire), and even the $20 tier gets a mere 125 Spec requests and 225 Vibe requests. Even users on $40 pro accounts were finding that they were burning through their limits within an hour.&lt;/p&gt;

&lt;p&gt;The issue here is that this doesn’t equate to 125 Spec requests or 225 additional interactions through vibing, as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each interaction could cost at least 1 spec and 1 vibe.&lt;/li&gt;
&lt;li&gt;The agent re-validating itself uses credits&lt;/li&gt;
&lt;li&gt;Some specs/vibes consume multiple credits during feedback loops – one user found &lt;a href="https://www.reddit.com/r/kiroIDE/comments/1msllie/very_expensive_with_nontransparent_pricing/" rel="noopener noreferrer"&gt;225 vibes equated to ~11 questions&lt;/a&gt;, another burnt through &amp;gt;100 vibe requests asking Kiro to validate its own requirements against the design and task list, and many more burnt through the allowance in &lt;a href="https://www.reddit.com/r/kiroIDE/comments/1msksw5/renew_the_account_and_all_the_vibe_credits/" rel="noopener noreferrer"&gt;under an hour&lt;/a&gt;. Even the $200 plan seems low-value.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It was met with quite a bit of backlash in places like the &lt;a href="https://www.reddit.com/r/kiroIDE/" rel="noopener noreferrer"&gt;Kiro subreddit&lt;/a&gt; and their official Discord, with many users citing how expensive the plans are, the lack of transparency, and what felt like a bit of a rug-pull.&lt;/p&gt;

&lt;p&gt;It’s a shame really, as the hype was quite big and it &lt;em&gt;seemed&lt;/em&gt; like it addressed a bunch of issues that agentic IDEs like Cursor and Windsurf had. It’s just unfortunate that the pricing plan has essentially killed the product off before it got to its full release.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verbosity is suddenly an issue
&lt;/h2&gt;

&lt;p&gt;I didn't think much about this (more is better, right?!) until I came to edit this post.&lt;/p&gt;

&lt;p&gt;I did find the task list a little too verbose, and I didn't actually run the final few tasks, which were all about adding additional integration testing, logging, comprehensive unit tests (which it already had a lot of), and "productionisation readiness". Overall there were 10 main steps, with most having between 1 and 5 subtasks.&lt;/p&gt;

&lt;p&gt;There is nothing wrong with the requirements and task list. If I were writing a spec to give to another dev I would include all this information, but in a world where each interaction has a considerable cost, there is even more risk that people will omit "non-critical-path" deliverables like testing and documentation, as I did. In a world of AI slop where nobody understands the outputs, do we want to risk the omission of documentation? And what happens if the model gets changed to default to enhanced unit testing? Suddenly the costs increase...&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s next?
&lt;/h2&gt;

&lt;p&gt;Honestly, despite the sour taste around Kiro’s usage limits, which makes me unlikely to stick with it for further enhancements, I do see a solid future for the self-service tool itself. It solved a real pain point in our Docker infrastructure, and with a few minor tweaks (maybe continuing the “zero-code” approach with Copilot?) and deploying it into our cluster, we’ll be saving time without interrupting any critical path work. &lt;strong&gt;Overall, I’d call the tool a success, and a fun experience nonetheless.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://yer.ac/blog/2025/08/18/vibing-in-kiro-to-create-a-self-serve-portainer-wrapper/" rel="noopener noreferrer"&gt;Vibing in Kiro to create a self-serve Portainer wrapper.&lt;/a&gt; appeared first on &lt;a href="https://yer.ac/blog" rel="noopener noreferrer"&gt;yer.ac | Adventures of a developer, and other things.&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>react</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Moving local workspaces between users VS2019/TFS [Self Hosted/ Azure Devops]</title>
      <dc:creator>Rich</dc:creator>
      <pubDate>Wed, 10 Mar 2021 09:13:49 +0000</pubDate>
      <link>https://dev.to/yerac/moving-local-workspaces-between-users-vs2019-tfs-self-hosted-azure-devops-1h6</link>
      <guid>https://dev.to/yerac/moving-local-workspaces-between-users-vs2019-tfs-self-hosted-azure-devops-1h6</guid>
      <description>&lt;p&gt;Whilst I use GIT for most my source control these days, I still have some projects in TFSVC. On a recent switch of Visual Studio accounts I temporarily lost access to my mapped workspaces as these are linked to the VS logged in user, rather than to the machine. This post will show migrating a workspace to another user.&lt;/p&gt;

&lt;p&gt;There are 3 scenarios covered here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accessing a remote workspace for another user including any pending changes&lt;/li&gt;
&lt;li&gt;Migrating a workspace with no pending changes&lt;/li&gt;
&lt;li&gt;Migrating a workspace with pending changes &lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  The low effort approach instead:
&lt;/h4&gt;

&lt;p&gt;If you don’t have many mappings and still have access to your old workspace, simply shelve any changes, map the new workspace, and then unshelve the old user’s changes into your new workspace – all in VS. No need to read on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You should be logged into VS2019 with the new user that you wish the workspaces to belong to.&lt;/p&gt;

&lt;p&gt;I am assuming that the workspaces exist on the current machine. If not you will need to use the &lt;code&gt;tf&lt;/code&gt; command for setting the computer name to your current machine first.&lt;/p&gt;

&lt;p&gt;This will be using the &lt;strong&gt;VS Developer Command Prompt&lt;/strong&gt;. I will also be using Windows Terminal to launch this, as per the guide in &lt;a href="https://dev.to/wabbbit/adding-vs-developer-command-prompt-to-windows-terminal-vs-2019-44pg-temp-slug-8547805"&gt;Adding VS Developer Command Prompt To Windows Terminal (VS 2019)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You will (or at least we did) need to run these commands on each user’s machine, as although the remote update call succeeded, it didn’t actually appear to have changed the workspace. YMMV.&lt;/p&gt;

&lt;p&gt;You will also need permission to administer workspaces. In Azure DevOps this is at organization level under repos, as below. Note that this is a global permission so ensure any commands are scoped to a particular user or workspace as there is no revert.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjxlep1h4h4sbilvwpaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjxlep1h4h4sbilvwpaa.png" width="700" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Accessing your old workspace (Pending changes intact)
&lt;/h2&gt;

&lt;p&gt;If you don’t need to migrate but simply need access, you can make the workspace public with the command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tf workspace /collection:https://domain.com "WorkspaceName;UserName" /permission:Public&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will display the VS popup for editing a workspace. From what I can tell, this has to be done on the machine that contains the mappings. Press OK. If OK is greyed out, you do not have the appropriate permissions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1k5zcfpm6ks2wdt9w23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1k5zcfpm6ks2wdt9w23.png" width="577" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may need to restart VS, but should now see an accessible remote workspace which you can work against.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migrating workspaces
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Locating workspaces
&lt;/h4&gt;

&lt;p&gt;Launching the VS Developer Command Prompt and executing the command &lt;strong&gt;tf workspaces&lt;/strong&gt; will display the workspaces that are accessible on the &lt;em&gt;system&lt;/em&gt;, but not necessarily those of the logged-in user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e408s87nn6ef5rmi768.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e408s87nn6ef5rmi768.png" width="514" height="104"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use this to get the name of the collection and the workspace. You can also run the command below if you want to view workspaces for an owner other than yourself, or if the workspace is remote (replacing the owner and collection as appropriate):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tf workspaces /owner:olduser@domain.com /collection:https://your-collection.com&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Migrating (Workspace has NO pending changes)
&lt;/h4&gt;

&lt;p&gt;Run the command below, replacing the workspace name, owner, and collection:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tf workspace "WORKSPACENAME;OWNER@DOMAIN.COM" /newowner:newowner@domain.com /collection:https://domain.com /noprompt&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This should attempt to migrate. If you have no pending changes this should resolve successfully.&lt;/p&gt;

&lt;h4&gt;
  
  
  Migrating (Workspace has Pending changes)
&lt;/h4&gt;

&lt;p&gt;If you attempt the above but have pending changes you will get:&lt;/p&gt;

&lt;p&gt;TF14006: Cannot change the owner of workspace WORKSPACE;USER to NEWUSER because the workspace has pending changes. To change the owner of a workspace with pending changes, shelve the changes, change the workspace owner, and then unshelve the changes&lt;/p&gt;

&lt;p&gt;If you have access to the old workspace, shelve the changes and undo any checkouts and retry. If you do not have access, then continue reading.&lt;/p&gt;

&lt;p&gt;I couldn’t find a CLI route that allowed me to do this fully remotely, and MSDN didn’t show any commands for it either, so this is a bit convoluted.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow the guide above for “Accessing your old workspace” to get the remote workspace and ensure it’s mapped locally.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cd&lt;/code&gt; into any directory which is under that workspace.&lt;/li&gt;
&lt;li&gt;Running &lt;code&gt;tf status&lt;/code&gt; should display what changes are made to that workspace.&lt;/li&gt;
&lt;li&gt;Now shelve this with: &lt;code&gt;tf shelve "Migrate" /move /recursive /noprompt&lt;/code&gt; (You can also do this in VS)&lt;/li&gt;
&lt;li&gt;Running &lt;code&gt;tf status&lt;/code&gt; now should yield no changes and running &lt;code&gt;tf shelvesets&lt;/code&gt; should show a single “migrate” shelveset.&lt;/li&gt;
&lt;li&gt;Migrate the workspace with &lt;code&gt;tf workspace "WORKSPACENAME;OWNER@DOMAIN.COM" /newowner:newowner@domain.com /collection:https://domain.com /noprompt&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;In VS you should now see your workspace. Unshelve in VS, or run the command &lt;code&gt;tf unshelve migrate /noprompt&lt;/code&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Alternatives
&lt;/h2&gt;

&lt;p&gt;There is a GUI-based tool called &lt;a href="http://www.attrice.info/cm/tfs/" rel="noopener noreferrer"&gt;TFS Sidekicks&lt;/a&gt; – although I have not tried it, it comes highly recommended on various posts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common issues
&lt;/h2&gt;

&lt;h4&gt;
  
  
  TF14045: The identity [user] is not a recognized identity.
&lt;/h4&gt;

&lt;p&gt;In this scenario, make sure you are defining the login rather than the owner name – e.g. &lt;a href="mailto:me@mydomain.com"&gt;me@mydomain.com&lt;/a&gt; or DOMAIN\USER vs John Smith.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://yer.ac/blog/2021/03/10/moving-local-workspaces-between-users-vs2019-tfs-self-hosted-azure-devops/" rel="noopener noreferrer"&gt;Moving local workspaces between users VS2019/TFS [Self Hosted/ Azure Devops]&lt;/a&gt; appeared first on &lt;a href="https://yer.ac/blog" rel="noopener noreferrer"&gt;yer.ac | Adventures of a developer, and other things.&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>visualstudio</category>
      <category>dotnet</category>
      <category>devops</category>
      <category>windowsterminal</category>
    </item>
    <item>
      <title>Adding VS Developer Command Prompt To Windows Terminal (VS 2019)</title>
      <dc:creator>Rich</dc:creator>
      <pubDate>Tue, 09 Mar 2021 14:02:58 +0000</pubDate>
      <link>https://dev.to/yerac/adding-vs-developer-command-prompt-to-windows-terminal-vs-2019-1bi8</link>
      <guid>https://dev.to/yerac/adding-vs-developer-command-prompt-to-windows-terminal-vs-2019-1bi8</guid>
      <description>&lt;p&gt;In an effort to be using Windows Terminal for everything Powershell/Command related these days it occurred to me that I hadn’t moved my VS2019 Command Prompt to Windows Terminal. This meant having to open VS all the time (as well as make sure that the command was mapped in external tools!) (If you need help on that I cover that partially in the post on &lt;a href="https://yer.ac/blog/2019/05/14/unshelving-tfs-changes-into-another-branch-vs-2017/" rel="noopener noreferrer"&gt;Unshelving TFS changes into another branch (VS 2017)&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;This guide assumes you already have Windows Terminal on your system. If you don’t, you can get it from the &lt;a href="https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk701" rel="noopener noreferrer"&gt;Windows Store&lt;/a&gt; or on GitHub.&lt;/p&gt;

&lt;h2&gt;
  
  
  Editing Your Terminal Settings
&lt;/h2&gt;

&lt;p&gt;Open up Windows Terminal, and go to settings (Ctrl + ,)&lt;/p&gt;

&lt;p&gt;This is the JSON that controls the Terminal. Find the &lt;code&gt;list&lt;/code&gt; array.&lt;/p&gt;

&lt;p&gt;Add a new JSON object to this array, as per below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{       
        "commandline": "cmd.exe /k \"C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Enterprise\\Common7\\Tools\\VsDevCmd.bat\"",
        "cursorColor": "#EEEEEE",
        "cursorShape": "bar",
        "fontFace": "Consolas",
        "fontSize": 10,
        "guid": "{5ee0706e-b015-46b2-98a3-2122a8e627d3}",
        "historySize": 9001,
        "icon": "C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Enterprise\\Common7\\IDE\\Assets\\VisualStudio.70x70.contrast-standard_scale-80.png",
        "name": "Developer Command Prompt for VS2019",
        "padding": "0, 0, 0, 0",
        "snapOnInput": true,
        "startingDirectory": "%USERPROFILE%"
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will tell Terminal to start cmd.exe and run the VS developer command line tools.&lt;/p&gt;

&lt;p&gt;Now go back to Windows Terminal and you should see a new entry in the dropdown list which when selected should launch the VS prompt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqjtkv32tonlg7vgzzyn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqjtkv32tonlg7vgzzyn.png" width="633" height="295"&gt;&lt;/a&gt;Windows Terminal with new option selected&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4o000axtsy78l5o36p9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4o000axtsy78l5o36p9.png" width="580" height="221"&gt;&lt;/a&gt;VS command prompt within Windows Terminal&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding An Icon
&lt;/h2&gt;

&lt;p&gt;I added an icon that VS already has on the system so that it looks nicer in the dropdown. You can point it to &lt;em&gt;any&lt;/em&gt; image on your system, but for reference the images that VS uses are stored at:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\IDE\Assets&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://yer.ac/blog/2021/03/09/adding-vs-developer-command-prompt-to-windows-terminal-vs-2019/" rel="noopener noreferrer"&gt;Adding VS Developer Command Prompt To Windows Terminal (VS 2019)&lt;/a&gt; appeared first on &lt;a href="https://yer.ac/blog" rel="noopener noreferrer"&gt;yer.ac | Adventures of a developer, and other things.&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>windowsterminal</category>
      <category>visualstudio</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Supporting multiple configurations in Cypress</title>
      <dc:creator>Rich</dc:creator>
      <pubDate>Fri, 07 Feb 2020 16:13:12 +0000</pubDate>
      <link>https://dev.to/yerac/supporting-multiple-configurations-in-cypress-2hg3</link>
      <guid>https://dev.to/yerac/supporting-multiple-configurations-in-cypress-2hg3</guid>
      <description>&lt;p&gt;By default, Cypress will support a single configuration based on the optional file &lt;code&gt;cypress.json&lt;/code&gt; as described in their documentation &lt;a href="https://docs.cypress.io/guides/references/configuration.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Whilst this works fine for most, it would be great if we could have access to a &lt;code&gt;cypress.dev.json&lt;/code&gt; for local development, or even better, a whole host of configuration files for use against a multi-tenant environment – for example &lt;code&gt;cypress.clientA.json&lt;/code&gt;, &lt;code&gt;cypress.clientB.json&lt;/code&gt; etc.&lt;/p&gt;

&lt;p&gt;Whilst Cypress accepts a different config file during startup with the &lt;code&gt;--config-file&lt;/code&gt; flag, it would be better if we could just pass the environment name through instead of the full file name and/or location, right?&lt;/p&gt;

&lt;h3&gt;
  
  
  Uses for environmental variables
&lt;/h3&gt;

&lt;p&gt;I personally use these environmental files to store things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Base URL: Each client has its own SIT/UAT environments with different URLs&lt;/li&gt;
&lt;li&gt;Default username and password for test environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Creating the different config files
&lt;/h3&gt;

&lt;p&gt;We can create a folder named “config” under the &lt;code&gt;cypress&lt;/code&gt; directory. Under here we can create as many files as we need; for example, I have &lt;code&gt;config.ClientA.json&lt;/code&gt;, which contains:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "baseUrl": "http://clientA.internalserver.co.uk/",
  "env": {
    "someVariable": "Foo"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And &lt;code&gt;config.ClientB.json&lt;/code&gt; which contains:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "baseUrl": "http://clientB.internalserver.co.uk/",
  "env": {
    "someVariable": "Bar"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Editing the plugin file
&lt;/h2&gt;

&lt;p&gt;First we need to import the “path” and “fs-extra” packages by adding the following at the top of the &lt;code&gt;index.js&lt;/code&gt; file within the &lt;code&gt;/plugins&lt;/code&gt; folder (creating the file if it doesn’t already exist). These allow the config file to be located and subsequently read. Note that &lt;code&gt;fs-extra&lt;/code&gt; may need installing with &lt;code&gt;npm install fs-extra --save-dev&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const path = require("path");
const fs = require("fs-extra");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we need a method which takes in a client/environment name, locates the appropriate config file (being cypress/config/config.&lt;strong&gt;name&lt;/strong&gt;.json), and reads that file back to the calling method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function getConfigurationFileByEnvName(env) {
  const fileLocation = path.resolve("cypress/config", `config.${env}.json`);
  return fs.readJson(fileLocation);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And finally we need the &lt;code&gt;index.js&lt;/code&gt; file to export this, with a fallback environment in place if one is not defined.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = (on, config) =&amp;gt; {  
  const envFile = config.env.configFile || "local";
  return getConfigurationFileByEnvName(envFile);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The eagle eyed may realise that I am using &lt;code&gt;config.env.configFile&lt;/code&gt; here which will mean passing an environmental flag in the command line rather than making direct use of the &lt;code&gt;--config&lt;/code&gt; flag. This is personal preference, as I aim to expand on the &lt;code&gt;env&lt;/code&gt; flags later so this will look cleaner.&lt;/p&gt;
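&lt;p&gt;The fallback behaviour is easy to reason about in isolation. A minimal sketch in plain JavaScript (no Cypress required – &lt;code&gt;resolveConfigName&lt;/code&gt; is my own illustrative name):&lt;/p&gt;

```javascript
// Sketch of the fallback in the plugin above: use config.env.configFile
// when provided, otherwise default to "local".
function resolveConfigName(config) {
  return (config.env || {}).configFile || "local";
}

console.log(resolveConfigName({ env: { configFile: "clientA" } })); // "clientA"
console.log(resolveConfigName({ env: {} }));                        // "local"
```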

&lt;h3&gt;
  
  
  Consuming the configuration
&lt;/h3&gt;

&lt;p&gt;Now, when running the usual open command, we can make use of the &lt;code&gt;--env&lt;/code&gt; flag to pass it the environment name. We do so with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./node_modules/.bin/cypress open --env configFile=clientA&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It should now launch the test runner with the environment variables from your chosen file available via &lt;code&gt;Cypress.env('key')&lt;/code&gt;, and the &lt;code&gt;baseUrl&lt;/code&gt; applied automatically.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://yer.ac/blog/2020/02/07/supporting-multiple-configurations-in-cypress/" rel="noopener noreferrer"&gt;Supporting multiple configurations in Cypress&lt;/a&gt; appeared first on &lt;a href="http://yer.ac/blog" rel="noopener noreferrer"&gt;yer.ac | Adventures of a developer, and other things.&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>cypress</category>
      <category>javascript</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>⚡lightning-fast testing of web applications with Cypress</title>
      <dc:creator>Rich</dc:creator>
      <pubDate>Fri, 07 Feb 2020 12:00:00 +0000</pubDate>
      <link>https://dev.to/yerac/lightning-fast-testing-of-web-applications-with-cypress-4mi9</link>
      <guid>https://dev.to/yerac/lightning-fast-testing-of-web-applications-with-cypress-4mi9</guid>
      <description>&lt;p&gt;&lt;a href="https://www.cypress.io/" rel="noopener noreferrer"&gt;Cypress (Cypress.io)&lt;/a&gt;is a automation framework for web app testing built and configured with Javascript. Automated front-end testing is definitely not new, but Cypress really is something different. It’s silly fast, requires almost no setup, has quick-to-learn syntax and has a really nice, feature packed test runner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Cypress?&lt;/strong&gt; I’ll let you read the summary at &lt;a href="https://www.cypress.io/how-it-works" rel="noopener noreferrer"&gt;cypress.io&lt;/a&gt;, whilst also stealing this image from their blurb:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wz9luul9115z5y3zs27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wz9luul9115z5y3zs27.png" width="699" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Why have all those libraries to manage, drivers to install and syntaxes to remember?!&lt;/p&gt;

&lt;h2&gt;
  
  
  Don’t we already have tons of testing frameworks?
&lt;/h2&gt;

&lt;p&gt;Yes. I have previously used tooling like Selenium with C#, and know our QA team use paid tooling like Sahi Pro, for a start.&lt;/p&gt;

&lt;p&gt;Whilst these tools are OK, they often feel clunky, with tooling oddities and not-too-friendly syntax. On top of this, a lot of these tools are Selenium-based, which means they all share the same annoyances.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up
&lt;/h2&gt;

&lt;p&gt;To get going with Cypress, simply run the NPM command: &lt;code&gt;npm install cypress --save-dev&lt;/code&gt; within the folder you want to use Cypress from. Note that Yarn variants are also available and can be found on their site.&lt;/p&gt;

&lt;p&gt;If the command executes successfully, you should have a new &lt;code&gt;./node_modules&lt;/code&gt; directory and a &lt;code&gt;package-lock.json&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;To setup and open Cypress for the first time, simply execute the command below, whilst in the context of your installation folder.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./node_modules/.bin/cypress open&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will do a couple of things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a folder named &lt;code&gt;cypress&lt;/code&gt; within your working directory – this is where all your test specs and configuration will live&lt;/li&gt;
&lt;li&gt;Open up the Cypress app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fco5b6bmn5op1irpai6do.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fco5b6bmn5op1irpai6do.png" width="700" height="337"&gt;&lt;/a&gt;Cypress test explorer launched for the first time.&lt;/p&gt;

&lt;p&gt;Feel free to explore the examples which provide samples of common tests, but we won’t cover them in this post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project structure
&lt;/h2&gt;

&lt;p&gt;If you open the Cypress folder in VS code, you will find the default project files for a Cypress project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration&lt;/strong&gt; : This folder will contain all the spec files for this project. Any sub-folders you create here are echoed in the test runner. For example, you may have a folder structure like ./integration/cms/account which contains just the tests for the account functionality. How you structure this is up to you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Support:&lt;/strong&gt; The support folder contains 2 files, &lt;code&gt;index.js&lt;/code&gt; and &lt;code&gt;commands.js&lt;/code&gt;. The &lt;code&gt;index.js&lt;/code&gt; file is run before every single test fixture and is useful if you need to do something common like resetting state. The index file also imports the &lt;code&gt;commands.js&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;commands.js&lt;/code&gt; is imported by the index file, and is another place to store common code, but has the advantage that it can be called from any test fixture, at any stage. An example of this could be storing the login method here under a command named &lt;code&gt;DoLogin&lt;/code&gt; which saves having to define this in every fixture.&lt;/p&gt;
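&lt;p&gt;Conceptually, the command mechanism is just a named registry – roughly like this toy model (plain JavaScript, &lt;em&gt;not&lt;/em&gt; the real Cypress API, purely to illustrate the register-once/call-anywhere pattern):&lt;/p&gt;

```javascript
// Toy stand-in for Cypress.Commands.add: register a named command once,
// then invoke it by name from any test.
const commands = {};

function addCommand(name, fn) {
  commands[name] = fn;
}

function runCommand(name, ...args) {
  return commands[name](...args);
}

// A hypothetical DoLogin command, as described above.
addCommand("DoLogin", (user) => `logged in as ${user}`);

console.log(runCommand("DoLogin", "rich")); // "logged in as rich"
```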

&lt;p&gt;&lt;strong&gt;Plugins:&lt;/strong&gt; Contains a single file &lt;code&gt;index.js&lt;/code&gt; which is a jump-off point for importing or defining changes to how Cypress works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Diving into testing with a real example
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Creating and running tests
&lt;/h3&gt;

&lt;p&gt;First of all, I will delete the examples folder. For this post I will be “testing” the Twitter desktop site as all my real examples are for enterprise or private software.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; : This software is not designed for general browsing automation and should only be used against websites you maintain/own. In fact, a lot of sites try to block this and I actually struggled to find a public site I could use this against consistently!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a test fixture/spec&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a new file underneath the “integration” folder named “MyTest.spec.js”. The “.spec” part is a naming convention for defining specifications which I suggest you keep, but it isn’t enforced.&lt;/p&gt;

&lt;p&gt;The structure of this file should be as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;describe("Twitter example tests", function() {
  it("Page should load", function() {
    cy.visit("https://twitter.com/login");
  });
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each file contains a single description, which in turn can contain many steps. I advise a high level of granularity when writing tests: a spec for the login page with several steps is fine; one that tests your entire website with hundreds of steps, not so much.&lt;/p&gt;

&lt;p&gt;If you save this file, and still have the test runner open, it should have automatically found this new test. If you closed the runner, simply re-run the &lt;code&gt;./node_modules/.bin/cypress open&lt;/code&gt; command again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym9ikzt6lrlprlkhw99c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym9ikzt6lrlprlkhw99c.png" width="699" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking on this test will open up a new browser instance (based on the one selected in the drop down – seen in the top right of the screenshot above). The test runner will open a split-window with the executing tests (and results) on the left, and the browser view on the right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh6twu7u4vbglhn28gpx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh6twu7u4vbglhn28gpx.png" width="700" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course, this test passes as it doesn’t actually &lt;em&gt;do&lt;/em&gt; anything! Let’s change this! You don’t need to close the runner either, as any changes to this test will be picked up automatically and re-run.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic Interactions
&lt;/h3&gt;

&lt;p&gt;For this example, we will take the existing test above and have it test logging in to the website and navigating to the settings panel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Loading a web page&lt;/strong&gt; : A redirect or page load is done with &lt;code&gt;cy.visit(url)&lt;/code&gt;. For this example we used &lt;code&gt;cy.visit("https://twitter.com/login");&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Locating an element:&lt;/strong&gt; This is done similarly to how jQuery finds objects, in that you can find them by type, id, class or data attribute. The flow is always to find an item first, then choose what to do with it. For this we need to find 2 text boxes – one for the username and one for the password.&lt;/p&gt;

&lt;p&gt;As Twitter does some magic with their element classes I will be locating the boxes by their unique attributes. If I use the code below, you can see the test will pass as it finds the element on the page. Hovering over the test in the test steps will highlight the matching field.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;describe("Twitter example tests", function() {
  it("Page should load", function() {
    cy.visit("https://twitter.com/login");
    cy.get("input[name='session[username_or_email]']");
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pdytthvgwfo5578xphy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pdytthvgwfo5578xphy.png" width="698" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interacting with an element&lt;/strong&gt; : Once we have located the element we can interact with it with methods such as &lt;code&gt;.type()&lt;/code&gt;, &lt;code&gt;.click()&lt;/code&gt; and more. In this example I want to set the username and password field appropriately and then click the enter button, so the code now looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;describe("Twitter example tests", function() {
  it("Page should load", function() {
    cy.visit("https://twitter.com/login");
    cy.get("input[name='session[username_or_email]']")
      .first()
      .type("MyHandle");
    cy.get("input[name='session[password]']")
      .first()
      .type("password1234");

    cy.get("form[action='/sessions']")
      .first()
      .submit();
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we run this now we can see that the page is loaded, the form is filled out and the form is submitted. The test passes, but should fail as the actual login fails due to incorrect details.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finding text:&lt;/strong&gt; One way we could validate if the test above succeeds is to check for the existence of an object, or some text on the page which states the login was not a success. To do this we can add the line &lt;code&gt;cy.contains("The username and password you entered did not match our records. Please double-check and try again.");&lt;/code&gt; which will check the entire DOM for that specific text. We could also find a specific element using &lt;code&gt;.get()&lt;/code&gt; and chaining on the &lt;code&gt;.contains()&lt;/code&gt; method.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Waiting:&lt;/strong&gt; Waiting is part of all web applications, and although Cypress will retry a few times if it cannot locate an element, it does not have a long timeout. The &lt;code&gt;cy.get()&lt;/code&gt; takes in an additional options object in which a timeout can be specified. For example: &lt;code&gt;cy.get(".some-class-which-isnt-visible-yet", { timeout: 30000 });&lt;/code&gt; would pause the execution of the test until the element is located, or the 30,000ms timeout occurs.&lt;/p&gt;
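&lt;p&gt;The retry behaviour itself is simple to picture – something like the loop below (a conceptual sketch in plain JavaScript, not Cypress internals):&lt;/p&gt;

```javascript
// Conceptual sketch of retry-until-success: keep evaluating a predicate up
// to a maximum number of attempts, much like cy.get re-runs its query until
// the element appears or the timeout elapses.
function retryUntil(predicate, maxTries) {
  for (let attempt = 1; maxTries >= attempt; attempt = attempt + 1) {
    if (predicate()) return true;
  }
  return false;
}

let checks = 0;
const found = retryUntil(() => {
  checks = checks + 1;
  return checks > 2; // the "element" appears on the third check
}, 10);

console.log(found, checks); // true 3
```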

&lt;h2&gt;
  
  
  Code sharing and re-use
&lt;/h2&gt;

&lt;p&gt;Let’s say we have expanded our tests so we have a new test which detects if the word “Home” is displayed to the user on their dashboard once logged in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;describe("Twitter tweet tests", function() {
  it("When logged in the word Home appears", function() {
    cy.contains("Home");
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running this will fail as it doesn’t know which website to use. We could use the &lt;code&gt;cy.visit()&lt;/code&gt; method, but as each test is run in isolation from the others we wouldn’t be logged in. Whilst we could just copy the login code from the first test into this one (either in the &lt;code&gt;it&lt;/code&gt; method, or in a &lt;code&gt;beforeEach&lt;/code&gt; block), it’s a little messy to do so and introduces duplication and more maintenance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Commands &amp;amp; Shared Code
&lt;/h3&gt;

&lt;p&gt;Remember that commands.js file under the Support directory? Let’s create a new command which will do our login from a central place! We will simply cut and paste in the contents of the login section of the previous test, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cypress.Commands.add("twitterLogin", () =&amp;gt; {
  cy.visit("https://twitter.com/login");
  cy.get("input[name='session[username_or_email]']")
    .first()
    .type("MyValidUser");
  cy.get("input[name='session[password]']")
    .first()
    .type("MyPassword");

  cy.get("form[action='/sessions']")
    .first()
    .submit();
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells Cypress that there is a command available called “twitterLogin” and which steps to execute when this command is called. Now we can simply update the new spec to be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;describe("Twitter tweet tests!", function() {
  it("Can compose a tweet", function() {
    cy.twitterLogin();
    cy.contains(
      "The username and password you entered did not match our records. Please double-check and try again."
    );
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we can call &lt;code&gt;cy.twitterLogin()&lt;/code&gt; from any of our spec files!&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;Cypress may well become my favorite UI testing framework. In less than a day I was able to gain enough knowledge to put together a fairly large proof of concept for testing one of our front-end applications. The only “difficulties” were things like persisting authentication, which only took a few Google searches to solve. I may have other posts around adding additional flexibility in the future.&lt;/p&gt;

&lt;p&gt;The main benefit to me (other than the flexibility, speed, and the obvious) is that the syntax is flexible enough for a developer, but easy enough for somebody with less coding knowledge (QA, BA, etc).&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://yer.ac/blog/2020/02/07/%e2%9a%a1lightning-fast-testing-of-web-applications-with-cypress/" rel="noopener noreferrer"&gt;⚡lightning-fast testing of web applications with Cypress&lt;/a&gt; appeared first on &lt;a href="http://yer.ac/blog" rel="noopener noreferrer"&gt;yer.ac | Adventures of a developer, and other things.&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>cypress</category>
      <category>javascript</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Using Docker Containers for easy local WordPress development🐳</title>
      <dc:creator>Rich</dc:creator>
      <pubDate>Mon, 06 Jan 2020 09:00:00 +0000</pubDate>
      <link>https://dev.to/yerac/using-docker-containers-for-easy-local-wordpress-development-4j3g</link>
      <guid>https://dev.to/yerac/using-docker-containers-for-easy-local-wordpress-development-4j3g</guid>
      <description>&lt;p&gt;This will cover off setting up a Docker container for local WordPress development &amp;amp; mounting the container folder for easier development.&lt;/p&gt;

&lt;p&gt;Major edit: there is a performance issue with this which I have mentioned in more detail on the original article &lt;a href="http://yer.ac/blog/2020/01/06/using-docker-containers-for-easy-local-wordpress-development%F0%9F%90%B3/" rel="noopener noreferrer"&gt;http://yer.ac/blog/2020/01/06/using-docker-containers-for-easy-local-wordpress-development🐳/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do this?
&lt;/h2&gt;

&lt;p&gt;Typically when doing any kind of WordPress work (Which is &lt;em&gt;exceptionally&lt;/em&gt; rare for me), I would spin up a new WP instance on my host, or use something like a local LAMP/XAMP server.&lt;/p&gt;

&lt;p&gt;Whilst this works, leveraging things like Docker mean almost zero configuration and more time developing. Better still, these then become throwaway environments!&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;The only requirement for this is the installation of Docker (&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;https://www.docker.com&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the WordPress instance
&lt;/h2&gt;

&lt;p&gt;Firstly, create the folder where the configuration will live, for example &lt;code&gt;C:\Docker\Wordpress&lt;/code&gt;. In this folder we need to make a file named &lt;code&gt;docker-compose.yml&lt;/code&gt;. This will be the YAML file detailing our WordPress installation – such as login information and MySQL setup.&lt;/p&gt;

&lt;p&gt;In this file, copy and paste the content below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.1'
services:
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 1234:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will tell Docker that we need to use the image named “wordpress” from Docker Hub (&lt;a href="https://hub.docker.com/_/wordpress" rel="noopener noreferrer"&gt;https://hub.docker.com/_/wordpress&lt;/a&gt;), and to forward port 80 of the container (which will be the exposed website) to port 1234 of the parent host. This means going to &lt;a href="http://localhost:1234" rel="noopener noreferrer"&gt;http://localhost:1234&lt;/a&gt; would go to port 80 of the Docker container.&lt;/p&gt;

&lt;p&gt;Finally, it tells Docker to use MySQL v5.7 as our database. You shouldn’t need to change this information, but if you do, make sure that the database information in the &lt;code&gt;wordpress&lt;/code&gt; section matches the information in the &lt;code&gt;db&lt;/code&gt; section.&lt;/p&gt;

&lt;p&gt;Once this file is saved, we can run &lt;code&gt;docker-compose up -d&lt;/code&gt; whilst in the same directory. This will take the YAML file, download any images that are not already on the local system and then set WordPress and MySQL up based on the YAML.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you get the error &lt;code&gt;no matching manifest for windows/amd64 in the manifest list entries&lt;/code&gt; when running this command, and you are on Windows 10, you will need to enable “Experimental mode” in your Docker installation. To do this, right-click on Docker in the system tray, go to Settings, and check the box named “Experimental mode”.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you get an error like &lt;code&gt;ERROR: for wordpress_db_1 Cannot create container for service db:&lt;/code&gt;, this is usually caused by a conflict in ports. Try changing the external port (in this example “1234”) to something else.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We can confirm if the docker container is running by running &lt;code&gt;docker ps&lt;/code&gt; to list all running containers, or by going to &lt;a href="http://localhost:1234" rel="noopener noreferrer"&gt;http://localhost:1234&lt;/a&gt; (The port will be whatever you set it to in the YAML)&lt;/p&gt;

&lt;p&gt;From here you can follow the standard WordPress installation steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stopping and starting instances
&lt;/h2&gt;

&lt;p&gt;You can stop and start instances by doing the following:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose stop&lt;/code&gt; and &lt;code&gt;docker-compose up -d&lt;/code&gt;. Stop may take a little while.&lt;/p&gt;

&lt;p&gt;Note that these commands (in this syntax) have to be run at the same level as the YAML file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Working with the instance
&lt;/h2&gt;

&lt;p&gt;If you just wanted to play with WordPress this would be enough, but what if you wanted to copy files to the instance, or edit files?&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 1: SSH/ FTP
&lt;/h3&gt;

&lt;p&gt;Although not personally a fan of this method, we could SSH to the container with a tool like PuTTY, see this guide: &lt;a href="https://phase2.github.io/devtools/common-tasks/ssh-into-a-container/" rel="noopener noreferrer"&gt;https://phase2.github.io/devtools/common-tasks/ssh-into-a-container/&lt;/a&gt; We could also configure the container to have an FTP server. This won’t be covered as I believe the next option to be better.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 2: Mount a windows folder
&lt;/h3&gt;

&lt;p&gt;This option will mount a specific folder within the Docker container to a folder on your own system.&lt;/p&gt;

&lt;p&gt;Before continuing, we need to make sure that the container is currently stopped (&lt;code&gt;docker-compose stop&lt;/code&gt;)&lt;/p&gt;

&lt;p&gt;We will make a change to the YAML file so that we add a &lt;code&gt;volumes&lt;/code&gt; option, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.1'

services:

  wordpress:
    image: wordpress
    restart: always
    ports:
      - 1234:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes: 
    - "C:/Docker/Wordpress/Mounted:/var/www/html"

  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can see, the &lt;code&gt;volumes&lt;/code&gt; setting contains a string which is made up of 2 parts, separated by a colon &lt;code&gt;:&lt;/code&gt;. These are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;C:/Docker/Wordpress/Mounted&lt;/code&gt; – This is the local path you want to mount the folder against.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/var/www/html&lt;/code&gt; – This is the path within the container you want to mount.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that personally I like having the mount in the same directory as the configuration YAML for clarity, but it can live anywhere on your system.&lt;/p&gt;

&lt;p&gt;Saving this file and re-running the docker-compose command will now map the specified volume.&lt;/p&gt;

&lt;h3&gt;
  
  
  Footnote
&lt;/h3&gt;

&lt;p&gt;I am a bit of a newbie when it comes to Docker, and I don’t use WordPress an awful lot, so do take this with a pinch of salt, and do let me know if this can be improved!&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://yer.ac/blog/2020/01/06/using-docker-containers-for-easy-local-wordpress-development%f0%9f%90%b3/" rel="noopener noreferrer"&gt;Using Docker Containers for easy local WordPress development🐳&lt;/a&gt; appeared first on &lt;a href="http://yer.ac/blog" rel="noopener noreferrer"&gt;yer.ac | Adventures of a developer, and other things.&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>wordpress</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Identifying Nuget package references which are using relative paths across whole solution</title>
      <dc:creator>Rich</dc:creator>
      <pubDate>Wed, 20 Nov 2019 12:25:56 +0000</pubDate>
      <link>https://dev.to/yerac/identifying-nuget-package-references-which-are-using-relative-paths-across-whole-solution-3ao8</link>
      <guid>https://dev.to/yerac/identifying-nuget-package-references-which-are-using-relative-paths-across-whole-solution-3ao8</guid>
      <description>&lt;p&gt;Keeping up with a recent binge upgrading projects, including &lt;a href="https://dev.to/wabbbit/pragmatically-upgrading-net-framework-version-for-all-projects-with-powershell-m8l"&gt;upgrading all my projects in a solution&lt;/a&gt; to 4.8, I have been upgrading nuget packages. Whilst this is a relatively simple task, what irks me is that when you add or upgrade a nuget package in Visual Studio, it will often change the package path to be relative to the project file.&lt;/p&gt;

&lt;h4&gt;
  
  
  What does the issue look like?
&lt;/h4&gt;

&lt;p&gt;When we do something like &lt;code&gt;Install-Package EntityFramework -Version 6.3.0&lt;/code&gt; we end up with something like this in the CSPROJ&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8jmevemfjp6t0if2fzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8jmevemfjp6t0if2fzy.png" width="695" height="44"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, the &lt;code&gt;&amp;lt;HintPath&amp;gt;&lt;/code&gt; is using a relative path from the project file location to the solution file (as packages are kept under /Packages at the same level as the .sln).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is this a problem?&lt;/strong&gt; Although it could “just work” for some scenarios, if there are shared libraries which are used across different solutions we can end up with a build failure due to missing packages! Ideally, the hint path should start with &lt;code&gt;$(SolutionDir)packages\&lt;/code&gt; so the correct package will be located regardless of the solution being built.&lt;/p&gt;

&lt;p&gt;There are workarounds to stop this occurring (I had varied results), but this doesn’t really solve the issue of retrospectively locating all the places this has happened!&lt;/p&gt;

&lt;h4&gt;
  
  
  Identifying all references which use relative paths
&lt;/h4&gt;

&lt;p&gt;My solution has close to a hundred projects nested within it, so a manual check could be problematic and prone to human error.&lt;/p&gt;

&lt;p&gt;Below is a snippet that takes a solution file and, for each project, checks for any hint path which does not contain &lt;code&gt;$(SolutionDir)&lt;/code&gt;, providing the directory and solution name at the top are changed to match your own.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
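&lt;p&gt;The heart of that check boils down to the following (a JavaScript stand-in purely for illustration – the snippet itself is PowerShell, and &lt;code&gt;isBadHintPath&lt;/code&gt; is my own name for it):&lt;/p&gt;

```javascript
// A hint path is suspect when it does not anchor on the $(SolutionDir)
// MSBuild variable, i.e. it is relative to the project file instead.
function isBadHintPath(hintPath) {
  return hintPath.startsWith("$(SolutionDir)") === false;
}

console.log(isBadHintPath("..\\..\\Packages\\EntityFramework.6.3.0\\lib\\net45\\EntityFramework.dll")); // true
console.log(isBadHintPath("$(SolutionDir)Packages\\EntityFramework.6.3.0\\lib\\net45\\EntityFramework.dll")); // false
```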


&lt;p&gt;Once the script runs it will output any projects and hint paths that may be an issue. For me it was simply going over these one by one and changing everything before \Packages (i.e. relative prefixes like &lt;code&gt;..\..\somefolder\Packages&lt;/code&gt;) to use the solution directory variable – i.e. &lt;code&gt;$(SolutionDir)\Packages&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Creating a helper in Visual Studio
&lt;/h4&gt;

&lt;p&gt;Although the snippet above will quickly identify any potential issues, it’s not very reusable as you have to change the paths for each solution.&lt;/p&gt;

&lt;p&gt;One thing we can do is add a custom tool into Visual Studio to execute the PowerShell on demand for the given solution.&lt;/p&gt;

&lt;p&gt;To do this, first save the script in the snippet below to your own system. Note this is slightly different to the script above as it takes in a parameter.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
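&lt;p&gt;As the gist doesn’t render in this feed: the essential difference from the hard-coded script is just a parameter block at the top, roughly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: take the solution file as a parameter (passed in by Visual Studio)
param(
    [Parameter(Mandatory = $true)]
    [string]$SolutionFile
)

$solutionDir = Split-Path -Parent $SolutionFile
# ...then perform the same HintPath scan using $SolutionFile and $solutionDir
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;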


&lt;p&gt;Once this is done, go to Visual Studio -&amp;gt; Tools -&amp;gt; External Tools -&amp;gt; Add&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezinkrmi1lyi20e7hj4n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezinkrmi1lyi20e7hj4n.png" width="455" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In here we can make a new external tool called something like “Find Bad References”.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command:&lt;/strong&gt; Path to the system PowerShell executable, e.g. &lt;code&gt;C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Arguments:&lt;/strong&gt; &lt;code&gt;-file "C:\Path\To\File.ps1" $(SolutionFileName)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Initial Directory:&lt;/strong&gt; &lt;code&gt;$(SolutionDir)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Output Window:&lt;/strong&gt; Checked&lt;/p&gt;

&lt;p&gt;Now, once saved, we should have a new option under Tools with the same name as the provided title.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupydf58szgo2p246k00u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupydf58szgo2p246k00u.png" width="376" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once clicked, it will execute the script and provide results (if any) in the output tab.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;This isn’t great, and there are plugins that do similar things already – hell, some may even automatically fix these issues! This was designed to scan for and list the potential issues for manual verification. Feel free to improve it and let me know!&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://yer.ac/blog/2019/11/20/identifying-nuget-package-references-which-are-using-relative-paths-across-whole-solution/" rel="noopener noreferrer"&gt;Identifying Nuget package references which are using relative paths across whole solution&lt;/a&gt; appeared first on &lt;a href="http://yer.ac/blog" rel="noopener noreferrer"&gt;yer.ac | Adventures of a developer, and other things.&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>nuget</category>
      <category>dotnet</category>
      <category>refactoring</category>
      <category>csharp</category>
    </item>
    <item>
      <title>Pragmatically upgrading .net framework version for all projects with PowerShell</title>
      <dc:creator>Rich</dc:creator>
      <pubDate>Wed, 06 Nov 2019 10:28:44 +0000</pubDate>
      <link>https://dev.to/yerac/pragmatically-upgrading-net-framework-version-for-all-projects-with-powershell-m8l</link>
      <guid>https://dev.to/yerac/pragmatically-upgrading-net-framework-version-for-all-projects-with-powershell-m8l</guid>
      <description>&lt;p&gt;&lt;em&gt;Do feel free to provide any comments/feedback to&lt;/em&gt; &lt;a href="https://twitter.com/therichcarey" rel="noopener noreferrer"&gt;&lt;em&gt;@TheRichCarey&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on Twitter&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We had a situation where we needed to upgrade all the CSPROJ files in a solution to .NET Framework 4.8. The issue is that some of our solutions contain almost a hundred projects, so manual intervention would be prone to error (plus we have multiple solutions to apply this against!).&lt;/p&gt;

&lt;p&gt;Whilst there are a number of extensions that do this on the VS Marketplace, they seemed a little overkill for something that can surely be achieved in PowerShell? At the end of the day it’s a simple find-and-replace, right?&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;This will load the solution file, iterate over each CSPROJ referenced and then replace the current framework version with the one specified in &lt;code&gt;$versionToUse&lt;/code&gt;. It will then overwrite the file.&lt;/p&gt;
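&lt;p&gt;The gist is missing from this feed; a rough sketch of a script doing what’s described (the paths and version are placeholders, and the regex-based .sln parsing is my assumption) could be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Placeholder values – adjust for your own solution
$solutionDir = "C:\Dev\MySolution"
$solutionFile = Join-Path $solutionDir "MySolution.sln"
$versionToUse = "v4.8"

# Pull each referenced .csproj path out of the .sln
$projects = Select-String -Path $solutionFile -Pattern '"([^"]+\.csproj)"' -AllMatches |
    ForEach-Object { $_.Matches } |
    ForEach-Object { Join-Path $solutionDir $_.Groups[1].Value }

foreach ($project in $projects) {
    # Replace the TargetFrameworkVersion element and overwrite the file
    (Get-Content $project) -replace '&amp;lt;TargetFrameworkVersion&amp;gt;.*&amp;lt;/TargetFrameworkVersion&amp;gt;',
        "&amp;lt;TargetFrameworkVersion&amp;gt;$versionToUse&amp;lt;/TargetFrameworkVersion&amp;gt;" |
        Set-Content $project
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;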

&lt;h3&gt;
  
  
  Potential improvements
&lt;/h3&gt;

&lt;p&gt;This met my needs fine, but could be improved for sure! Things that it could do better are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auto scan for &lt;code&gt;.sln&lt;/code&gt; files.&lt;/li&gt;
&lt;li&gt;Provide some form of report at the end&lt;/li&gt;
&lt;li&gt;Auto-checkout for TFS or Git (Using git CLI or TFS CLI)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="http://yer.ac/blog/2019/11/06/pragmatically-upgrading-net-framework-version-for-all-projects-with-powershell/" rel="noopener noreferrer"&gt;Pragmatically upgrading .net framework version for all projects with PowerShell&lt;/a&gt; appeared first on &lt;a href="http://yer.ac/blog" rel="noopener noreferrer"&gt;yer.ac | Adventures of a developer, and other things.&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>powershell</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Ensuring “dotnet test” TRX &amp; Coverage files end up in SonarQube</title>
      <dc:creator>Rich</dc:creator>
      <pubDate>Wed, 16 Oct 2019 14:13:10 +0000</pubDate>
      <link>https://dev.to/yerac/ensuring-dotnet-test-trx-coverage-files-end-up-in-sonarqube-74g</link>
      <guid>https://dev.to/yerac/ensuring-dotnet-test-trx-coverage-files-end-up-in-sonarqube-74g</guid>
      <description>&lt;p&gt;&lt;em&gt;Do feel free to provide any comments/feedback to&lt;/em&gt; &lt;a href="https://twitter.com/therichcarey" rel="noopener noreferrer"&gt;&lt;em&gt;@TheRichCarey&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on Twitter&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I have written &lt;a href="https://dev.to/wabbbit/my-attempt-at-using-sonarqube-for-static-code-analysis-23ck"&gt;before&lt;/a&gt; about using SonarQube to do static analysis, but one issue I never came back to was ensuring that code coverage files generated via a build pipeline are picked up by the Sonar Scanner to assess code coverage.&lt;/p&gt;

&lt;p&gt;Note that in the following I am actually using the ‘dotnet test’ build step, rather than the ‘VS Test’ one. Do let me know if you find a nice workaround for the VS Test variant, as I couldn’t get it to drop coverage files!&lt;/p&gt;

&lt;h2&gt;
  
  
  The issue
&lt;/h2&gt;

&lt;p&gt;The issue is that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When using VSTest, TRX files are deleted automatically if using version 2+ of the VS Test task as per this &lt;a href="https://stackoverflow.com/questions/50082797/vsts-visual-studio-test-task-deletes-trx-file-after-publish" rel="noopener noreferrer"&gt;stack overflow post&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;When I switched back to ‘dotnet test’ the same thing appeared to be happening.&lt;/li&gt;
&lt;li&gt;.coverage files are not output by default&lt;/li&gt;
&lt;li&gt;TRX and Coverage files are placed in a temporary folder of the build agent rather than the executing agent’s working directory.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Even though SonarQube could detect the tests, it would still register as 0.0% code coverage!&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting ‘dotnet test’ to collect coverage
&lt;/h2&gt;

&lt;p&gt;The first step was to get the ‘dotnet test’ build step to collect the code coverage, and not just dump TRX files.&lt;/p&gt;

&lt;p&gt;To do this, go to the “Arguments” field of the dotnet test build step and append &lt;code&gt;--collect "Code Coverage"&lt;/code&gt;, as well as ensuring that “Publish test results and code coverage” is enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfzrt13xig9vzwqjla5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfzrt13xig9vzwqjla5r.png" width="370" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Ensure generated files are copied to the working directory
&lt;/h2&gt;

&lt;p&gt;As the coverage files will end up in the /tmp folder of the build agent, SonarQube will not be able to scan them.&lt;/p&gt;

&lt;p&gt;We will need to add a new build step of “&lt;em&gt;copy files&lt;/em&gt;” with the correct filter set to get the &lt;code&gt;.trx&lt;/code&gt; and &lt;code&gt;.coverage&lt;/code&gt; files from the default temporary directory on the build agent to the test results folder of the workspace. To do this we need to add the “Copy Files” task into the build and place it after the test task. The source folder for the copy will be &lt;code&gt;$(Agent.HomeDirectory)\_work\_temp&lt;/code&gt; and the target folder will be &lt;code&gt;$(Common.TestResultsDirectory)&lt;/code&gt; – the contents can remain as ** but feel free to filter if required. Example below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbt6byzlxl3flze3qnnp8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbt6byzlxl3flze3qnnp8.png" width="372" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we run a build now, we should now see files in the TestResults folder of the build agent’s working directory.&lt;/p&gt;

&lt;p&gt;I didn’t have to make any changes to the configuration within SonarQube as it should just pick up the coverage files. If I follow the above I get the following (let’s just ignore the fact that the number is low ;)&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkltaaj67068j01vi36sn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkltaaj67068j01vi36sn.png" width="589" height="153"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  CSPROJ Changes to test projects
&lt;/h2&gt;

&lt;p&gt;One thing I did notice in the console when attempting to fix this code coverage issue was that I got a lot of warnings like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SonarQube.Integration.targets: warning : **The project does not have a valid ProjectGuid**. Analysis results for this project will not be uploaded to SonarQube.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As all my projects were .NET Core or .NET Standard, the CSPROJ files do not contain a &amp;lt;ProjectGuid&amp;gt; tag by default. As also suggested in this &lt;a href="https://stackoverflow.com/questions/56814444/azure-devops-issue-with-sonar-cloud-code-coverage" rel="noopener noreferrer"&gt;stack overflow answer&lt;/a&gt;, I added a GUID to my test project file. I am not 100% sure if this is required, but it stopped the warnings appearing in my console and does no harm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60ebjzd8dlnlsb65hrvr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60ebjzd8dlnlsb65hrvr.png" width="700" height="115"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus
&lt;/h2&gt;

&lt;p&gt;If you have multiple builds to update and you are using Azure Devops, you can take advantage of “Task Groups”. This allows you to create a single build step which in turn executes a series of other build steps. Using the steps above, you can create a new Task Group to create a single build step to run the test script and make sure the files are copied to the correct location for analysis. For example I have the single build step below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkr51mksaynplp7yeafyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkr51mksaynplp7yeafyz.png" width="700" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Which means I can then just call this single build step in all my builds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhave1d6a2c3xoz899j2h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhave1d6a2c3xoz899j2h.png" width="644" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://yer.ac/blog/2019/10/16/ensuring-dotnet-test-trx-coverage-files-end-up-in-sonarqube/" rel="noopener noreferrer"&gt;Ensuring “dotnet test” TRX &amp;amp; Coverage files end up in SonarQube&lt;/a&gt; appeared first on &lt;a href="http://yer.ac/blog" rel="noopener noreferrer"&gt;yer.ac | Adventures of a developer, and other things.&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>testing</category>
      <category>sonarqube</category>
    </item>
    <item>
      <title>Include both Nuget Package References *and* project reference DLL using “dotnet pack” 📦</title>
      <dc:creator>Rich</dc:creator>
      <pubDate>Thu, 05 Sep 2019 09:04:05 +0000</pubDate>
      <link>https://dev.to/yerac/include-both-nuget-package-references-and-project-reference-dll-using-dotnet-pack-2d8p</link>
      <guid>https://dev.to/yerac/include-both-nuget-package-references-and-project-reference-dll-using-dotnet-pack-2d8p</guid>
      <description>&lt;p&gt;Edit: Looks like at somepoint dev.to did a formatting change which broke some bits here. Check out the Original: &lt;a href="http://yer.ac/blog/2019/09/05/dotnet-pack-project-reference-and-nuget-dependency/" rel="noopener noreferrer"&gt;http://yer.ac/blog/2019/09/05/dotnet-pack-project-reference-and-nuget-dependency/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recently I have been trying to generate more Nuget packages for our dotnet core projects, utilizing the &lt;code&gt;dotnet pack&lt;/code&gt; command. One issue I have been encountering is that the command would reference either the required nuget packages or the project reference DLLs, never both.&lt;/p&gt;

&lt;h2&gt;
  
  
  The current problem.
&lt;/h2&gt;

&lt;p&gt;If you have Project A which has a project reference to Project B, as well as including a nuget package called Package A, you would expect the generated package to contain a link to both the required nuget package and the DLL(s) for Project B, yes? This, however, is not how the dotnet pack command works.&lt;/p&gt;

&lt;p&gt;This issue is widely reported on their repo (I.e. &lt;a href="https://github.com/NuGet/Home/issues/3891" rel="noopener noreferrer"&gt;https://github.com/NuGet/Home/issues/3891&lt;/a&gt; ) and unfortunately it seems the developers and the community are in a bit of a disagreement to what is “correct”. The official stance (as I understood it) is that the project references won’t be included as they should be their own packages. This however is not always practical or desired.&lt;/p&gt;

&lt;h2&gt;
  
  
  The workaround.
&lt;/h2&gt;

&lt;p&gt;Plenty of workarounds have been suggested around Stack Overflow and GitHub, including having a separate nuspec file, using PowerShell to inject things into the generated nupkg, and so on…&lt;/p&gt;

&lt;p&gt;The solution below worked for me, but of course, YMMV.&lt;/p&gt;

&lt;p&gt;In the end I ditched having my own &lt;code&gt;.nuspec&lt;/code&gt; file within my project (as per some SO posts) and instead used the CSPROJ (as recommended). Below you can see the required fields for the packaging (version, naming, etc), a reference to a nuget package, and a reference to another project within the solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq43v3f66mqs21uaz7q54.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq43v3f66mqs21uaz7q54.png" alt="CSProj Snippet of dotnet core project" width="567" height="381"&gt;&lt;/a&gt;Snippet of CSPROJ with basic package info filled in.&lt;/p&gt;

&lt;p&gt;If you run dotnet pack now, it will generate an appropriately named package which will contain a nuget dependency on &lt;code&gt;SomeNugetPackage&lt;/code&gt;. This can be confirmed by opening the nupkg with an archive tool (7-Zip, WinRAR, WinZip…) and seeing that the only DLL in the &lt;code&gt;lib&lt;/code&gt; folder will be the DLL of the project being packed.&lt;/p&gt;

&lt;p&gt;The fix is as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alter the project reference to set the &lt;code&gt;ReferenceOutputAssembly&lt;/code&gt; flag to true, and &lt;code&gt;IncludeAssets&lt;/code&gt; to the DLL name
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;ProjectReference Include="..\ProjectB.csproj"&amp;gt;
  &amp;lt;ReferenceOutputAssembly&amp;gt;true&amp;lt;/ReferenceOutputAssembly&amp;gt;
  &amp;lt;IncludeAssets&amp;gt;ProjectB.dll&amp;lt;/IncludeAssets&amp;gt;
&amp;lt;/ProjectReference&amp;gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add the following line into the &lt;code&gt;&amp;lt;PropertyGroup&amp;gt;&lt;/code&gt; element
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;TargetsForTfmSpecificBuildOutput&amp;gt;$(TargetsForTfmSpecificBuildOutput);CopyProjectReferencesToPackage&amp;lt;/TargetsForTfmSpecificBuildOutput&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add new target between &lt;code&gt;&amp;lt;project&amp;gt;&lt;/code&gt; tags
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Target DependsOnTargets="ResolveReferences" Name="CopyProjectReferencesToPackage"&amp;gt;
    &amp;lt;ItemGroup&amp;gt;
      &amp;lt;BuildOutputInPackage Include="@(ReferenceCopyLocalPaths-&amp;gt;WithMetadataValue('ReferenceSourceTarget', 'ProjectReference'))"/&amp;gt;
    &amp;lt;/ItemGroup&amp;gt;
  &amp;lt;/Target&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So now you end up with something that looks like this&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fyer.ac%2Fblog%2Fwp-content%2Fuploads%2F2019%2F09%2Fimage-2.png%3Ffit%3D700%252C298" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi1.wp.com%2Fyer.ac%2Fblog%2Fwp-content%2Fuploads%2F2019%2F09%2Fimage-2.png%3Ffit%3D700%252C298" alt="&amp;lt;Project Sdk=" width="699" height="298"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;br&gt;
    netstandard2.0&lt;br&gt;
    1.0.9&lt;br&gt;
    MyProduct&lt;br&gt;
    MyProduct&lt;br&gt;
    MyProduct&lt;br&gt;
    Your name&lt;br&gt;
    Company Name&lt;br&gt;
    My library&lt;br&gt;
    Copyright © 2019 MyCompany&lt;br&gt;
    $(TargetsForTfmSpecificBuildOutput);CopyProjectReferencesToPackage&lt;br&gt;
  &lt;br&gt;
  &lt;br&gt;
      &lt;br&gt;
  &lt;br&gt;
  &lt;br&gt;
    &lt;br&gt;
      true&lt;br&gt;
      ProjectB.dll&lt;br&gt;
      &lt;br&gt;
  &lt;br&gt;
  &lt;br&gt;
  &lt;br&gt;
    &lt;br&gt;
      &lt;br&gt;
    &lt;br&gt;
  &lt;br&gt;
&lt;br&gt;End result CSPROJ. (Click to enlarge)
"/&amp;gt;&lt;/p&gt;

&lt;p&gt;Now if you run dotnet pack you should see any project reference DLL under the &lt;code&gt;lib&lt;/code&gt; folder of the package, and if you inspect the nuspec file inside the package (or upload it to your package repo) you should see the nuget dependencies.&lt;/p&gt;

&lt;p&gt;Hopefully this helps someone, as there is a lot of conflicting info around. Please let me know if this would cause any issues!&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://yer.ac/blog/2019/09/05/dotnet-pack-project-reference-and-nuget-dependency/" rel="noopener noreferrer"&gt;Include both Nuget Package References and project reference DLL using “dotnet pack” 📦&lt;/a&gt; appeared first on &lt;a href="http://yer.ac/blog" rel="noopener noreferrer"&gt;yer.ac | Adventures of a developer, and other things.&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>nuget</category>
      <category>dotnet</category>
      <category>devops</category>
    </item>
    <item>
      <title>💰Penny Pinching in Azure💰: Setting up a self-hosted build agent for Azure DevOps to save cash</title>
      <dc:creator>Rich</dc:creator>
      <pubDate>Wed, 31 Jul 2019 19:41:51 +0000</pubDate>
      <link>https://dev.to/yerac/penny-pinching-in-azure-setting-up-a-self-hosted-build-agent-for-azure-devops-to-save-cash-2clp</link>
      <guid>https://dev.to/yerac/penny-pinching-in-azure-setting-up-a-self-hosted-build-agent-for-azure-devops-to-save-cash-2clp</guid>
      <description>&lt;p&gt;Azure DevOps has brilliant build pipeline options and as easy as it is to get set up with their hosted build agents, it can get quite costly rather quick. In this post I cover off setting up a self-hosted build agent for use with Azure.&lt;/p&gt;

&lt;p&gt;This post won’t cover setting up the build box itself, but that can be covered in a later guide if required. I actually have my build box scripted out using Choco commands to allow building of .NET projects, which makes this step easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pros/Cons
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Pro: Full control over the build&lt;/li&gt;
&lt;li&gt;Pro: Your builds can build things or run services which simply aren’t available on the hosted agents.&lt;/li&gt;
&lt;li&gt;Pro: Low cost. If you already have the hardware, why pay for Azure VMs?&lt;/li&gt;
&lt;li&gt;Con: Maintenance and redundancy. If the machine goes down or breaks it blocks your pipeline.&lt;/li&gt;
&lt;li&gt;Con: Extra setup steps. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before starting you will need to make sure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You are a collection/build admin&lt;/li&gt;
&lt;li&gt;You have a server configured to build the appropriate software (i.e. Correct SDKs etc which won’t be covered in this post)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Personal Access Tokens
&lt;/h2&gt;

&lt;p&gt;First of all, you will need a personal access token for your account. This is used to allow your build agent access to Azure without hard-coding your credentials into your build scripts. You can use your own account for this, or a specially created service account – Just note it will need permissions to access the collections it will be building.&lt;/p&gt;

&lt;p&gt;To get this, log in to your Azure Devops portal, and navigate to your security page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0yrrwoaqt83jmvu3bgx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0yrrwoaqt83jmvu3bgx.png" width="467" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In here, select “Personal Access Tokens” and then “New”. A panel will be displayed to configure this PAT. Specify a friendly and unique name, select the organisation you are using this token for, and then set its security access.&lt;/p&gt;

&lt;p&gt;For the security access, I recommend selecting &lt;strong&gt;Full Access&lt;/strong&gt; under “Scopes” so you can use this PAT for general Dev Ops activities. You can fine-tune the control, but you must ensure it has read/execute on the build scope as an absolute minimum. For expiry I typically select the longest period which is 1 year.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bfn74h9z1xwakpwmjkd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bfn74h9z1xwakpwmjkd.png" width="657" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Agent download and configuration
&lt;/h2&gt;

&lt;p&gt;Next up you will need to navigate to the project settings &amp;gt; Pipelines &amp;gt; Agent Pools.&lt;/p&gt;

&lt;p&gt;Create a new Agent Pool with an appropriate name (you don’t &lt;em&gt;have&lt;/em&gt; to do this and can just use the default pool if you wish, but I like the separation). When your pool is created you will see the option to add a new agent to it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fznejazizswe1xa4cqnwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fznejazizswe1xa4cqnwu.png" width="629" height="104"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Clicking “New Agent” will give you the instructions for the OS of your choice. As per the instructions, download the agent (a ~130MB ZIP file) and place it somewhere sensible on the machine that will be acting as a build server. When extracted, run config.cmd &lt;strong&gt;in an elevated command window&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k7hmh7lprwtrbgqq3lm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k7hmh7lprwtrbgqq3lm.png" width="700" height="692"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When running the config.cmd command you will require the following information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Server URL

&lt;ul&gt;
&lt;li&gt;This will be &lt;code&gt;https://dev.azure.com/{organisation name}&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;What type of authentication you will use (Just press return as it will default to PAT)&lt;/li&gt;

&lt;li&gt;Your PAT to access the server, as set up in the first step.&lt;/li&gt;

&lt;li&gt;The Pool to connect to. This will be the name of the agent pool created above.&lt;/li&gt;

&lt;li&gt;The working folder. The folder to use for storing workspaces being built.&lt;/li&gt;

&lt;li&gt;A name for this agent. Call it whatever you want, but I would personally always include the machine name as it makes it easier to work out which agents are running. &lt;/li&gt;

&lt;/ul&gt;
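&lt;p&gt;If you prefer an unattended setup, the same answers can be supplied as flags to config.cmd – the values below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.\config.cmd --unattended --url https://dev.azure.com/{organisation name} ^
  --auth pat --token YOUR_PAT_HERE ^
  --pool MyBuildPool --agent %COMPUTERNAME%-agent --work _work
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;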

&lt;p&gt;Providing all the above settings are specified correctly and there are no authentication issues, it should now attempt to start.&lt;/p&gt;

&lt;h2&gt;
  
  
  Confirming the agent is active
&lt;/h2&gt;

&lt;p&gt;Going back to the Agent Pools configuration screen you should now see the agent listed in the appropriate agent pool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6v7araux9ts73c3bru4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6v7araux9ts73c3bru4i.png" width="650" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the agent is not displaying after a few minutes, something went wrong in setup.&lt;/p&gt;

&lt;p&gt;If the agent is displaying offline, try running the “run.cmd” command in an elevated command window on your build server.&lt;/p&gt;

&lt;p&gt;Now all you have to do is select your new agent pool when creating your next build!&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://yer.ac/blog/2019/07/31/setting-up-a-self-hosted-build-agent-for-azure-devops/" rel="noopener noreferrer"&gt;Setting up a self-hosted build agent for Azure DevOps&lt;/a&gt; appeared first on &lt;a href="http://yer.ac/blog" rel="noopener noreferrer"&gt;yer.ac | Adventures of a developer, and other things.&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
