<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Linas Valiukas</title>
    <description>The latest articles on DEV Community by Linas Valiukas (@nesisuksibedarbis).</description>
    <link>https://dev.to/nesisuksibedarbis</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3806669%2F7a0c9517-23bb-4e47-bbb0-84155905133e.png</url>
      <title>DEV Community: Linas Valiukas</title>
      <link>https://dev.to/nesisuksibedarbis</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nesisuksibedarbis"/>
    <language>en</language>
    <item>
      <title>Your e-shop data lives in three places that don't talk to each other</title>
      <dc:creator>Linas Valiukas</dc:creator>
      <pubDate>Fri, 03 Apr 2026 22:37:48 +0000</pubDate>
      <link>https://dev.to/nesisuksibedarbis/your-e-shop-data-lives-in-three-places-that-dont-talk-to-each-other-454j</link>
      <guid>https://dev.to/nesisuksibedarbis/your-e-shop-data-lives-in-three-places-that-dont-talk-to-each-other-454j</guid>
      <description>&lt;p&gt;I recently scoped a project for an e-commerce client running PrestaShop. Smart guy, profitable business, good product margins. He had one question he couldn't answer: which of his Google Ads campaigns actually make money?&lt;/p&gt;

&lt;p&gt;Not which ones get clicks. Not which ones drive traffic. Which ones drive purchases of products with margins high enough to justify the ad spend? He'd been running Google Ads for years and couldn't tell me.&lt;/p&gt;

&lt;p&gt;The data existed. All of it. Sales data in PrestaShop's MySQL database. Ad spend in Google Ads. Traffic patterns in Google Analytics. Three systems, three dashboards, zero connection between them. To answer even a basic cross-channel question, he'd have to pull a CSV from each, line them up in a spreadsheet, and hope the dates and product names matched. He didn't do this. Nobody does this. So the money question stayed unanswered.&lt;/p&gt;

&lt;p&gt;This isn't a PrestaShop problem. Shopify, WooCommerce, Magento — doesn't matter. If you sell online and advertise on Google, you almost certainly have the same blind spot.&lt;/p&gt;

&lt;h2&gt;The three silos&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Your e-shop database&lt;/strong&gt; knows what sold, when, to whom, at what price, and at what margin. It doesn't know how the customer found you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Ads&lt;/strong&gt; knows what you spent, which campaigns got clicks, and what your cost-per-click is. It doesn't know what happened after the click — not really. Google Ads "conversions" are a rough proxy, not actual order data from your shop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Analytics&lt;/strong&gt; knows how people move through your site. Which pages they visit, where they drop off, how long they stay. It's the middle layer between the ad click and the purchase, but it doesn't know your product margins or your actual order totals.&lt;/p&gt;

&lt;p&gt;Each system tells you a third of the story. Individually, they're useful. Together, they'd be powerful. But they don't talk to each other out of the box, so most shop owners just... guess. They look at Google Ads ROAS, assume it's roughly accurate, and keep spending.&lt;/p&gt;

&lt;p&gt;Sometimes that guess is fine. Sometimes you're dumping EUR 2,000/month into campaigns that drive traffic to your lowest-margin products while your best-margin items sit there organically converting at twice the rate. You can't know until the data is connected.&lt;/p&gt;

&lt;h2&gt;Why traditional dashboards don't solve this&lt;/h2&gt;

&lt;p&gt;The standard advice is "set up a BI dashboard." Looker Studio, Power BI, Tableau — take your pick. And yeah, they can pull from multiple data sources.&lt;/p&gt;

&lt;p&gt;But there's a gap between "can" and "does." Setting up a proper BI dashboard that joins e-shop orders with Google Ads campaigns and Analytics sessions requires a data engineer, or at least someone who thinks like one. You need to define the data model, build the ETL pipelines, maintain the connections when APIs change, and design the actual reports. For a 5-30 person e-commerce business, that's a project measured in weeks and billed in the tens of thousands.&lt;/p&gt;

&lt;p&gt;So you get a dashboard that answers 12 pre-built questions really well. Question 13? Back to the spreadsheet.&lt;/p&gt;

&lt;h2&gt;AI as the missing bridge&lt;/h2&gt;

&lt;p&gt;What if you could just ask?&lt;/p&gt;

&lt;p&gt;"Which Google Ads campaigns drove the most revenue last quarter, broken down by product category and margin?"&lt;/p&gt;

&lt;p&gt;"Compare my organic traffic conversion rate to my paid traffic conversion rate for the last 90 days."&lt;/p&gt;

&lt;p&gt;"Which products am I advertising that have a negative ROI after ad spend?"&lt;/p&gt;

&lt;p&gt;That's what I built for this client. An AI chatbot connected to all three data sources, querying them directly and cross-referencing the results. No pre-built reports. No fixed dashboards. You ask a question in plain language, and it figures out which databases to query, writes the SQL, runs it, and gives you an answer.&lt;/p&gt;

&lt;p&gt;The setup works like this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;E-shop data&lt;/strong&gt; stays where it is — in your MySQL (or whatever your platform uses). The AI gets a read-only connection. Live data, always current, no exports going stale in a folder somewhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Ads data&lt;/strong&gt; gets exported daily to BigQuery through Google's built-in export. Why BigQuery instead of the Google Ads API? Because the API requires an approval process that involves application forms, compliance reviews, and weeks of waiting. I've been through it. The BigQuery export takes an afternoon to set up and runs automatically from that point forward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Analytics&lt;/strong&gt; also exports to BigQuery — GA4 has a native integration. There are some quirks with historical data and export granularity, but for most e-commerce analytics questions, it's more than sufficient.&lt;/p&gt;

&lt;p&gt;Once all three data sources are queryable, the AI gets instructions that describe the schema — what's in each table, how the tables relate to each other, which fields are the join keys. From there, it writes queries on the fly based on whatever you ask.&lt;/p&gt;
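&lt;p&gt;To make that concrete, here's a toy version of the kind of cross-source query the AI ends up writing. Every table and column name below is invented for illustration (your shop's schema and the BigQuery export tables will look different), and SQLite stands in for the real databases so the sketch runs anywhere:&lt;/p&gt;

```python
# Toy sketch: join order margins with ad spend per campaign.
# All table and column names are made up for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (order_id INTEGER, utm_campaign TEXT,
                     revenue REAL, cost_of_goods REAL);
CREATE TABLE ad_spend (campaign TEXT, spend REAL);
INSERT INTO orders VALUES
    (1, 'summer_sale', 120.0, 70.0),
    (2, 'summer_sale', 80.0, 30.0),
    (3, 'brand', 200.0, 90.0);
INSERT INTO ad_spend VALUES ('summer_sale', 60.0), ('brand', 150.0);
""")

# Profit per campaign after ad spend: the question none of the
# three silos can answer on its own.
rows = con.execute("""
    SELECT a.campaign,
           SUM(o.revenue - o.cost_of_goods) - a.spend AS profit_after_ads
    FROM orders AS o
    JOIN ad_spend AS a ON a.campaign = o.utm_campaign
    GROUP BY a.campaign, a.spend
    ORDER BY profit_after_ads DESC
""").fetchall()

for campaign, profit in rows:
    print(campaign, round(profit, 2))
```

&lt;p&gt;Note the 'brand' campaign comes out negative: profitable-looking in the Ads dashboard, money-losing once margins enter the picture. In the real setup, the same query shape runs against the live MySQL tables and the BigQuery exports; the AI's job is writing it on the fly from your question and the schema notes.&lt;/p&gt;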

&lt;h2&gt;What the AI gets right (and where to double-check)&lt;/h2&gt;

&lt;p&gt;I want to be straight about this: the AI chatbot is a thinking tool, not an audited financial report. It writes queries on the fly, and it's good at it — I've been impressed by how well it handles complex joins across three databases. But it can occasionally misinterpret a column, double-count rows because of a table relationship it didn't anticipate, or make an assumption about your data that doesn't hold.&lt;/p&gt;

&lt;p&gt;For daily decision-making — spotting trends, comparing campaigns, finding underperforming products — it's excellent. Fast, flexible, and it asks the questions you wouldn't have thought to build a dashboard for.&lt;/p&gt;

&lt;p&gt;For numbers going into a board presentation or a tax filing? Sanity-check against your admin panel. This is true of every AI analytics tool on the market right now. It'll get better. But today, trust-and-verify is the right approach.&lt;/p&gt;

&lt;h2&gt;What this actually costs&lt;/h2&gt;

&lt;p&gt;For a typical e-commerce business with one shop, Google Ads, and Analytics, setup runs EUR 4,000-7,000 depending on database complexity and how many integrations are involved. Ongoing cost is mostly the AI subscription (EUR 20-100/month depending on the tool) plus BigQuery, which is usually negligible — often within Google Cloud's free tier.&lt;/p&gt;

&lt;p&gt;Compare that to a BI dashboard project (EUR 15,000-40,000 for proper implementation) or hiring a data analyst (EUR 40,000-60,000/year). The AI approach costs less, answers more types of questions, and doesn't quit after six months to join a startup.&lt;/p&gt;

&lt;p&gt;The payback math is simple. If you're spending EUR 3,000/month on Google Ads and you can't tell which campaigns are profitable, you're flying blind with real money. Even a 15% improvement in ad allocation — killing the losers, doubling down on the winners — frees up roughly EUR 450/month on that budget, which pays for the entire setup within the first year.&lt;/p&gt;

&lt;h2&gt;Is this right for your shop?&lt;/h2&gt;

&lt;p&gt;Not always. If you're spending EUR 200/month on ads and your product catalog is 30 items, the gut-feel approach is probably fine. You don't need AI to tell you which of your three campaigns is working.&lt;/p&gt;

&lt;p&gt;But if you're running hundreds of SKUs, spending a few thousand a month on ads, and your answer to "which campaigns drive the highest-margin sales?" is a shrug — that blind spot is costing you real money every month. You just can't see how much.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.lobsterpack.com/blog/e-shop-data-three-places/" rel="noopener noreferrer"&gt;lobsterpack.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ecommerce</category>
      <category>ai</category>
      <category>data</category>
      <category>analytics</category>
    </item>
    <item>
      <title>The prompt is the product</title>
      <dc:creator>Linas Valiukas</dc:creator>
      <pubDate>Fri, 03 Apr 2026 22:35:23 +0000</pubDate>
      <link>https://dev.to/nesisuksibedarbis/the-prompt-is-the-product-2ka1</link>
      <guid>https://dev.to/nesisuksibedarbis/the-prompt-is-the-product-2ka1</guid>
      <description>&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; Don't ask AI to do the thing directly. Ask it to interview you first — what are your constraints, what have you not thought of, what would you recommend? Collect those answers into a brief. Use that brief as your real prompt. This works for anything: websites, business plans, marketing campaigns, internal tools, hiring processes. The example below is a website, but the method is universal. A 350-word brain dump became a 1,200-word spec, and the result wasn't even in the same category.&lt;/p&gt;




&lt;p&gt;You've got an idea for a website. You open Claude or ChatGPT and type something like:&lt;/p&gt;

&lt;p&gt;"I'm starting a surf lesson business in Portugal. Can you build me a website?"&lt;/p&gt;

&lt;p&gt;And the AI will do it. It'll give you a homepage, maybe a contact section, some copy about how your services are "tailored to your needs." It works. Technically.&lt;/p&gt;

&lt;p&gt;But it's thin. It's missing things you didn't know to ask for — SEO tags, legal pages, a contact strategy, cookie consent, schema markup. You never mentioned them, and the AI didn't want to bother you with questions.&lt;/p&gt;

&lt;p&gt;This isn't a website problem. It's a prompting problem. Ask AI to "write me a marketing plan" and you'll get five generic bullet points. Ask it to "draft an employee handbook" and you'll get boilerplate. Ask it to "plan my product launch" and you'll get a timeline that could apply to literally any product. The pattern is always the same: vague input, vague output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The quality of what you get out is almost entirely determined by the quality of what you put in.&lt;/strong&gt; But to write a detailed prompt, you need to know what details matter. If you already knew that, you wouldn't need the AI's help. It's like walking into an architect's office and saying "build me a house" — you'll get a house, but not the one you actually wanted.&lt;/p&gt;

&lt;p&gt;So what do you do?&lt;/p&gt;

&lt;h2&gt;Make the AI interview you first&lt;/h2&gt;

&lt;p&gt;Instead of asking the AI to build the thing directly, you ask it to help you figure out what to ask for. Thinking partner first, builder second.&lt;/p&gt;

&lt;p&gt;I do this with clients all the time. Before I touch any automation, I spend the first few sessions just asking questions. What breaks when you're on vacation? Where do you lose money to slowness? The answers shape everything that comes after. You can do the same thing with AI — for free.&lt;/p&gt;

&lt;p&gt;Say you've got a rough idea for a corporate surf retreat business called "Salt &amp;amp; Suit." If you dump all of it into an AI and say "build me a website," you'll get one page, some blue colors, and generic copy. No SEO strategy, no legal compliance, no plan for how anyone will find it.&lt;/p&gt;

&lt;p&gt;But if you say this instead:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I have a business idea and I want you to eventually build me a website. But not yet. First, help me think through what the website actually needs. Here's my rough idea: [your brain dump]. Ask me the questions I haven't thought of. Tell me if I'm missing something obvious. As we talk, build up a detailed brief that captures all our decisions. That brief becomes the build prompt later.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now the AI starts asking things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where will you host this? (It might suggest Astro + Tailwind for static sites with good SEO.)&lt;/li&gt;
&lt;li&gt;How will people contact you? A form? A Calendly link? Each has trade-offs.&lt;/li&gt;
&lt;li&gt;What about legal stuff — privacy policy, GDPR cookie consent?&lt;/li&gt;
&lt;li&gt;Will the site be English only, or Portuguese too?&lt;/li&gt;
&lt;li&gt;Do you have photos? If not, where will the visuals come from?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You answer these. After three or four rounds, you've got a document that's no longer a vague idea — it's a proper brief. And that brief is your prompt.&lt;/p&gt;

&lt;h2&gt;The template&lt;/h2&gt;

&lt;p&gt;If you want to try this yourself, adapt this to whatever you're building:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I have an idea for [what you're building] and I eventually want you to help me build it. But not yet.&lt;/p&gt;

&lt;p&gt;First, I want you to be my thinking partner. Here's my rough idea: [your brain dump — be as messy as you want].&lt;/p&gt;

&lt;p&gt;Before we build anything:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ask me questions I haven't thought of. Explain why each one matters, and suggest what you'd recommend if I'm not sure.&lt;/li&gt;
&lt;li&gt;If something in my plan is a bad idea, tell me directly.&lt;/li&gt;
&lt;li&gt;Think about this from the end user's perspective. What would they expect? What would make them trust this?&lt;/li&gt;
&lt;li&gt;After each round of questions, update a running brief that captures all our decisions. This brief becomes the build prompt.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Ask me questions in small batches so I don't get overwhelmed. Don't build anything yet.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You go back and forth a few times. The brief grows. When you're done, you've got a prompt that's five or ten times more detailed than what you started with — not because you spent weeks researching web development, but because you had a conversation.&lt;/p&gt;

&lt;h2&gt;The worked example: before and after&lt;/h2&gt;

&lt;p&gt;Here's the full process from the surf retreat scenario. The rough idea on day one, then what came out the other end.&lt;/p&gt;

&lt;h3&gt;The starting prompt (what's in your head)&lt;/h3&gt;

&lt;p&gt;Elena had a ~350-word brain dump about her corporate surf retreat business "Salt &amp;amp; Suit" in Portugal. Good energy, clear value proposition — but zero implementation detail. She mentioned the business idea, her background (finance in London, surfing in Ericeira), her business partner Marco (ISA-certified surf instructor), and the tone she wanted (playful but professional).&lt;/p&gt;

&lt;h3&gt;The refined prompt (what came out the other end)&lt;/h3&gt;

&lt;p&gt;After several rounds of AI-assisted questioning, Elena's 350 words became 1,200. The extra words aren't fluff — they're decisions about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hosting&lt;/strong&gt;: Astro + Tailwind on Cloudflare Pages for static SEO&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contact strategy&lt;/strong&gt;: Calendly vs forms vs phone number, with trade-offs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Legal&lt;/strong&gt;: Privacy policy, terms, GDPR cookie consent, Portuguese/EU jurisdiction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SEO&lt;/strong&gt;: Sitemaps, robots.txt, canonical tags, OpenGraph, JSON-LD, topical maps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analytics&lt;/strong&gt;: PostHog for tracking&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content strategy&lt;/strong&gt;: Google E-E-A-T compliance, AI-quotable content formatting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multilingual&lt;/strong&gt;: English first, Portuguese later with a disabled language switch&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-containment&lt;/strong&gt;: No external resources unless they add marketing value&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Go-to-market&lt;/strong&gt;: A full cold-start promotion strategy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual identity&lt;/strong&gt;: SVG logo combining a surfboard and necktie, ocean blues and coral accents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same AI. Same person. Wildly different output.&lt;/p&gt;

&lt;h2&gt;The prompt is the product&lt;/h2&gt;

&lt;p&gt;People treat prompts as throwaway inputs — type something, get a result, move on. But for anything that's not trivial, the prompt &lt;em&gt;is&lt;/em&gt; the product. It's the spec. The blueprint. You wouldn't build a house from a napkin sketch.&lt;/p&gt;

&lt;p&gt;The example above is a website, but I use this exact method for everything. Designing an automation workflow for an accounting firm? Interview first, build second. Planning a content strategy? Same thing. Migrating a client's data pipeline? You'd better believe we're spending the first hour on questions, not code. The two-step process works wherever the gap between "what you know to ask for" and "what you actually need" is wide — which is most places.&lt;/p&gt;

&lt;p&gt;I spend the first chunk of every client engagement just asking questions and building the brief before anyone touches a keyboard. But you can get 80% of that value on your own, for free, by running this two-step process with any AI chatbot. The AI won't know your industry as well as a consultant would, but it'll catch the 30 things you forgot to think about. That's usually enough.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.lobsterpack.com/blog/prompt-is-the-product/" rel="noopener noreferrer"&gt;lobsterpack.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>What software engineering got wrong for decades, you're about to repeat with AI</title>
      <dc:creator>Linas Valiukas</dc:creator>
      <pubDate>Fri, 03 Apr 2026 22:27:13 +0000</pubDate>
      <link>https://dev.to/nesisuksibedarbis/what-software-engineering-got-wrong-for-decades-youre-about-to-repeat-with-ai-h6d</link>
      <guid>https://dev.to/nesisuksibedarbis/what-software-engineering-got-wrong-for-decades-youre-about-to-repeat-with-ai-h6d</guid>
      <description>&lt;p&gt;I've been a software engineer for 20 years. Current AI coding tools — OpenClaw, Claude Code, Claude Cowork — are designed, in a way, to replace people like me. They write code. They run commands. They debug their own mistakes (sometimes).&lt;/p&gt;

&lt;p&gt;And honestly? I get it. Engineers are expensive. We take forever. We still ship bugs. We're weird in meetings. If you could skip us and just tell a computer what you want, why wouldn't you?&lt;/p&gt;

&lt;p&gt;But here's the thing about those 20 years. Most of what I learned wasn't about code. It was about how to think about complex tools, how to avoid traps that look like shortcuts, and when to spend money versus when to save it. Those lessons translate directly to using AI — whether you're a developer or a business owner who's never written a line of code.&lt;/p&gt;

&lt;h2&gt;1. Keep it simple, stupid&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://en.wikipedia.org/wiki/KISS_principle" rel="noopener noreferrer"&gt;KISS principle&lt;/a&gt; — "keep it simple, stupid" — came from Kelly Johnson, the lead engineer at Lockheed Skunk Works in the 1960s. He designed spy planes. His rule was that any jet engine should be repairable by an average mechanic in field conditions with basic tools. If the design was too clever for that, the design was wrong.&lt;/p&gt;

&lt;p&gt;Sixty years later, every engineer still goes through the same arc. You start out doing simple things because you don't know any better. Then you discover all these exciting tools and techniques and you go deep — design patterns, microservices, orchestration frameworks, the works. You feel smart. Your systems are &lt;em&gt;sophisticated&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Then they break at 2 AM and you can't figure out why because there are fourteen moving parts and you built six of them in a weekend.&lt;/p&gt;

&lt;p&gt;Eventually, you come back to simple. Not because you can't do complex — because you've learned that keeping things simple is actually harder and almost always better. There's a whole manifesto about this called &lt;a href="https://grugbrain.dev/" rel="noopener noreferrer"&gt;The Grug Brained Developer&lt;/a&gt; that puts it better than I ever could:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;apex predator of grug is complexity&lt;br&gt;
complexity bad&lt;br&gt;
say again:&lt;br&gt;
complexity very bad&lt;br&gt;
you say now:&lt;br&gt;
complexity very, very bad&lt;br&gt;
given choice between complexity or one on one against t-rex, grug take t-rex: at least grug see t-rex&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Same arc happens with AI tools. You try ChatGPT or Claude, it works, you're amazed. Then you discover the "advanced" stuff — skills, custom workflows, automation dashboards, webhook chains. The AI tool marketplaces are full of pre-built "skills" that promise to do specific tasks for you: summarize meetings, write emails in your tone, generate reports in a particular format. Sounds great. You install twelve of them.&lt;/p&gt;

&lt;p&gt;Here's what actually happens. Half of those skills are just prompts with a button on them. Literally a sentence or two of instructions wrapped in a UI that makes it look more sophisticated than it is. You could type the same thing yourself. The other half try to do something clever, but they don't know your business, your context, or what you actually meant — so you spend more time correcting their output than you would've spent just asking the AI directly in plain language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A good foundation model with a clear, specific prompt will figure out how to do the task on its own.&lt;/strong&gt; You don't need a "meeting summarizer skill" — you paste the transcript and say "summarize this meeting, focus on action items and who owns them." Done. The model already knows how to do that. The skill didn't teach it anything. It just added a layer of abstraction between you and the thing that's actually doing the work.&lt;/p&gt;

&lt;p&gt;And then there are the plugins — MCP servers, browser extensions, third-party integrations that connect your AI to other tools. These aren't just unnecessary complexity. They're a real risk. Security researchers at Pynt &lt;a href="https://www.pynt.io/blog/llm-security-blogs/state-of-mcp-security" rel="noopener noreferrer"&gt;found that 10% of MCP plugins are fully exploitable&lt;/a&gt;, and with just 10 installed, there's a 92% probability that at least one can be silently exploited. In 2025 alone, MCP vulnerabilities led to &lt;a href="https://authzed.com/blog/timeline-mcp-breaches" rel="noopener noreferrer"&gt;WhatsApp messages being exfiltrated, GitHub private repos being exposed, and a remote code execution hole in Cursor AI&lt;/a&gt;. These aren't theoretical attacks. They happened.&lt;/p&gt;

&lt;p&gt;So your setup now has skills that are just prompts pretending to be features, plugins that crash and occasionally leak your data, and you're spending your evenings managing all of it instead of doing actual work.&lt;/p&gt;

&lt;p&gt;The best solutions use the fewest moving parts. A well-written prompt in a plain chat window will outperform a Rube Goldberg machine of twelve connected skills and plugins nine times out of ten. You can always add complexity later if the simple version hits a wall — but figure out how far you can get with the core solution first. Most people never hit that wall.&lt;/p&gt;

&lt;h2&gt;2. Don't cheap out on the model&lt;/h2&gt;

&lt;p&gt;Developers have a reputation for demanding expensive hardware. It annoys everyone — marketing gets by on a ThinkPad while engineering insists on MacBook Pros with 64 GB of RAM and two external monitors. And from the outside, it doesn't even look like they use all that power. They sit there, stare at the screen for hours, and occasionally type something.&lt;/p&gt;

&lt;p&gt;But the math works out. Forrester studied this for Apple and &lt;a href="https://tei.forrester.com/go/apple/tei/?lang=en-us" rel="noopener noreferrer"&gt;found that Macs save $547 per device&lt;/a&gt; over five years despite the higher sticker price — 60% fewer support tickets, 45 fewer minutes per month on startup and updates, 186% ROI. IBM deployed 200,000 Macs and saw 22% more employees exceeding performance expectations. &lt;strong&gt;The expensive tool is often the cheap one when you zoom out far enough.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;People make the opposite mistake with AI. They hear that AI is expensive, so they go to model aggregators, hunt for the cheapest option, try free tiers, chase rumors about which budget model is "almost as good" as the top one. They spend an hour getting a $0.002 response that's wrong, then another hour trying to fix it with follow-up prompts that are also wrong, then give up and conclude that "AI isn't there yet."&lt;/p&gt;

&lt;p&gt;They're not alone. 54% of small businesses tried AI in the last two years, but &lt;a href="https://www.pertamapartners.com/insights/ai-project-failure-statistics-2026" rel="noopener noreferrer"&gt;46% abandoned it within three months&lt;/a&gt;. I can't prove all of those failures were caused by using the wrong model, but I've seen the pattern enough times to have a strong suspicion.&lt;/p&gt;

&lt;p&gt;AI &lt;em&gt;is&lt;/em&gt; there. You just used the cheap stuff.&lt;/p&gt;

&lt;p&gt;The difference between a top-tier model and a budget one isn't incremental. It's the difference between an assistant that understands what you mean and one that produces confident-sounding nonsense. The good models hold context over long conversations, catch nuance, know when to ask for clarification. The cheap ones forget what you said three messages ago and hallucinate the rest.&lt;/p&gt;

&lt;p&gt;And the good ones are getting cheaper fast. The &lt;a href="https://hai.stanford.edu/ai-index/2025-ai-index-report" rel="noopener noreferrer"&gt;Stanford AI Index&lt;/a&gt; found that the cost of running a model at GPT-3.5 level dropped 280-fold between November 2022 and October 2024. What cost $20 per million tokens now costs $0.07. The frontier models are still more expensive, but the trend is clear and it's not reversing.&lt;/p&gt;

&lt;p&gt;If you're a business owner evaluating AI: get a EUR 20/month subscription to Claude or ChatGPT. That's it. Use the best available model for a month. If AI can't help your business after a month of actually good AI, fair enough — maybe it's not the right time. But don't make that judgment based on a free model that was trained on a fraction of the data and runs on a fraction of the compute.&lt;/p&gt;

&lt;p&gt;If you're a developer: same logic, different scale. The $200/month pro tier pays for itself the first time it saves you a day of debugging. I've watched engineers burn entire afternoons wrestling with a cheap model when the expensive one would've nailed it on the first try.&lt;/p&gt;

&lt;p&gt;Either way, the worst outcome isn't spending too much on AI. It's spending too little, having a bad experience, and writing off the whole technology for another year while your competitors don't.&lt;/p&gt;

&lt;h2&gt;3. Don't weld yourself to one tool&lt;/h2&gt;

&lt;p&gt;If you've ever set up a smart home, you know this pain. You bought Philips Hue lights, a Nest thermostat, and Ring cameras. They all worked fine — inside their own apps. Then you wanted them to talk to each other. Turns out Hue talks to Alexa but not Google Home (or it does, but badly). Ring is an Amazon company, so it plays nice with Alexa but fights with everything else. Your thermostat has its own idea of what "away" means. Three years in, you've got four apps on your phone to control one house, and switching to Apple HomeKit would mean replacing half your hardware.&lt;/p&gt;

&lt;p&gt;That's &lt;a href="https://en.wikipedia.org/wiki/Vendor_lock-in" rel="noopener noreferrer"&gt;vendor lock-in&lt;/a&gt;. Engineers learn it the hard way too. We pick a tool or a framework, build everything around it, and then the tool changes its pricing. Or gets abandoned. Or something better shows up. The ones who survive these transitions are the ones who kept their core logic portable — not tangled into one vendor's specific way of doing things.&lt;/p&gt;

&lt;p&gt;With AI tools, the cycle is even faster. &lt;a href="https://firstpagesage.com/reports/top-generative-ai-chatbots/" rel="noopener noreferrer"&gt;ChatGPT's market share dropped from 76% to 60%&lt;/a&gt; in two years. Claude doubled its share in the same period. Grok went from 1.6% to 15.2% &lt;a href="https://fortune.com/2026/02/05/chatgpt-openai-market-share-app-slip-google-rivals-close-the-gap/" rel="noopener noreferrer"&gt;in a single year&lt;/a&gt;. Whatever tool you're using today might not be the best option six months from now, and almost certainly won't be the best option in two years.&lt;/p&gt;

&lt;p&gt;If your entire automation setup only works with one specific AI tool — if your prompts use features unique to that platform, if your workflows depend on that tool's specific API — you'll be starting over when the landscape shifts. And it will shift.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix is simple: keep your core instructions in plain text.&lt;/strong&gt; Write your prompts, your process descriptions, your business rules in a format that any AI system can understand. Treat the specific tool as interchangeable plumbing. When the next thing comes along (and it will), you copy your text files over and you're running in an afternoon instead of rebuilding for a month.&lt;/p&gt;
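&lt;p&gt;A minimal sketch of what that looks like in practice (the file layout and function names here are mine, not any vendor's SDK): prompts live in ordinary text files, and the vendor client is a thin stub you swap out without touching the prompts.&lt;/p&gt;

```python
# Sketch of the "plain text first" setup. Prompts are ordinary .txt
# files; the vendor-specific client is a thin, swappable layer.
import tempfile
from pathlib import Path

def load_prompt(name, prompt_dir):
    # Prompts stay in plain text: portable across any AI tool.
    return Path(prompt_dir, name + ".txt").read_text(encoding="utf-8")

def run(prompt_text, client):
    # 'client' is the interchangeable plumbing. When the landscape
    # shifts, you replace this argument, not your prompt files.
    return client(prompt_text)

# Demo: one prompt file, two interchangeable "vendors" (stubs standing
# in for real SDK calls).
with tempfile.TemporaryDirectory() as d:
    Path(d, "weekly_report.txt").write_text(
        "Summarize this week's orders. Focus on margin, not volume.",
        encoding="utf-8",
    )
    prompt = load_prompt("weekly_report", d)
    vendor_a = lambda text: "A:" + text[:9]
    vendor_b = lambda text: "B:" + text[:9]
    replies = [run(prompt, vendor_a), run(prompt, vendor_b)]

print(replies)
```

&lt;p&gt;The prompt file never changes when you switch vendors; only the client stub does. That's the whole migration.&lt;/p&gt;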

&lt;h2&gt;4. Use the tools that already exist&lt;/h2&gt;

&lt;p&gt;There's a design philosophy from the 1970s that's aged better than almost anything else in computing. It comes from the creators of Unix — Ken Thompson and Dennis Ritchie at Bell Labs — and it says: &lt;a href="https://en.wikipedia.org/wiki/Unix_philosophy" rel="noopener noreferrer"&gt;each tool should do one thing and do it well&lt;/a&gt;. Don't build a Swiss Army knife. Build a knife, a screwdriver, and a can opener, and let people combine them.&lt;/p&gt;

&lt;p&gt;Fifty years later, the result is thousands of command-line tools that are absurdly good at their specific jobs. &lt;code&gt;ffmpeg&lt;/code&gt; converts any media format to any other media format. &lt;code&gt;ImageMagick&lt;/code&gt; resizes, crops, and transforms images. &lt;code&gt;pandoc&lt;/code&gt; turns a Word document into a PDF, or Markdown, or an ebook, or HTML, or about forty other formats. These tools have been maintained for decades, tested by millions of users, and handle edge cases that would take you weeks to discover on your own.&lt;/p&gt;

&lt;p&gt;Here's the problem: people treat AI like it should do everything from scratch. Need to resize 200 product photos? They'll ask the AI to write a Python script with Pillow that loops through a directory, opens each image, calculates the new dimensions, handles different aspect ratios, preserves EXIF data, and saves the output. The AI will happily do it. It'll burn through tokens generating 40 lines of code, maybe miss a couple of edge cases, and you'll spend another few prompts debugging.&lt;/p&gt;

&lt;p&gt;Or you could just run &lt;code&gt;mogrify -resize 800x600 *.jpg&lt;/code&gt;. One command. ImageMagick's been doing this since 1990. It handles every edge case. It's a one-line install on most Linux and Mac systems.&lt;/p&gt;

&lt;p&gt;The irony is that AI is &lt;em&gt;great&lt;/em&gt; at using these tools. It can read man pages, construct complex command pipelines, and chain tools together — that's literally what it's best at, since these tools communicate through text. Doug McIlroy, who invented Unix pipes, &lt;a href="https://archive.org/details/bstj57-6-1899/page/n3/mode/2up" rel="noopener noreferrer"&gt;described the philosophy&lt;/a&gt; as "write programs to handle text streams, because that is a universal interface." AI &lt;em&gt;is&lt;/em&gt; a text interface. It's a natural fit.&lt;/p&gt;

&lt;p&gt;But most people don't tell their AI to look for existing tools first. So it doesn't. It defaults to writing code from scratch because that's what you asked it to do — and it's an eager worker. The fix is stupidly simple: tell it. A single line in your AI configuration file (CLAUDE.md for Claude Code, or whatever your tool uses) saying "before writing code for file conversion, image processing, or data transformation, check if a CLI tool already handles it" changes the behavior completely. If the right tool isn't installed, the AI should say so and suggest installing it — not silently reinvent it in Python.&lt;/p&gt;
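
&lt;p&gt;As a sketch, here's the kind of line that does it — the wording is mine, not an official directive, and it goes in CLAUDE.md or your tool's equivalent:&lt;/p&gt;

```markdown
## Tooling

Before writing code for file conversion, image processing, or data
transformation, check whether an existing CLI tool (pandoc, ffmpeg,
ImageMagick, jq, ...) already handles it. If the right tool is not
installed, say so and suggest the install command instead of
reimplementing it from scratch.
```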

&lt;p&gt;And you don't need to know what those tools are called. That's the whole point. You don't need to have heard of &lt;code&gt;pandoc&lt;/code&gt; or &lt;code&gt;ffmpeg&lt;/code&gt; or &lt;code&gt;mogrify&lt;/code&gt; — the AI already knows them. Just flip the question around: instead of asking "convert these files for me," ask "what's the best existing tool to convert these files?" The AI will tell you. It'll suggest the tool, explain what it does, and offer to install it if it's not already on your system. You get a battle-tested solution without having to memorize a catalog of command-line utilities. The AI's training data is basically a giant index of every tool ever documented — use that knowledge before you use its code generation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This isn't just about saving tokens, though it does save a lot of them. It's about reliability.&lt;/strong&gt; &lt;code&gt;pandoc&lt;/code&gt; has been converting documents since 2006. It's had twenty years of bug reports, edge cases, and fixes. Your AI-generated script has had twenty seconds of existence and zero users besides you. Which one do you trust with your client's invoices?&lt;/p&gt;

&lt;h2&gt;If you remember nothing else&lt;/h2&gt;

&lt;p&gt;All four of these lessons come down to the same thing: don't let the tooling become the project.&lt;/p&gt;

&lt;p&gt;Keep it simple. Use the good stuff. Stay portable. Use what's already built. Spend your time on the actual problem — automating the process, building the product, serving the customer — not on managing the tools you're using to do it.&lt;/p&gt;

&lt;p&gt;Engineers spent decades learning this. You don't have to.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.lobsterpack.com/blog/software-engineering-lessons-ai-tools/" rel="noopener noreferrer"&gt;lobsterpack.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>beginners</category>
    </item>
    <item>
      <title>OpenClaw in 60 Seconds: How I Built a Zero-Setup Hosted AI Assistant Service</title>
      <dc:creator>Linas Valiukas</dc:creator>
      <pubDate>Wed, 04 Mar 2026 22:23:58 +0000</pubDate>
      <link>https://dev.to/nesisuksibedarbis/openclaw-in-60-seconds-how-i-built-a-zero-setup-hosted-ai-assistant-service-dkc</link>
      <guid>https://dev.to/nesisuksibedarbis/openclaw-in-60-seconds-how-i-built-a-zero-setup-hosted-ai-assistant-service-dkc</guid>
      <description>&lt;p&gt;I love &lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt;. It's one of the most exciting open-source AI projects out there — a fully autonomous AI assistant that runs 24/7, connects to WhatsApp, Telegram, Discord, Slack, iMessage, and more, with persistent memory and real tool access.&lt;/p&gt;

&lt;p&gt;But here's the thing: &lt;strong&gt;setting it up is not trivial.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You need a VPS, Docker, Node.js, API keys, firewall configuration, DNS setup... For developers, that's Tuesday. For everyone else, it's a wall.&lt;/p&gt;

&lt;h2&gt;The Problem I Kept Seeing&lt;/h2&gt;

&lt;p&gt;I spent time in the OpenClaw Discord and kept seeing the same pattern:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I got stuck on the Docker compose step"&lt;br&gt;
"My WhatsApp bridge won't connect"&lt;br&gt;
"What VPS should I use?"&lt;br&gt;
"Can someone just host this for me?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;People were excited about OpenClaw but couldn't get past the setup. The gap between "this sounds amazing" and "I'm actually using it" was too wide for most people.&lt;/p&gt;

&lt;h2&gt;So I Built TryOpenClaw&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.tryopenclaw.ai" rel="noopener noreferrer"&gt;TryOpenClaw&lt;/a&gt; is a hosted OpenClaw service that eliminates the entire setup process.&lt;/p&gt;

&lt;p&gt;Here's how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pick your messaging platform (WhatsApp, Telegram, Discord, Slack, or iMessage)&lt;/li&gt;
&lt;li&gt;Pay $1&lt;/li&gt;
&lt;li&gt;Start chatting with your own AI assistant&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's it. No VPS. No Docker. No terminal. No config files. Under 60 seconds from landing page to first message.&lt;/p&gt;

&lt;h2&gt;What You Get&lt;/h2&gt;

&lt;p&gt;Each user gets their own isolated OpenClaw instance with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Persistent memory&lt;/strong&gt; — your assistant remembers conversations across sessions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-platform support&lt;/strong&gt; — connect via your favorite messaging app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full OpenClaw features&lt;/strong&gt; — skills, tools, web search, calendar integration, and more&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic updates&lt;/strong&gt; — always running the latest OpenClaw version&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security hardening&lt;/strong&gt; — properly configured firewall, SSH, and access controls&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Why Not Just Self-Host?&lt;/h2&gt;

&lt;p&gt;You absolutely should if you can! OpenClaw is open source and self-hosting gives you maximum control. TryOpenClaw exists for people who:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Want to try OpenClaw before committing to a full setup&lt;/li&gt;
&lt;li&gt;Don't have the technical background for server administration&lt;/li&gt;
&lt;li&gt;Want a managed, always-on instance without maintenance overhead&lt;/li&gt;
&lt;li&gt;Just want to start chatting with their AI assistant &lt;em&gt;right now&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Try It Out&lt;/h2&gt;

&lt;p&gt;If you've been curious about OpenClaw but haven't taken the plunge, &lt;a href="https://www.tryopenclaw.ai" rel="noopener noreferrer"&gt;give TryOpenClaw a shot&lt;/a&gt;. $1 gets you started in under a minute.&lt;/p&gt;

&lt;p&gt;I'd love to hear your feedback — what features matter most to you? What messaging platform do you use? Drop a comment below or reach out at &lt;a href="mailto:support@tryopenclaw.ai"&gt;support@tryopenclaw.ai&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;OpenClaw is an open-source project created by Peter Steinberger. TryOpenClaw is an independent hosted service built on top of it.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>selfhosted</category>
    </item>
  </channel>
</rss>
