<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sajjad Heydari</title>
    <description>The latest articles on DEV Community by Sajjad Heydari (@mcsh).</description>
    <link>https://dev.to/mcsh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F182989%2F638d9a7e-df69-445d-883a-2f1895b2761e.jpg</url>
      <title>DEV Community: Sajjad Heydari</title>
      <link>https://dev.to/mcsh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mcsh"/>
    <language>en</language>
    <item>
      <title>Why the AI arms race missed the point, and what we built instead.</title>
      <dc:creator>Sajjad Heydari</dc:creator>
      <pubDate>Tue, 17 Feb 2026 15:04:48 +0000</pubDate>
      <link>https://dev.to/patrickbot/why-the-ai-arms-race-missed-the-point-and-what-we-built-instead-2iij</link>
      <guid>https://dev.to/patrickbot/why-the-ai-arms-race-missed-the-point-and-what-we-built-instead-2iij</guid>
      <description>&lt;h1&gt;
  
  
  The Model Isn't the Product. The Context Is.
&lt;/h1&gt;

&lt;p&gt;There's a seductive narrative in AI right now. It goes like this: bigger models produce better results. More parameters, more training data, more compute — more intelligence. The logical conclusion is that the companies with the largest models win, and everyone else is just waiting for the next release from OpenAI or Anthropic or Google to make their product incrementally better.&lt;/p&gt;

&lt;p&gt;This narrative is wrong. Not because large models don't matter — they do — but because it confuses the engine with the vehicle. The model is the engine. What determines whether you arrive anywhere useful is everything else: the steering, the suspension, the road, and critically, the map.&lt;/p&gt;

&lt;p&gt;We learned this the hard way.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Lossy Compression Problem
&lt;/h2&gt;

&lt;p&gt;Every time a human writes a prompt, they're performing lossy compression. They're taking the full, messy, interconnected reality of what they know — their context, their constraints, their history — and flattening it into a string of text. The model never sees the original signal. It only sees the compressed version.&lt;/p&gt;

&lt;p&gt;This is the real bottleneck of AI adoption, and almost nobody talks about it.&lt;/p&gt;

&lt;p&gt;Consider a founder trying to decide which feature to build next. The "right" answer depends on which customers are waiting, what the engineering team's capacity looks like, which partnerships are in motion, what the competitive landscape just did, and how all of those things relate to each other. No human can hold all of that in working memory at once. And if they can't hold it, they certainly can't type it into a prompt.&lt;/p&gt;

&lt;p&gt;So what happens? They ask the model a simplified question. They get a simplified answer. They walk away thinking AI is "pretty good but not quite there yet." The model was fine. The context was broken.&lt;/p&gt;

&lt;h2&gt;
  
  
  What If You Fixed the Input Instead of Chasing a Better Engine?
&lt;/h2&gt;

&lt;p&gt;This is the question that led us to build Patrick.&lt;/p&gt;

&lt;p&gt;We're a small medtech company. At any given time, we're managing relationships with over 40 organizations across four provinces, tracking 25+ product features in various stages of development, coordinating 120+ tasks across a team of fewer than ten people, maintaining five distinct products, and navigating a regulatory environment that changes faster than we can document it.&lt;/p&gt;

&lt;p&gt;That's not a prompt. That's an ecosystem.&lt;/p&gt;

&lt;p&gt;No spreadsheet captures it. No CRM models the actual relationships between a customer's stated need, the feature that addresses it, the tasks required to build that feature, and the strategic initiative that justifies the investment. These connections exist — they're just trapped in meeting notes, Slack threads, email chains, and people's heads.&lt;/p&gt;

&lt;p&gt;Patrick is a graph-based intelligence layer that makes those connections explicit and queryable. It captures relationships between entities, performs semantic search across them, and is exposed via MCP (Model Context Protocol) so that any LLM can access the full graph through natural conversation.&lt;/p&gt;

&lt;p&gt;The key insight: &lt;strong&gt;we didn't build a better model. We built better context.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When someone on our team asks "what should we build next?", the answer doesn't come from a model's general knowledge about product strategy. It comes from Patrick traversing actual dependency chains — from customer needs, through feature requirements, down to implementation tasks — and surfacing the highest-ROI path based on real data. The model is the reasoning engine. Patrick is the map.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture of Context
&lt;/h2&gt;

&lt;p&gt;Patrick's design reflects a simple thesis: the unit of useful knowledge isn't a document or a data point. It's a relationship.&lt;/p&gt;

&lt;p&gt;Our graph tracks organizations, products, features, needs, tasks, and prospects — and the edges between them. Yours might look completely different. The specific entities don't matter. What matters is that you're modeling the &lt;em&gt;connections&lt;/em&gt; that drive decisions, not just the objects.&lt;/p&gt;

&lt;p&gt;An organization &lt;em&gt;has&lt;/em&gt; needs. A need &lt;em&gt;requires&lt;/em&gt; features. A feature &lt;em&gt;is enabled by&lt;/em&gt; tasks. A product &lt;em&gt;has&lt;/em&gt; features. These aren't arbitrary associations. They're the actual decision-making structure of a company, made explicit.&lt;/p&gt;
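&lt;p&gt;To make that concrete, here is a minimal sketch of such an edge model as typed triples. The entities and relation names below are invented for illustration; they are not Patrick's actual schema:&lt;/p&gt;

```python
# Hypothetical sketch: an entity-relationship graph as typed triples.
# Entity and relation names are illustrative, not Patrick's real schema.
EDGES = [
    ("Acme Clinic", "has_need", "offline mode"),
    ("offline mode", "requires", "local sync"),
    ("local sync", "enabled_by", "TASK-112"),
    ("CarePlatform", "has_feature", "local sync"),
]

def related(entity, relation):
    """Return every object linked to `entity` by `relation`."""
    return [obj for subj, rel, obj in EDGES if subj == entity and rel == relation]

print(related("Acme Clinic", "has_need"))  # ['offline mode']
```

&lt;p&gt;Once the edges are explicit like this, a question about one entity becomes a walk over its connections rather than a search through documents.&lt;/p&gt;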

&lt;p&gt;This means Patrick can answer questions that no single data source could:&lt;/p&gt;

&lt;p&gt;"If we deprecate this feature, which customers are affected and which prospects does that jeopardize?" That's an impact analysis that spans the CRM, the product roadmap, and the sales pipeline simultaneously.&lt;/p&gt;

&lt;p&gt;"Which customer needs have no features mapped to them?" That's a coverage gap analysis — unmet requirements hiding in plain sight.&lt;/p&gt;

&lt;p&gt;"Who on the team is overloaded, and what would shift if we deprioritized this initiative?" That's a capacity analysis that accounts for actual task ownership and effort estimates, not just calendar slots.&lt;/p&gt;

&lt;p&gt;None of these questions require a more powerful model. They require structured context that the model can traverse. The intelligence was always there in the LLM. What was missing was the map.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters Beyond Us
&lt;/h2&gt;

&lt;p&gt;The pattern we stumbled into is generalizable. Every organization that uses AI has the same fundamental problem: the context that would make AI useful is scattered, implicit, and relational. The model can reason. It just can't see.&lt;/p&gt;

&lt;p&gt;CRMs store customer data but don't model how a customer's needs connect to your product roadmap. Project management tools track tasks but don't link them to strategic initiatives. Business intelligence platforms visualize data but don't capture the &lt;em&gt;why&lt;/em&gt; behind decisions. The relationships between these systems — the connective tissue of actual decision-making — lives nowhere.&lt;/p&gt;

&lt;p&gt;Patrick is one implementation of a broader idea: &lt;strong&gt;the next wave of AI value won't come from bigger models. It will come from structured context layers that give existing models the information they need to reason well.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The companies that figure this out — that invest in the map instead of perpetually upgrading the engine — will extract disproportionate value from AI. The companies that keep waiting for the next model release will keep writing the same underspecified prompts and getting the same mediocre answers.&lt;/p&gt;

&lt;h2&gt;
  
  
  This Isn't a Replacement. It's a Multiplier.
&lt;/h2&gt;

&lt;p&gt;To be clear: this isn't an argument against any particular approach to AI. If you've built a RAG pipeline, great — it gets better when the retrieval layer understands relationships, not just document similarity. If you're fine-tuning models on domain data, great — that model becomes dramatically more useful when it has structured context to reason over at inference time. If you're running agents with tool access, great — a graph of your actual business state is one of the most powerful tools you can hand them.&lt;/p&gt;

&lt;p&gt;The point isn't that existing approaches are wrong. The point is that they're all operating on incomplete context, and the returns from fixing that are larger than the returns from any single model upgrade.&lt;/p&gt;

&lt;p&gt;Every approach to AI gets better when the model can see the full picture. Patrick is how we built that picture for ourselves. The specific implementation matters less than the principle: &lt;strong&gt;structure your context, and the models you already have become the models you were waiting for.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What We're Building Toward
&lt;/h2&gt;

&lt;p&gt;Patrick started as a scrappy internal tool to help a small team make better decisions. Internally, it's grown into something far more powerful — a full business intelligence layer that touches every decision we make, from which prospect to prioritize to which feature to deprecate to how to prepare for a meeting next Tuesday. It knows our organizations, our pipeline, our strategic initiatives, our capacity constraints, and the thousand invisible threads between them.&lt;/p&gt;

&lt;p&gt;We're not releasing all of that. But we are releasing the core of it — a subset that captures the pattern: a graph-based context layer, exposed via MCP, that any team can deploy to give their LLM the structured context it's been missing. Enough to prove the thesis. Enough to build on.&lt;/p&gt;

&lt;p&gt;We've detailed the origin story and architecture in a companion post: &lt;a href="https://dev.to/patrickbot/we-used-patrick-to-make-patrick-no-this-is-not-another-llm-story-494a"&gt;How We Built Patrick&lt;/a&gt;. But the bigger claim here isn't about Patrick specifically. It's about where AI value actually lives.&lt;/p&gt;

&lt;p&gt;The model race will continue. Models will get bigger, faster, cheaper. And that's great — a better engine is always welcome. But the teams and organizations that will win with AI are the ones that figure out the context problem first.&lt;/p&gt;

&lt;p&gt;Build the map. The engine will follow.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>tooling</category>
      <category>agents</category>
    </item>
    <item>
      <title>Would appreciate feedback and thoughts on this.</title>
      <dc:creator>Sajjad Heydari</dc:creator>
      <pubDate>Thu, 12 Feb 2026 11:22:00 +0000</pubDate>
      <link>https://dev.to/mcsh/would-appreciate-feedback-and-thoughts-on-this-3lek</link>
      <guid>https://dev.to/mcsh/would-appreciate-feedback-and-thoughts-on-this-3lek</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/patrickbot" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__org__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F12474%2F3a13cff0-fc1b-4e7a-8b91-866ee870988c.png" alt="Patrick Bot" width="512" height="512"&gt;
      &lt;div class="ltag__link__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F182989%2F638d9a7e-df69-445d-883a-2f1895b2761e.jpg" alt="" width="640" height="640"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/patrickbot/we-used-patrick-to-make-patrick-no-this-is-not-another-llm-story-494a" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;We used Patrick to make Patrick. No this is not another LLM story.&lt;/h2&gt;
      &lt;h3&gt;Sajjad Heydari for Patrick Bot ・ Feb 12&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#automation&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#productivity&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#news&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>automation</category>
      <category>productivity</category>
      <category>news</category>
    </item>
    <item>
      <title>We used Patrick to make Patrick. No this is not another LLM story.</title>
      <dc:creator>Sajjad Heydari</dc:creator>
      <pubDate>Thu, 12 Feb 2026 00:21:13 +0000</pubDate>
      <link>https://dev.to/patrickbot/we-used-patrick-to-make-patrick-no-this-is-not-another-llm-story-494a</link>
      <guid>https://dev.to/patrickbot/we-used-patrick-to-make-patrick-no-this-is-not-another-llm-story-494a</guid>
      <description>&lt;p&gt;Well, that’s a partial lie. But let me set the scene.&lt;/p&gt;

&lt;p&gt;My colleagues and I are working on our medtech solution. There are only a handful of us, and every iteration of the software, every client interview and every new partnership opens up dozens of possible directions.&lt;/p&gt;

&lt;p&gt;Should we chase the hospital pilot or the telehealth integration? Do we build the mobile UX first, or lock down compliance? That clinic in Saskatchewan wants something slightly different from the research hospital in Winnipeg - do we fork the roadmap or find the overlap?&lt;br&gt;
We were keeping track of all of this the way most startups do: scattered notes, shared docs, the occasional spreadsheet that someone updates heroically for a week and then abandons. It worked until it didn't. One day we sat down for a strategy meeting and realized we had forty-plus organizations in our orbit, five product lines at various stages, over a hundred open tasks, and no coherent way to see how any of it connected.&lt;br&gt;
So we built Patrick.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Patrick actually is
&lt;/h2&gt;

&lt;p&gt;Patrick is not a dashboard. It's not a CRM. It's not a project management tool, though it can behave like all three when you need it to.&lt;br&gt;
At its core, Patrick is a purposeful summarizer built on a knowledge graph. Patrick doesn’t summarize things to get the word count down or make them faster to read; it summarizes documents with purpose. “How would this task’s development contribute to this initiative?” or “How does this organization’s need justify this development task, and are they aligned with the priorities set in the shareholders’ meeting?” are examples of how Patrick looks at things. Concepts connect to each other in meaningful ways, and the connections are the point.&lt;/p&gt;

&lt;p&gt;When we ask “what happens if we delay this feature?” Patrick doesn’t just show us a Gantt chart turning red. It traces the impact upstream and downstream, which customer needs go unmet, which prospects are affected, which tasks become orphaned. When we ask “what should we build next?” it doesn’t just sort by priority, it weighs value against effort against risk and tells us where the quick wins are hiding and where the strategic bets live.&lt;/p&gt;
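&lt;p&gt;Impact tracing of this kind is, at heart, a graph traversal: everything reachable from the delayed feature is affected. Here is a toy sketch of the idea, with invented entities, not Patrick's actual implementation:&lt;/p&gt;

```python
from collections import deque

# Hypothetical dependency edges: delaying a node impacts everything reachable from it.
impacts = {
    "feature:mobile-ux": ["need:bedside-charting", "task:ux-audit"],
    "need:bedside-charting": ["org:winnipeg-hospital"],
    "task:ux-audit": [],
    "org:winnipeg-hospital": [],
}

def downstream(node):
    """Breadth-first search: collect everything affected if `node` slips."""
    seen, queue = set(), deque([node])
    while queue:
        current = queue.popleft()
        for nxt in impacts.get(current, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(downstream("feature:mobile-ux"))
# ['need:bedside-charting', 'org:winnipeg-hospital', 'task:ux-audit']
```

&lt;p&gt;Upstream impact is the same traversal run over the reversed edges.&lt;/p&gt;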

&lt;p&gt;The thing that makes it different from a spreadsheet, a Notion board or an Obsidian vault is that Patrick understands relationships.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem nobody's talking about
&lt;/h2&gt;

&lt;p&gt;Here's the thing everyone gets wrong about AI tools. The conversation is always about the model. Which one is smarter, which one is faster, which one hallucinates less. But the model isn't the bottleneck anymore. You are.&lt;br&gt;
Or more precisely, what you tell it is.&lt;/p&gt;

&lt;p&gt;Think about the last time you asked an LLM to help you make a decision about your business. You probably spent ten minutes writing a prompt that tried to capture the full picture - who your clients are, what you're building, which deals are in play, what's blocking what. You gave it a slice of the truth, and it gave you a confident answer based on that slice. Maybe it was useful. Maybe it missed the thing you forgot to mention.&lt;/p&gt;

&lt;p&gt;Now think about the best executive you've ever worked with. When they sit down to think through a problem, they're not working from a prompt they typed in five minutes ago. They're working from a mental model of the entire organization - every relationship, every dependency, every half-finished initiative, every promise made to a client six months ago. That context is what makes their judgment good.&lt;/p&gt;

&lt;p&gt;Patrick is that context, externalized and structured so an AI can use it the way a great executive uses institutional memory.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it actually works
&lt;/h2&gt;

&lt;p&gt;The insight that led to Patrick isn't technical. It's this: the people who get the most out of AI aren't the ones with the best prompts. They're the ones who've figured out how to feed the AI a true and complete picture of their situation.&lt;/p&gt;

&lt;p&gt;We studied what those people do. The executives and operators who consistently get strategic value out of LLMs, not just help writing emails. What we found is that they all do some version of the same thing: they maintain structured information about their business and inject it into their conversations with AI. Some do it with elaborate Notion setups. Some do it with custom GPTs stuffed with documents. Most do it badly, or inconsistently, because maintaining that structure by hand is a second job nobody signed up for.&lt;/p&gt;

&lt;p&gt;So we took what works and turned it into a system. Patrick gathers information about your organization - your prospects, your products, your team's capacity, your strategic priorities, the relationships between all of it - and structures it into a graph. Then, when you ask a question, the AI doesn't get a cold prompt. It gets the full organizational picture, tuned to the specific question you're asking.&lt;/p&gt;
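&lt;p&gt;One way to picture the difference from a cold prompt: select the graph facts relevant to the question and prepend them as structured context. A toy sketch with invented facts and a deliberately naive keyword match (a real system would use semantic retrieval over the graph):&lt;/p&gt;

```python
# Toy illustration: inject relevant graph facts ahead of the user's question.
# All facts and names here are invented for the example.
facts = [
    ("PartnerCo", "needs", "FHIR integration"),
    ("our roadmap", "includes", "FHIR integration"),
    ("team capacity", "is", "2 free engineer-weeks"),
]

def build_prompt(question, keywords):
    """Keep only facts whose subject or object matches a keyword."""
    relevant = [f"- {s} {r} {o}" for s, r, o in facts
                if any(k in (s + o).lower() for k in keywords)]
    return "Context:\n" + "\n".join(relevant) + f"\n\nQuestion: {question}"

prompt = build_prompt("Should we pursue this partnership?", ["partnerco", "fhir"])
print(prompt)
```

&lt;p&gt;The model sees the partnership question already framed by who needs what and what is on the roadmap, instead of answering in a vacuum.&lt;/p&gt;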

&lt;p&gt;"Should we pursue this partnership?" isn't answered in a vacuum. It's answered in the context of what you're already building, who else needs the same capability, what it would cost, and whether it aligns with the direction you committed to last quarter.&lt;/p&gt;

&lt;h2&gt;
  
  
  The moment it got recursive
&lt;/h2&gt;

&lt;p&gt;A few months in, Patrick had become the nervous system of our company. Every meeting started with "what does Patrick say?" Every new lead got entered as an organization with needs linked to features. Every week we'd run a portfolio health check and a value analysis.&lt;/p&gt;

&lt;p&gt;Then came the question: what should we build next for Patrick itself?&lt;br&gt;
We had ideas. Lots of them. A chatbot interface so non-technical teammates could query it conversationally. Better reporting templates. An evaluation framework for scoring strategic initiatives.&lt;/p&gt;

&lt;p&gt;So we did what had become instinct. We opened Patrick, created features for each idea, linked them to the needs they'd serve, estimated the effort, and ran the analysis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Patrick told us what Patrick needed.
&lt;/h2&gt;

&lt;p&gt;It surfaced that the conversational interface would unlock the most value - not because it was the most technically impressive, but because it would let our CEO and business development lead query the system directly instead of asking me to run it. That single insight reframed our entire roadmap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why we're releasing it
&lt;/h2&gt;

&lt;p&gt;We built Patrick for a medtech company of five people managing forty-plus relationships across four provinces. But the problem it solves isn't a medtech problem. It isn't even a startup problem.&lt;/p&gt;

&lt;p&gt;Every knowledge worker using AI today is working with the same handicap: the AI is only as good as what you tell it, and nobody has time to tell it everything. Every prompt is a lossy compression of your actual situation. The more complex your work, the more you leave out, and the worse the output gets.&lt;/p&gt;

&lt;p&gt;Patrick is soon to be available as a skill on OpenClaw. That means if you're already running an OpenClaw instance - your own AI assistant on your own machine - you can install Patrick and start building a structured picture of your organization that your AI can actually use.&lt;br&gt;
You don't need to be technical. The skill comes with pre-built prompts crafted from patterns we've seen work - the same approaches that successful operators use to get strategic value out of AI, packaged so you don't have to figure them out yourself. Tell Patrick about your business. Feed it your meeting notes, your client list, your product roadmap. It structures it, connects it, and makes it available to your AI so that every question you ask is answered with the full picture.&lt;/p&gt;

&lt;p&gt;The result isn't a better chatbot. It's a better-informed one. And the difference between those two things is the difference between an AI that writes nice paragraphs and one that actually helps you think.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means for you
&lt;/h2&gt;

&lt;p&gt;If you've ever wished your AI assistant actually understood your business - not in the vague, "I'll pretend I remember" way, but in the "I know your three biggest prospects, what each of them needs, and which of your features satisfies two of them at once" way - that's what Patrick does.&lt;br&gt;
We used it to build a medtech company. Then we used it to build itself. Now we want to see what you build with it.&lt;/p&gt;

&lt;p&gt;Patrick is soon to be available as an OpenClaw skill. Install it, teach it your business, and start asking better questions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://patrickbot.io" rel="noopener noreferrer"&gt;https://patrickbot.io&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>productivity</category>
      <category>news</category>
    </item>
    <item>
      <title>The unreasonable effectiveness of working with a live programming image</title>
      <dc:creator>Sajjad Heydari</dc:creator>
      <pubDate>Fri, 06 Oct 2023 09:03:20 +0000</pubDate>
      <link>https://dev.to/mcsh/the-unreasonable-effectiveness-of-working-with-a-live-programming-image-44kj</link>
      <guid>https://dev.to/mcsh/the-unreasonable-effectiveness-of-working-with-a-live-programming-image-44kj</guid>
      <description>&lt;p&gt;I have always been fascinated by unconventional programming tools and methods. Partially because learning them helps me become a better programmer in my job, and partially because the genius behind some of these systems tickles my brain in just the right way. Whatever the reason might be, I aim to tryout new things every few projects, and more than often it pays off.&lt;/p&gt;

&lt;p&gt;Although a data scientist by trade, I'd like to think of myself as someone who creates tools and automations, and tools to make those automations easier, and automations to make those tool creations more powerful. But there are only so many things you can automate in your own day-to-day life. After a reading list system, a project manager, a wiki, a diary and quite a few odd gadgets to do your bidding around the house, you start inventing imaginary problems to solve in your free time. You might even start to automate things in your browser-based online games so you can focus more on enjoying the experience rather than on trying to min-max your way through the game.&lt;/p&gt;

&lt;p&gt;This blog post is a report of my time playing a fascinating online game called &lt;a href="https://www.torn.com/2855875"&gt;Torn&lt;/a&gt; and the many tools I started creating in it, and how &lt;code&gt;LISP&lt;/code&gt;, of all things, helped me enjoy programming even more than I normally do.&lt;/p&gt;

&lt;p&gt;A very brief word about the game: there are numbers involved, and randomness, and datasets and, most importantly, APIs. It doesn't take an expert to realize you can try to conquer the randomness and predict the future, thereby shaping your path slightly better. But as with most text-based games, Torn is an incredibly complex game, and it would be foolish to try to replicate the entirety of its logic in your own toy project, right?&lt;/p&gt;

&lt;p&gt;The first thing I realized when I took my first stab at this game was that, unlike some other games, a stupidly complex Excel file with many lines of macros is not enough; I needed something more powerful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ezhUy63Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i.imgur.com/iJ2NHGK.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ezhUy63Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i.imgur.com/iJ2NHGK.jpg" alt="I sometimes think I go slightly overboard with these stuff" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;I sometimes think I go slightly overboard with these stuff&lt;/center&gt;

&lt;p&gt;So I took a look at my wiki and decided to give Common Lisp another go, and see how far I could get with it. At first I made a small script to notify me when certain things were running low in our group's inventory, an easy enough task that didn't take long to implement. But then someone asked if I could modify this alert to trigger when an item ran lower than 250 instead of 50, and I updated the code without restarting the server executable.&lt;/p&gt;

&lt;p&gt;It seems quite insignificant, so let me go into the details of what exactly happened.&lt;/p&gt;

&lt;p&gt;In any software program, there are two sources of values: either they are hard-coded in the executable somewhere, meaning they are part of the code's instructions, or they are loaded from somewhere else, in other words they are the code's input. In most programming languages the distinction is pretty clear, and although you can normally call the same code with different inputs, in order to change the instructions you have to restart the system. But not in Lisp.&lt;/p&gt;

&lt;p&gt;You see, Common Lisp has a running image: it contains everything the Lisp process has access to, including itself, all the packages and all the function definitions. And the funny thing is, the instructions themselves are modifiable, so you can change an instruction while the image is running. This isn't some hack that requires your code to bend over backwards; it's a built-in feature of the language. You just redefine a function, add a case to a method, or change the value of a variable, and at most you get a warning back.&lt;/p&gt;

&lt;p&gt;Insignificant as it seems, this gave me an idea. I didn't have to replicate the entire logic of the game in order to do the calculations I wanted; I could implement just enough to get answers now and leave holes for the future. So I started creating classes and methods for calculating things while the image was running, and I would make frequent requests to the image to calculate things for me. Every time I got an error because something was not defined, I would define it with as little info as I could, tell the debugger to resume the calculations, and slowly, over time, I had enough of the game logic emulated to make certain calculations without any issues.&lt;/p&gt;

&lt;p&gt;But this magic wasn't limited to the game logic; it extended to the world I was interfacing with as well. Seeing as Torn is a social game, I connected the service to a Discord bot, initially using the &lt;code&gt;lispcord&lt;/code&gt; library, and to a small web page using &lt;code&gt;CLOG&lt;/code&gt;. Both are amazing tools, and reading their source code helped me gain a better understanding of Lisp. But there were problems here and there...&lt;/p&gt;

&lt;p&gt;&lt;code&gt;lispcord&lt;/code&gt; was fairly outdated, and it was missing many features available in discord. So in my project I created a file called &lt;code&gt;lispcord-fixes.lisp&lt;/code&gt; and I gradually included things that was missing in the lispcord library, sometimes creating classes and method instances for new types of messages and channels that weren't in &lt;code&gt;lispcord&lt;/code&gt;, and sometimes I'd overwrite the existing ones to provide a feature that was missing. It felt incredibly empowering being able to do that, while a discord bot was running. I would see an error pop up because someone called the bot in a thread and &lt;code&gt;lispcord&lt;/code&gt; wasn't supporting threads, so I would look at how it is handling channels and replicate the code to work for threads too, and the bot replied to the original request, only about 30 minutes later.&lt;/p&gt;

&lt;p&gt;As for the CLOG side: CLOG itself is meant to help you create websites, but I also wanted to serve APIs, so I changed one of CLOG's built-in functions to route requests with certain paths to my own function, which would decide what calculation to run.&lt;/p&gt;

&lt;p&gt;All of these, without restarting the server.&lt;/p&gt;

&lt;p&gt;So what does an average programming session look like?&lt;/p&gt;

&lt;p&gt;I often start by connecting &lt;code&gt;slime&lt;/code&gt; to the Lisp image running on the server and checking for any debugger prompts that might have been captured. Remember, I have only the bare minimum implemented, so there could be many missing features. If there were no errors, I start by calling a function that will cause one.&lt;/p&gt;

&lt;p&gt;The debugger often pops up with a stack trace, showing named local entities at each step. I first try to find what caused the issue: is it a missing method? An undefined input? After identifying the problem, I use an editor shortcut to jump to the definition. Initially I would navigate to where the function was defined by hand, but after a while I added the following code to my Emacs config, allowing me to map file names as provided by the Lisp image to file names as stored on my machine, because remember, I'm connected to the "production" server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight common_lisp"&gt;&lt;code&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;defun&lt;/span&gt; &lt;span class="nv"&gt;connect-lisp-server&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;interactive&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;setq&lt;/span&gt; &lt;span class="nv"&gt;mslime-dirlist&lt;/span&gt; &lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="s"&gt;"/home/sajjad/src/2023/3.torn/"&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt; &lt;span class="s"&gt;"/home/ubuntu/torn/"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                         &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/home/sajjad/common-lisp/clog/"&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt; &lt;span class="s"&gt;"/home/ubuntu/.quicklisp/dists/quicklisp/software/clog-20230214-git/"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                         &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/home/sajjad/quicklisp/"&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt; &lt;span class="s"&gt;"/home/ubuntu/.quicklisp/"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                         &lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;setq&lt;/span&gt; &lt;span class="nv"&gt;slime-from-lisp-filename-function&lt;/span&gt;
          &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nv"&gt;dirs&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;car&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;seq-filter&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;string-prefix-p&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;cdr&lt;/span&gt; &lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;f&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="nv"&gt;mslime-dirlist&lt;/span&gt;&lt;span class="p"&gt;))))&lt;/span&gt;
              &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nv"&gt;dirs&lt;/span&gt;
                  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;concat&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;car&lt;/span&gt; &lt;span class="nv"&gt;dirs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;substring&lt;/span&gt; &lt;span class="nv"&gt;f&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;length&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;cdr&lt;/span&gt; &lt;span class="nv"&gt;dirs&lt;/span&gt;&lt;span class="p"&gt;))))&lt;/span&gt;
                &lt;span class="nv"&gt;f&lt;/span&gt;&lt;span class="p"&gt;))))&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;setq&lt;/span&gt; &lt;span class="nv"&gt;slime-to-lisp-filename-function&lt;/span&gt;
          &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nv"&gt;dirs&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;car&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;seq-filter&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;string-prefix-p&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;car&lt;/span&gt; &lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;f&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="nv"&gt;mslime-dirlist&lt;/span&gt;&lt;span class="p"&gt;))))&lt;/span&gt;
              &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nv"&gt;dirs&lt;/span&gt;
                  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;concat&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;cdr&lt;/span&gt; &lt;span class="nv"&gt;dirs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;substring&lt;/span&gt; &lt;span class="nv"&gt;f&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;length&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;car&lt;/span&gt; &lt;span class="nv"&gt;dirs&lt;/span&gt;&lt;span class="p"&gt;))))&lt;/span&gt;
                &lt;span class="nv"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)))))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I start adding whatever is missing and compile it right into the running image. At this point the compiler might throw an error or two because of unused variable names or, in rare cases, because of a derived type mismatch.&lt;/p&gt;

&lt;p&gt;Here's where the magic happens: I go back to the debugging window and "retry" the request, and more often than not, it just works, replying to requests my friends made the night before as if nothing happened.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>learning</category>
      <category>coding</category>
      <category>lisp</category>
    </item>
    <item>
      <title>How ButcherBox Made E-Commerce 600% Faster with Jamstack</title>
      <dc:creator>Sajjad Heydari</dc:creator>
      <pubDate>Fri, 09 Oct 2020 13:15:13 +0000</pubDate>
      <link>https://dev.to/fabric_commerce/how-butcherbox-made-e-commerce-600-faster-with-jamstack-5kf</link>
      <guid>https://dev.to/fabric_commerce/how-butcherbox-made-e-commerce-600-faster-with-jamstack-5kf</guid>
      <description>&lt;p&gt;&lt;a href="https://www.butcherbox.com/"&gt;ButcherBox&lt;/a&gt;, a meat subscription and delivery service, recently moved the tech stack for their e-commerce site from a PHP monolith to &lt;a href="https://resources.fabric.inc/glossary/jamstack"&gt;Jamstack&lt;/a&gt; to improve performance and agility while lowering costs. In this post, we will break down ButcherBox's journey to Jamstack, &lt;a href="https://youtu.be/pGjEz9bVoos"&gt;as told by Jeff Gnatek&lt;/a&gt;, their head of engineering.&lt;/p&gt;

&lt;p&gt;If you run a subscription-based e-commerce business and want to reduce technical debt while decreasing page load time and development time, their story will give you the direction you need for a successful migration. But first, for those who don't know: Jamstack stands for JavaScript, API, and Markup stack.&lt;/p&gt;

&lt;p&gt;Jamstack is a web development approach focused on using client-side JavaScript, reusable APIs, and prebuilt Markup. This approach separates different concerns in different areas, allowing for faster development, easier maintenance, and reduced cost and complexity in comparison to monolithic PHP code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Starting with a Monolith
&lt;/h2&gt;

&lt;p&gt;In 2015, ButcherBox started on Kickstarter and quickly brought their website to life with WordPress. Non-technical people found it easy to work with, PHP and jQuery developers were easy to hire, and there was no expensive DevOps work involved. All it took to publish a page was to press "Publish" in WordPress, and specific behavior was added through plugins.&lt;/p&gt;

&lt;p&gt;Unfortunately, as the complexity of ButcherBox grew, their performance dropped. Working with WordPress's PHP means having frontend and backend code in the same files. Although this is standard, it becomes problematic as the number of developers on a team grows.&lt;/p&gt;

&lt;p&gt;Generally speaking, backend code is involved in the business logic and security of the system while frontend code is involved in presenting the information in a beautiful and intuitive way. Having both of them in a single file is okay for small teams, but not when there are designated developers working explicitly on one of the two. Managing conflicts in file edits can, on its own, double development time and the time spent on related tasks.&lt;/p&gt;

&lt;p&gt;Gnatek says that their bottleneck was their platform. Whenever they wanted to introduce a new feature, they needed to work on it for a long time in advance, which limited their ability to follow trends and implement new ideas on the fly.&lt;/p&gt;

&lt;p&gt;In addition to this, the complexity of their website resulted in having page load times around 4 seconds, sometimes even as high as 7 seconds. This is a huge issue for an e-commerce website as page load time has a &lt;a href="https://resources.fabric.inc/blog/ecommerce-site-speed"&gt;great impact on conversion rates&lt;/a&gt;. Realizing this, they decided to change how they developed and managed their website.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decoupling the Code
&lt;/h2&gt;

&lt;p&gt;ButcherBox started separating their building blocks and concerns into different areas. The goal was to swap the engine out mid-flight: to change the system incrementally, moving from the old, slow one to the new, faster one.&lt;br&gt;
This method, known as the &lt;a href="https://martinfowler.com/bliki/StranglerFigApplication.html"&gt;Strangler Fig Pattern&lt;/a&gt;, uses a &lt;a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/"&gt;reverse proxy&lt;/a&gt; such as &lt;a href="https://www.nginx.com/"&gt;Nginx&lt;/a&gt; to determine where each user's request should be handled. If the request accesses something that has been migrated to the new codebase, it is forwarded there; otherwise the old website is served. ButcherBox started by routing the one-time purchase pages of their shop to a new website without affecting the old one.&lt;/p&gt;
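&lt;p&gt;The routing decision can be sketched in a few lines. This is an illustrative Python sketch, not ButcherBox's actual setup; in practice the decision lives in the reverse proxy configuration, and the path prefixes and backend names below are made up:&lt;/p&gt;

```python
# Strangler Fig routing sketch: requests whose path has been migrated go
# to the new stack; everything else falls through to the legacy site.
# The prefixes and backend names here are hypothetical.
MIGRATED_PREFIXES = ["/shop/one-time", "/recipes"]

def route(path):
    """Return which backend should handle this request path."""
    if any(path.startswith(prefix) for prefix in MIGRATED_PREFIXES):
        return "new-jamstack-frontend"
    return "legacy-wordpress"
```

In an Nginx deployment, the same logic would be expressed as &lt;code&gt;location&lt;/code&gt; blocks pointing at two different upstreams.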

&lt;p&gt;The new website was created by separating the frontend and backend code bases, using &lt;a href="https://resources.fabric.inc/blog/rest-apis"&gt;APIs&lt;/a&gt; to communicate between the two. This strategy on its own improved perceived page load time. It also improved security, since they no longer needed to expose a PHP server. They relied on &lt;a href="https://auth0.com/"&gt;Auth0&lt;/a&gt; to handle their authentication and authorization process and used &lt;a href="https://www.netlify.com/"&gt;Netlify&lt;/a&gt; to build and serve their website over the internet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Frontend
&lt;/h3&gt;

&lt;p&gt;Frontend development was done using &lt;a href="https://www.gatsbyjs.com/"&gt;GatsbyJS&lt;/a&gt;. This framework is based on &lt;a href="https://reactjs.org/"&gt;React&lt;/a&gt; and allows for the reuse of components which reduces development time.&lt;/p&gt;

&lt;p&gt;Gatsby, and React in general, is based on the idea of components that work independently of each other. As an example, your recent-articles column doesn't need to be aware of your footer, so by separating their code we get reusable components and faster development. Gatsby also pre-renders pages into static markup at build time, which improves page load time at the cost of rebuilding the site on the server every time it is updated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backend
&lt;/h3&gt;

&lt;p&gt;After separating the frontend and backend, ButcherBox started separating backend components. They began by dividing the code into two parts: the core business logic and the ephemeral content, such as the product catalog, inventory, merchandising, and promotional ads. Separating these previously entangled parts allowed for higher test coverage and reduced the risk that a single update would crash the whole website. It also allowed for more agile development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Replacing the Admin
&lt;/h2&gt;

&lt;p&gt;To replace the WordPress admin they used &lt;a href="https://nova.laravel.com/"&gt;Laravel's Nova&lt;/a&gt;, a highly customizable admin panel written in PHP. Nova helped the non-technical members of the team work with the system without having to deal with direct database interfaces. To add food recipes (content that supports the selling of their meat products), the team moved to &lt;a href="https://www.contentful.com/"&gt;Contentful&lt;/a&gt;, which provides a full-featured editor that automatically triggers a build on Netlify whenever there is new content to be served.&lt;/p&gt;

&lt;p&gt;This proved to be problematic at first: having many different components and build-time rendering together means that any small change requires a full rebuild of the website, sometimes taking as long as 20 minutes. To work around this issue, the team started creating different instances of the frontend and backend, each working independently. This meant that adding a new food recipe wouldn't trigger a build for the Shop page.&lt;br&gt;
To speed things up, they shared components among the instances wherever possible. For example, to reuse Gatsby's logic on the frontend, they used &lt;a href="https://www.gatsbyjs.com/docs/themes/"&gt;Gatsby Themes&lt;/a&gt;, a collection of reusable, shareable functionality to be used among different Gatsby instances.&lt;/p&gt;

&lt;p&gt;This journey took their website from a slow monolith in PHP with increasing complexity and development costs to a fast, easily &lt;a href="https://resources.fabric.inc/blog/scalable-commerce"&gt;scalable e-commerce site&lt;/a&gt;. The team reported that their website was about 600% faster on average (going from 4000ms to under 600ms), and that the development team was able to launch a new part of the website within days.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is Jamstack Right for You?
&lt;/h2&gt;

&lt;p&gt;Jamstack is a great method for web development, but like all other methodologies, it is not for everyone. There are certain things to consider before choosing to work with Jamstack.&lt;/p&gt;

&lt;h3&gt;
  
  
  Considerations
&lt;/h3&gt;

&lt;p&gt;First, consider the data you're presenting on your site. Does it need to be dynamic, varying from user to user? A typical example would be social media sites that show you your friends' activities. Having dynamic data is not necessarily bad, but it requires a set of APIs to provide it. That said, I wouldn't recommend using Jamstack if more than a third of your data needs to be dynamic.&lt;/p&gt;

&lt;p&gt;Second, consider how fast the updates to your website should go live. Can the changes wait a few minutes to propagate? This would be okay for a blog post or a food recipe, but not so much for stock counts and pricing. Again, if your data falls into this category, you can deliver it to the client through an API. But that becomes too much trouble if all or most of your data requires real-time updates.&lt;/p&gt;

&lt;p&gt;Finally, do you already have an API or different sources of content? The data presented on your website will usually come from different sources, such as Contentful, a database, and an API. If you already have different sources of data, that's a plus. Otherwise, you have to create them as you go along. This is exactly what ButcherBox did.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Switching to Jamstack provides better security, better performance, reduced complexity in code, and low-friction hosting. The only apparent downside is the cost of migrating to such a system.&lt;/p&gt;

&lt;p&gt;Keep in mind that Jamstack relies on many third-party services, which is great for reducing costs and complexity of your codebase, but any downtime on those services could potentially affect you.&lt;/p&gt;

&lt;p&gt;With all that said, are you ready to switch your monolithic e-commerce site to Jamstack? If so, &lt;a href="//fabric.inc"&gt;Fabric&lt;/a&gt; can help simplify the process.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>api</category>
      <category>microservices</category>
      <category>jamstack</category>
    </item>
    <item>
      <title>An Introduction to REST APIs for E-Commerce Marketers</title>
      <dc:creator>Sajjad Heydari</dc:creator>
      <pubDate>Thu, 24 Sep 2020 16:06:41 +0000</pubDate>
      <link>https://dev.to/fabric_commerce/an-introduction-to-rest-apis-for-e-commerce-marketers-2pe</link>
      <guid>https://dev.to/fabric_commerce/an-introduction-to-rest-apis-for-e-commerce-marketers-2pe</guid>
      <description>&lt;p&gt;The terms REST API and API are thrown around a lot when talking about &lt;a href="https://resources.fabric.inc/blog/headless-commerce"&gt;headless commerce&lt;/a&gt;, but what do they really mean? In this post we'll look at APIs and REST APIs, what they are, and what makes them useful. We'll do this in a way that's easy for e-commerce marketers to understand.&lt;/p&gt;

&lt;h2&gt;
  
  
  APIs
&lt;/h2&gt;

&lt;p&gt;Applications need to communicate with each other to integrate or extend one another. For instance, a checkout service needs to communicate with a marketing automation service to send cart abandonment emails.&lt;/p&gt;

&lt;p&gt;Applications often talk with one another by sending 0s and 1s to each other (i.e. binary code), but that makes the talking protocol extremely hard to develop and maintain. So instead of writing a different talking protocol every time, we use an &lt;strong&gt;A&lt;/strong&gt;pplication &lt;strong&gt;P&lt;/strong&gt;rogramming &lt;strong&gt;I&lt;/strong&gt;nterface, or API.&lt;/p&gt;

&lt;p&gt;An API defines an abstraction that makes software easier for programmers to develop and maintain. Think of the programmers at a retail company who are responsible for developing and maintaining a headless commerce stack that relies heavily on APIs: they build against well-defined interfaces, not raw protocols.&lt;/p&gt;

&lt;h2&gt;
  
  
  REST APIs
&lt;/h2&gt;

&lt;p&gt;REST stands for &lt;strong&gt;RE&lt;/strong&gt;presentational &lt;strong&gt;S&lt;/strong&gt;tate &lt;strong&gt;T&lt;/strong&gt;ransfer. It is a style of API defined over the HTTP protocol, meaning the underlying software doesn't have to be on the same computer, since HTTP is understood globally across the Internet. REST APIs are the standard in web and mobile development, with hundreds of libraries and tools that make the development process smoother.&lt;/p&gt;

&lt;p&gt;REST APIs and APIs in general allow software to rely on each other. Do you have e-commerce software that powers your online store and want it to talk to users on Twitter as part of an omnichannel sales strategy? Well, Twitter has a REST API and the commerce software in your tech stack should too. Or what if you have an online store and want to develop a mobile app? If your online store relies on a headless CMS, it already has an API that you can use for the mobile app.&lt;/p&gt;

&lt;p&gt;A headless CMS is an integral part of headless commerce. It is a unified API that lets you send your content and products to users wherever they are—your website, mobile app, or wearable device. This creates a better, more seamless brand experience in comparison to having parallel communication channels that are unaware of each other.&lt;/p&gt;

&lt;h4&gt;
  
  
  Technical Overview
&lt;/h4&gt;

&lt;p&gt;A REST API relies on performing HTTP actions such as &lt;code&gt;GET&lt;/code&gt;, &lt;code&gt;POST&lt;/code&gt;, &lt;code&gt;PUT&lt;/code&gt;, and &lt;code&gt;DELETE&lt;/code&gt; on a path, such as &lt;code&gt;/users/&lt;/code&gt; or &lt;code&gt;/orders/1812&lt;/code&gt;. Requests have a host, such as &lt;code&gt;api.example.com&lt;/code&gt;, and other parameters, like the payload and headers, that describe the details of the action. In return, the server sends back a &lt;code&gt;JSON&lt;/code&gt; response explaining what happened when the action ran.&lt;/p&gt;

&lt;p&gt;For example, let's say I want to see the latest personalized offers delivered by software like our &lt;a href="https://fabric.inc/offers"&gt;Offers&lt;/a&gt; product. I would send a &lt;code&gt;GET&lt;/code&gt; request to &lt;code&gt;/offers&lt;/code&gt; with a query string &lt;code&gt;?orderBy='date'&lt;/code&gt; with my credentials in the headers. Of course, the naming depends on the software, which is often explained in the documentation. Which brings us to the next point...&lt;/p&gt;
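&lt;p&gt;Using only Python's standard library, that hypothetical call could be assembled like this. The host, path, and credential header are placeholders matching the example above, not a real API:&lt;/p&gt;

```python
from urllib import parse, request

# Hypothetical endpoint and credential, following the example above.
query = parse.urlencode({"orderBy": "date"})
req = request.Request(
    "https://api.example.com/offers?" + query,
    headers={"Authorization": "Bearer MY_TOKEN"},  # placeholder credential
    method="GET",
)

# request.urlopen(req) would send it; the server replies with a JSON
# body describing the offers, plus a status code and headers.
print(req.get_method(), req.full_url)
```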

&lt;h4&gt;
  
  
  Documentation!
&lt;/h4&gt;

&lt;p&gt;Every programmer hates writing API docs, but they are probably the most vital thing. Without proper documentation, the process of utilizing APIs becomes a guessing game. Guess what this field means, guess what type it expects, and guess what the return types are!&lt;/p&gt;

&lt;p&gt;These days there are lots of different tools such as &lt;a href="https://www.gitbook.com/"&gt;GitBook&lt;/a&gt;, &lt;a href="https://swagger.io/"&gt;Swagger&lt;/a&gt;, and &lt;a href="https://www.postman.com/"&gt;Postman&lt;/a&gt; that help teams properly document different versions of their API and explain expected behavior and calling conventions.&lt;/p&gt;

&lt;p&gt;It's common to have the documentation in English along with code examples, so keep an eye out for them. These really help developers—and even marketers who are considering different headless solutions—familiarize themselves with the API and deliver a better, faster product. For example, &lt;a href="https://api.fabric.inc/"&gt;Fabric API's documentation&lt;/a&gt; explains each action with its expected behavior, fields, example request, and example response—with a proper changelog to keep track of the latest updates.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes an API Useful?
&lt;/h2&gt;

&lt;p&gt;Aside from documentation, certain design choices help make a REST API the proper tool for preventing the headaches of integrating different software. In this part, we'll look at some of the most important of these design aspects.&lt;/p&gt;

&lt;h4&gt;
  
  
  Naming Convention
&lt;/h4&gt;

&lt;p&gt;Maybe the most important aspect after the documentation is the naming convention. There is no right or wrong way of doing it, but there are a couple of different things to consider. First of all, the names should be meaningful. You don't want to work with an API that names the paths &lt;code&gt;/foo&lt;/code&gt; and &lt;code&gt;/bar&lt;/code&gt;. It would be a horror to understand what's happening while developing and debugging it.&lt;/p&gt;

&lt;p&gt;A good naming convention is consistent: all of the names have proper meaning in one language (often English), and they are capitalized in the same way. Capitalization has different forms, such as &lt;code&gt;camelCase&lt;/code&gt;, &lt;code&gt;UpperCase&lt;/code&gt;, &lt;code&gt;kebab-case&lt;/code&gt;, &lt;code&gt;snake_case&lt;/code&gt;, and a few others; mixing them creates confusion all around. It's also frowned upon to have verbs in path names. For instance, you don't want a path &lt;code&gt;/get_offers&lt;/code&gt;; you want &lt;code&gt;GET&lt;/code&gt; on &lt;code&gt;/offers&lt;/code&gt;. Keep an eye out for these kinds of naming conventions.&lt;/p&gt;

&lt;h4&gt;
  
  
  UX
&lt;/h4&gt;

&lt;p&gt;Yes, UX. A proper API reflects the user experience in its structure. If, for example, you need a way to add a list of items to the shopping cart at once, as with bundled offers, the API should provide it. And if you want to be able to see the offers that are going to expire soon, the API should have that option, too. If, instead, the API only gives you &lt;code&gt;GET&lt;/code&gt; on a list of all offers and leaves the sorting to you, just seeing the offers that expire soon becomes a slow, long process on the user's device.&lt;/p&gt;

&lt;h4&gt;
  
  
  Response
&lt;/h4&gt;

&lt;p&gt;The response of the API often has a direct impact on speed in many different ways, the most apparent one being the size of the response. If the server returns a 5 MB file every time you call it, and you want to call it every second, that's probably going to cause problems. Having a fast API with a small response size is a sure way to decrease page load time, which &lt;a href="https://resources.fabric.inc/blog/ecommerce-site-speed"&gt;increases conversion rates&lt;/a&gt; for e-commerce. (Often, servers &lt;code&gt;gzip&lt;/code&gt; the response to make it smaller, which is always a plus.)&lt;/p&gt;
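&lt;p&gt;You can see the effect of &lt;code&gt;gzip&lt;/code&gt; on a repetitive JSON payload with a few lines of Python; the payload here is synthetic, standing in for a long list of offers:&lt;/p&gt;

```python
import gzip
import json

# A synthetic, repetitive JSON payload, like a long list of offers.
payload = json.dumps(
    [{"id": i, "name": "offer-%d" % i, "active": True} for i in range(1000)]
).encode("utf-8")

compressed = gzip.compress(payload)

# Repetitive field names compress very well, so the bytes that go over
# the wire are a small fraction of the raw JSON size.
print(len(payload), len(compressed))
```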

&lt;p&gt;There is no exact measurement to say that a response is too long; it's application-dependent. But as a rule of thumb, keep an eye out for any unnecessary information the API is returning. For example, we might not want to see comments on products when getting a list of the latest deals!&lt;/p&gt;

&lt;p&gt;Aside from the JSON, the response also contains a status code and headers. You might recognize the &lt;a href="https://http.cat/404"&gt;404 status code&lt;/a&gt;, which means the server didn't find what you were looking for. Having proper status codes and headers ensures an easier integration with different toolings, which decreases the development and maintenance cost for any customization.&lt;/p&gt;

&lt;h4&gt;
  
  
  Versioning
&lt;/h4&gt;

&lt;p&gt;Software always gets updates. APIs change over time. This could happen for any number of reasons, such as design flaws, security issues, or new, previously unseen functionality. The problematic part is that old code might stop working if the client is not updated. One possible workaround is to allow the user to select their version of the software; another is to reflect it in the calling convention. This could be in a header (&lt;code&gt;API_VERSION=2020Sep15&lt;/code&gt;), in the path (&lt;code&gt;/v1/users&lt;/code&gt;), or even part of the query string (&lt;code&gt;?api_version=1.2&lt;/code&gt;).&lt;/p&gt;
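&lt;p&gt;Whichever convention you pick, extracting the version on the server side is straightforward. Here is a minimal sketch for the path-based form; the default-to-1 fallback is my own assumption, not a standard:&lt;/p&gt;

```python
import re

def api_version(path):
    """Read the version from a '/vN/...' request path, defaulting to 1
    when no version segment is present (an assumed convention)."""
    match = re.match(r"/v(\d+)/", path)
    return int(match.group(1)) if match else 1
```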

&lt;p&gt;The important thing is that versioning exists at all. Most development time is spent finding the root cause of problems, and proper versioning reduces this cost drastically. Without it, things might break upon updating and it would be hard to tell why, increasing the cost of development!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you're a marketer in e-commerce you should now have enough technical chops to have a high-level conversation with your development team about performance, versioning, and capabilities of the APIs in your tech stack.&lt;/p&gt;

&lt;p&gt;Remember: documentation, consistency, predictability, response size, and performance are things you need to look out for when adding new commerce software to your tech stack. As a marketer, it's important to know about these things as they can impact conversions and revenue.&lt;/p&gt;

</description>
      <category>api</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Learning which language helped you understand cs concepts better?</title>
      <dc:creator>Sajjad Heydari</dc:creator>
      <pubDate>Sat, 11 Jul 2020 18:08:35 +0000</pubDate>
      <link>https://dev.to/mcsh/learning-which-language-helped-you-understand-cs-concepts-better-2539</link>
      <guid>https://dev.to/mcsh/learning-which-language-helped-you-understand-cs-concepts-better-2539</guid>
      <description>&lt;p&gt;There are many programming languages out there, and many (including me) not only work with more than one on a daily basis, but also regularly learn new ones to see what's out there.&lt;/p&gt;

&lt;p&gt;So I'm asking you, devs, what taught you the most concepts? Points if it's an uncommon one.&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
    <item>
      <title>Deep Learning and Machine Learning - an Overview</title>
      <dc:creator>Sajjad Heydari</dc:creator>
      <pubDate>Sat, 13 Jun 2020 05:03:03 +0000</pubDate>
      <link>https://dev.to/mcsh/deep-learning-and-machine-learning-an-overview-310o</link>
      <guid>https://dev.to/mcsh/deep-learning-and-machine-learning-an-overview-310o</guid>
      <description>&lt;p&gt;The idea of Artificial Intelligence as portrayed in science fiction, was so interesting to me that I always used to daydream about having a smart computer doing what I asked it to do. In fact, that was one of the main reasons I started learning programming and studied computer science in university. But I soon found out that classical AI is not what was promised to us, it's just search algorithms in slightly different scenarios. But at the same time, I found something much more interesting, machine learning and deep learning. You see, although technically speaking deep learning is a subset of machine learning, which itself is a subset of artificial intelligence, the section known as ML is way more powerful than the classical AI stuff. &lt;/p&gt;

&lt;p&gt;ML and DL allow the system to observe what it is supposed to do and then learn to do it on its own. You don't need to program every small detail, but at the same time, the system becomes much harder to manage. The structure of the algorithm, the gathering/cleaning/preprocessing of the data, and so much more affect your program that sometimes it's easier to just code the thing yourself instead of relying on ML. Yet when you have enough resources and can actually do it, ML usually does a better job than any other approach.&lt;/p&gt;

&lt;p&gt;So, how does it work?&lt;/p&gt;

&lt;p&gt;In this post and similar upcoming ones, I'm hoping to cover a few starting points and common pitfalls of ML and DL. The things that either helped me a lot after I learned them or the things that I struggled with a lot. These might not be the complex stuff you are hoping them to be, but they are important nonetheless. So without further ado, let's dive in.&lt;/p&gt;

&lt;h1&gt;
  
  
  Objective
&lt;/h1&gt;

&lt;p&gt;So, what does an ML algorithm, including deep learning, do? Well that depends, but typically it's a function that takes inputs and produces outputs, and depending on what these are, it's one of the following things (or a mixture of them):&lt;/p&gt;

&lt;h2&gt;
  
  
  Categorizing
&lt;/h2&gt;

&lt;p&gt;In this scenario, the function f decides which category the inputs belong to. A common example is image classification, where the algorithm decides whether it is seeing a cat or a dog. Of course, it could be more complex and categorize hundreds of things at once. Another example is spam detection: given an email, the algorithm has to decide whether it is spam or not.&lt;/p&gt;
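&lt;p&gt;To make this concrete, here is a toy classifier in plain Python: a nearest-centroid rule over invented (height, weight) measurements. Real image classifiers are far more complex, but the shape is the same: features in, category out.&lt;/p&gt;

```python
def centroid(points):
    """Average of a list of 2D points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(x, training):
    """Return the label whose training centroid is closest to x."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    centers = {label: centroid(pts) for label, pts in training.items()}
    return min(centers, key=lambda label: dist2(x, centers[label]))

# Made-up (height cm, weight kg) samples for two categories.
training = {
    "cat": [(25, 4), (23, 5), (27, 6)],
    "dog": [(60, 25), (50, 20), (55, 30)],
}
print(classify((24, 5), training))
```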

&lt;h2&gt;
  
  
  Regression
&lt;/h2&gt;

&lt;p&gt;Regression algorithms try to assign a numerical value to the data: for example, estimating the price of a house given its description, like how many rooms and bathrooms it has or what part of the city it is in.&lt;/p&gt;
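&lt;p&gt;The simplest regression is fitting a straight line with ordinary least squares. The numbers below are invented, purely to show the mechanics:&lt;/p&gt;

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Invented data: number of rooms vs. price (in thousands).
rooms = [1, 2, 3, 4]
prices = [100, 150, 200, 250]
a, b = fit_line(rooms, prices)
print(a, b)  # slope and intercept of the fitted line
```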

&lt;h2&gt;
  
  
  Generation
&lt;/h2&gt;

&lt;p&gt;Here the algorithm tries to create new things based on what it has seen previously, like the poem generation or face generation models that usually make the news.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clustering
&lt;/h2&gt;

&lt;p&gt;Clustering is somewhat similar to categorizing, but instead of assigning each item to a predefined category, we cluster data that look similar together. This helps us identify users with similar behavior, or flag irregular banking activity to protect your identity.&lt;/p&gt;
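&lt;p&gt;The best-known clustering algorithm is k-means. A compact sketch over invented 2D points, with a fixed number of refinement steps for simplicity:&lt;/p&gt;

```python
def kmeans(points, centers, steps=10):
    """Minimal k-means: assign points to the nearest center, recenter."""
    for _ in range(steps):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(
                range(len(centers)),
                key=lambda i: (p[0] - centers[i][0]) ** 2
                + (p[1] - centers[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [
            old if not cluster else (
                sum(x for x, _ in cluster) / len(cluster),
                sum(y for _, y in cluster) / len(cluster),
            )
            for old, cluster in zip(centers, clusters)
        ]
    return centers, clusters

# Two invented blobs of points and two rough starting centers.
points = [(0, 0), (1, 0), (0, 1), (10, 10), (9, 10), (10, 9)]
centers, clusters = kmeans(points, [(1, 1), (8, 8)])
print(centers)
```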

&lt;h2&gt;
  
  
  Control
&lt;/h2&gt;

&lt;p&gt;A control algorithm is exactly what it sounds like, it controls a system, such as a robot, and decides what it should do next.&lt;/p&gt;

&lt;p&gt;Of course, these are not the only categories and there are many more, but it's a start. Next, let's look at the type of training data we have:&lt;/p&gt;

&lt;h1&gt;
  
  
  Training Type
&lt;/h1&gt;

&lt;p&gt;Another useful way to categorize algorithms is by their training data: what do they need in order to be trained?&lt;/p&gt;

&lt;h2&gt;
  
  
  Supervised Learning
&lt;/h2&gt;

&lt;p&gt;Supervised algorithms require pairs of input/expected output to be trained. These are usually the classification or regression methods mentioned above. They are a great place to start learning about machine learning, as they can give you a base idea of how stuff works without being too complex.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unsupervised Learning
&lt;/h2&gt;

&lt;p&gt;Unsupervised learning is the exact opposite of supervised learning: it only requires inputs. The machine doesn't need any labels or outputs; it learns solely from the structure of the input. Generation and clustering typically fall into this category.&lt;/p&gt;

&lt;h2&gt;
  
  
  Semi-Supervised Learning
&lt;/h2&gt;

&lt;p&gt;The idea behind semi-supervised learning is simple: we take what we love about the previous approaches and... throw them away. So instead of having an expected output associated with every input, we do that only for a small subset of our data and leave the rest unlabeled. Then the algorithm uses both to improve its own behavior.&lt;/p&gt;
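&lt;p&gt;One common semi-supervised trick is self-training: label a few points by hand, then let the model propagate those labels to the unlabeled bulk. A minimal Python sketch, with entirely made-up data:&lt;/p&gt;

```python
# Start with a tiny labeled subset; everything else is unlabeled.
labeled = {1.0: "low", 9.0: "high"}
unlabeled = [1.3, 8.5, 0.7, 9.4]

while unlabeled:
    # Take the unlabeled point closest to anything we already trust...
    x = min(unlabeled, key=lambda u: min(abs(u - v) for v in labeled))
    # ...give it its nearest neighbour's label, and trust it from now on.
    nearest = min(labeled, key=lambda v: abs(x - v))
    labeled[x] = labeled[nearest]
    unlabeled.remove(x)
```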

&lt;h2&gt;
  
  
  Reinforcement Learning
&lt;/h2&gt;

&lt;p&gt;Reinforcement learning doesn't have input/expected output pairs; instead, we have inputs and a grader. Basically, after each output, the grader looks at the result and gives you a grade. For example, when training a walking robot, the grader might consider how human-like the walk was, how fast it was, and how many times the robot fell. The system then learns to get the best possible grade, and therefore learns to walk better. Reinforcement learning is used in many places, such as the control algorithms mentioned above.&lt;/p&gt;
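&lt;p&gt;To see the "grader" idea in code, here is a deliberately simple Python sketch: the robot is reduced to a single step-size parameter, the grader is a made-up function that prefers steps near 0.7, and the learning is plain hill climbing (real RL algorithms are far more sophisticated):&lt;/p&gt;

```python
import random

def grade(step_size):
    # The grader: higher is better; this toy robot walks best at 0.7.
    return -abs(step_size - 0.7)

random.seed(42)
best = 0.0
for _ in range(200):
    # Try a small random change and keep whichever the grader prefers.
    candidate = best + random.uniform(-0.1, 0.1)
    best = max(best, candidate, key=grade)
```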

&lt;p&gt;So, we have decided on the type of our output and the kind of input we have; now we need to look at what a model actually is.&lt;/p&gt;

&lt;h1&gt;
  
  
  Model
&lt;/h1&gt;

&lt;p&gt;A model in ML is like an application in programming. It has an algorithm and is possibly trained on a dataset. From the outside, we can treat a model as a black-box function that takes input and produces output; we don't know how it does it, and we don't need to know.&lt;/p&gt;

&lt;p&gt;But there is one thing we do need to know about a model: how well does it behave? We'll look into that question, and a few similar ones, in the next post!&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How many times should a self-compiling compiler be compiled in order to test itself?</title>
      <dc:creator>Sajjad Heydari</dc:creator>
      <pubDate>Sun, 03 May 2020 00:32:46 +0000</pubDate>
      <link>https://dev.to/mcsh/how-many-times-should-a-self-compiling-compiler-be-compiled-in-order-to-test-itself-3b8k</link>
      <guid>https://dev.to/mcsh/how-many-times-should-a-self-compiling-compiler-be-compiled-in-order-to-test-itself-3b8k</guid>
      <description>&lt;p&gt;If a compiler is designed to compile itself, how many times should it be compiled to ensure that it is properly compiling itself?&lt;/p&gt;

&lt;p&gt;I recently started to rewrite the rumi compiler in its own language, because it would be much easier to develop it this way. I covered it in more detail in &lt;a href="https://heydaris.com/en/blog/compiler_0"&gt;this post&lt;/a&gt;. While rewriting the simple compiler in rumi, this question hit me: how should I properly test it? So, I'm going to tackle the question with (simplified) category theory. Here we go:&lt;/p&gt;

&lt;p&gt;A program is something that takes an input (could be anything: stdin, a database, a set of http requests, whatever) and produces an output (again, anything). In terms of basic Haskell, we have:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight haskell"&gt;&lt;code&gt;&lt;span class="n"&gt;prog&lt;/span&gt; &lt;span class="o"&gt;::&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note that &lt;code&gt;a&lt;/code&gt; and &lt;code&gt;b&lt;/code&gt; could be anything, or a few anythings; we don't really care. So, with that definition, what is a compiler?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight haskell"&gt;&lt;code&gt;&lt;span class="n"&gt;compiler&lt;/span&gt; &lt;span class="o"&gt;::&lt;/span&gt; &lt;span class="n"&gt;source&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;prog&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;A compiler takes some source code and produces another program (which in turn takes some other input and produces some other output). We can open things up a little:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight haskell"&gt;&lt;code&gt;&lt;span class="n"&gt;compiler&lt;/span&gt; &lt;span class="o"&gt;::&lt;/span&gt; &lt;span class="n"&gt;source&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That looks like a proper compiled language. But just to make things more fun, what about interpreters (like python)?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight haskell"&gt;&lt;code&gt;&lt;span class="n"&gt;python&lt;/span&gt; &lt;span class="o"&gt;::&lt;/span&gt; &lt;span class="n"&gt;python_source&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note that we can remove the parentheses and give the python executable the source and the input in one step. But enough side-tracking; let's get back to our own thing. So, what would compiling a compiler look like with this definition?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight haskell"&gt;&lt;code&gt;&lt;span class="n"&gt;cc&lt;/span&gt; &lt;span class="o"&gt;::&lt;/span&gt; &lt;span class="n"&gt;compiler_source&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;source&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;prog&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Okay. We want to ensure that our compiler compiled with rumi behaves just like the version written in and compiled with C++; in other words, we want these two functions to be the same:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight haskell"&gt;&lt;code&gt;&lt;span class="n"&gt;c_compiler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;compiler_source&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;c_compiler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;compiler_source&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="n"&gt;compiler_source_in_rumi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So, how do we ensure that two functions are the same? We can't exactly compare their binary code, since the C++ version and the rumi version might differ in optimization, debug information, and many other things (which is intentional, by the way, since the C++ version is just meant as a starting point and is not user-friendly in any shape or form). So we have to rely on good old set theory. How can we ensure two functions are the same in set theory? Well, first, they must be defined on the same domains (which they are), and they should produce the same output for all possible inputs. In other words:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for all s \in rumi's source
for all i \in inputs
c_compiler(compiler_source)(s)(i) == c_compiler(compiler_source)(compiler_source_in_rumi)(s)(i)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;But that's a little far-fetched: we can't possibly test it on all source codes, nor can we try all possible inputs. But there is something else we can try.&lt;/p&gt;

&lt;p&gt;Since our computers are a form of the von Neumann architecture, and we know that each command in this architecture relies only on the configuration of the machine at that point, we can test all possible commands and their configurations; in other words, all possible statements of rumi with their inputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight haskell"&gt;&lt;code&gt;&lt;span class="n"&gt;for&lt;/span&gt; &lt;span class="n"&gt;all&lt;/span&gt; &lt;span class="n"&gt;stmt&lt;/span&gt; &lt;span class="nf"&gt;\&lt;/span&gt;&lt;span class="kr"&gt;in&lt;/span&gt; &lt;span class="n"&gt;statements&lt;/span&gt;
&lt;span class="n"&gt;for&lt;/span&gt; &lt;span class="n"&gt;all&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt; &lt;span class="nf"&gt;\&lt;/span&gt;&lt;span class="kr"&gt;in&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt;
&lt;span class="n"&gt;c_compiler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;compile_source&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="n"&gt;stmt&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;c_compiler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;compiler_source&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="n"&gt;compiler_source_in_rumi&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="n"&gt;stmt&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;But then again, it would be unrealistic to account for all possible inputs to all possible statements. Take &lt;code&gt;if&lt;/code&gt; as an example: there could be any number of inputs to it, but in general there are two groups, those that evaluate to &lt;code&gt;true&lt;/code&gt; and those that evaluate to &lt;code&gt;false&lt;/code&gt;. So we can test only those two configurations for &lt;code&gt;if&lt;/code&gt;, the same two configurations for &lt;code&gt;if/else&lt;/code&gt;, and so on for the whole system. Let's assume that all of those configurations of all of those statements appear in rumi's source code (which is not a wrong assumption, by the way, since we only chose statements that were absolutely essential). Then we need to test that the compiler's output for rumi's source code is the same in both cases, i.e.:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight haskell"&gt;&lt;code&gt;&lt;span class="n"&gt;c_compiler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;compiler_source&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="n"&gt;compiler_source_in_rumi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;c_compiler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;compiler_source&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="n"&gt;compiler_source_in_rumi&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="n"&gt;compiler_source_in_rumi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And if this assumption holds, we can be sure that our program behaves the same way as the C++ version! In plain English: the rumi compiler binary produced by the compiler written in C++ must be the same as the rumi compiler binary produced by the compiler written in rumi. That's confusing, yes, but that's why we have math, isn't it?&lt;/p&gt;
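&lt;p&gt;For fun, the whole argument can be sketched in Python, modelling each compiler binary as a function and "compilation" as a fake, deterministic transformation. All the names here mirror the formulas above but are purely illustrative, not real rumi code:&lt;/p&gt;

```python
# A toy model: a "compiler binary" is a function from source text to output.
# Compiling the compiler's own source yields... another compiler.
def make_compiler(own_source):
    def compile_(source):
        if source == "compiler_source_in_rumi":
            return make_compiler(source)  # the self-hosting case
        return "obj(" + source + ")"      # ordinary programs become "binaries"
    return compile_

cpp_built = make_compiler("compiler_source")       # c_compiler(compiler_source)
rumi_built = cpp_built("compiler_source_in_rumi")  # the rumi-built compiler

# We can't compare two functions directly, so, as argued above, we compare
# their outputs on a sample of essential statements:
samples = ["if_true", "if_false", "while_loop", "struct_access"]
agree = all(cpp_built(s) == rumi_built(s) for s in samples)
```

&lt;p&gt;In the real test, of course, the comparison runs on the actual compiler binaries rather than toy strings, but the shape of the check is the same.&lt;/p&gt;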

</description>
      <category>computerscience</category>
      <category>compiler</category>
      <category>theory</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Turing and AI</title>
      <dc:creator>Sajjad Heydari</dc:creator>
      <pubDate>Thu, 23 Apr 2020 21:30:25 +0000</pubDate>
      <link>https://dev.to/mcsh/turing-and-ai-27aj</link>
      <guid>https://dev.to/mcsh/turing-and-ai-27aj</guid>
      <description>&lt;p&gt;Turing defines an intelligent machine as one that can cause doubt in the human perception of it. Contrary to what it seems, this experiment is not a real measure of intelligence, it's more like a pseudoscience. What makes the judge doubt who is the machine and who is the human is not based on knowledge or processing power, but instead on things like the delay between sending messages or remembering the topic at hand. I'm not saying that is an easy task, but it's not as hard as it sounds.&lt;/p&gt;

&lt;p&gt;This experiment might be more of a data-gathering and processing task than an AI experiment, since the machine needs to minimize the number of unknown topics the user might bring up, and the most straightforward method seems to be increasing the known topics. Some machines employ different techniques for handling this task; for example, ELIZA, which claims to be a psychologist, keeps converting her inputs into questions and asking them back to the user. This might sound funny and not so intelligent, but the 56-year-old AI does a surprisingly good job. But does this mean that ELIZA, &lt;a href="https://github.com/emacs-mirror/emacs/blob/d0e2a341dd9a9a365fd311748df024ecb25b70ec/lisp/play/doctor.el"&gt;which is written in less than 2 thousand lines of lisp&lt;/a&gt;, is smart? I don't think so.&lt;/p&gt;

&lt;p&gt;But Turing wasn't that far from reality. What separates humans from animals is the power to tell stories: a process in which the storyteller must imagine a made-up world, live and make decisions as each and every one of the creatures in said world, and then tell it in an entertaining way. These are all tasks that we humans start doing as early as age 2 or 3. And it doesn't have to be a complete story every time; telling a lie is a form of storytelling too. We imagine a made-up world where things happened in an order that caused the vase to be broken without us being responsible for it.&lt;/p&gt;

&lt;p&gt;It might make more sense to say that a machine is intelligent if it can tell a story that entertains humans. But how can we measure that? We've previously talked about the fact that part of the story is the medium (p.s: that article was in Persian and I have not translated it yet, sorry :P), so maybe the measurement should depend on the medium. In the Turing experiment, the medium is a chat in a terminal, whereas a story can be told in any medium, from a multiplayer game to an interactive story in a book, and sometimes the medium makes the storyteller's job much easier.&lt;/p&gt;

&lt;p&gt;With all that, how can we measure the storyteller's creativity?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Writing a self compiling compiler</title>
      <dc:creator>Sajjad Heydari</dc:creator>
      <pubDate>Wed, 15 Apr 2020 18:10:29 +0000</pubDate>
      <link>https://dev.to/mcsh/writing-a-self-compiling-compiler-3ahb</link>
      <guid>https://dev.to/mcsh/writing-a-self-compiling-compiler-3ahb</guid>
      <description>&lt;p&gt;During the past year I started working on a compiler called Rumi, where I implemented small things that I found useful. The process involved writing a C++ code and using LLVM to optimize and generate the code. It quickly reached to a point where I could run and compile complex programs in it, however, it was a bother to have such a good language and not be able to use it to further develop the compiler.&lt;/p&gt;

&lt;p&gt;So I decided to rewrite rumi in itself, keeping a log of what happened and how I handled the problems. Just to keep things interesting, I'm going to rewrite it entirely from scratch, so that I can explore more options in bootstrapping. But let's start by showing off what rumi can do:&lt;/p&gt;

&lt;h2&gt;
  
  
  Rumi's Introduction
&lt;/h2&gt;

&lt;p&gt;Rumi is a compiled, typed language. The main feature is that rumi's meta language is itself. The goal for Rumi is to be a language that exposes the compiler's interface at compile time, so that the developer can decide how certain things should behave, such as how a certain struct initializes its values or how a function should be linked. Here is a quick sample of rumi:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
rem := (a: u64, b: u64) -&amp;gt; u64{
  return a % b;
}

@compile
test_rem_1 := () -&amp;gt; int {
  if (rem(8, 4) != 0 ){
    printf("rem(8,4) != 0\n");
    return 2;
  }

  if (rem(6,4) != 2){
    printf("rem(6,4) != 2\n");
    return 2;
  }

  return 0;
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can see two functions: a remainder function and a test for it, which is run at compile time. This method prevents compilation if there is anything wrong with the function, since the test returns a number higher than 1 on failure. Okay, so maybe that's not quite what I promised, but it works, and you can run complex patterns either for testing or for configuring the compiler. But that's enough for now; let's see what bootstrapping a compiler is.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bootstrapping
&lt;/h2&gt;

&lt;p&gt;Bootstrapping, in terms of compilers, is the process of producing a basic version of a compiler powerful enough to compile itself. It is usually a recursive pattern where each version adds new features to make creating the next version easier. There are a couple of different ways to do it:&lt;/p&gt;

&lt;h3&gt;
  
  
  Cross-compiling
&lt;/h3&gt;

&lt;p&gt;An initial version of a compiler can be compiled on another machine architecture; this is usually how C compilers are born. However, this is not a valid option for us, since rumi is not available on any architecture at this point.&lt;/p&gt;

&lt;h3&gt;
  
  
  Handwritten
&lt;/h3&gt;

&lt;p&gt;Let's not even joke about this one. The complexity of modern compilers and assembly language put together could make this take months, and I want to do this in less than a weekend.&lt;/p&gt;

&lt;h3&gt;
  
  
  Interpreting
&lt;/h3&gt;

&lt;p&gt;Another option is writing an interpreter, where we don't actually compile anything, but just interpret it, kind of like python or JavaScript. This might be useful for us, but...&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Another Language
&lt;/h3&gt;

&lt;p&gt;We can implement a smaller subset of our language in another language, let's say C++, and then start from there. This is a valid option for us, since I already have a working compiler and can borrow a lot of code from it. Plus, this makes it easier to port to another architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing a subset
&lt;/h2&gt;

&lt;p&gt;So we have to choose a subset that is both simple enough to create, while being powerful enough to compile itself. Here is what I think should suffice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Function Definitions: We can't really have a program without them&lt;/li&gt;
&lt;li&gt;Function Call: Again, we need this.&lt;/li&gt;
&lt;li&gt;While Loop: We need something that allows repetition, and I'm not going to rely on goto (we don't even have them in rumi!).&lt;/li&gt;
&lt;li&gt;If Else: Branching is a must, otherwise things won't make sense.&lt;/li&gt;
&lt;li&gt;Break: We might be able to get away without having a continue, but we can't really do so without a break command; omitting it requires more complex structures&lt;/li&gt;
&lt;li&gt;Pointers: Because we need to use LLVM, so it's not even a question&lt;/li&gt;
&lt;li&gt;Pointer Dereferencing: Again, how am I going to see what's inside?&lt;/li&gt;
&lt;li&gt;CStrings: We only need them for certain small behaviors, but we do need them.&lt;/li&gt;
&lt;li&gt;Structs: Truth be told, we might be able to get away without them, but they aren't that hard to implement and they give us tremendous power&lt;/li&gt;
&lt;li&gt;Integers and Floats: Do I even need to say these? We need them!&lt;/li&gt;
&lt;li&gt;Simplified Expressions: As in only one operator per command, and we only support + - * / and %&lt;/li&gt;
&lt;li&gt;Forward Declarations: Granted, they aren't really useful in rumi, but we need them here, because otherwise we'd need to write a two-pass compiler from the start, which I'm not a fan of.&lt;/li&gt;
&lt;li&gt;Comments: Yup.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With all that said, simple rumi (or srumi, if you will) needs to parse a single file and output object files. We can take care of the rest.&lt;/p&gt;

&lt;p&gt;In the next post, I'll talk about how it went and see what else we can do!&lt;/p&gt;

</description>
      <category>computerscience</category>
      <category>compiler</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Switching to VIM</title>
      <dc:creator>Sajjad Heydari</dc:creator>
      <pubDate>Wed, 05 Feb 2020 15:38:16 +0000</pubDate>
      <link>https://dev.to/mcsh/switching-to-vim-b5b</link>
      <guid>https://dev.to/mcsh/switching-to-vim-b5b</guid>
      <description>&lt;h1&gt;
  
  
  Switching to Vim
&lt;/h1&gt;

&lt;p&gt;If you are reading this, chances are you've used a text editor before. You might have used Notepad on your Windows machine, TextEdit on a Mac, or maybe some version of gedit on a Linux box. You might have used a complicated editor, such as Word, one in your browser, like Google Docs, or even a simple one on your smartphone. If you are reading this, you have used an editor before. Some editors give you many features: they provide ways to configure fonts, embed photos, share documents with your co-workers, or even print them. We call these What You See Is What You Get editors, or WYSIWYG for short. On the other hand, some editors are just, well, editors. They don't try to understand your document or display it in a fancy manner; they just do their job and open the file. This makes them great tools for programmers, sysadmins, and others who just need to write. But not all editors are equal.&lt;/p&gt;

&lt;p&gt;A few decades ago, when computers were really slow, an editor such as Notepad would have been a pain to use. Remember that you probably didn't have a mouse and that each keypress took a good chunk of a second to be processed, so holding down the right arrow key wouldn't be of much help! In those days, a few editors appeared that solved these problems, and surprisingly they are still here; one of them is vi.&lt;/p&gt;

&lt;p&gt;Vi is a screen-oriented text editor with a state-based (modal) design that lets users do what they want more easily. It has a steep learning curve, but it speeds up your writing process by a lot. Vim is short for Vi IMproved, and it is the de facto standard editor on virtually all systems nowadays. In this post, we'll look at the why and the how of starting to use vim.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why
&lt;/h2&gt;

&lt;p&gt;If you are a developer or a sysadmin, chances are you have to edit files over a network on systems that you only have ssh access to. So how are you going to edit the config files? Are you going to copy them over using scp or ftp every time? I don't think so.&lt;/p&gt;

&lt;p&gt;Aside from that, the alternative is running an IDE like PyCharm or Visual Studio, which requires lots of processing power, and maybe you'd rather keep that memory for your other RAM-hungry applications (looking at you, Slack and Chrome). So you'll want a simple editor that gets the job done, but a basic editor is hard to use.&lt;/p&gt;

&lt;p&gt;Enter vim. It's small and fast and doesn't have a big footprint on your machine. It's customizable so you can implement the things you miss from your IDE, and it's installed on virtually all machines. Plus, once you get the hang of it, you'll never want to go back. And oh, it's completely free!&lt;/p&gt;

&lt;h2&gt;
  
  
  How
&lt;/h2&gt;

&lt;p&gt;So, you want to use vim, but you have no idea how to begin. You've heard scary stories of people who started vim but were never able to leave, and you're just afraid. I don't blame you; I was too, but then I got sucked in and I loved it. To the point that I even install it inside emacs (using Evil) and Chrome (using cvim) and virtually everywhere else that I can.&lt;/p&gt;

&lt;p&gt;First, understand how vim works. At every moment, vim is in a state (a mode): it could be normal, insert, replace, visual, or something else. Each state defines different behavior for each key; for example, in insert mode, pressing the 'i' key will insert the character 'i' at the cursor, while in normal mode the 'i' key will put you in insert mode. The one key that you really need to know is Esc: it brings you back to the normal state, which, as the name suggests, is the basic state.&lt;/p&gt;

&lt;p&gt;So how are you going to learn the other keys? Run vimtutor, and work through it a couple of times. You don't need to memorize every key and every combination, just enough to let you work. Learn insert mode, opening a file, writing (saving) and of course navigating. You can always look up other pieces as time goes by. Don't rush it. I've been using vim for years but I still don't know every key.&lt;/p&gt;

&lt;p&gt;Next, decide which version of vim you want to use. Your options are Vi, Vim, and (my personal favorite) NeoVim. Vi is old, and if you dig around any modern Linux distro, you'll see that the vi binary is usually just a link to vim. Vim is the standard, maintained by Bram Moolenaar, and is pretty awesome. Neovim is a newer fork of vim (well, it's been around for half a decade now) that set out to change some things, like adding support for writing plugins in any language or moving ownership from one person to a group of people. Look at their websites to get a better idea, choose the version you want, and just go with it.&lt;/p&gt;

&lt;p&gt;Now that you have your setup and know your way around vim, it's time to configure it. Vim's configuration file is &lt;code&gt;~/.vimrc&lt;/code&gt;, while neovim's is &lt;code&gt;~/.config/nvim/init.vim&lt;/code&gt;. Open (or create) the file and let's get to work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Show Line Numbers
&lt;/h3&gt;

&lt;p&gt;Most programmers enjoy having a line number indicator in their program. To add line numbers to your vim configuration, add these lines to your config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;" Line Numbers
set number
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Alright, this will show the numbers starting from 1. But what if we want to show line numbers relative to our cursor? The variable for that is called &lt;code&gt;relativenumber&lt;/code&gt;. What I like to do is have the number variable set by default, and define a key that toggles relativenumber, so when I need to see the line numbers relative to the cursor, I can, and then I can go back. To do this, add this function to your config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function! NumberToggle()
  if(&amp;amp;relativenumber == 1)
    set norelativenumber
  else
    set relativenumber
  endif
endfunc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And map the F2 key to that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nnoremap &amp;lt;F2&amp;gt; :call NumberToggle()&amp;lt;cr&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now close vim and open it again, press F2 to see the numbers change.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic Highlights
&lt;/h3&gt;

&lt;p&gt;Highlighting in vim depends on your language, but no matter which one you use, there are common configurations that will help you. First, let's set the indentation size:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set shiftwidth=4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This tells vim to indent by 4 spaces (to also display a literal Tab character as 4 columns, set &lt;code&gt;tabstop=4&lt;/code&gt; as well). Now, I'm not gonna get into the whole tabs vs. spaces thing, but if you prefer to have your tabs converted to spaces, add this line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set expandtab
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, for basic syntax highlighting, add these lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;" Enable syntax highlighting for supported languages
syntax enable

" Show matching parentheses, etc
set showmatch

" Specific actions for specific file types
filetype on
filetype indent on
filetype plugin on
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Finally, I know we all hate the mouse, but sometimes it's useful, so let's enable mouse support:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set mouse=a
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In the next post, we can go over more settings as well as understanding viml, vim's scripting language.&lt;/p&gt;

</description>
      <category>vim</category>
    </item>
  </channel>
</rss>
