<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: L.M MinKing.</title>
    <description>The latest articles on DEV Community by L.M MinKing. (@mkstudio).</description>
    <link>https://dev.to/mkstudio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3377438%2F8e8ff4d4-b0a8-4457-b896-08e78604aecf.png</url>
      <title>DEV Community: L.M MinKing.</title>
      <link>https://dev.to/mkstudio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mkstudio"/>
    <language>en</language>
    <item>
      <title>I made a product called Prompt to save and optimize prompts</title>
      <dc:creator>L.M MinKing.</dc:creator>
      <pubDate>Tue, 19 Aug 2025 05:31:00 +0000</pubDate>
      <link>https://dev.to/mkstudio/i-made-a-product-called-prompt-to-save-and-optimize-9pa</link>
      <guid>https://dev.to/mkstudio/i-made-a-product-called-prompt-to-save-and-optimize-9pa</guid>
      <description>&lt;p&gt;In the AI era, I consider prompts to be the gateway to natural-language programming. Personally, I often find that models fail to accurately understand what we mean, which is exactly why the prompt matters.&lt;/p&gt;

&lt;p&gt;UI generation is currently one of AI's strongest areas, but with Claude Code I'm fed up with the default blue-purple TailwindCSS color scheme baked into its training data. So I'd love a place to store prompts that are easy to customize.&lt;/p&gt;

&lt;p&gt;For text-based work, you can also save your favorite prompts, then generate and publish outputs in batches!&lt;/p&gt;

&lt;p&gt;I've also created an MCP plugin that makes it easy to save and access prompts directly in Cursor or any other MCP-supported client!&lt;/p&gt;

&lt;p&gt;I think it's quite interesting! What do you think of it?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51m7nn91qrgvy7yldpkc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51m7nn91qrgvy7yldpkc.png" alt=" " width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>promptengineering</category>
      <category>programming</category>
      <category>ai</category>
    </item>
    <item>
      <title>Are prompts dead? I think in the short term we still need professional prompts to achieve accurate vibes!</title>
      <dc:creator>L.M MinKing.</dc:creator>
      <pubDate>Fri, 01 Aug 2025 01:02:57 +0000</pubDate>
      <link>https://dev.to/mkstudio/are-prompts-dead-i-think-in-the-short-term-we-still-need-professional-prompts-to-achieve-accurate-3dce</link>
      <guid>https://dev.to/mkstudio/are-prompts-dead-i-think-in-the-short-term-we-still-need-professional-prompts-to-achieve-accurate-3dce</guid>
      <description>&lt;p&gt;For example, AI-generated front-ends all use a blue gradient color scheme right now, and the icons all share a single style. To me, the importance of prompts is self-evident.&lt;/p&gt;

&lt;p&gt;Although OpenAI recently claimed that prompts are dead, I believe it's unlikely they will disappear in the short term. Building efficient, precise prompts remains a very useful approach.&lt;/p&gt;

&lt;p&gt;Just as programming evolved from low-level assembly to high-level languages like Java, structured prompts are becoming a higher-level way to get AI to generate exactly what we want.&lt;/p&gt;

&lt;p&gt;What do you think the role of prompts is? Is it better for them to be specialized and precise, or just loose, self-explanatory descriptions? For example, telling ChatGPT: "Help me generate a very dynamic, beautiful, and simple interface." Honestly, if you handed that spec to a real front-end developer, they'd beat you to death, hahahahaha&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vibecoding</category>
      <category>coding</category>
      <category>webdev</category>
    </item>
    <item>
      <title>AI helps you log into the server and locate and fix problems. It’s cool, but you need to pay attention to setting the rules!</title>
      <dc:creator>L.M MinKing.</dc:creator>
      <pubDate>Wed, 30 Jul 2025 01:07:11 +0000</pubDate>
      <link>https://dev.to/mkstudio/ai-helps-you-log-into-the-server-and-locate-and-fix-problems-its-cool-but-you-need-to-pay-3gkg</link>
      <guid>https://dev.to/mkstudio/ai-helps-you-log-into-the-server-and-locate-and-fix-problems-its-cool-but-you-need-to-pay-3gkg</guid>
      <description>&lt;p&gt;After recently vibe-ing code, I've encountered some server issues, including but not limited to deployment, locating Docker container errors, fixing server time zones, insufficient disk space, and other issues.&lt;/p&gt;

&lt;p&gt;In the past, I had to log in over SSH and troubleshoot everything myself. For example, when disk space ran low, I would run df -h and check each mount, eventually discovering that the Docker build cache and some unused images were taking up too much space. Now, with Cursor and appropriate prompts, you can have it log in to your server directly, perform the same diagnosis, and then clean up disk space for you.&lt;/p&gt;

&lt;p&gt;While this is great, I've also had setbacks. I watched Cursor execute "docker system prune -f", which by itself doesn't delete running containers, but it then ran some extra commands that did delete them. Fortunately, I had persisted the data, and this was my test environment.&lt;/p&gt;

&lt;p&gt;So we need to add some rules in Cursor to keep the AI from executing seriously destructive commands and causing real losses.&lt;/p&gt;
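&lt;p&gt;To illustrate the kind of guardrail I mean (a hypothetical sketch, not Cursor's actual rule format), a simple deny-list check over proposed shell commands could look like this:&lt;/p&gt;

```python
# Hypothetical guardrail: refuse obviously destructive commands before an
# AI agent is allowed to run them on a server. The patterns are
# illustrative, not exhaustive.

DENY_PATTERNS = [
    "rm -rf /",             # wiping the filesystem
    "docker rm -f",         # force-removing running containers
    "docker system prune",  # bulk cleanup should require human review
    "mkfs",                 # reformatting a disk
]

def is_safe(command: str) -> bool:
    # normalize whitespace and case, then check every deny pattern
    lowered = " ".join(command.split()).lower()
    return not any(pattern in lowered for pattern in DENY_PATTERNS)
```

&lt;p&gt;With this, a read-only command like "df -h" passes, while "docker system prune -f" is rejected and escalated to a human.&lt;/p&gt;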

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpchr4mq7xj3ujn42cf0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpchr4mq7xj3ujn42cf0.png" alt=" " width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But overall, I think the efficiency improvement is quite impressive.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6g8tnjpymaes93r7vns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6g8tnjpymaes93r7vns.png" alt=" " width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is indeed very cool when auto-run is turned on, but we still need to keep a close eye on it to prevent serious problems.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Can memOS solve the long memory problem of LLM? The current long context is indeed not enough</title>
      <dc:creator>L.M MinKing.</dc:creator>
      <pubDate>Fri, 25 Jul 2025 01:51:10 +0000</pubDate>
      <link>https://dev.to/mkstudio/can-memos-solve-the-long-memory-problem-of-llm-the-current-long-context-is-indeed-not-enough-2jll</link>
      <guid>https://dev.to/mkstudio/can-memos-solve-the-long-memory-problem-of-llm-the-current-long-context-is-indeed-not-enough-2jll</guid>
      <description>&lt;p&gt;As a full-stack independent developer, I recently encountered a limitation when using AI IDEs powered by LLMs: the context window is often capped at 64K or 128K tokens. While this may seem large, it falls short for use cases involving complex or large-scale projects. As a result, LLMs often "forget" important information, and we’re forced to write overly verbose prompts, which consume a significant number of tokens and hinder productivity.&lt;/p&gt;

&lt;p&gt;After looking into the transformer-based architecture that underlies most LLMs, I realized that the models are trained once and then essentially "frozen"—they don’t update themselves post-deployment. To address this, many current systems use RAG (Retrieval-Augmented Generation) to simulate long-term memory by segmenting and indexing knowledge externally. However, this is still fundamentally prompt-based, and doesn't constitute true memory in the sense of persistent internal learning.&lt;/p&gt;
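&lt;p&gt;To make the contrast concrete, here is a toy sketch of the RAG idea (all names are illustrative, and real systems use vector embeddings and a similarity index rather than word overlap): knowledge lives outside the model, and the best-matching chunk is simply prepended to the prompt at query time. No weights are ever updated.&lt;/p&gt;

```python
# Toy sketch of retrieval-augmented generation (RAG). Word overlap stands
# in for a real embedding-based relevance score, to show that this kind of
# "memory" is just retrieval plus prompt construction; the model itself
# stays frozen.

def relevance(query, chunk):
    # crude score: number of words the query and the chunk share
    q_words = set(query.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words.intersection(c_words))

def build_prompt(query, chunks):
    # retrieve the best-matching chunk and prepend it as context
    best = max(chunks, key=lambda ch: relevance(query, ch))
    return "Context: " + best + "\n\nQuestion: " + query
```

&lt;p&gt;This is why I say RAG is still fundamentally prompt-based: the "remembered" text is re-sent with every request, so it competes for the same limited context window.&lt;/p&gt;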

&lt;p&gt;At one point, I wondered whether it would be possible to continuously feed data into the model and retrain it incrementally, similar to how human memory works. But from what I’ve learned, this is nearly infeasible with current architectures. Retraining or fine-tuning models, even incrementally, is extremely costly—requiring reprocessing of the data into vector representations and adjustment of learned weights, which is not practical for most real-world applications.&lt;/p&gt;

&lt;p&gt;So, the challenge isn’t just about cost—though that’s a big part—it’s also about how current architectures are fundamentally not designed for continuous learning or dynamic memory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9snsiy2t816cidayiwe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9snsiy2t816cidayiwe.png" alt=" " width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recently, I came across an open-source project called memOS, which claims to support long-term contextual memory. At first glance, it seemed promising. However, I'm still unsure how it's implemented under the hood and whether it's genuinely different from RAG-based systems. I’d be curious to hear if others have used it and whether it truly offers long-term memory capabilities beyond traditional retrieval.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3eu3ngwigzejnz8fc1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3eu3ngwigzejnz8fc1g.png" alt=" " width="800" height="319"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;It is also clearly stated here that API-based LLM calls do not support the parametric-memory feature.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F551ktrqfl4pqtnvxu1lj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F551ktrqfl4pqtnvxu1lj.png" alt=" " width="800" height="99"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reference links:&lt;br&gt;
&lt;a href="https://memos.openmem.net/" rel="noopener noreferrer"&gt;https://memos.openmem.net/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://charitydoing.com/building-memory-aware-ai-an-introduction-to-memos-in-memos/" rel="noopener noreferrer"&gt;https://charitydoing.com/building-memory-aware-ai-an-introduction-to-memos-in-memos/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I made a QR code website, but no one visits it. Can anyone give me some suggestions? What went wrong?</title>
      <dc:creator>L.M MinKing.</dc:creator>
      <pubDate>Thu, 24 Jul 2025 01:39:55 +0000</pubDate>
      <link>https://dev.to/mkstudio/i-made-a-qr-code-website-but-no-one-visits-it-can-anyone-give-me-some-suggestions-what-went-1eip</link>
      <guid>https://dev.to/mkstudio/i-made-a-qr-code-website-but-no-one-visits-it-can-anyone-give-me-some-suggestions-what-went-1eip</guid>
      <description>&lt;p&gt;Hey devs 👋&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65p4fsy0uzt60349fbns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65p4fsy0uzt60349fbns.png" alt=" " width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I recently launched a small side project:&lt;br&gt;
👉 &lt;a href="https://www.qrcodehub.net/" rel="noopener noreferrer"&gt;QRCodeHub.net&lt;/a&gt; – a fast, free, no-login QR code generator.&lt;/p&gt;

&lt;p&gt;I built it with care: a clean UI, fast, responsive, and ad-free.&lt;br&gt;
But after launching it... almost no one is visiting.&lt;/p&gt;

&lt;p&gt;I originally wanted to add dynamic QR codes, fun QR codes with images as backgrounds, and QR codes in different shapes.&lt;/p&gt;

&lt;p&gt;I think the QR code space is just too crowded: there are too many players, and the features they've already shipped are excellent. As an ordinary developer, the generator I'm building has nothing special to set it apart.&lt;/p&gt;

</description>
      <category>website</category>
      <category>webdev</category>
      <category>sideprojects</category>
      <category>discuss</category>
    </item>
    <item>
      <title>The AI era is really conducive to our independent development</title>
      <dc:creator>L.M MinKing.</dc:creator>
      <pubDate>Tue, 22 Jul 2025 06:10:19 +0000</pubDate>
      <link>https://dev.to/mkstudio/the-ai-era-is-really-conducive-to-our-independent-development-25h0</link>
      <guid>https://dev.to/mkstudio/the-ai-era-is-really-conducive-to-our-independent-development-25h0</guid>
      <description>&lt;p&gt;I originally worked on Java web development and real-time big-data stream processing with Flink and Storm, and my foundations in front-end, Android, and iOS were very weak. But the arrival of AI has armed me, and I can now realize many possibilities on my own!&lt;/p&gt;

&lt;p&gt;I tried using ChatGPT to build an iOS app in 2023. Back then there were no AI agents in IDEs, and I could only copy and paste code snippets to extend and modify the project. It was painfully slow, but at least it got done. Now, with Cursor, Claude Code, or Trae, I can scaffold a project and implement its basic features in a single night! It's incredible!&lt;/p&gt;

&lt;p&gt;At work I currently use Cursor as my main tool; the enterprise subscription is $40 for 500 requests, but a request-based quota runs out quickly, unlike Claude Code's token billing, which is cost-effective and lasts much longer. Overall, though, tokens are still relatively expensive, and I hope competition drives prices down in the future!&lt;/p&gt;

&lt;p&gt;In short, I am very happy to be here and hope to meet more developers!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
