<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Seal</title>
    <description>The latest articles on DEV Community by Seal (@seal-io).</description>
    <link>https://dev.to/seal-io</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1144545%2F48c013bc-d2a9-4b67-aff6-50c531825ba4.png</url>
      <title>DEV Community: Seal</title>
      <link>https://dev.to/seal-io</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/seal-io"/>
    <language>en</language>
    <item>
      <title>Building Your Private ChatGPT and Knowledge Base with AnythingLLM and GPUStack</title>
      <dc:creator>Seal</dc:creator>
      <pubDate>Tue, 12 Nov 2024 04:27:09 +0000</pubDate>
      <link>https://dev.to/seal-software/building-your-private-chatgpt-and-knowledge-base-with-anythingllm-and-gpustack-5fgg</link>
      <guid>https://dev.to/seal-software/building-your-private-chatgpt-and-knowledge-base-with-anythingllm-and-gpustack-5fgg</guid>
      <description>&lt;p&gt;&lt;strong&gt;AnythingLLM&lt;/strong&gt; [&lt;a href="https://github.com/Mintplex-Labs/anything-llm" rel="noopener noreferrer"&gt;https://github.com/Mintplex-Labs/anything-llm&lt;/a&gt;] is an all-in-one AI application that runs on Mac, Windows, and Linux. Its goal is to enable the local creation of a &lt;strong&gt;personal ChatGPT&lt;/strong&gt; using either commercial or open-source LLMs along with vector database solutions. AnythingLLM goes beyond being a simple chatbot by including Retrieval-Augmented Generation (RAG) and Agent capabilities. These features allow it to perform a variety of tasks, such as fetching website information, generating charts, summarizing documents, and more.&lt;/p&gt;

&lt;p&gt;AnythingLLM can integrate various types of documents into different workspaces, enabling users to reference document content during chats. This provides an easy way to organize workspaces for different tasks and documents.&lt;/p&gt;

&lt;p&gt;In this article, we will introduce how to build a personal ChatGPT with a knowledge base using &lt;strong&gt;AnythingLLM + GPUStack&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Run models with GPUStack
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GPUStack is an open-source GPU cluster manager for running large language models (LLMs)&lt;/strong&gt;. It enables you to create a unified cluster from GPUs across various platforms, including Apple MacBooks, Windows PCs, and Linux servers. Administrators can deploy LLMs from popular repositories like Hugging Face, allowing developers to access these models as easily as they would access public LLM services from providers such as OpenAI or Microsoft Azure.&lt;/p&gt;

&lt;p&gt;Unlike Ollama, &lt;strong&gt;GPUStack&lt;/strong&gt; is a cluster solution designed to aggregate GPU resources from multiple devices to run models.&lt;/p&gt;

&lt;p&gt;To deploy the &lt;strong&gt;Chat Model&lt;/strong&gt; and &lt;strong&gt;Embedding Model&lt;/strong&gt; on &lt;strong&gt;GPUStack&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Chat Model&lt;/strong&gt;: &lt;strong&gt;llama3.1&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Embedding Model&lt;/strong&gt;: &lt;strong&gt;bge-m3&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ja76tpzv298cm4rbmbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ja76tpzv298cm4rbmbg.png" alt="image-20241105171908268" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You also need to create an API key. &lt;strong&gt;AnythingLLM&lt;/strong&gt; will use this key to authenticate when accessing the model API served by &lt;strong&gt;GPUStack&lt;/strong&gt;.&lt;/p&gt;
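
&lt;p&gt;As a quick sketch of what that authentication looks like, the snippet below builds a request against GPUStack's OpenAI-compatible chat endpoint, passing the API key as a Bearer token. The server address, key, and endpoint path here are placeholders for illustration; substitute the values from your own GPUStack deployment.&lt;/p&gt;

```python
import json
import urllib.request

# Sketch: build a chat-completions request for GPUStack's
# OpenAI-compatible API. The address and key below are placeholders.
def build_chat_request(base_url, api_key, model, prompt):
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,  # the GPUStack API key
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url, data=json.dumps(body).encode(), headers=headers
    )

req = build_chat_request(
    "http://192.168.50.4/v1-openai",  # hypothetical GPUStack address
    "YOUR_API_KEY",                   # placeholder key
    "llama3.1",
    "Hello!",
)
# urllib.request.urlopen(req) would send it once the server is reachable.
```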

&lt;h2&gt;
  
  
  Install and configure AnythingLLM
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AnythingLLM&lt;/strong&gt; offers packages for &lt;strong&gt;Mac, Windows, and Linux&lt;/strong&gt;, which you can download from &lt;a href="https://anythingllm.com/download" rel="noopener noreferrer"&gt;https://anythingllm.com/download&lt;/a&gt;. After installation, open AnythingLLM to begin the setup process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure LLM Provider
&lt;/h3&gt;

&lt;p&gt;First, configure the chat model. Search for &lt;strong&gt;OpenAI&lt;/strong&gt; and select &lt;strong&gt;Generic OpenAI&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nwdo25q7yjkjt1cfgvf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nwdo25q7yjkjt1cfgvf.png" alt="image-20241105163235972" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then fill in the details for the model deployed on &lt;strong&gt;GPUStack&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56l0jenlvsbsjcrt3f3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56l0jenlvsbsjcrt3f3j.png" alt="image-20241105163253668" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save the settings, then configure the embedding model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Embedding Provider
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AnythingLLM&lt;/strong&gt; includes a lightweight embedding model, &lt;strong&gt;all-MiniLM-L6-v2&lt;/strong&gt;, which offers limited performance and context length. For more powerful embedding capabilities, you can either opt for public embedding services or run open-source embedding models. Here, we’ll configure the embedding model &lt;strong&gt;bge-m3&lt;/strong&gt;, which is running on &lt;strong&gt;GPUStack&lt;/strong&gt;. Set the embedding provider to &lt;strong&gt;Generic OpenAI&lt;/strong&gt; and fill in the relevant configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpk4o2vigwew0gx9j62m8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpk4o2vigwew0gx9j62m8.png" alt="image-20241105162753929" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then create a workspace; once it is created, AnythingLLM is ready to use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use AnythingLLM
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Chat with LLM
&lt;/h3&gt;

&lt;p&gt;Select a workspace, create a new thread, and send your question to the LLM:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7elc5gv56ax1c58nu6c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7elc5gv56ax1c58nu6c.png" alt="image-20241105163657917" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Fetch website content
&lt;/h3&gt;

&lt;p&gt;Click the upload button next to the workspace, enter the website URL in the &lt;strong&gt;Fetch website&lt;/strong&gt; box, and fetch the website content.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fglm84zqsx96x94ismm4d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fglm84zqsx96x94ismm4d.png" alt="image-20241105164159767" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The fetched website content will be sent to the embedding model for vectorization and then stored in the vector database.&lt;/p&gt;
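
&lt;p&gt;Conceptually, the flow can be sketched as follows: each chunk of fetched text is embedded and stored next to its vector, and at query time the most similar chunk is retrieved by cosine similarity. The bag-of-words embedding below is a toy stand-in for illustration only; AnythingLLM sends the text to a real embedding model (here, &lt;strong&gt;bge-m3&lt;/strong&gt; served by GPUStack).&lt;/p&gt;

```python
import math

# Toy sketch of the embed-and-retrieve flow. The bag-of-words
# "embedding" is a stand-in for a real embedding model like bge-m3.
def toy_embed(text, vocab):
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

chunks = [
    "GPUStack is a GPU cluster manager for running LLMs",
    "AnythingLLM organizes documents into workspaces",
]
vocab = sorted(set(" ".join(chunks).lower().split()))

# "Vector database": each chunk stored alongside its embedding.
store = [(chunk, toy_embed(chunk, vocab)) for chunk in chunks]

def retrieve(query):
    # Embed the query, return the chunk with the highest similarity.
    q = toy_embed(query, vocab)
    return max(store, key=lambda item: cosine(q, item[1]))[0]

print(retrieve("which tool manages a gpu cluster"))
```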

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uhgfy1a5j2xtatyeo0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uhgfy1a5j2xtatyeo0q.png" alt="image-20241105164252415" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check the content fetched from the website:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4h5iuarbzfsvbv6powo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4h5iuarbzfsvbv6powo.png" alt="image-20241105164801193" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Document embedding
&lt;/h3&gt;

&lt;p&gt;Click the upload button next to the workspace, then click the upload box and upload a document. The document will be sent to the embedding model for vectorization and then stored in the vector database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptocyq1wx46igsbpb621.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptocyq1wx46igsbpb621.png" alt="image-20241105164914343" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check the content of embedded documents:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhx706cbhdrtmwdtde4sr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhx706cbhdrtmwdtde4sr.png" alt="image-20241105165047935" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more information, please read the &lt;code&gt;AnythingLLM&lt;/code&gt; documentation: &lt;a href="https://docs.anythingllm.com/" rel="noopener noreferrer"&gt;https://docs.anythingllm.com/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we have introduced how to use &lt;code&gt;AnythingLLM + GPUStack&lt;/code&gt; to aggregate GPUs across multiple devices and build an all-in-one AI application for RAG and AI Agents.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;GPUStack&lt;/code&gt; provides a standard OpenAI-compatible API, which can be quickly and smoothly integrated with various LLM ecosystem components. Wanna give it a go? Try to integrate your tools/frameworks/software with &lt;code&gt;GPUStack&lt;/code&gt; now and share with us!&lt;/p&gt;

&lt;p&gt;If you encounter any issues while integrating GPUStack with third parties, feel free to join &lt;a href="https://discord.gg/VXYJzuaqwD" rel="noopener noreferrer"&gt;GPUStack Discord Community&lt;/a&gt; and get support from our engineers.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building Free GitHub Copilot Alternative with Continue + GPUStack</title>
      <dc:creator>Seal</dc:creator>
      <pubDate>Fri, 23 Aug 2024 17:00:00 +0000</pubDate>
      <link>https://dev.to/seal-software/building-free-github-copilot-alternative-with-continue-gpustack-2djh</link>
      <guid>https://dev.to/seal-software/building-free-github-copilot-alternative-with-continue-gpustack-2djh</guid>
      <description>&lt;p&gt;&lt;a href="https://seal.io/building-free-github-copilot-alternative-with-continue-and-gpustack/" rel="noopener noreferrer"&gt;Click here to read original post&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/continuedev/continue" rel="noopener noreferrer"&gt;&lt;code&gt;Continue&lt;/code&gt;&lt;/a&gt; is an open-source alternative to &lt;code&gt;GitHub Copilot&lt;/code&gt;, this is an open-source AI coding assistant that allows to connect various large language models(LLMs) within &lt;code&gt;VS Code&lt;/code&gt; and &lt;code&gt;JetBrains&lt;/code&gt; to build custom code autocompletion and chat capabilities. It supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code parsing&lt;/li&gt;
&lt;li&gt;Code autocompletion&lt;/li&gt;
&lt;li&gt;Code optimization suggestions&lt;/li&gt;
&lt;li&gt;Code refactoring&lt;/li&gt;
&lt;li&gt;Inquiring about code implementations&lt;/li&gt;
&lt;li&gt;Online documentation search&lt;/li&gt;
&lt;li&gt;Terminal error parsing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and more. It assists developers in writing code and improves their development efficiency.&lt;/p&gt;

&lt;p&gt;In this tutorial, we are going to use &lt;strong&gt;&lt;code&gt;Continue + GPUStack&lt;/code&gt;&lt;/strong&gt; to build a free GitHub Copilot locally, providing developers with an AI-paired programming experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running Models with GPUStack
&lt;/h2&gt;

&lt;p&gt;First, we will deploy the models on &lt;code&gt;GPUStack&lt;/code&gt;. There are three model types recommended by &lt;code&gt;Continue&lt;/code&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chat model&lt;/strong&gt;: select &lt;code&gt;llama3.1&lt;/code&gt;, the latest open-source model from Meta.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Autocompletion model&lt;/strong&gt;: select &lt;code&gt;starcoder2:3b&lt;/code&gt;, an advanced code-completion model from the BigCode project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embedding model&lt;/strong&gt;: select &lt;code&gt;nomic-embed-text&lt;/code&gt;, which supports a context length of 8192 tokens and outperforms OpenAI's ada-002 and text-embedding-3-small models on both short and long context tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lJmdCjxo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822143650047.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lJmdCjxo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822143650047.png" alt="image 1" width="800" height="353"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;After deploying the models, you also need to create an &lt;code&gt;API key&lt;/code&gt; in the API Keys section, which &lt;code&gt;Continue&lt;/code&gt; will use to authenticate when accessing the models deployed on &lt;code&gt;GPUStack&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing and Configuring Continue
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Continue&lt;/code&gt; provides extensions for both &lt;code&gt;VS Code&lt;/code&gt; and &lt;code&gt;JetBrains&lt;/code&gt;. In this article, we will use &lt;code&gt;VS Code&lt;/code&gt; as an example. Install &lt;code&gt;Continue&lt;/code&gt; from the &lt;code&gt;VS Code&lt;/code&gt; extension store:&lt;/p&gt;

&lt;p&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7YAwG3bw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822144006940.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7YAwG3bw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822144006940.png" alt="image 2" width="800" height="393"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;Once installed, drag the &lt;code&gt;Continue&lt;/code&gt; extension to the right panel to avoid conflict with the file explorer:&lt;/p&gt;

&lt;p&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V5RTjRFc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822143946949.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V5RTjRFc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822143946949.png" alt="image 3" width="800" height="423"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;Then, select the settings button in the bottom-right corner to edit &lt;code&gt;Continue&lt;/code&gt;'s configuration and connect to the models deployed on &lt;code&gt;GPUStack&lt;/code&gt;. Update the &lt;code&gt;"models"&lt;/code&gt;, &lt;code&gt;"tabAutocompleteModel"&lt;/code&gt;, and &lt;code&gt;"embeddingsProvider"&lt;/code&gt; sections with your own GPUStack server address and API key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"models"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Llama 3.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"openai"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"llama3.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"apiBase"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://192.168.50.4/v1-openai"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"apiKey"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpustack_f58451c1c04d8f14_c7e8fb2213af93062b4e87fa3c319005"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"tabAutocompleteModel"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Starcoder 2 3b"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"openai"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"starcoder2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"apiBase"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://192.168.50.4/v1-openai"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"apiKey"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpustack_f58451c1c04d8f14_c7e8fb2213af93062b4e87fa3c319005"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"embeddingsProvider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"openai"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"nomic-embed-text"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"apiBase"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://192.168.50.4/v1-openai"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"apiKey"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpustack_f58451c1c04d8f14_c7e8fb2213af93062b4e87fa3c319005"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
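
&lt;p&gt;As an optional sanity check (a sketch, not part of &lt;code&gt;Continue&lt;/code&gt; itself), you can verify that the edited config parses as valid JSON and that all three sections point at the same GPUStack endpoint. The endpoint and key below are placeholders mirroring the sample config.&lt;/p&gt;

```python
import json

# Quick sanity check: confirm the edited Continue config parses and
# that all three sections share one GPUStack endpoint. Values are
# placeholders -- substitute your own server address and API key.
config_text = """
{
  "models": [
    {"title": "Llama 3.1", "provider": "openai", "model": "llama3.1",
     "apiBase": "http://192.168.50.4/v1-openai", "apiKey": "YOUR_API_KEY"}
  ],
  "tabAutocompleteModel": {
    "title": "Starcoder 2 3b", "provider": "openai", "model": "starcoder2",
    "apiBase": "http://192.168.50.4/v1-openai", "apiKey": "YOUR_API_KEY"
  },
  "embeddingsProvider": {
    "provider": "openai", "model": "nomic-embed-text",
    "apiBase": "http://192.168.50.4/v1-openai", "apiKey": "YOUR_API_KEY"
  }
}
"""
config = json.loads(config_text)
sections = [
    config["models"][0],
    config["tabAutocompleteModel"],
    config["embeddingsProvider"],
]
bases = {s["apiBase"] for s in sections}
print(len(bases))  # one distinct endpoint means the sections agree
```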



&lt;p&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PaSNQp7S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822144033667.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PaSNQp7S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822144033667.png" alt="image 4" width="800" height="440"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4fWKwaFO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822144055057.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4fWKwaFO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822144055057.png" alt="image 5" width="800" height="439"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Start Using Continue
&lt;/h2&gt;

&lt;p&gt;After configuring &lt;code&gt;Continue&lt;/code&gt; to connect to the GPUStack-deployed models, go to the top-right corner of the &lt;code&gt;Continue&lt;/code&gt; plugin interface and select the &lt;code&gt;Llama 3.1&lt;/code&gt; model. Now you can use the features mentioned at the beginning of this tutorial:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Code Parsing&lt;/strong&gt;: Select the code, press &lt;code&gt;Cmd/Ctrl + L&lt;/code&gt;, and enter a prompt to let the local LLM parse the code:  &lt;/p&gt;

&lt;p&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JFkUTpoF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822145951464.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JFkUTpoF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822145951464.png" alt="image 6" width="800" height="430"&gt;&lt;/a&gt;
&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Code Autocompletion&lt;/strong&gt;: While coding, press &lt;code&gt;Tab&lt;/code&gt; to let the local LLM attempt to autocomplete the code:  &lt;/p&gt;

&lt;p&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uxNMqP1I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822144132354.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uxNMqP1I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822144132354.png" alt="image 7" width="800" height="500"&gt;&lt;/a&gt;
&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Code Refactoring&lt;/strong&gt;: Select the code, press &lt;code&gt;Cmd/Ctrl + I&lt;/code&gt;, and enter a prompt to let the local LLM attempt to optimize the code:  &lt;/p&gt;

&lt;p&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y_5V-xQ4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822145544825.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y_5V-xQ4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822145544825.png" alt="image 8" width="800" height="429"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;The LLM will provide suggestions, and you can decide whether to accept or reject them:  &lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Tcmwjwj2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822144207805.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tcmwjwj2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822144207805.png" alt="image 9" width="800" height="549"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Inquire About Code Implementation&lt;/strong&gt;: You can try &lt;code&gt;@Codebase&lt;/code&gt; to ask questions about the codebase, such as how a certain feature is implemented:  &lt;/p&gt;

&lt;p&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8Jxf_qzk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822151421841.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Jxf_qzk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822151421841.png" alt="image 10" width="800" height="429"&gt;&lt;/a&gt;
&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Documentation Search&lt;/strong&gt;: Use &lt;code&gt;@Docs&lt;/code&gt;, select the documentation site you wish to search, and ask your question to find the results you need:&lt;/p&gt;

&lt;p&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jJADXV4A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822144718627.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jJADXV4A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://gpustack-blogs.oss-cn-hongkong.aliyuncs.com/undefinedimage-20240822144718627.png" alt="image 11" width="800" height="428"&gt;&lt;/a&gt;
&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more information, please read the official &lt;code&gt;Continue&lt;/code&gt; documentation: &lt;a href="https://docs.continue.dev/how-to-use-continue" rel="noopener noreferrer"&gt;https://docs.continue.dev/how-to-use-continue&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we have introduced how to use &lt;code&gt;Continue + GPUStack&lt;/code&gt; to build a free local GitHub Copilot, offering AI-paired programming capabilities at no cost to developers.  &lt;/p&gt;

&lt;p&gt;&lt;code&gt;GPUStack&lt;/code&gt; provides a standard OpenAI-compatible API, which can be quickly and smoothly integrated with various LLM ecosystem components. Wanna give it a go? Try to integrate your tools/frameworks/software with &lt;code&gt;GPUStack&lt;/code&gt; now and share with us!&lt;/p&gt;

&lt;p&gt;If you encounter any issues while integrating GPUStack with third parties, feel free to join &lt;a href="https://discord.gg/VXYJzuaqwD" rel="noopener noreferrer"&gt;GPUStack Discord Community&lt;/a&gt; and get support from our engineers.&lt;/p&gt;

</description>
      <category>gpustack</category>
      <category>githubcopilot</category>
      <category>ai</category>
    </item>
    <item>
      <title>Introducing GPUStack: An open-source GPU cluster manager for running LLMs</title>
      <dc:creator>Seal</dc:creator>
      <pubDate>Fri, 26 Jul 2024 15:36:39 +0000</pubDate>
      <link>https://dev.to/seal-software/introducing-gpustack-an-open-source-gpu-cluster-manager-for-running-llms-2kol</link>
      <guid>https://dev.to/seal-software/introducing-gpustack-an-open-source-gpu-cluster-manager-for-running-llms-2kol</guid>
      <description>&lt;h2&gt;
  
  
  What is GPUStack?
&lt;/h2&gt;

&lt;p&gt;We are thrilled to launch GPUStack, an open-source GPU cluster manager for running Large Language Models (LLMs). Even though LLMs are widely available as public cloud services, organizations cannot easily host their own LLM deployments for private use. They need to install and manage complex clustering software such as Kubernetes and then figure out how to install and manage the AI tool stack on top. Popular ways to run LLMs locally, such as LM Studio and LocalAI, work only on a single machine.&lt;/p&gt;

&lt;p&gt;GPUStack allows you to create a unified cluster from any brand of GPUs in Apple MacBooks, Windows PCs, and Linux servers. Administrators can deploy LLMs from popular repositories such as Hugging Face. Developers can then access LLMs just as easily as accessing public LLM services from vendors like OpenAI or Microsoft Azure.&lt;/p&gt;
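
&lt;p&gt;For illustration, here is a minimal sketch of that developer experience using only the Python standard library: the request is built exactly as it would be for a public OpenAI-compatible service. The server address, endpoint path, and API key below are placeholder assumptions, not fixed GPUStack values.&lt;/p&gt;

```python
import urllib.request

# Sketch: developers reach GPUStack the same way they would reach a
# public OpenAI-compatible service. Address, path, and key are
# placeholders -- substitute your own GPUStack deployment's values.
BASE_URL = "http://gpustack.example.com/v1-openai"  # hypothetical
API_KEY = "YOUR_API_KEY"                            # placeholder

def build_list_models_request():
    # GET the list of models served by the cluster.
    return urllib.request.Request(
        BASE_URL + "/models",
        headers={"Authorization": "Bearer " + API_KEY},
    )

req = build_list_models_request()
# urllib.request.urlopen(req) would return the model list as JSON
# once a GPUStack server is reachable at BASE_URL.
```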

&lt;p&gt;For more details about GPUStack, visit:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;GitHub repo: &lt;a href="https://github.com/gpustack/gpustack" rel="noopener noreferrer"&gt;https://github.com/gpustack/gpustack&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;User guide: &lt;a href="https://docs.gpustack.ai" rel="noopener noreferrer"&gt;https://docs.gpustack.ai&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why GPUStack?
&lt;/h2&gt;

&lt;p&gt;Today, organizations that want to host LLMs on a cluster of GPU servers have to do a lot of work to integrate a complex software stack. By using GPUStack, organizations no longer need to worry about cluster management, GPU optimization, LLM inference engines, usage and metering, user management, API access, and dashboard UI. GPUStack is a complete software platform for building your own LLM-as-a-Service (LLMaaS).&lt;/p&gt;

&lt;p&gt;As the following figure illustrates, the admin deploys models into GPUStack from a repository like Hugging Face, and then developers can connect to GPUStack to use these models in their applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgpustack.ai%2Fwp-content%2Fuploads%2F2024%2F07%2Fllmaas.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgpustack.ai%2Fwp-content%2Fuploads%2F2024%2F07%2Fllmaas.png" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key features of GPUStack
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GPU cluster setup and resource aggregation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GPUStack aggregates all GPU resources within a cluster. It is designed to support all GPU vendors, including Nvidia, Apple, AMD, Intel, Qualcomm, and others. GPUStack is compatible with laptops, desktops, workstations, and servers running macOS, Windows, and Linux.&lt;/p&gt;

&lt;p&gt;The initial release of GPUStack supports Apple Macs, as well as Windows PCs and Linux servers with Nvidia graphics cards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment and Inference for Models&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GPUStack supports distributed deployment and inference of LLMs across a cluster of GPU machines.&lt;/p&gt;

&lt;p&gt;GPUStack selects the best inference engine for running the given LLM on the given GPU. The first LLM inference engine supported by GPUStack is LLaMA.cpp, which allows GPUStack to support GGUF models from Hugging Face and all models listed in the Ollama library (&lt;a href="https://ollama.com/library" rel="noopener noreferrer"&gt;ollama.com/library&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;You can run any model on GPUStack by first converting it to GGUF format and uploading it to Hugging Face or the Ollama library.&lt;/p&gt;

&lt;p&gt;Support of other inference engines, such as vLLM, is on our roadmap and will be provided in the future.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; GPUStack will automatically schedule the model you select to run on machines with appropriate resources, relieving you of manual intervention. If you want to assess the resource consumption of your chosen model, you can use our GGUF Parser project: &lt;a href="https://github.com/gpustack/gguf-parser-go" rel="noopener noreferrer"&gt;https://github.com/gpustack/gguf-parser-go&lt;/a&gt;. We intend to provide more detailed tutorials in the future.&lt;/p&gt;
&lt;/blockquote&gt;
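&lt;p&gt;The GGUF Parser gives exact figures; as a crude first approximation, weight memory is roughly parameter count times bits per weight. The sketch below is an illustrative back-of-envelope calculation, not GPUStack's scheduling formula, and the overhead multiplier for KV cache and runtime buffers is an assumption:&lt;/p&gt;

```python
def estimate_model_memory_gib(params_billion, quant_bits, overhead=1.2):
    """Back-of-envelope estimate of memory needed to load model weights.

    params_billion: parameter count in billions (e.g. 7 for a 7B model)
    quant_bits:     bits per weight after quantization (16 for F16, 4 for Q4)
    overhead:       illustrative multiplier for KV cache and runtime buffers
    """
    weight_bytes = params_billion * 1e9 * quant_bits / 8
    return weight_bytes * overhead / (1024 ** 3)

# A 7B model at 4-bit quantization, weights only (overhead=1.0):
# estimate_model_memory_gib(7, 4, overhead=1.0)  # roughly 3.3 GiB
```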

&lt;p&gt;Although GPU acceleration is recommended for inference, we also support CPU inference, though performance is lower than on GPUs. Alternatively, using a mix of GPU and CPU for inference can maximize resource utilization, which is particularly useful in edge or resource-constrained environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Easy integration with your applications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GPUStack offers OpenAI-compatible APIs and provides an LLM playground along with API keys. The playground enables AI developers to experiment with and customize their LLMs, and to seamlessly integrate them into AI-enabled applications.&lt;/p&gt;

&lt;p&gt;Additionally, you can use the metrics GPUStack provides to understand how your AI applications utilize various LLMs. This helps administrators manage GPU resource consumption effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Observability metrics for GPUs and LLMs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GPUStack provides comprehensive metrics for performance, utilization, and status monitoring.&lt;/p&gt;

&lt;p&gt;For GPUs, administrators can use GPUStack to monitor real-time resource utilization and system status. Based on these metrics:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Administrators perform scaling, optimization, and other maintenance operations.&lt;/li&gt;
&lt;li&gt;GPUStack adjusts its model scheduling algorithm.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For LLMs, developers can use GPUStack to access metrics like token throughput, token usage, and API request throughput. These metrics help developers evaluate model performance and optimize their applications. GPUStack plans to support auto-scaling based on these inference performance metrics in future releases.&lt;/p&gt;
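&lt;p&gt;As a sketch of what the token throughput metric captures, assuming the OpenAI-style &lt;code&gt;usage&lt;/code&gt; fields that compatible APIs return:&lt;/p&gt;

```python
def token_throughput(usage, elapsed_seconds):
    """Tokens generated per second, from an OpenAI-style `usage` object
    (with `prompt_tokens`, `completion_tokens`, `total_tokens`) and the
    wall-clock duration of the request in seconds (must be positive)."""
    return usage["completion_tokens"] / elapsed_seconds

# 120 generated tokens over 4 seconds:
# token_throughput({"prompt_tokens": 30, "completion_tokens": 120,
#                   "total_tokens": 150}, 4.0)  # 30.0 tokens/s
```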

&lt;p&gt;&lt;strong&gt;Authentication and access control&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GPUStack also provides authentication and role-based access control (RBAC) for enterprises. Users on the platform can have either admin or regular user roles. This guarantees that only authorized administrators can deploy and manage LLMs and that only authorized developers can utilize them.&lt;/p&gt;

&lt;h2&gt;
  
  
  GPUStack Use Cases
&lt;/h2&gt;

&lt;p&gt;GPUStack unlocks a world of possibilities for running LLMs on GPUs from any vendor. Here are just a few examples of what you can achieve with GPUStack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Aggregate existing MacBooks, Windows PCs, and other GPU resources to offer a low-cost LLMaaS for a development team.&lt;/li&gt;
&lt;li&gt;In limited resource environments, aggregate multiple edge nodes to provide LLMaaS on CPU resources.&lt;/li&gt;
&lt;li&gt;Create your own enterprise-wide LLMaaS in your own data center for highly sensitive workloads that cannot be hosted in a cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started with GPUStack
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Linux or macOS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GPUStack provides a script to install it as a service on systemd- or launchd-based systems. To install GPUStack using this method, execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.gpustack.ai | sh -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you have deployed and started the GPUStack server, which serves as the first worker node. You can access the GPUStack page via &lt;a href="http://myserver" rel="noopener noreferrer"&gt;http://myserver&lt;/a&gt; (replace &lt;code&gt;myserver&lt;/code&gt; with the IP address or domain of the host where you installed GPUStack).&lt;/p&gt;

&lt;p&gt;Log in to GPUStack with username &lt;code&gt;admin&lt;/code&gt; and the default password. You can run the following command to get the password for the default setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /var/lib/gpustack/initial_admin_password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To add additional worker nodes and form a GPUStack cluster, please run the following command on each worker node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.gpustack.ai | sh - &lt;span class="nt"&gt;--server-url&lt;/span&gt; http://myserver &lt;span class="nt"&gt;--token&lt;/span&gt; mytoken
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;strong&gt;&lt;code&gt;http://myserver&lt;/code&gt;&lt;/strong&gt; with your GPUStack server URL and &lt;strong&gt;&lt;code&gt;mytoken&lt;/code&gt;&lt;/strong&gt; with your secret token for adding workers. To retrieve the token in the default setup from the GPUStack server, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /var/lib/gpustack/token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, follow the instructions in the GPUStack UI to add workers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgpustack.ai%2Fwp-content%2Fuploads%2F2024%2F07%2Fadd-worker.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgpustack.ai%2Fwp-content%2Fuploads%2F2024%2F07%2Fadd-worker.png" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Windows&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run PowerShell as administrator, then run the following command to install GPUStack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;Invoke-Expression&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Invoke-WebRequest&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Uri&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://get.gpustack.ai"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-UseBasicParsing&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Content&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can access the GPUStack page via &lt;a href="http://myserver" rel="noopener noreferrer"&gt;http://myserver&lt;/a&gt; (replace &lt;code&gt;myserver&lt;/code&gt; with the IP address or domain of the host where you installed GPUStack).&lt;/p&gt;

&lt;p&gt;Log in to GPUStack with username &lt;code&gt;admin&lt;/code&gt; and the default password. You can run the following command to get the password for the default setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;Get-Content&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Join-Path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$&lt;/span&gt;&lt;span class="nn"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;APPDATA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-ChildPath&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpustack\initial_admin_password"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Raw&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Optionally, you can add extra workers to form a GPUStack cluster by running the following command on other nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;Invoke-Expression&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;amp; { &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Invoke-WebRequest&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Uri&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://get.gpustack.ai"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-UseBasicParsing&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;.Content) } -ServerURL http://myserver -Token mytoken"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the default setup, you can run the following to get the token used for adding workers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;Get-Content&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Join-Path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$&lt;/span&gt;&lt;span class="nn"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;APPDATA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-ChildPath&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpustack\token"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Raw&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For other installation scenarios, please refer to our installation documentation at: &lt;a href="https://gpustack.github.io/docs/quickstart" rel="noopener noreferrer"&gt;https://gpustack.github.io/docs/quickstart&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Serving LLMs
&lt;/h3&gt;

&lt;p&gt;As an LLM administrator, you can log in to GPUStack as the default system admin, navigate to &lt;strong&gt;&lt;code&gt;Resources&lt;/code&gt;&lt;/strong&gt; to monitor your GPU status and capacities, and then go to &lt;strong&gt;&lt;code&gt;Models&lt;/code&gt;&lt;/strong&gt; to deploy any open-source LLM into the GPUStack cluster. This enables you to provide these LLMs to regular users for integration into their applications. This approach helps you to efficiently utilize your existing resources and deliver stable LLM services for various needs and scenarios.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access GPUStack to deploy the LLMs you need. Choose models from Hugging Face (only GGUF format is currently supported) or the Ollama library, download them to your local environment, and run the LLMs:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgpustack.ai%2Fwp-content%2Fuploads%2F2024%2F07%2Fdeploy-model.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgpustack.ai%2Fwp-content%2Fuploads%2F2024%2F07%2Fdeploy-model.png" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GPUStack will automatically schedule the model to run on the appropriate Worker:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgpustack.ai%2Fwp-content%2Fuploads%2F2024%2F07%2Fmodel-list.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgpustack.ai%2Fwp-content%2Fuploads%2F2024%2F07%2Fmodel-list.png" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can manage and maintain LLMs by checking API requests, token consumption, token throughput, resource utilization status, and more. This helps you decide whether to scale up or upgrade LLMs to ensure service stability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgpustack.ai%2Fwp-content%2Fuploads%2F2024%2F07%2Fdashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgpustack.ai%2Fwp-content%2Fuploads%2F2024%2F07%2Fdashboard.png" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Integrating with your applications
&lt;/h3&gt;

&lt;p&gt;As an AI application developer, you can log in to GPUStack as a regular user and navigate to &lt;strong&gt;&lt;code&gt;Playground&lt;/code&gt;&lt;/strong&gt; from the menu. Here, you can interact with the LLM using the UI playground.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgpustack.ai%2Fwp-content%2Fuploads%2F2024%2F07%2Fplayground.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgpustack.ai%2Fwp-content%2Fuploads%2F2024%2F07%2Fplayground.png" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, visit &lt;strong&gt;&lt;code&gt;API Keys&lt;/code&gt;&lt;/strong&gt; to generate and save your API key. Return to &lt;strong&gt;&lt;code&gt;Playground&lt;/code&gt;&lt;/strong&gt; to customize your LLM by adjusting the system prompt, adding few-shot learning examples, or tuning inference parameters. When you're done, click &lt;strong&gt;&lt;code&gt;View Code&lt;/code&gt;&lt;/strong&gt; and select your preferred code format (curl, Python, Node.js) along with the API key. Use this code in your applications to enable communication with your private LLMs.&lt;/p&gt;

&lt;p&gt;You can now access the OpenAI-compatible API. For example, using curl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GPUSTACK_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;myapikey
curl http://myserver/v1-openai/chat/completions &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer &lt;/span&gt;&lt;span class="nv"&gt;$GPUSTACK_API_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "model": "llama3",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ],
    "stream": true
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
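&lt;p&gt;The same request can also be issued from Python using only the standard library. This is a minimal sketch; &lt;code&gt;http://myserver&lt;/code&gt;, &lt;code&gt;myapikey&lt;/code&gt;, and &lt;code&gt;llama3&lt;/code&gt; are placeholders, as in the curl example above:&lt;/p&gt;

```python
import json
import urllib.request

def build_chat_request(model, messages, stream=False):
    """Build the JSON body for an OpenAI-style chat completion request."""
    return {"model": model, "messages": messages, "stream": stream}

def chat(server_url, api_key, model, messages):
    """POST a chat completion to a GPUStack server's OpenAI-compatible endpoint."""
    body = json.dumps(build_chat_request(model, messages)).encode("utf-8")
    req = urllib.request.Request(
        server_url + "/v1-openai/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Requires a running GPUStack server; placeholders as above:
# reply = chat("http://myserver", "myapikey", "llama3",
#              [{"role": "user", "content": "Hello!"}])
# print(reply["choices"][0]["message"]["content"])
```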



&lt;h2&gt;
  
  
  Join Our Community
&lt;/h2&gt;

&lt;p&gt;Please find more information about GPUStack at: &lt;a href="https://gpustack.ai" rel="noopener noreferrer"&gt;https://gpustack.ai&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you encounter any issues or have suggestions for GPUStack, feel free to join our &lt;a href="https://discord.gg/VXYJzuaqwD" rel="noopener noreferrer"&gt;Community&lt;/a&gt; for support from the GPUStack team and to connect with fellow users globally.&lt;/p&gt;

&lt;p&gt;We are actively enhancing the GPUStack project and plan to introduce new features in the near future, including support for multimodal models, additional accelerators like AMD ROCm or Intel oneAPI, and more inference engines. Before getting started, we encourage you to follow and star our project on GitHub at &lt;a href="https://github.com/gpustack/gpustack" rel="noopener noreferrer"&gt;gpustack/gpustack&lt;/a&gt; to receive instant notifications about all future releases. We welcome your contributions to the project.&lt;/p&gt;

&lt;h2&gt;
  
  
  About Us
&lt;/h2&gt;

&lt;p&gt;GPUStack is brought to you by Seal, Inc., a team dedicated to enabling AI access for all. Our mission is to help enterprises use AI to conduct their business, and GPUStack is a significant step towards achieving that goal.&lt;/p&gt;

&lt;p&gt;Quickly build your own LLMaaS platform with GPUStack! Start experiencing the ease of creating GPU clusters locally, running and using LLMs, and integrating them into your applications.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>news</category>
      <category>genai</category>
    </item>
    <item>
      <title>How to Enhance Developer Productivity with Platform Engineering</title>
      <dc:creator>Seal</dc:creator>
      <pubDate>Wed, 24 Apr 2024 15:07:00 +0000</pubDate>
      <link>https://dev.to/seal-software/how-to-enhance-developer-productivity-with-platform-engineering-571k</link>
      <guid>https://dev.to/seal-software/how-to-enhance-developer-productivity-with-platform-engineering-571k</guid>
      <description>&lt;p&gt;As the cloud computing, and GenAI technologies continue to evolve, the software industry faces increasingly fierce competition. Simultaneously, software development has become more complex. Developers need to acquire more knowledge and skills while dealing with additional problems and risks.&lt;/p&gt;

&lt;p&gt;To address these challenges, development teams must deliver valuable software products quickly, efficiently, and &lt;a href="https://www.seal.io/resource/blog/6-strategies-optimize-k8s-cost" rel="noopener noreferrer"&gt;cost-effectively&lt;/a&gt;. They must also approach problem-solving with simplicity, optimization, and innovation. This is precisely why discussions around development efficiency have become a hot topic.&lt;/p&gt;

&lt;p&gt;In this article, we will explore the definition and challenges of developer productivity, as well as how platform engineering can help organizations improve their efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Developer Productivity?
&lt;/h2&gt;

&lt;p&gt;Developer productivity is defined as the ability of a development team to deliver higher-quality, more reliable, and sustainable business value in a more efficient manner. This is a critical focus area for both emerging technology companies and traditional software enterprises because it directly impacts competitiveness and innovation.&lt;/p&gt;

&lt;p&gt;As the market changes rapidly, organizations that fail to improve their development efficiency risk falling behind competitors and eventually being phased out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges in Improving Developer Productivity
&lt;/h2&gt;

&lt;p&gt;However, enhancing developer productivity is a challenging endeavor. With the continuous growth in software scale and complexity, expanding development team sizes, and accelerating business requirements and market changes, the path to improving development efficiency faces several challenges:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Technical Complexity:&lt;/strong&gt; As technology evolves, the technical complexity of products increases, thereby raising the technical bar for development. Modern software architectures consist of multiple layers, technologies, and services, demanding end-to-end understanding from developers. This complexity adds &lt;a href="https://www.seal.io/resource/blog/reduce-cognitive-load" rel="noopener noreferrer"&gt;cognitive load&lt;/a&gt; and increases the risk of errors and inefficiencies. Overcoming technical complexity requires substantial resource investment to ensure both efficiency and quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Project Management Difficulty:&lt;/strong&gt; The complexity and scale of projects inevitably lead to greater project management challenges. Enterprises require robust project management systems and tools to coordinate and manage the activities of various development teams and project timelines. Additionally, fostering efficient teamwork and communication is crucial to ensuring projects are completed on time and with high quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Technical Debt:&lt;/strong&gt; Many organizations encounter difficulties in adopting DevOps, cloud-native technologies, and other advanced approaches due to the challenges posed by legacy systems and outdated practices. These difficulties result in technical debt and skill gaps, which in turn impede the delivery of software at a faster and more optimal pace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Lack of Standardization:&lt;/strong&gt; Enterprises often have multiple development teams using different tools and configurations for their applications and infrastructure. This lack of standardization creates silos and inconsistencies, making collaboration, sharing best practices, and ensuring quality and security more challenging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Low Productivity:&lt;/strong&gt; Developers spend significant time on non-value-added tasks such as environment setup, tool configuration, and debugging. This reduces their productivity and focus on delivering customer value.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Lack of Continuous Improvement and Feedback Loop:&lt;/strong&gt; Improving development efficiency is a long-term effort that requires continuous optimization. Without effective mechanisms and a culture of improvement and feedback within the organization, it is difficult to achieve sustained development efficiency gains.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Platform Engineering Boosts Developer Productivity
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.seal.io/resource/blog/platform-engineering-101" rel="noopener noreferrer"&gt;Platform engineering&lt;/a&gt; is a systematic approach aimed at improving software development efficiency and quality. By building reusable, scalable software platforms, platform engineering provides development teams with standardized development frameworks and tools. It optimizes collaboration and communication, enhances software testability and maintainability, and supports rapid iteration and innovation. Let's explore these aspects in more details:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Standardized Development Frameworks and Tools:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Platform engineering offers standardized development frameworks and tools, including code libraries, components, and templates. These enable teams to develop high-quality software more quickly, reducing developers' workload and time costs.&lt;/p&gt;

&lt;p&gt;Consistent frameworks and tools ensure everyone follows the same best practices and standards, improving efficiency, reducing errors, and minimizing technical disparities among team members. For specific industries, teams can leverage existing platforms and components without redeveloping all infrastructure, allowing developers to focus on core business logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optimized Team Collaboration and Communication:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Platform engineering provides standardized development processes and specifications, unifying team development methods and approaches. This reduces communication and coordination costs, enhancing collaboration efficiency.&lt;/p&gt;

&lt;p&gt;Centralized communication and coordination platforms (such as shared task lists, code repositories, documentation, and team discussions) allow developers to better understand each other's progress and challenges. This facilitates quick collaboration and issue resolution, ultimately improving team communication and efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Improved Software Testability and Maintainability:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Platform engineering employs a range of techniques, including automated testing, code refactoring, and performance monitoring, to enhance software testability and maintainability. This reduces the burden on developers and the incidence of errors, improving the efficiency, quality, and reliability of software development.&lt;/p&gt;

&lt;p&gt;Common code libraries and documentation provided by platform engineering help teams maintain and upgrade software effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Support for Rapid Iteration and Innovation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By providing reusable templates and components, platform engineering enables development teams to implement new ideas and features more quickly. It also facilitates rapid iteration and updates, enabling businesses to gain a deeper understanding of user needs and behavior. This, in turn, enhances the user experience and market competitiveness.&lt;/p&gt;

&lt;p&gt;Additionally, platform engineering enhances traceability and transparency in the development process. Developers gain clearer insights into their tasks and goals, as well as the overall development status. By enabling rapid innovation and progress within teams, platform engineering facilitates enhanced development efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, platform engineering offers several advantages for improving developer productivity and is a crucial approach for businesses seeking to enhance their development processes. As digital transformation continues, it can be expected that platform engineering will play an increasingly important role in enterprise development. In the future, platform engineering will continue to evolve and find applications in various areas, including multi-cloud environments, automation, and AI technology integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/seal-io/walrus" rel="noopener noreferrer"&gt;Walrus&lt;/a&gt; is an open-source application management platform based on IaC, that helps platform engineers build golden paths for developers and empowers developers with self-service capabilities. Its &lt;a href="https://seal-io.github.io/docs/operation/resource-definition" rel="noopener noreferrer"&gt;abstraction layers&lt;/a&gt; allow developers to leverage standardized and reusable &lt;a href="https://seal-io.github.io/docs/operation/template" rel="noopener noreferrer"&gt;IaC templates&lt;/a&gt; for self-service resource provisioning and deployments without being infrastructure expertise.&lt;/p&gt;

&lt;p&gt;If you want to discuss this further with us, you're welcome to join the &lt;a href="https://discord.gg/fXZUKK2baF" rel="noopener noreferrer"&gt;Seal Discord&lt;/a&gt; to share your thoughts and feedback.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>devops</category>
      <category>productivity</category>
      <category>learning</category>
    </item>
    <item>
      <title>Platform as a Product: Why do we need?</title>
      <dc:creator>Seal</dc:creator>
      <pubDate>Thu, 18 Apr 2024 15:00:00 +0000</pubDate>
      <link>https://dev.to/seal-software/platform-as-a-product-why-do-we-need-28dd</link>
      <guid>https://dev.to/seal-software/platform-as-a-product-why-do-we-need-28dd</guid>
      <description>&lt;p&gt;In today's fast-paced digital age, businesses are constantly seeking innovative ways to deliver value and drive growth. The concept of Platform as a Product (PaaP) has gained widespread attention.&lt;/p&gt;

&lt;p&gt;With the advancement of technology, traditional product-centric approaches are being replaced by more comprehensive, platform-based strategies. This article aims to delve into the concept of Platform as a Product, exploring its meaning, characteristics, advantages, and challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Platform as a Product?
&lt;/h2&gt;

&lt;p&gt;Platform as a Product refers to a business model where a company creates and provides a platform that allows various stakeholders, including developers, third-party providers, and end-users, to build, customize, and distribute their own products or services. Unlike traditional products designed for end-users, PaaP serves as a foundation upon which others can develop and deliver their own products.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Characteristics of Platform as a Product
&lt;/h2&gt;

&lt;p&gt;Platform as a Product represents a shift in how businesses create value and interact with developers, partners, and users. By leveraging the characteristics of Platform as a Product, businesses can gain a competitive edge. Here, we summarize the five key characteristics of Platform as a Product.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure and Technology Stack
&lt;/h3&gt;

&lt;p&gt;At the core of Platform as a Product is a robust infrastructure and technology stack. This includes the hardware, software frameworks, APIs, and developer tools that constitute the foundation for building and operating the platform.&lt;/p&gt;

&lt;p&gt;The infrastructure must be scalable, reliable, and capable of handling the diverse and growing needs of a varied user base and ecosystem. The technology stack facilitates seamless integration, enabling developers to leverage existing platform features and providing a consistent and secure environment for application development and deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Openness and Collaboration
&lt;/h3&gt;

&lt;p&gt;One of the key features of Platform as a Product is that it is open to external developers, partners, and users. Openness fosters collaboration, knowledge sharing, and innovation within the platform ecosystem. Companies provide accessible APIs, SDKs, and developer communities, and encourage participation and contribution. By embracing openness, the platform nurtures a vibrant ecosystem where developers and partners can build on top of the platform, extend its functionality, and create value-added products and services. Collaboration within the ecosystem amplifies the platform's overall value proposition and enhances its competitive advantage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mvl5rszcvdr67utn821.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mvl5rszcvdr67utn821.png" alt="collaboration" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability and Flexibility
&lt;/h3&gt;

&lt;p&gt;Scalability is also a key characteristic of a successful PaaP model. The platform's design must be able to handle exponential growth, accommodate an increasing number of users, and support a wide range of applications and services. Scalability ensures that the platform can meet the evolving needs of its user base without impacting performance or user experience.&lt;/p&gt;

&lt;p&gt;Flexibility is another important aspect of PaaP. The platform should offer customization options, allowing developers to tailor the platform's functionalities according to their specific requirements. Customization enhances the platform's appeal, improves user satisfaction, and supports the creation of unique applications and services that cater to different needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resources and Support for Developers
&lt;/h3&gt;

&lt;p&gt;To attract developers, successful PaaP models provide comprehensive developer empowerment and support. This includes documentation, tutorials, sample code, and developer communities that facilitate knowledge exchange, troubleshooting, and collaboration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bmyvjvjvkcjnrpgg9yx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bmyvjvjvkcjnrpgg9yx.png" alt="resources" width="800" height="474"&gt;&lt;/a&gt;&lt;br&gt;
Organizations that prioritize providing resources and support for developers create an atmosphere where developers can thrive, experiment, and innovate. By providing the necessary resources and tools, the platform can attract top talent, accelerate development cycles, and drive ecosystem growth.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages and Potential of Platform as a Product
&lt;/h2&gt;

&lt;p&gt;The adoption of the PaaP model changes the way businesses create value, attract developers and other stakeholders, and deliver innovative solutions. Businesses that adopt it stand to gain three major advantages.&lt;/p&gt;

&lt;p&gt;One of the advantages that PaaP brings to businesses is that &lt;em&gt;it fosters accelerated innovation&lt;/em&gt;. By providing platform infrastructure, tools, and APIs, developers and partners can focus on building innovative products and services. This approach allows developers to leverage existing platform features and reduces the time and effort required to develop core functionalities. As a result, PaaP enables companies to quickly bring new products to market, iterate based on user feedback, and maintain a competitive edge.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxwj8h90rkj8y9wrn5op.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxwj8h90rkj8y9wrn5op.png" alt="innovation" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A platform as a product has the ability to attract a diverse array of developers, partners, and users, thereby &lt;strong&gt;creating a vibrant ecosystem&lt;/strong&gt;. These ecosystems foster collaboration, knowledge exchange, and the creation of value-added services. By opening their platforms to external contributors, companies can tap into a broader range of talent, ideas, and resources. The expanded ecosystem not only enhances the platform's functionality but also opens new revenue streams, drives user engagement, and builds community awareness.&lt;/p&gt;

&lt;p&gt;Platform as a Product aims to deliver an excellent user experience. By integrating various services, features, and applications into a unified platform, users can access a comprehensive solution that simplifies their interactions, streamlines processes, and reduces friction. Through seamless integration, intuitive interfaces, and personalized experiences, &lt;strong&gt;PaaP improves user satisfaction and loyalty&lt;/strong&gt;. Additionally, as the ecosystem expands, users benefit from the continuous innovation and enrichment brought about by the contributions of developers and partners within the platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y2l8phdbj2clhstbpk0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y2l8phdbj2clhstbpk0.png" alt="user-satisfaction" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges Faced by Platform as a Product
&lt;/h2&gt;

&lt;p&gt;While PaaP models offer many benefits and transformative opportunities for organizations, they also present unique challenges that must be strategically addressed. Implementing and managing a successful PaaP requires careful planning, continuous adjustment, and a customer-centric approach.&lt;/p&gt;

&lt;p&gt;PaaP platforms typically &lt;strong&gt;involve complex technical architectures, integration challenges, and scalability requirements&lt;/strong&gt;. Building and maintaining a robust and scalable infrastructure requires substantial resources and expertise. Companies need to invest in skilled technical teams, adopt agile development methodologies, and leverage cloud-based technologies to successfully overcome technical complexity. Collaborating with developers and partners can also help address technical challenges and ensure compatibility and interoperability within the ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82su7dnn9pke9i04kvbh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82su7dnn9pke9i04kvbh.png" alt="expertise" width="800" height="474"&gt;&lt;/a&gt;&lt;br&gt;
Another major challenge for PaaP is &lt;strong&gt;establishing effective governance and regulatory mechanisms&lt;/strong&gt;. As platforms open to external developers, partners, and users, ensuring fair competition, content quality, data privacy, and ethical standards becomes paramount. Companies must establish policies, guidelines, and mechanisms to effectively monitor and regulate the platform ecosystem. This includes content review, dispute resolution, enforcing compliance, and maintaining user trust. Striking a balance between platform openness and responsible governance is therefore a key challenge to be addressed.&lt;/p&gt;

&lt;p&gt;As PaaP becomes more popular, competition among platform providers will intensify, and &lt;strong&gt;businesses face the challenge of creating platform stickiness&lt;/strong&gt;: attracting and retaining both users and developers. Establishing a strong brand, providing an excellent user experience, and offering comprehensive developer support are baseline strategies for maintaining a leading position. Beyond that, businesses can cultivate a vibrant ecosystem, foster loyalty through incentives or rewards, and continuously innovate to deliver valuable new features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Platform as a Product has emerged as a transformative business model. By creating a foundation on which developers and users can build their own products and services, PaaP fosters innovation, ecosystem expansion, and enhanced user experience. While challenges exist, the benefits of PaaP are undeniable, and these benefits bring advantages to businesses, making PaaP increasingly attractive to enterprises across various industries. With the continuous advancement of technology, we can expect the continued development and adoption of Platform as a Product, becoming a key driver of digital transformation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you are interested in platform engineering, welcome to join our community:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Discord: &lt;a href="https://discord.gg/fXZUKK2baF" rel="noopener noreferrer"&gt;https://discord.gg/fXZUKK2baF&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Twitter/X: &lt;a href="https://twitter.com/Seal_io" rel="noopener noreferrer"&gt;https://twitter.com/Seal_io&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/company/seal-io" rel="noopener noreferrer"&gt;https://www.linkedin.com/company/seal-io&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Youtube: &lt;a href="https://www.youtube.com/@Seal-io" rel="noopener noreferrer"&gt;https://www.youtube.com/@Seal-io&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>beginners</category>
      <category>devops</category>
      <category>coding</category>
      <category>design</category>
    </item>
    <item>
      <title>Walrus vs. Terraform Enterprise: Expand your IaC to Kubernetes ecosystem!</title>
      <dc:creator>Seal</dc:creator>
      <pubDate>Mon, 08 Apr 2024 13:47:40 +0000</pubDate>
      <link>https://dev.to/seal-io/walrus-vs-terraform-enterprise-expand-your-iac-to-kubernetes-ecosystem-2f38</link>
      <guid>https://dev.to/seal-io/walrus-vs-terraform-enterprise-expand-your-iac-to-kubernetes-ecosystem-2f38</guid>
      <description>&lt;p&gt;Terraform, with its Enterprise and Cloud offerings, has been a reliable platform for organizations to deploy and manage infrastructure in the dynamic world of Infrastructure as Code (IaC).&lt;/p&gt;

&lt;p&gt;However, as user needs and preferences shift, organizations are exploring alternatives to Terraform Cloud/Enterprise driven by cost concerns, flexibility requirements, and the imperative to streamline complexity.&lt;/p&gt;

&lt;p&gt;While Terraform Cloud/Enterprise remains a prevalent choice, it possesses certain limitations that Walrus effectively addresses. This article outlines the main differences between the two tools based on the following factors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Abstraction and Flexibility&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Environments management&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ecosystems and extensibility&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Licensing&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Walrus?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.seal.io/product[](url)" rel="noopener noreferrer"&gt;Walrus&lt;/a&gt; is an open-source application platform based on IaC tools, including OpenTofu, Terraform and others, that ochestrates your entire application systems (including application services and resource dependencies) and enables developers with self-service capabilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8is6fs4o8kqld76trkmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8is6fs4o8kqld76trkmd.png" alt="walrus-architecture" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Walrus simplifies Kubernetes and other infrastructure management by using a unified abstraction layer called &lt;a href="https://seal-io.github.io/docs/operation/resource-definition" rel="noopener noreferrer"&gt;Resource Definition.&lt;/a&gt; DevOps teams create this layer, which combines IaC templates, matching rules, pre-set parameters, and UI Schema. This enables developers to self-deploy infrastructures that meet cost, operational, and security requirements. With this feature, you can deploy an application to multiple infrastructures or environments simultaneously.&lt;/p&gt;
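&lt;p&gt;To make the idea concrete, the selection logic behind a resource definition can be pictured with a short sketch. This is an illustrative model only, not Walrus code: the rule structure, field names, and preset values are hypothetical, and it simply shows how matching rules could pick a template per environment and merge operator-preset parameters with a developer's minimal inputs.&lt;/p&gt;

```python
# Illustrative model of Resource Definition matching
# (hypothetical structure and values; not the actual Walrus API).

RULES = [
    # Each rule: a selector on environment attributes, a template,
    # and operator presets that developers never have to fill in.
    {"selector": {"env": "local"},
     "template": "builtin/kubernetes-mysql",
     "preset": {"cpu": "0.5", "memory": "512Mi"}},
    {"selector": {"env": "production"},
     "template": "builtin/aws-rds-mysql",
     "preset": {"vpc_id": "vpc-123456"}},  # placeholder value
]

def resolve(environment, developer_inputs):
    """Pick the first rule whose selector matches the environment,
    then merge the operator presets with the developer's inputs."""
    for rule in RULES:
        if all(environment.get(k) == v for k, v in rule["selector"].items()):
            config = dict(rule["preset"])
            config.update(developer_inputs)
            return rule["template"], config
    raise LookupError("no matching rule for environment")

template, config = resolve({"env": "production"}, {"db_name": "orders"})
print(template)  # builtin/aws-rds-mysql
```

&lt;p&gt;The developer supplies only &lt;code&gt;db_name&lt;/code&gt;; everything infrastructure-specific comes from whichever rule matched the target environment.&lt;/p&gt;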

&lt;p&gt;One of the main advantages of Walrus is its ability to simplify environment and resource management in a single pane of glass. Specifically, Walrus offers a clear &lt;a href="https://seal-io.github.io/docs/application/graph" rel="noopener noreferrer"&gt;dependency graph&lt;/a&gt; within the system and collects resource operations in a single view. This eliminates the need to switch between different windows, improving the overall experience and making it easier to handle complex resource management tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Terraform Cloud/Enterprise?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://developer.hashicorp.com/terraform/cloud-docs" rel="noopener noreferrer"&gt;Terraform Cloud&lt;/a&gt; is a platform developed by Hashicorp that helps with managing your Terraform code. It is used to enhance collaboration between developers and DevOps engineers, simplify your workflow, and improve overall security around the product.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.hashicorp.com/terraform/enterprise" rel="noopener noreferrer"&gt;Terraform Enterprise&lt;/a&gt; is a self-hosted distribution of Terraform Cloud. It offers enterprises a private instance of the Terraform Cloud application, with no resource limits and with additional enterprise-grade architectural features like audit logging and SAML single sign-on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Walrus and Terraform Enterprise Differences
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Abstraction and Flexibility
&lt;/h3&gt;

&lt;p&gt;Walrus has two layers of abstraction: &lt;a href="https://seal-io.github.io/docs/operation/template" rel="noopener noreferrer"&gt;IaC templates (including Terraform modules)&lt;/a&gt; and &lt;code&gt;Resource Definition&lt;/code&gt;, whereas Terraform Cloud/Enterprise offers a single abstraction layer, Terraform module.&lt;/p&gt;

&lt;p&gt;With Walrus, operators can set up IaC templates that developers can leverage for self-service resource provisioning and deployments. Furthermore, &lt;code&gt;Resource Definition&lt;/code&gt; empowers operators to establish and enforce corporate policies, dictating the usage, configuration, and deployment permissions of cloud resources. Consequently, developers are freed from the intricacies of deploying suitable infrastructure for their applications, enabling them to concentrate on coding.&lt;/p&gt;

&lt;p&gt;This architecture allows Walrus to keep Terraform code DRY (Don't Repeat Yourself), ensuring a unified abstraction across platforms and clouds without exposing technical intricacies to end users. We will publish a follow-up post with more details on how this architecture works. Stay tuned!&lt;/p&gt;

&lt;p&gt;What's more, Walrus provides UI schema and HCL validation as input controls, which is an advantage over Terraform Cloud/Enterprise's HCL-only provision. Customizing the UI schema helps personalize the user interface and reduce configuration complexity.&lt;/p&gt;
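&lt;p&gt;As a rough illustration of what schema-driven input control buys you, here is a minimal sketch in the spirit of a UI schema. The field names, types, and defaults are hypothetical, not taken from Walrus:&lt;/p&gt;

```python
# Minimal sketch of schema-driven input validation
# (hypothetical fields; a real UI schema is richer than this).

SCHEMA = {
    "db_name":  {"type": str, "required": True},
    "replicas": {"type": int, "required": False, "default": 1},
}

def validate(inputs):
    """Check user inputs against the schema and fill in defaults."""
    cleaned = {}
    for field, spec in SCHEMA.items():
        if field in inputs:
            value = inputs[field]
            if not isinstance(value, spec["type"]):
                raise TypeError(field + ": wrong type")
            cleaned[field] = value
        elif spec["required"]:
            raise ValueError(field + " is required")
        else:
            cleaned[field] = spec["default"]
    return cleaned

print(validate({"db_name": "orders"}))  # {'db_name': 'orders', 'replicas': 1}
```

&lt;p&gt;Catching bad inputs at the form level, before any infrastructure code runs, keeps configuration errors out of the deployment cycle.&lt;/p&gt;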

&lt;h3&gt;
  
  
  Environments Management
&lt;/h3&gt;

&lt;p&gt;Unlike Terraform Cloud/Enterprise's workspace, which is confined to a single Terraform module, Walrus provides a more flexible approach to infrastructure management. Walrus introduces the concept of &lt;a href="https://seal-io.github.io/docs/application/environment" rel="noopener noreferrer"&gt;Environment&lt;/a&gt;, which allows resources to exist as individual Terraform modules managed independently within an environment. This setup enables users to orchestrate multiple modules, sidestepping the complexity associated with managing large modules and facilitating infrastructure management akin to microservices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ecosystems and Extensibility
&lt;/h3&gt;

&lt;p&gt;Walrus has a flexible architecture and an extensive ecosystem. It supports Terraform and OpenTofu, and seamlessly integrates with various CD tools for GitOps implementation, such as ArgoCD and FluxCD. Furthermore, it officially &lt;a href="https://www.seal.io/resource/blog/walrus-v04-release" rel="noopener noreferrer"&gt;supports Argo Workflows as a workflow engine&lt;/a&gt;, making it easy to integrate within diverse environments, including the Kubernetes ecosystem. Walrus can help you expand your IaC management into the Kubernetes ecosystem and use its capabilities to achieve your goals.&lt;/p&gt;

&lt;p&gt;Conversely, Terraform Cloud/Enterprise, being vendor-locked within the HashiCorp ecosystem, confines GitOps solutions to proprietary realms, limiting workflow flexibility.&lt;/p&gt;

&lt;h3&gt;
  
  
  Licensing
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.seal.io/resource/blog/walrus-opensource" rel="noopener noreferrer"&gt;Walrus is 100% open source&lt;/a&gt;, following the Apache 2.0 license. In contrast, the community edition of Terraform &lt;a href="https://www.hashicorp.com/blog/hashicorp-adopts-business-source-license" rel="noopener noreferrer"&gt;changed its license&lt;/a&gt; to BSL last year, and the enterprise/cloud edition is proprietary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Walrus offers more than just Terraform capabilities. It follows the methodology of platform engineering to &lt;a href="https://www.seal.io/resource/blog/walrus-introduction" rel="noopener noreferrer"&gt;simplify complexity for both developers and operations&lt;/a&gt;. In the future, Walrus will support Kubernetes architecture, heralding a paradigm shift whereby Kubernetes CRD will serve as a unified abstraction layer for deploying applications across infrastructures or environments at once. For cloud-native engineers, this enhancement holds the promise of significantly reducing cognitive loads.&lt;/p&gt;

&lt;p&gt;Walrus is 100% open source. You are welcome to give it a try!&lt;/p&gt;

&lt;p&gt;If you are interested in Walrus, welcome to join our community:&lt;/p&gt;

&lt;p&gt;Discord: &lt;a href="https://discord.gg/fXZUKK2baF" rel="noopener noreferrer"&gt;https://discord.gg/fXZUKK2baF&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Twitter/X: &lt;a href="https://twitter.com/Seal_io" rel="noopener noreferrer"&gt;https://twitter.com/Seal_io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/company/seal-io" rel="noopener noreferrer"&gt;https://www.linkedin.com/company/seal-io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Youtube: &lt;a href="https://www.youtube.com/@Seal-io" rel="noopener noreferrer"&gt;https://www.youtube.com/@Seal-io&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>iac</category>
      <category>tutorial</category>
      <category>opensource</category>
    </item>
    <item>
      <title>[Video] Try Deploying Your Own Super Mario with Walrus!</title>
      <dc:creator>Seal</dc:creator>
      <pubDate>Fri, 15 Mar 2024 03:39:14 +0000</pubDate>
      <link>https://dev.to/seal-io/give-a-try-to-deploy-your-own-super-mario-with-walrus-13ai</link>
      <guid>https://dev.to/seal-io/give-a-try-to-deploy-your-own-super-mario-with-walrus-13ai</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/nHPbKxiLT6c?start=70"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Deploy MySQL across Multiple Infrastructures easily with Walrus</title>
      <dc:creator>Seal</dc:creator>
      <pubDate>Thu, 14 Mar 2024 03:52:07 +0000</pubDate>
      <link>https://dev.to/seal-io/how-to-deploy-mysql-across-multiple-infrastructures-easily-with-walrus-n6e</link>
      <guid>https://dev.to/seal-io/how-to-deploy-mysql-across-multiple-infrastructures-easily-with-walrus-n6e</guid>
      <description>&lt;p&gt;In &lt;a href="https://www.seal.io/product" rel="noopener noreferrer"&gt;Walrus&lt;/a&gt;, operators declare the types of resources to be provided in the &lt;code&gt;Resource Definition&lt;/code&gt;. They then apply different resource deployment templates to various types of environments and projects by setting up matching rules. Developers do not need to focus on the specific implementation of the underlying layer. By creating &lt;code&gt;Resource&lt;/code&gt; objects to declare the types of resources needed and their basic information, they can flexibly automate the creation of required resources and use them in various environments. This shields the complexity of the infrastructure of different environments and reduces the cognitive burden on researchers and developers.&lt;/p&gt;

&lt;p&gt;This tutorial will use a MySQL database as an example and show how to quickly deploy an application to different environments with Walrus by configuring two API objects: &lt;code&gt;Resource Definition&lt;/code&gt; and &lt;code&gt;Resource&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Resource Definition?
&lt;/h2&gt;

&lt;p&gt;To begin, let's clarify some of the concepts involved. &lt;code&gt;Resource Definitions&lt;/code&gt; are central to Walrus's approach to creating a unified abstraction over multi-cloud and hybrid infrastructure. This approach simplifies deployment configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm15vrspnxw4rixdoyxf9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm15vrspnxw4rixdoyxf9.PNG" alt="what-is-resource-definition" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is similar to multi-architecture container images: a single image tag carries a &lt;code&gt;Manifest&lt;/code&gt; listing variants for different architectures, and the appropriate variant is selected automatically based on the actual runtime environment, enabling containers to run seamlessly on different hardware.&lt;/p&gt;

&lt;p&gt;Walrus &lt;a href="https://seal-io.github.io/docs/operation/resource-definition" rel="noopener noreferrer"&gt;Resource Definition&lt;/a&gt; serves as the &lt;code&gt;Manifest&lt;/code&gt; in the deployment process. It includes the configuration of various rules and automatically selects the appropriate deployment template based on the environment during deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7sh5mb41qrvvaid0qji.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7sh5mb41qrvvaid0qji.PNG" alt="4-parts-resource-definition" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Resource Definition consists of 4 parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Template:&lt;/strong&gt; The configuration required to create resources through customization or abstraction using open source templates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Matching rule:&lt;/strong&gt; Define the matching conditions for each rule and the template to be used when the conditions are met.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Admin Predefined Config:&lt;/strong&gt; Simplify user configuration at deployment time by adding predefined configurations such as administrative configurations, best practices, etc. under matching rules.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;UI Schema:&lt;/strong&gt; Hide complexity with customized user interface styles based on requirements.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
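
&lt;p&gt;The matching-rule part can be pictured as a tiny lookup. This is an illustrative model only; the data structure is hypothetical, while the environment and template names are the ones this tutorial configures:&lt;/p&gt;

```python
# Illustrative model of this tutorial's matching rules: one rule per
# environment, each selecting a different MySQL template.
# (Hypothetical structure, not the actual Walrus API.)

MATCHING_RULES = [
    {"name": "dev",        "environment": "local",      "template": "builtin/kubernetes-mysql"},
    {"name": "production", "environment": "production", "template": "builtin/aws-rds-mysql"},
    {"name": "dr",         "environment": "dr",         "template": "builtin/alicloud-rds-mysql"},
]

def template_for(environment_name):
    """Return the template the definition selects for an environment."""
    for rule in MATCHING_RULES:
        if rule["environment"] == environment_name:
            return rule["template"]
    raise LookupError("no rule matches " + environment_name)

print(template_for("dr"))  # builtin/alicloud-rds-mysql
```

&lt;p&gt;The same &lt;code&gt;Resource&lt;/code&gt; request therefore lands on Kubernetes, AWS RDS, or Alibaba Cloud RDS depending only on the environment it is deployed into.&lt;/p&gt;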

&lt;p&gt;Now let's take a look at how you can leverage resource definitions to shield complexity and deploy applications across multiple infrastructures.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Deploy Applications across multiple infrastructures
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before you begin, prepare the appropriate resources, and complete the following configuration tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Connector configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;code&gt;Default Project &amp;gt; Connectors &amp;gt; New Connector&lt;/code&gt;, enter the name &lt;code&gt;aws&lt;/code&gt;, select the connector of the &lt;code&gt;cloud provider&lt;/code&gt; type, select &lt;code&gt;AWS&lt;/code&gt; for the type, select &lt;code&gt;Production&lt;/code&gt; for the applicable environment type, and enter other information to complete the configuration.&lt;/p&gt;

&lt;p&gt;Click &lt;code&gt;New Connector&lt;/code&gt; again, enter the name &lt;code&gt;alibaba&lt;/code&gt;, select &lt;code&gt;Alibaba&lt;/code&gt; for the type, select &lt;code&gt;Production&lt;/code&gt; for the applicable environment type, and enter other information to complete the configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Prepare the environments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;code&gt;Default Project &amp;gt; Environment &amp;gt; New Environment&lt;/code&gt;, enter &lt;code&gt;production&lt;/code&gt; as the name, and associate the newly created connector named &lt;code&gt;aws&lt;/code&gt; with the production deployment environment.&lt;/p&gt;

&lt;p&gt;Create another &lt;code&gt;new environment&lt;/code&gt;, enter the name &lt;code&gt;dr&lt;/code&gt;, and associate it with the connector named &lt;code&gt;alibaba&lt;/code&gt;, which will be used as the cloud disaster recovery environment. Together with the local environment that comes with the default project, we now have a total of three environments: &lt;code&gt;local&lt;/code&gt;, &lt;code&gt;production&lt;/code&gt;, and &lt;code&gt;dr&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhms0w0p1z24x9dbmi196.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhms0w0p1z24x9dbmi196.PNG" alt="3-type-envs" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://seal-io.github.io/docs/cli" rel="noopener noreferrer"&gt;3. Download Walrus CLI&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Resource Definition
&lt;/h3&gt;

&lt;p&gt;This is an example of deploying a MySQL database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Resource Definition Matching Rules&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;code&gt;Operations &amp;gt; Resource Definition&lt;/code&gt; and select &lt;code&gt;New Resource Definition&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Enter the name as &lt;code&gt;demo-mysql&lt;/code&gt; and select the type as &lt;code&gt;MySQL&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;New Matching Rule&lt;/code&gt; named &lt;code&gt;dev&lt;/code&gt;, which specifies the rule and template for the development environment. Add a selector matching the environment name &lt;code&gt;local&lt;/code&gt;, use the latest version of &lt;code&gt;builtin/kubernetes-mysql&lt;/code&gt; as the template, and set the CPU and memory resources in the predefined configuration section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzhfvni490mtrwvzeq9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzhfvni490mtrwvzeq9e.png" alt="matching-rule-dev" width="800" height="875"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;production&lt;/code&gt; rule targets the production environment. Add a selector matching the environment name &lt;code&gt;production&lt;/code&gt;. The template uses the latest version of &lt;code&gt;builtin/aws-rds-mysql&lt;/code&gt;. In the predefined configuration section, preset the &lt;code&gt;Vpc Id&lt;/code&gt; so that users don't need to fill it in when creating resources from this resource definition; adjust the other settings to your situation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfjij6l3k8fw8b6m5ilt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfjij6l3k8fw8b6m5ilt.png" alt="matching-rule-prod" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The matching rule &lt;code&gt;dr&lt;/code&gt; targets the disaster recovery environment. Add a selector matching the environment name &lt;code&gt;dr&lt;/code&gt;. Use the latest version of &lt;code&gt;builtin/alicloud-rds-mysql&lt;/code&gt; as the template, preset the &lt;code&gt;Vpc Id&lt;/code&gt; in the predefined configuration section, and adjust the remaining settings to your situation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpkzrnp5iw09p7dttqjw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpkzrnp5iw09p7dttqjw.png" alt="matching-rule-dr" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Customize UI Schema&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After the rules are configured, Walrus automatically generates a UI schema based on the template and predefined settings. Administrators can then customize the UI schema to fit their specific needs.&lt;/p&gt;

&lt;p&gt;To view the generated UI schema, go to &lt;code&gt;Operations &amp;gt; Resource Definition&lt;/code&gt;, find the corresponding resource definition, and preview its UI schema. We simplified the configuration by removing the complex options and keeping only the common ones, which helps users get started quickly. Below is the finalized UI schema:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faquya46rmvuea2gx06zn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faquya46rmvuea2gx06zn.png" alt="ui-schema" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy MySQL across Multiple Environments
&lt;/h3&gt;

&lt;p&gt;The configured resource definitions will aid in application deployment across the infrastructure.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;code&gt;Applications &amp;gt; Local Environment &amp;gt; New Resource&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter the resource name, select the resource type as &lt;code&gt;MySQL&lt;/code&gt;, specify the architecture, database version, and other configurations, and then click &lt;code&gt;Save and Deploy&lt;/code&gt; to complete the deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the &lt;code&gt;production&lt;/code&gt; environment and select New Resource. Choose the &lt;code&gt;MySQL&lt;/code&gt; resource type and enter the configuration details.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Repeat step 3 for the &lt;code&gt;DR&lt;/code&gt; environment.&lt;/p&gt;

&lt;p&gt;Verify that all three environments now have MySQL resources deployed. The Kubernetes connector is used for the &lt;code&gt;local&lt;/code&gt; environment, the &lt;code&gt;AWS&lt;/code&gt; connector is used for the &lt;code&gt;production&lt;/code&gt; environment, and the &lt;code&gt;Alibaba&lt;/code&gt; connector is used for the &lt;code&gt;DR&lt;/code&gt; environment. This allows for dynamic creation of corresponding resources based on the current environment.&lt;/p&gt;
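&lt;p&gt;Conceptually, each matching rule maps an environment selector to a template and connector. A toy sketch of that dispatch is shown below (purely illustrative; the &lt;code&gt;local&lt;/code&gt; template name is a placeholder, and this is not how Walrus is implemented internally):&lt;/p&gt;

```python
# Toy model of the resource-definition matching rules described above.
# Only the production and dr templates are named in this post; the
# local entry is a hypothetical placeholder.
MATCHING_RULES = {
    "local": "builtin/kubernetes-mysql",     # Kubernetes connector (placeholder name)
    "production": "builtin/aws-rds-mysql",   # AWS connector
    "dr": "builtin/alicloud-rds-mysql",      # Alibaba Cloud connector
}

def resolve_template(environment: str) -> str:
    """Return the template a MySQL resource resolves to in this environment."""
    return MATCHING_RULES[environment]

print(resolve_template("production"))  # builtin/aws-rds-mysql
```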

&lt;p&gt;In the Kubernetes cluster, the MySQL container is created for the local environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5kxkj7atq99xcial5g4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5kxkj7atq99xcial5g4.png" alt="local-env-k8s" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;production&lt;/code&gt; environment creates the corresponding &lt;code&gt;rds&lt;/code&gt; service on AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkls2jdv3risgmgzn1qw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkls2jdv3risgmgzn1qw.png" alt="prod-env-rds" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fguolkhoobabq97kah19q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fguolkhoobabq97kah19q.png" alt="create-aws-rds" width="800" height="246"&gt;&lt;/a&gt;&lt;br&gt;
The &lt;code&gt;dr&lt;/code&gt; environment creates the corresponding &lt;code&gt;rds&lt;/code&gt; service on Alibaba Cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3veepq6yja3h8q7wsam.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3veepq6yja3h8q7wsam.png" alt="dr-rds-ali" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm78pimcu1vv22i7p1ko7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm78pimcu1vv22i7p1ko7.png" alt="ali-rds" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy Applications across Infrastructure via Walrus File
&lt;/h3&gt;

&lt;p&gt;In addition to the UI, a Walrus file can be used to deploy an application:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Prepare &lt;code&gt;app.yaml&lt;/code&gt; as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  version: v1
  resources:
    - name: mysql
     type: mysql
     attributes:
       architecture: standalone
        database: mydb
       engine_version: "8.0"
       username: rdsuser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Run the following commands to deploy MySQL to the different environments.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # deploy to local environment
  walrus apply -f app.yaml -p default -e local

  # deploy to production environment
  walrus apply -f app.yaml -p default -e production

  # deploy to dr environment
  walrus apply -f app.yaml -p default -e dr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CLI deployment allows for the reuse of the same Walrus file for multiple environments.&lt;/p&gt;
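&lt;p&gt;The three commands above differ only in the &lt;code&gt;-e&lt;/code&gt; flag. A small helper that builds the same command line for each environment can be sketched in Python (illustrative only; it prints the commands rather than running them):&lt;/p&gt;

```python
import shlex

def walrus_apply_cmd(walrus_file: str, project: str, env: str) -> list:
    # Mirrors: walrus apply -f FILE -p PROJECT -e ENV
    return ["walrus", "apply", "-f", walrus_file, "-p", project, "-e", env]

# Print the command line for each environment in turn.
for env in ("local", "production", "dr"):
    print(shlex.join(walrus_apply_cmd("app.yaml", "default", env)))
```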

&lt;p&gt;We have simplified application deployment by configuring resource definitions and deploying them to multiple environments, reducing infrastructure complexity and workload for developers and operators.&lt;/p&gt;

&lt;p&gt;With XaC (Everything is Code), &lt;a href="https://github.com/seal-io/walrus" rel="noopener noreferrer"&gt;Walrus&lt;/a&gt; unifies the application lifecycle from provisioning underlying infrastructure resources to releasing upper tier applications. It also integrates with CI tools to &lt;a href="https://www.seal.io/resource/blog/automate-cicd-with-walrus-gitlab" rel="noopener noreferrer"&gt;automate CI/CD pipeline delivery&lt;/a&gt;. If you are tired of dealing with complex infrastructure provisioning or want to streamline your application management and deployment process, consider installing and using Walrus.&lt;/p&gt;

&lt;p&gt;If you are interested in Walrus, welcome to join our community:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Discord: &lt;a href="https://discord.gg/fXZUKK2baF" rel="noopener noreferrer"&gt;https://discord.gg/fXZUKK2baF&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Twitter/X: &lt;a href="https://twitter.com/Seal_io" rel="noopener noreferrer"&gt;https://twitter.com/Seal_io&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;LinkedIn: &lt;a href="https://www.linkedin.com/company/seal-io" rel="noopener noreferrer"&gt;https://www.linkedin.com/company/seal-io&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Youtube: &lt;a href="https://www.youtube.com/@Seal-io" rel="noopener noreferrer"&gt;https://www.youtube.com/@Seal-io&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>tutorial</category>
      <category>opensource</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Take 6 Actions to Make Your Platform Engineering Strategies happen!</title>
      <dc:creator>Seal</dc:creator>
      <pubDate>Fri, 01 Mar 2024 06:56:13 +0000</pubDate>
      <link>https://dev.to/seal-io/take-6-actions-to-make-your-platform-engineering-strategies-happen-3oob</link>
      <guid>https://dev.to/seal-io/take-6-actions-to-make-your-platform-engineering-strategies-happen-3oob</guid>
      <description>&lt;div class="ltag__user ltag__user__id__1144545"&gt;
    &lt;a href="/seal-io" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1144545%2F48c013bc-d2a9-4b67-aff6-50c531825ba4.png" alt="seal-io image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/seal-io"&gt;Seal&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/seal-io"&gt;Manage GPU clusters for running LLMs&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--byLXxl4i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/Aleegra/seallogo/main/imgMake%2520Platform%2520Engineering%2520Happen_%25E7%2594%25BB%25E6%259D%25BF%25201.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--byLXxl4i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/Aleegra/seallogo/main/imgMake%2520Platform%2520Engineering%2520Happen_%25E7%2594%25BB%25E6%259D%25BF%25201.png" alt="platform-engineering-actions" width="800" height="2115"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
      <category>infographic</category>
      <category>platformengineering</category>
    </item>
    <item>
      <title>How to Automate CI/CD Pipeline with Walrus and GitLab</title>
      <dc:creator>Seal</dc:creator>
      <pubDate>Tue, 27 Feb 2024 16:00:00 +0000</pubDate>
      <link>https://dev.to/seal-io/how-to-automate-cicd-pipeline-with-walrus-and-gitlab-cfc</link>
      <guid>https://dev.to/seal-io/how-to-automate-cicd-pipeline-with-walrus-and-gitlab-cfc</guid>
<description>&lt;p&gt;&lt;code&gt;Walrus file&lt;/code&gt; is a new feature released in &lt;a href="https://github.com/seal-io/walrus/releases/tag/v0.5.1" rel="noopener noreferrer"&gt;Walrus 0.5&lt;/a&gt;. It allows you to describe applications and configure infrastructure resources using a concise &lt;code&gt;YAML&lt;/code&gt; format. You can then execute &lt;code&gt;walrus apply&lt;/code&gt; in the Walrus CLI or import the file on the Walrus UI. This submits the Walrus file to the Walrus server, which deploys, configures, and manages the applications and infrastructure resources, making it easy to reuse them across multiple environments.&lt;/p&gt;

&lt;p&gt;In this tutorial, we will demonstrate how to integrate Walrus CLI with GitLab CI and release applications via Walrus file to improve CI/CD pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before getting started, please prepare:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a new project on GitLab and import &lt;a href="https://github.com/seal-demo/2048" rel="noopener noreferrer"&gt;our demo project&lt;/a&gt; into it. First, ensure that you have the GitHub type &lt;code&gt;Import Project&lt;/code&gt; permission enabled. If not, refer to the snapshot below and enable it in the &lt;code&gt;Admin Area&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgcreate-new-project.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgcreate-new-project.png" alt="create-new-project"&gt;&lt;/a&gt;&lt;br&gt;
Alternatively, you can manually &lt;code&gt;git clone&lt;/code&gt; the project and push it to a new GitLab project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install Walrus via &lt;code&gt;docker run&lt;/code&gt;, exposing the additional port range &lt;code&gt;30000-30100&lt;/code&gt; for NodePort workloads of the built-in K3s cluster. For more information, see: &lt;a href="https://seal-io.github.io/docs/deploy/standalone" rel="noopener noreferrer"&gt;https://seal-io.github.io/docs/deploy/standalone&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker run -d --privileged --restart=always -p 80:80 -p 443:443 -p 30000-30100:30000-30100 --name walrus sealio/walrus:v0.5.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Access Walrus. In the latest version, Walrus automatically creates a &lt;code&gt;local&lt;/code&gt; environment in the &lt;code&gt;default&lt;/code&gt; project and adds the built-in K3s cluster (or another K8s cluster) in the Walrus container as a connector in this environment. For demonstration purposes, this example uses the K3s cluster.&lt;/p&gt;

&lt;p&gt;4. Create an &lt;code&gt;API key&lt;/code&gt; on Walrus to authenticate communication between the Walrus CLI and the Walrus server in the following steps. Here is the guidance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;code&gt;Account &amp;gt; User Center &amp;gt; API Keys&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;code&gt;New Key&lt;/code&gt;, name it, and set its expiration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After configuring, copy the generated key and save it. If the key is lost in the future, it can be regenerated for replacement.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgcreate-API-Key.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgcreate-API-Key.png" alt="create-API-Key"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Configure Walrus CLI and Integrate it with GitLab CI
&lt;/h2&gt;

&lt;p&gt;In this section, we will demonstrate an example from CI to CD. Follow the steps below to integrate Walrus CLI with GitLab CI:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Access GitLab and navigate to &lt;code&gt;Admin Area &amp;gt; Settings &amp;gt; CI/CD &amp;gt; Variables&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the following variables to configure the sensitive information required for GitLab CI execution:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;CI_REGISTRY_USERNAME&lt;/code&gt;: The Docker Hub username used to push the container image built by CI (see &lt;code&gt;docker login&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;CI_REGISTRY_PASSWORD&lt;/code&gt;: The Docker Hub password used to push the container image built by CI (see &lt;code&gt;docker login&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;CD_WALRUS_SERVER&lt;/code&gt;: The URL at which the Walrus server is accessed, in the format &lt;code&gt;https://domain:port&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;CD_WALRUS_TOKEN&lt;/code&gt;: The Walrus API key used for authentication.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
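&lt;p&gt;Before saving &lt;code&gt;CD_WALRUS_SERVER&lt;/code&gt;, you can sanity-check that it follows the expected &lt;code&gt;https://domain:port&lt;/code&gt; shape. A minimal Python sketch (the host name below is a made-up example):&lt;/p&gt;

```python
from urllib.parse import urlsplit

def is_valid_walrus_server(url: str) -> bool:
    """Check the https://domain:port shape expected by CD_WALRUS_SERVER."""
    parts = urlsplit(url)
    return parts.scheme in ("http", "https") and bool(parts.hostname)

print(is_valid_walrus_server("https://walrus.example.com:443"))  # True
print(is_valid_walrus_server("walrus.example.com"))              # False (missing scheme)
```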

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgconfig-gitlab-secrets.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgconfig-gitlab-secrets.png" alt="config-gitlab-secrets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. Create a &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file in the GitLab project (it exists by default in the sample project); this file defines your CI/CD workflow. Below is a sample &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file for deploying the sample project &lt;code&gt;Game 2048&lt;/code&gt;. You can copy and modify it as needed, for example changing the image &lt;code&gt;sealdemo/game2048&lt;/code&gt; to your own image name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
  - compile
  - build
  - deploy

variables:
  CI_PROJECT_DIR: ./
  CI_IMAGE_NAME: sealdemo/game2048
  CD_WALRUS_PROJECT: default
  CD_WALRUS_PROJECT_ENV: local

compile:
  stage: compile
  image: maven:3-openjdk-8
  artifacts:
    paths:
      - target/
  script:
    - mvn clean package

build:
  dependencies:
  - compile
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  artifacts:
    paths:
      - target/
  before_script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"https://index.docker.io/v1/\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USERNAME}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" &amp;gt; /kaniko/.docker/config.json
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_IMAGE_NAME}:${CI_COMMIT_SHORT_SHA}"

deploy:
  stage: deploy
  image: alpine
  before_script:
    - wget -O walrus --no-check-certificate "${CD_WALRUS_SERVER}/cli?arch=amd64&amp;amp;os=linux"
    - chmod +x ./walrus
  script:
    - ./walrus login --insecure --server ${CD_WALRUS_SERVER} --token ${CD_WALRUS_TOKEN}
    - ./walrus apply -f ./walrus-file.yaml -p ${CD_WALRUS_PROJECT} -e ${CD_WALRUS_PROJECT_ENV}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
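&lt;p&gt;The &lt;code&gt;before_script&lt;/code&gt; of the build stage writes the registry credentials that kaniko needs into &lt;code&gt;/kaniko/.docker/config.json&lt;/code&gt;. The same auth entry can be reproduced in Python to show exactly what that shell one-liner produces (the credentials below are placeholders, not real values):&lt;/p&gt;

```python
import base64
import json

# Placeholder credentials; in CI these come from CI_REGISTRY_USERNAME/PASSWORD.
username, password = "ci-user", "ci-secret"

# Same encoding as: printf "%s:%s" "$USER" "$PASS" | base64 | tr -d '\n'
auth = base64.b64encode(f"{username}:{password}".encode()).decode()
config = {"auths": {"https://index.docker.io/v1/": {"auth": auth}}}
print(json.dumps(config))
```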



&lt;p&gt;4. Check out &lt;code&gt;walrus-file.yaml&lt;/code&gt;, which Walrus uses to deploy applications (it already exists by default in the sample project). A &lt;code&gt;Walrus file&lt;/code&gt; is a concise &lt;code&gt;YAML&lt;/code&gt; structure that describes the application's deployment configuration. You can make any necessary changes to this file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: v1
resources:
- name: game2048
  type: containerservice
  attributes:
    containers:
    - profile: run
      image: ${CI_IMAGE_NAME}:${CI_COMMIT_SHORT_SHA}
      ports:
      - schema: http
        external: 8080
        internal: 8080
        protocol: tcp
      resources:
        cpu: 0.25
        memory: 512
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
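&lt;p&gt;Note that &lt;code&gt;${CI_IMAGE_NAME}&lt;/code&gt; and &lt;code&gt;${CI_COMMIT_SHORT_SHA}&lt;/code&gt; are substituted from CI variables before the file reaches Walrus. The effect of that substitution can be sketched with Python's standard library (the values below are hypothetical, and this is not GitLab's actual expansion mechanism):&lt;/p&gt;

```python
import os

# Hypothetical values; GitLab injects the real ones at job runtime.
os.environ["CI_IMAGE_NAME"] = "sealdemo/game2048"
os.environ["CI_COMMIT_SHORT_SHA"] = "a1b2c3d"

line = "image: ${CI_IMAGE_NAME}:${CI_COMMIT_SHORT_SHA}"
expanded = os.path.expandvars(line)
print(expanded)  # image: sealdemo/game2048:a1b2c3d
```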



&lt;p&gt;5. Access GitLab, navigate to &lt;code&gt;Admin Area &amp;gt; CI/CD &amp;gt; Runners&lt;/code&gt;, and check that a GitLab Runner is online (&lt;a href="https://docs.gitlab.com/runner/install/" rel="noopener noreferrer"&gt;how to install GitLab Runner&lt;/a&gt;); the runner executes the CI/CD pipeline defined by &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimggitlab-runner.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimggitlab-runner.png" alt="gitlab-runner"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6. Navigate to &lt;code&gt;2048 Project &amp;gt; Build &amp;gt; Pipelines&lt;/code&gt;, select &lt;code&gt;Run pipeline&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgrun-pipelines.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgrun-pipelines.png" alt="run-pipelines"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait for the pipeline to finish running and check the results of the pipeline run: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgresult-pipepline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgresult-pipepline.png" alt="result-pipepline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;View the pipeline's running logs:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgpipeline-logs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgpipeline-logs.png" alt="pipeline-logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The logs indicate that the pipeline ran successfully. GitLab CI completed the &lt;code&gt;Maven&lt;/code&gt; build, the container image build and upload, and the application deployment to the K3s cluster via the Walrus CLI, in that order.&lt;/p&gt;

&lt;p&gt;7. After successfully running the pipelines, you can visit Walrus to view the deployed &lt;code&gt;game2048&lt;/code&gt; application!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgwalrus-game2048.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimgwalrus-game2048.png" alt="walrus-game2048"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Access the 2048 game using the automatically recognized endpoint plus the &lt;code&gt;/2048&lt;/code&gt; path; the full URL is &lt;code&gt;http://domain:port/2048&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimggame-2048.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FAleegra%2Fseallogo%2Fmain%2Fimggame-2048.png" alt="game-2048"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we have successfully integrated Walrus CLI with GitLab CI. By using the &lt;code&gt;Walrus file&lt;/code&gt; in Walrus 0.5.x, developers can now automate the deployment of applications in a more user-friendly way when committing application code to GitLab.&lt;/p&gt;

&lt;p&gt;With XaC (Everything is Code), Walrus can unify and manage the entire application lifecycle, from provisioning infrastructure resources to releasing upper-tier applications. This tutorial only covers one scenario. If you are interested in learning more, you can explore other scenarios with Walrus, &lt;strong&gt;such as provisioning Kubernetes clusters, creating cloud RDS databases, and configuring LB policies.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you are interested in Walrus, welcome to join our community:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Discord: &lt;a href="https://discord.gg/fXZUKK2baF" rel="noopener noreferrer"&gt;https://discord.gg/fXZUKK2baF&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Twitter/X: &lt;a href="https://twitter.com/Seal_io" rel="noopener noreferrer"&gt;https://twitter.com/Seal_io&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;LinkedIn: &lt;a href="https://www.linkedin.com/company/seal-io" rel="noopener noreferrer"&gt;https://www.linkedin.com/company/seal-io&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Youtube: &lt;a href="https://www.youtube.com/@Seal-io" rel="noopener noreferrer"&gt;https://www.youtube.com/@Seal-io&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have any questions about Walrus, please feel free to ask; I am here to help :)&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>cicd</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Our Talk has been selected for OpenTofu Day on KubeCon</title>
      <dc:creator>Seal</dc:creator>
      <pubDate>Mon, 19 Feb 2024 03:17:09 +0000</pubDate>
      <link>https://dev.to/seal-io/our-talk-has-been-selected-for-opentofu-day-on-kubecon-171m</link>
      <guid>https://dev.to/seal-io/our-talk-has-been-selected-for-opentofu-day-on-kubecon-171m</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GqlurCSn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/Aleegra/seallogo/main/imgspeech_%25E7%2594%25BB%25E6%259D%25BF%25201.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GqlurCSn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/Aleegra/seallogo/main/imgspeech_%25E7%2594%25BB%25E6%259D%25BF%25201.png" alt="lightning-talk-seal" width="800" height="1695"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Join us for an exclusive analysis uncovering where OpenTofu could address needs that Terraform has yet to fulfill, backed by quantitative insights, presented by Lawrence Li. Don't miss this exciting session – Seal you there💡&lt;/p&gt;

&lt;p&gt;For more information about this talk, please check out: &lt;a href="https://colocatedeventseu2024.sched.com/event/1YFgo/cl-lightning-talk-alias-terraformtofu-jobs-done-now-what-lawrence-li-seal" rel="noopener noreferrer"&gt;https://colocatedeventseu2024.sched.com/event/1YFgo/cl-lightning-talk-alias-terraformtofu-jobs-done-now-what-lawrence-li-seal&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>cloudcomputing</category>
      <category>opensource</category>
      <category>devops</category>
    </item>
    <item>
      <title>Seal Launches Walrus 0.5: Revamps Workflow for an Out-of-the-Box Deployment Experience</title>
      <dc:creator>Seal</dc:creator>
      <pubDate>Mon, 29 Jan 2024 17:00:00 +0000</pubDate>
      <link>https://dev.to/seal-io/seal-launches-walrus-05-revamps-workflow-for-an-out-of-the-box-deployment-experience-4hhh</link>
      <guid>https://dev.to/seal-io/seal-launches-walrus-05-revamps-workflow-for-an-out-of-the-box-deployment-experience-4hhh</guid>
<description>&lt;p&gt;&lt;a href="https://github.com/seal-io/walrus/releases/tag/v0.5.0" rel="noopener noreferrer"&gt;Walrus 0.5&lt;/a&gt;, the application management platform based on IaC, is officially released!&lt;/p&gt;

&lt;p&gt;Walrus 0.5 builds upon the new application model introduced in Walrus 0.4. This new model significantly reduces repetitive configuration work and shields development teams from the complexities of cloud-native and infrastructure management.&lt;/p&gt;

&lt;p&gt;With Walrus 0.5, &lt;strong&gt;the workflow has been revamped, and abstraction capabilities have been enhanced to create an out-of-the-box product experience.&lt;/strong&gt; This release further optimizes application deployment and delivery through a platform engineering approach.&lt;/p&gt;

&lt;p&gt;"Multi-cloud and hybrid cloud have become the mainstream IT infrastructure architecture for enterprises. The complexity of managing heterogeneous infrastructure increases as businesses scale. Nowadays, reducing management costs and enhancing delivery efficiency have become two of the top priorities for enterprises," said George Qin, Co-founder and CEO of Seal. "&lt;strong&gt;Walrus is dedicated to simplifying application management using a platform engineering approach&lt;/strong&gt;, alleviating the cognitive load on development and operations to address the current complex IT challenges."&lt;/p&gt;

&lt;h2&gt;
  
  
  Refactor Workflow and Enhance Abstraction for Simplified Resource Management
&lt;/h2&gt;

&lt;p&gt;Walrus 0.5 upgrades the UI, optimizing the management interaction for resources and their definitions, providing an intuitive and streamlined management experience.&lt;/p&gt;

&lt;p&gt;Services and resources are unified into a single resource view. In this view, various operations for resource details, such as managing resources and underlying components, viewing logs, executing to the terminal for debugging, and getting access URLs for services, are all supported.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7jp0oncbwomwcngigtq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7jp0oncbwomwcngigtq.png" alt="resource-view" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By integrating various management functions within a single pane of glass, you no longer need to switch between different windows, enhancing the overall experience and making it easier to handle complex resource management tasks.&lt;/p&gt;

&lt;p&gt;Resource Definition is the core of Walrus for building a unified abstraction layer on top of multi-cloud and hybrid infrastructure. Walrus 0.5 further enhances Resource Definitions by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enriching the built-in resource definitions and optimizing the creation of matching rules for resource definitions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Providing a management view of the resources associated with each resource definition, facilitating unified management for operations or architecture teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Allowing multiple resource definitions of the same type to be declared, enabling flexible rule matching for different teams.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
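&lt;p&gt;As a rough illustration of how a resource definition ties a resource type to matching rules, consider the hypothetical sketch below. The field names are illustrative only and do not reflect the exact Walrus schema; refer to the Walrus documentation for the real format.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical sketch of a resource definition with a matching rule.
# Field names are illustrative, not the actual Walrus schema.
name: dev-mysql
type: mysql            # the resource type this definition implements
matchingRules:
  - name: dev-environments
    selector:
      environmentType: development   # match resources created in dev environments
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;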

&lt;h2&gt;
  
  
  Polymorphic Support from Docker to Kubernetes
&lt;/h2&gt;

&lt;p&gt;Walrus 0.4 introduced the core feature of "configure once, run polymorphically." In Walrus 0.5, we've extended that magic to support both Docker and Kubernetes polymorphically.&lt;/p&gt;

&lt;p&gt;This feature allows users to create and execute applications in a Docker environment on their personal computer. &lt;strong&gt;The same YAML application definition can be used to deploy the application to both staging and production Kubernetes environments without requiring an understanding of the configuration differences between the two.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers can deploy applications locally in Docker using Walrus's built-in resource types, such as &lt;code&gt;containerservice&lt;/code&gt;, &lt;code&gt;mysql&lt;/code&gt;, &lt;code&gt;postgresql&lt;/code&gt;, and &lt;code&gt;redis&lt;/code&gt;. The same application definition can then be deployed, without any modification, to another environment backed by Kubernetes. Resource definitions can also be used to extend the corresponding resource types.&lt;/p&gt;

&lt;p&gt;If you're all about Docker, Walrus 0.5 has your back.&lt;/p&gt;

&lt;p&gt;During installation, Walrus 0.5 itself can be deployed using Docker alone. To install Walrus in a local Docker environment, developers can run the Walrus CLI command &lt;code&gt;walrus local install&lt;/code&gt;, which brings Walrus up as Docker containers. This means that Walrus can run without depending on an external Kubernetes cluster or the built-in K3s.&lt;/p&gt;
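&lt;p&gt;For example, once the Walrus CLI is installed, a Docker-only setup can be brought up with a single command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run Walrus as Docker containers, with no external Kubernetes or built-in K3s dependency.
walrus local install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;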

&lt;p&gt;During the application deployment phase, the new version includes Docker connectors and Docker application templates. These features assist developers in deploying applications to Docker environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Extending Deployment Flexibility
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Add OpenTofu as a Deployer Option
&lt;/h3&gt;

&lt;p&gt;OpenTofu was introduced as an open-source alternative to Terraform after Terraform's license changed. Walrus 0.5 provides formal support for &lt;a href="https://opentofu.org/blog/opentofu-is-going-ga/" rel="noopener noreferrer"&gt;OpenTofu 1.6.0&lt;/a&gt;, replacing &lt;a href="https://www.seal.io/resource/blog/integrate-opentofu" rel="noopener noreferrer"&gt;the previous manual switch&lt;/a&gt; option for Terraform.&lt;/p&gt;

&lt;p&gt;Navigate to &lt;code&gt;System Settings &amp;gt; Deployment Management &amp;gt; Basic Settings &amp;gt; Deployer Image&lt;/code&gt; to switch the default Deployer from &lt;code&gt;Terraform&lt;/code&gt; to &lt;code&gt;OpenTofu&lt;/code&gt; (image: &lt;code&gt;sealio/opentofu-deployer:v1.6.0-seal.1&lt;/code&gt;). This improvement provides more choice and helps avoid vendor lock-in.&lt;/p&gt;

&lt;h3&gt;
  
  
  Walrus File: Code-Defined Deployment of Resources
&lt;/h3&gt;

&lt;p&gt;The new version of Walrus introduces the Walrus File feature, which is a YAML file used to deploy Walrus resources.&lt;/p&gt;

&lt;p&gt;Similar to a Docker Compose file, this file provides a concise definition of application services and infrastructure resources. However, it can be used to create application services and resources for various multi-cloud and hybrid infrastructures, not limited to Docker or Kubernetes.&lt;/p&gt;
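&lt;p&gt;To give a feel for the format, here is a hypothetical minimal Walrus File describing a single container service. The attribute names are illustrative; see the Walrus File Hub (linked below) for authoritative examples.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical minimal Walrus File; attribute names are illustrative.
version: v1
resources:
  - name: web
    type: containerservice       # built-in resource type
    attributes:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;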

&lt;p&gt;The Walrus CLI &lt;code&gt;walrus apply/delete -f&lt;/code&gt; allows you to apply or delete Walrus resources described in the Walrus File. You can also integrate the Walrus File with existing CI/CD tools and processes using the Walrus CLI, making Walrus more flexible to meet various deployment requirements.&lt;/p&gt;
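&lt;p&gt;Assuming the resources are described in a file named &lt;code&gt;walrus.yaml&lt;/code&gt; (the filename is illustrative), the apply and clean-up flows look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create or update the resources described in the Walrus File.
walrus apply -f walrus.yaml

# Tear the same resources down again.
walrus delete -f walrus.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;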

&lt;p&gt;To make learning easier, we recommend using the Walrus File Hub as a reference. You can find relevant YAML examples at &lt;a href="https://github.com/seal-io/walrus-file-hub" rel="noopener noreferrer"&gt;https://github.com/seal-io/walrus-file-hub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  100% Open Source, Easy to Get Started
&lt;/h2&gt;

&lt;p&gt;We're true to the open-source spirit! Walrus is fully open source based on the Apache 2.0 license, and you can deploy it on a computer with Docker installed with just one command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker run -d --privileged --restart=always -p 80:80 -p 443:443 -p 30000-30100:30000-30100 --name walrus sealio/walrus:v0.5.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check us out on &lt;a href="https://github.com/seal-io/walrus" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and give us a star if you dig what we're doing!&lt;/p&gt;

&lt;p&gt;Cheers to simplified app management with Walrus 0.5!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>deployment</category>
      <category>opensource</category>
      <category>platformengineering</category>
    </item>
  </channel>
</rss>
