<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: StackFoss</title>
    <description>The latest articles on DEV Community by StackFoss (@stackfoss).</description>
    <link>https://dev.to/stackfoss</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1049098%2F43c8f97e-acba-42a1-ba77-968a00301e23.jpg</url>
      <title>DEV Community: StackFoss</title>
      <link>https://dev.to/stackfoss</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/stackfoss"/>
    <language>en</language>
    <item>
      <title>PCSX2: The Free and Open-Source PlayStation 2 Emulator</title>
      <dc:creator>StackFoss</dc:creator>
      <pubDate>Sun, 25 Jun 2023 21:52:43 +0000</pubDate>
      <link>https://dev.to/stackfoss/pcsx2-the-free-and-open-source-playstation-2-emulator-3d39</link>
      <guid>https://dev.to/stackfoss/pcsx2-the-free-and-open-source-playstation-2-emulator-3d39</guid>
      <description>&lt;h1&gt;
  
  
  PCSX2: The Free and Open-Source PlayStation 2 Emulator
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--my-T1g7Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://pcsx2.net/assets/images/feature-AetherSX2sm-b444ca8e9630e6afe9d81010a47054de.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--my-T1g7Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://pcsx2.net/assets/images/feature-AetherSX2sm-b444ca8e9630e6afe9d81010a47054de.webp" alt="PCSX2" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PCSX2 is a remarkable piece of software that allows you to play PlayStation 2 (PS2) games on your PC. As a free and open-source emulator, PCSX2 aims to replicate the PS2's hardware functionality using a combination of interpreters, recompilers, and a virtual machine. With PCSX2, you can enjoy a vast library of PS2 games, including popular titles like Final Fantasy X and Devil May Cry 3, right on your computer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Details
&lt;/h2&gt;

&lt;p&gt;The PCSX2 project has been in development for over twenty years. Initially, early versions of PCSX2 could only run a few public domain game demos. However, thanks to the dedication and hard work of the developers, PCSX2 has evolved into a mature emulator capable of running most PS2 games at full speed. With more than 2500 titles tested, PCSX2 has become a go-to choice for PS2 enthusiasts worldwide.&lt;/p&gt;

&lt;p&gt;To stay up-to-date with game compatibility, you can visit the official &lt;a href="https://pcsx2.net/compat/"&gt;PCSX2 compatibility list&lt;/a&gt; or seek assistance from the vibrant community in the &lt;a href="https://forums.pcsx2.net/"&gt;official forums&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The latest stable version of PCSX2 is 1.6.0, offering improved performance, compatibility, and a range of new features. You can download both stable and development builds of PCSX2 from the &lt;a href="https://pcsx2.net/downloads/"&gt;official website&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  System Requirements
&lt;/h2&gt;

&lt;p&gt;To enjoy smooth gameplay and optimal performance with PCSX2, it's essential to ensure your system meets the following minimum requirements:&lt;/p&gt;

&lt;h3&gt;
  
  
  Minimum System Requirements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Operating System:

&lt;ul&gt;
&lt;li&gt;Windows 10 (version 1809 or later) (64-bit)&lt;/li&gt;
&lt;li&gt;Ubuntu 20.04/Debian or newer, Arch Linux, or other distro (64-bit)&lt;/li&gt;
&lt;li&gt;macOS 10.14&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;CPU:

&lt;ul&gt;
&lt;li&gt;Supports SSE4.1&lt;/li&gt;
&lt;li&gt;PassMark Single Thread Performance rating near or greater than 1800&lt;/li&gt;
&lt;li&gt;Two physical cores with hyperthreading&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;GPU:

&lt;ul&gt;
&lt;li&gt;Direct3D10 support&lt;/li&gt;
&lt;li&gt;OpenGL 3.x support&lt;/li&gt;
&lt;li&gt;Vulkan 1.1 support&lt;/li&gt;
&lt;li&gt;Metal support&lt;/li&gt;
&lt;li&gt;PassMark G3D Mark rating around 3000 (e.g., GeForce GTX 750, Radeon RX 560, Intel Arc A380)&lt;/li&gt;
&lt;li&gt;2 GB Video Memory&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;RAM: 4 GB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Please note that the CPU requirements may vary depending on the complexity of the games you intend to play. CPU-intensive games may require a higher-rated CPU. You can refer to the PCSX2 wiki and forums for specific CPU recommendations based on game requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Recommended System Requirements
&lt;/h3&gt;

&lt;p&gt;For an optimal gaming experience with PCSX2, it is recommended to have the following specifications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Operating System:

&lt;ul&gt;
&lt;li&gt;Windows 10 (version 1809 or later) (64-bit)&lt;/li&gt;
&lt;li&gt;Ubuntu 22.04/Debian or newer, Arch Linux, or other distro (64-bit)&lt;/li&gt;
&lt;li&gt;macOS 10.14&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;CPU:

&lt;ul&gt;
&lt;li&gt;Supports AVX2&lt;/li&gt;
&lt;li&gt;PassMark Single Thread Performance rating near or greater than 2600&lt;/li&gt;
&lt;li&gt;Four physical cores, with or without hyperthreading&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;GPU:

&lt;ul&gt;
&lt;li&gt;Direct3D12 support&lt;/li&gt;
&lt;li&gt;OpenGL 4.6 support&lt;/li&gt;
&lt;li&gt;Vulkan 1.3 support&lt;/li&gt;
&lt;li&gt;Metal support&lt;/li&gt;
&lt;li&gt;PassMark G3D Mark rating around 6000 (e.g., GeForce GTX 1650, Radeon RX 570)&lt;/li&gt;
&lt;li&gt;4 GB Video Memory&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;RAM: 8 GB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The recommended GPU specifications are based on internal rendering at approximately 1080p resolution. Higher resolutions may require more powerful graphics cards.&lt;/p&gt;

&lt;p&gt;It's worth noting that both the CPU and GPU requirements are game-dependent. Some games may have higher demands than others, especially those labeled as CPU or GPU intensive. For a quick reference on game requirements, you can consult the PCSX2 wiki and forums.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Notes
&lt;/h2&gt;

&lt;p&gt;To ensure a smooth experience with PCSX2, consider the following technical notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On Windows, you need to install the &lt;a href="https://support.microsoft.com/en-us/help/2977003/"&gt;Visual C++ 2019 x64 Redistributables&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Support for Windows XP and Direct3D9 was discontinued after stable release 1.4.0.&lt;/li&gt;
&lt;li&gt;Support for Windows 7, Windows 8.0, and Windows 8.1 was discontinued after stable release 1.6.0.&lt;/li&gt;
&lt;li&gt;32-bit and wxWidgets support were dropped after stable release 1.6.0, with the wxWidgets code being entirely removed on December 25th, 2022.&lt;/li&gt;
&lt;li&gt;Keeping your operating system and drivers up to date is crucial for the best experience. It is also recommended to have a newer GPU with the latest supported drivers.&lt;/li&gt;
&lt;li&gt;To legally use PCSX2, you need to obtain a BIOS dump extracted from a PS2 console that you own. For detailed instructions on acquiring the BIOS, refer to &lt;a href="https://github.com/PCSX2/pcsx2/blob/development/Docs/PCSX2_FAQ.md#question-13-where-do-i-get-a-ps2-bios"&gt;PCSX2's FAQ&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;By default, PCSX2 utilizes two CPU cores for emulation. The MTVU speed hack allows the use of a third core, which is compatible with most games and can provide a significant speed boost on CPUs with three or more cores. However, GS-limited games or CPUs with fewer than two cores may experience a slowdown. Software renderers utilize additional threads for rendering and require higher core counts to run efficiently.&lt;/li&gt;
&lt;li&gt;The requirements benchmarks provided are based on Passmark's CPU benchmarking software, specifically the "Single Thread Rating" (STR) statistic. You can compare your CPU's performance to PCSX2's requirements by checking its rating on &lt;a href="https://cpubenchmark.net"&gt;Passmark's CPU benchmark website&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Vulkan support requires an up-to-date GPU driver. Older drivers may cause graphical issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more information and updates, visit the official &lt;a href="https://pcsx2.net/"&gt;PCSX2 website&lt;/a&gt;. Get ready to relive your favorite PS2 games on your PC with PCSX2, the ultimate PlayStation 2 emulator.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>pcsx2</category>
      <category>playstation</category>
      <category>emulator</category>
    </item>
    <item>
      <title>Embedchain: Building LLM-Powered Bots with Ease</title>
      <dc:creator>StackFoss</dc:creator>
      <pubDate>Sun, 25 Jun 2023 21:31:41 +0000</pubDate>
      <link>https://dev.to/stackfoss/embedchain-building-llm-powered-bots-with-ease-156o</link>
      <guid>https://dev.to/stackfoss/embedchain-building-llm-powered-bots-with-ease-156o</guid>
      <description>&lt;h1&gt;
  
  
  Embedchain: Building LLM-Powered Bots with Ease
&lt;/h1&gt;

&lt;p&gt;Embedchain is a powerful framework designed to simplify the process of creating bots powered by large language models (LLMs) over any dataset. It provides an abstraction layer that handles dataset loading, chunking, embedding creation, and storage in a vector database.&lt;/p&gt;

&lt;p&gt;By using the &lt;code&gt;.add&lt;/code&gt; and &lt;code&gt;.add_local&lt;/code&gt; functions, you can easily add single or multiple datasets to your bot. Then, you can utilize the &lt;code&gt;.query&lt;/code&gt; function to retrieve answers from the added datasets.&lt;/p&gt;

&lt;p&gt;Let's say you want to create a bot based on Naval Ravikant, incorporating one YouTube video, one book in PDF format, two blog posts, and a question and answer pair. With Embedchain, all you need to do is provide the links to the videos, PDF, and blog posts, as well as the Q&amp;amp;A pair. Embedchain will handle the rest, creating a bot tailored to your specifications.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from embedchain import App

naval_chat_bot = App()

# Embed Online Resources
naval_chat_bot.add("youtube_video", "https://www.youtube.com/watch?v=3qHkcs3kG44")
naval_chat_bot.add("pdf_file", "https://navalmanack.s3.amazonaws.com/Eric-Jorgenson_The-Almanack-of-Naval-Ravikant_Final.pdf")
naval_chat_bot.add("web_page", "https://nav.al/feedback")
naval_chat_bot.add("web_page", "https://nav.al/agi")

# Embed Local Resources
naval_chat_bot.add_local("qna_pair", ("Who is Naval Ravikant?", "Naval Ravikant is an Indian-American entrepreneur and investor."))

naval_chat_bot.query("What unique capacity does Naval argue humans possess when it comes to understanding explanations or concepts?")
# Answer: Naval argues that humans possess the unique capacity to understand explanations or concepts to the maximum extent possible in this physical reality.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;First, make sure you have the Embedchain package installed. If not, you can install it via &lt;code&gt;pip&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install embedchain

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Usage
&lt;/h3&gt;

&lt;p&gt;To get started with Embedchain, you'll need an OpenAI account and an API key. If you don't have an API key, you can create one by visiting &lt;a href="https://platform.openai.com/account/api-keys"&gt;this link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you have your API key, set it as an environment variable named &lt;code&gt;OPENAI_API_KEY&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
os.environ["OPENAI_API_KEY"] = "sk-xxxx"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, import the &lt;code&gt;App&lt;/code&gt; class from Embedchain and use the &lt;code&gt;.add&lt;/code&gt; function to add datasets to your bot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from embedchain import App

naval_chat_bot = App()

# Embed Online Resources
naval_chat_bot.add("youtube_video", "https://www.youtube.com/watch?v=3qHkcs3kG44")
naval_chat_bot.add("pdf_file", "https://navalmanack.s3.amazonaws.com/Eric-Jorgenson_The-Almanack-of-Naval-Ravikant_Final.pdf")
naval_chat_bot.add("web_page", "https://nav.al/feedback")
naval_chat_bot.add("web_page", "https://nav.al/agi")

# Embed Local Resources
naval_chat_bot.add_local("qna_pair", ("Who is Naval Ravikant?", "Naval Ravikant is an Indian-American entrepreneur and investor."))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your script or app already uses another &lt;code&gt;App&lt;/code&gt;, you can alias the import as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from embedchain import App as EmbedChainApp

# or

from embedchain import App as ECApp

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that your app is created, you can use the &lt;code&gt;.query&lt;/code&gt; function to retrieve answers for any query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print(naval_chat_bot.query("What unique capacity does Naval argue humans possess when it comes to understanding explanations or concepts?"))
# Answer: Naval argues that humans possess the unique capacity to understand explanations or concepts to the maximum extent possible in this physical reality.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Supported Formats
&lt;/h3&gt;

&lt;p&gt;Embedchain supports the following formats for dataset embedding:&lt;/p&gt;

&lt;h4&gt;
  
  
  YouTube Video
&lt;/h4&gt;

&lt;p&gt;To add a YouTube video to your app, use the data type (first argument to &lt;code&gt;.add&lt;/code&gt;) as &lt;code&gt;"youtube_video"&lt;/code&gt;. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.add('youtube_video', 'a_valid_youtube_url_here')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  PDF File
&lt;/h4&gt;

&lt;p&gt;To add a PDF file, use the data type as &lt;code&gt;"pdf_file"&lt;/code&gt;. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.add('pdf_file', 'a_valid_url_where_pdf_file_can_be_accessed')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that password-protected PDFs are not supported.&lt;/p&gt;

&lt;h4&gt;
  
  
  Web Page
&lt;/h4&gt;

&lt;p&gt;To add a web page, use the data type as &lt;code&gt;"web_page"&lt;/code&gt;. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.add('web_page', 'a_valid_web_page_url')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Text
&lt;/h4&gt;

&lt;p&gt;To supply your own text, use the data type as &lt;code&gt;"text"&lt;/code&gt; and enter a string. The text is not processed, making it highly versatile. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.add_local('text', 'Seek wealth, not money or status. Wealth is having assets that earn while you sleep. Money is how we transfer time and wealth. Status is your place in the social hierarchy.')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: this data type is not used in the larger code snippets above because a realistic use would supply a whole paragraph or file, which would be too long to fit in the examples.&lt;/p&gt;

&lt;h4&gt;
  
  
  Q&amp;amp;A Pair
&lt;/h4&gt;

&lt;p&gt;To supply your own question and answer pair, use the data type as &lt;code&gt;"qna_pair"&lt;/code&gt; and enter a tuple. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.add_local('qna_pair', ("Question", "Answer"))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  More Formats Coming Soon
&lt;/h4&gt;

&lt;p&gt;If you want to add any other format, please create an &lt;a href="https://github.com/embedchain/embedchain/issues"&gt;issue&lt;/a&gt;, and we will consider adding it to the list of supported formats.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does It Work?
&lt;/h2&gt;

&lt;p&gt;Creating a chat bot based on a dataset involves several steps, each with its own nuances. Embedchain simplifies this process and provides a straightforward interface to create bots over any dataset.&lt;/p&gt;

&lt;p&gt;The steps involved in creating and querying a bot are as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Load the Data&lt;/strong&gt; : Load the dataset into the bot.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Meaningful Chunks&lt;/strong&gt; : Break the data into meaningful chunks, determining the appropriate chunk size.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Embeddings&lt;/strong&gt; : Generate embeddings for each chunk using an embedding model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store Chunks in a Vector Database&lt;/strong&gt; : Store the chunks, along with their embeddings, in a vector database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query the Bot&lt;/strong&gt; : When a user asks a query, create an embedding for the query and retrieve similar documents from the vector database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Obtain the Answer&lt;/strong&gt; : Pass the similar documents as context to the LLM and obtain the final answer.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Embedchain takes care of these steps and handles the underlying complexities. It provides a simplified interface to create bots over any dataset, allowing you to focus on building and deploying your application quickly.&lt;/p&gt;
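&lt;p&gt;The six steps above can be sketched in plain Python. This is a toy stand-in, not Embedchain's actual API: the bag-of-words "embedding" and in-memory store merely play the roles of OpenAI's Ada model and Chroma, and all names here are illustrative.&lt;/p&gt;

```python
# Toy sketch of the chunk -> embed -> store -> retrieve pipeline.
# All names are illustrative; Embedchain's real components differ.
import math
import re
from collections import Counter

def chunk(text, size=40):
    # Step 2: break the data into fixed-size word chunks.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    # Step 3: a stand-in embedding (bag of words over lowercase tokens).
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorDB:
    # Step 4: store chunks alongside their embeddings.
    def __init__(self):
        self.rows = []

    def add(self, text):
        # Step 1: load the data, then chunk and embed it.
        for c in chunk(text):
            self.rows.append((embed(c), c))

    def retrieve(self, query, k=1):
        # Step 5: embed the query and return the most similar chunks.
        q = embed(query)
        ranked = sorted(self.rows, key=lambda r: cosine(q, r[0]), reverse=True)
        return [c for _, c in ranked[:k]]

db = ToyVectorDB()
db.add("Wealth is having assets that earn while you sleep.")
db.add("Status is your place in the social hierarchy.")
context = db.retrieve("What is wealth?")[0]
# Step 6 would pass `context` to an LLM as grounding; here we just show it.
print(context)
```

&lt;p&gt;In the real framework, step 6 sends the retrieved chunks plus the user's query to the ChatGPT API, which composes the final answer.&lt;/p&gt;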

&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;p&gt;Embedchain is built on the following technology stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/hwchase17/langchain"&gt;Langchain&lt;/a&gt;: An LLM (large language model) framework used to load, chunk, and index data.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://platform.openai.com/docs/guides/embeddings"&gt;OpenAI's Ada embedding model&lt;/a&gt;: An embedding model provided by OpenAI used to generate embeddings.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://platform.openai.com/docs/guides/gpt/chat-completions-api"&gt;OpenAI's ChatGPT API&lt;/a&gt;: An LLM provided by OpenAI used to generate answers given a context.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/chroma-core/chroma"&gt;Chroma&lt;/a&gt;: A vector database used to store embeddings.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Author
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Taranjeet Singh (&lt;a href="https://twitter.com/taranjeetio"&gt;@taranjeetio&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Embedchain simplifies the process of creating language-powered bots over any dataset. With its easy-to-use framework, you can quickly build bots that leverage the power of language models to provide answers and insights. Whether you want to create a chatbot for a specific domain or build a knowledge base for a particular topic, Embedchain can help you streamline the development process. Get started with Embedchain today and unlock the potential of language-powered bots in your applications.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>bots</category>
      <category>llm</category>
      <category>embedchain</category>
    </item>
    <item>
      <title>Spacedrive: A File Explorer from the Future</title>
      <dc:creator>StackFoss</dc:creator>
      <pubDate>Sun, 25 Jun 2023 21:14:37 +0000</pubDate>
      <link>https://dev.to/stackfoss/spacedrive-a-file-explorer-from-the-future-1d27</link>
      <guid>https://dev.to/stackfoss/spacedrive-a-file-explorer-from-the-future-1d27</guid>
      <description>&lt;h1&gt;
  
  
  Spacedrive: A File Explorer from the Future
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S1XcmtQW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/spacedriveapp/spacedrive/raw/main/packages/assets/images/AppLogo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S1XcmtQW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/spacedriveapp/spacedrive/raw/main/packages/assets/images/AppLogo.png" alt="spacedrive" width="512" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Spacedrive is an open source cross-platform file manager powered by a virtual distributed filesystem (VDFS) written in Rust. It aims to provide a secure and intuitive file management experience, combining the storage capacity and processing power of multiple devices into one personal distributed cloud.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt; : Spacedrive is currently under active development, and most of the listed features are still experimental and subject to change.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Virtual Distributed Filesystem (VDFS)&lt;/strong&gt;: Spacedrive utilizes a VDFS to organize files across multiple devices and storage layers. It maintains a virtual index of all storage locations and synchronizes the database between clients in real-time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-Platform Support&lt;/strong&gt; : Spacedrive is designed to work seamlessly across various operating systems, including macOS, Windows, Linux, iOS, watchOS, and Android.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secure and Personal Cloud&lt;/strong&gt; : With Spacedrive, users can create their own personal distributed cloud, combining the storage capacity and processing power of their devices. This provides increased security and ownership over their data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Free File Management Experience&lt;/strong&gt; : Spacedrive offers a free file management experience like no other, catering to independent creatives, hoarders, and anyone who wants to own their digital footprint.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Screenshots
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qd8rscm_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/spacedriveapp/spacedrive/main/apps/landing/public/app.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qd8rscm_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/spacedriveapp/spacedrive/main/apps/landing/public/app.png" alt="spacedriveapp" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Community and Development
&lt;/h2&gt;

&lt;p&gt;Join the Spacedrive community and stay updated with the latest developments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://discord.gg/gTaF2Z44f5"&gt;Discord&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://twitter.com/spacedriveapp"&gt;Twitter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://instagram.com/spacedriveapp"&gt;Instagram&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  License
&lt;/h2&gt;

&lt;p&gt;Spacedrive is licensed under the AGPL v3.0. For more information, refer to the &lt;a href="https://www.gnu.org/licenses/agpl-3.0"&gt;GNU Affero General Public License v3.0&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Development Architecture
&lt;/h2&gt;

&lt;p&gt;Spacedrive uses the "PRRTT" stack (Prisma, Rust, React, TypeScript, Tauri) for its development architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prisma&lt;/strong&gt; : Prisma is used on the front-end, thanks to &lt;a href="https://github.com/brendonovich/prisma-client-rust"&gt;prisma-client-rust&lt;/a&gt; by &lt;a href="https://github.com/brendonovich"&gt;Brendonovich&lt;/a&gt;. It provides access to the Prisma migration CLI and syntax for the schema. The application bundles with the Prisma query engine and codegen, offering a powerful Rust API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tauri&lt;/strong&gt; : Spacedrive uses Tauri to create a pure Rust native OS webview, resulting in a smaller bundle size and lower memory usage compared to traditional Electron apps. Tauri contributes to a more native feel, especially on macOS, thanks to Safari's close integration with the operating system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;rspc&lt;/strong&gt; : The project utilizes &lt;a href="https://rspc.dev"&gt;rspc&lt;/a&gt;, which allows defining functions in Rust and calling them on the TypeScript frontend in a typesafe manner, reducing the chance of bugs making it into production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;sdcore&lt;/strong&gt; : The core of Spacedrive, referred to as &lt;code&gt;sdcore&lt;/code&gt;, is written in pure Rust and handles filesystem, database, and networking logic. It can be deployed in a variety of host applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Monorepo Structure
&lt;/h2&gt;

&lt;p&gt;The Spacedrive repository follows a monorepo structure with the following components:&lt;/p&gt;

&lt;h3&gt;
  
  
  Apps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;desktop&lt;/code&gt;: A Tauri app for desktop platforms.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mobile&lt;/code&gt;: A React Native app for mobile platforms.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;web&lt;/code&gt;: A React web app.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;landing&lt;/code&gt;: A React app using Vite SSR &amp;amp; Vite pages for the landing page.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Core
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;core&lt;/code&gt;: The Rust core, known as &lt;code&gt;sdcore&lt;/code&gt;, containing filesystem, database, and networking logic. It can be deployed in various host applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Packages
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;client&lt;/code&gt;: A TypeScript client library that handles data flow via RPC between the UI and the Rust core.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ui&lt;/code&gt;: A shared React component library.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;interface&lt;/code&gt;: The complete user interface in React, used by the &lt;code&gt;desktop&lt;/code&gt;, &lt;code&gt;web&lt;/code&gt;, and &lt;code&gt;landing&lt;/code&gt; apps.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;config&lt;/code&gt;: ESLint configurations, including &lt;code&gt;eslint-config-next&lt;/code&gt;, &lt;code&gt;eslint-config-prettier&lt;/code&gt;, and all &lt;code&gt;tsconfig.json&lt;/code&gt; files used throughout the monorepo.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;macos&lt;/code&gt;: A Swift native binary for macOS system extensions.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ios&lt;/code&gt;: A planned Swift native binary for iOS.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;windows&lt;/code&gt;: A planned C# native binary for Windows.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;android&lt;/code&gt;: A planned Kotlin native binary for Android.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For detailed instructions on how to install and contribute to Spacedrive, please refer to the &lt;a href="https://github.com/spacedriveapp/spacedrive/blob/main/CONTRIBUTING.md"&gt;contributing guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a VDFS?
&lt;/h2&gt;

&lt;p&gt;A VDFS (Virtual Distributed Filesystem) is a filesystem designed to work across a variety of storage layers. It provides a uniform API for manipulating and accessing content across multiple devices, eliminating the limitations of a single machine. A VDFS maintains a virtual index of all storage locations, synchronizing the database between clients in real-time. It uses Content-Addressable Storage (CAS) to uniquely identify files while keeping a record of logical file paths relative to the storage locations.&lt;/p&gt;
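&lt;p&gt;The CAS idea can be illustrated in a few lines of Python (purely a sketch; Spacedrive's real index lives in the Rust &lt;code&gt;sdcore&lt;/code&gt; and is far richer): a file's identity is a hash of its content, while a separate table records the logical paths where that content lives on each device.&lt;/p&gt;

```python
# Illustrative content-addressable storage (CAS) index; names are hypothetical,
# not Spacedrive's actual data model.
import hashlib

class ToyIndex:
    def __init__(self):
        self.objects = {}  # maps content hash to raw bytes
        self.paths = {}    # maps (device, logical path) to content hash

    def add_file(self, device, path, data):
        # Identity comes from the content hash, not from the path.
        digest = hashlib.sha256(data).hexdigest()
        self.objects[digest] = data
        self.paths[(device, path)] = digest
        return digest

    def locations_of(self, digest):
        # One piece of content may exist at many paths across devices.
        return [loc for loc, d in self.paths.items() if d == digest]

idx = ToyIndex()
h1 = idx.add_file("laptop", "/photos/cat.jpg", b"...jpeg bytes...")
h2 = idx.add_file("phone", "/DCIM/IMG_1.jpg", b"...jpeg bytes...")
print(h1 == h2)                   # identical content yields the same identity
print(len(idx.locations_of(h1)))  # the same object is indexed at two locations
```

&lt;p&gt;This is what lets a VDFS deduplicate files and track copies across devices: the hash stays stable no matter where the file moves.&lt;/p&gt;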

&lt;p&gt;The concept of a VDFS was introduced in a UC Berkeley &lt;a href="https://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-29.pdf"&gt;paper&lt;/a&gt; by Haoyuan Li. While the paper focuses on its use in cloud computing, the underlying concepts can be applied to open consumer software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Motivation
&lt;/h2&gt;

&lt;p&gt;Spacedrive aims to address the challenges of managing multiple cloud accounts, unbacked drives, and data at risk of loss. Traditional cloud services have limited capacity and lack interoperability between services and operating systems. Spacedrive envisions a future where photo albums and data are not tied to device ecosystems or harvested for advertising. It aims to provide an OS-agnostic, permanent, and personally owned file management solution. By leveraging open-source technology, Spacedrive aims to empower users to retain absolute control over the data that defines their lives, with unlimited scalability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Roadmap
&lt;/h2&gt;

&lt;p&gt;To explore the planned features of Spacedrive, visit the &lt;a href="https://spacedrive.com/roadmap"&gt;spacedrive.com/roadmap&lt;/a&gt; page for a comprehensive list.&lt;/p&gt;

&lt;p&gt;Spacedrive is an ambitious project with a roadmap that outlines upcoming enhancements and features. As it continues to evolve, Spacedrive strives to deliver an unparalleled file management experience for users.&lt;/p&gt;




&lt;p&gt;Spacedrive is an exciting open-source project that envisions a future where file management is seamless, secure, and user-centric. With its unique features, cross-platform support, and focus on user ownership, Spacedrive aims to revolutionize how we interact with our digital data. Join the community and be a part of shaping the future of file management.&lt;/p&gt;

</description>
      <category>spacedrive</category>
      <category>opensource</category>
      <category>fileexplorer</category>
    </item>
    <item>
      <title>Skateshop13: An Open Source E-Commerce Skateshop Built with Next.js 13</title>
      <dc:creator>StackFoss</dc:creator>
      <pubDate>Sun, 25 Jun 2023 20:56:47 +0000</pubDate>
      <link>https://dev.to/stackfoss/skateshop13-an-open-source-e-commerce-skateshop-built-with-nextjs-13-2ik8</link>
      <guid>https://dev.to/stackfoss/skateshop13-an-open-source-e-commerce-skateshop-built-with-nextjs-13-2ik8</guid>
      <description>&lt;h1&gt;
  
  
  Skateshop13: An Open Source E-Commerce Skateshop Built with Next.js 13
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ti3DGO3p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/sadmann7/skateshop/raw/main/public/screenshot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ti3DGO3p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/sadmann7/skateshop/raw/main/public/screenshot.png" alt="Skateshop13" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Skateshop13 is an open source e-commerce skateshop built with the latest features of Next.js 13. This project utilizes the &lt;code&gt;create-t3-app&lt;/code&gt; bootstrap and aims to provide a modern and efficient platform for skateshop owners and customers. Although it is still in development, Skateshop13 showcases the potential of new technologies such as server actions and drizzle ORM. However, it is important to note that these technologies are subject to change, and the project is not yet suitable for production use.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Warning&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This project is still in development and is not ready for production use.&lt;/p&gt;

&lt;p&gt;It uses new technologies (server actions, drizzle ORM) which are subject to change and may break your application.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;p&gt;Skateshop13 leverages the following technologies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://nextjs.org"&gt;Next.js&lt;/a&gt;: Next.js is a popular React framework for building server-rendered and static websites. It offers powerful features such as automatic code splitting, server-side rendering, and efficient caching.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://clerk.dev"&gt;Clerk Auth&lt;/a&gt;: Clerk Auth provides a seamless and secure authentication solution for web applications. Skateshop13 utilizes Clerk Auth for user authentication and authorization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://orm.drizzle.team"&gt;Drizzle ORM&lt;/a&gt;: Drizzle ORM is a modern object-relational mapper (ORM) that simplifies database interactions in Next.js applications. Skateshop13 integrates Drizzle ORM to manage database operations efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://tailwindcss.com"&gt;Tailwind CSS&lt;/a&gt;: Tailwind CSS is a utility-first CSS framework that enables developers to rapidly build custom designs with reusable components. Skateshop13 utilizes Tailwind CSS for its flexible and customizable styling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://uploadthing.com"&gt;uploadthing&lt;/a&gt;: uploadthing is a file upload service used by Skateshop13 to handle file uploads efficiently and securely.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://stripe.com"&gt;Stripe&lt;/a&gt;: Stripe is a popular payment processing platform. Skateshop13 integrates Stripe for subscription management, payment processing, and billing functionality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;p&gt;Skateshop13 offers the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Authentication with Clerk&lt;/strong&gt; : Skateshop13 provides a seamless authentication experience powered by Clerk Auth. Users can create accounts, log in, and enjoy personalized shopping experiences.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;File Uploads with uploadthing&lt;/strong&gt; : Skateshop13 leverages uploadthing to enable users to upload and manage files securely. This feature is especially useful for users who want to upload images or other files related to their skateshop products.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Subscription, Payment, and Billing with Stripe&lt;/strong&gt; : Skateshop13 integrates with Stripe to enable subscription-based services, handle payments securely, and manage billing for skateshop owners and customers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Storefront with Products and Categories&lt;/strong&gt; : Skateshop13 offers a fully functional storefront where skateshop owners can showcase their products. Products are organized into categories, allowing users to easily browse and find items of interest.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Seller and Customer Workflows&lt;/strong&gt; : Skateshop13 supports both seller and customer workflows. Skateshop owners can manage their stores, products, orders, subscriptions, and payments through the admin dashboard, while customers can browse products, place orders, and manage their subscriptions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Admin Dashboard&lt;/strong&gt; : Skateshop13 provides an intuitive admin dashboard that empowers skateshop owners to manage various aspects of their business. From the dashboard, owners can manage stores, products, orders, subscriptions, and payments efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;To install and set up Skateshop13 on your local environment, follow these steps:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Clone the repository
&lt;/h3&gt;

&lt;p&gt;Clone the Skateshop13 repository by running the following command in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/sadmann7/skateshop

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Install dependencies
&lt;/h3&gt;

&lt;p&gt;Navigate to the project's root directory and install the required dependencies by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pnpm install

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Create a &lt;code&gt;.env&lt;/code&gt; file
&lt;/h3&gt;

&lt;p&gt;Create a new file called &lt;code&gt;.env&lt;/code&gt; in the project's root directory. Copy the environment variables from the &lt;code&gt;.env.example&lt;/code&gt; file and set their values as appropriate for your environment.&lt;/p&gt;
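&lt;p&gt;On a Unix-like shell, this step can be scripted as follows. This is a minimal sketch, not part of the project's own docs; the fallback &lt;code&gt;touch&lt;/code&gt; is only there so the command is safe to run even if &lt;code&gt;.env.example&lt;/code&gt; is absent:&lt;/p&gt;

```shell
# copy the repo's template, then open .env and fill in each value;
# falls back to an empty .env if .env.example is missing
cp .env.example .env 2>/dev/null || touch .env
```

&lt;p&gt;The copied values are placeholders until you replace them with credentials for your own environment.&lt;/p&gt;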

&lt;h3&gt;
  
  
  4. Run the application
&lt;/h3&gt;

&lt;p&gt;Start the Skateshop13 application by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pnpm run dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Push the database
&lt;/h3&gt;

&lt;p&gt;To initialize the database, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pnpm run db:push

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Listen for Stripe events
&lt;/h3&gt;

&lt;p&gt;To enable the handling of Stripe events, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pnpm run stripe:listen

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deployment
&lt;/h2&gt;

&lt;p&gt;If you wish to deploy Skateshop13, refer to the deployment guides for &lt;a href="https://create.t3.gg/en/deployment/vercel"&gt;Vercel&lt;/a&gt;, &lt;a href="https://create.t3.gg/en/deployment/netlify"&gt;Netlify&lt;/a&gt;, and &lt;a href="https://create.t3.gg/en/deployment/docker"&gt;Docker&lt;/a&gt; for detailed instructions.&lt;/p&gt;

&lt;p&gt;Deploying Skateshop13 on these platforms will allow you to make your e-commerce skateshop accessible to a wider audience while taking advantage of the benefits provided by each deployment option.&lt;/p&gt;

&lt;p&gt;Note that before deploying, ensure that you have properly configured the required environment variables and have followed any additional deployment-specific steps outlined in the respective guides.&lt;/p&gt;

&lt;p&gt;Skateshop13 is a promising open source project that combines the power of Next.js 13 with various modern technologies to deliver a robust and feature-rich e-commerce platform for skateshops. While still under development, it already offers impressive functionalities for both skateshop owners and customers. With further enhancements and refinements, Skateshop13 has the potential to revolutionize the way skateshops operate and serve their customers in the digital age.&lt;/p&gt;

</description>
      <category>ecommerce</category>
      <category>skateshop13</category>
      <category>nextjs</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Cybernetically Enhanced Web Apps: Svelte</title>
      <dc:creator>StackFoss</dc:creator>
      <pubDate>Sat, 24 Jun 2023 13:46:25 +0000</pubDate>
      <link>https://dev.to/stackfoss/cybernetically-enhanced-web-apps-svelte-16ge</link>
      <guid>https://dev.to/stackfoss/cybernetically-enhanced-web-apps-svelte-16ge</guid>
      <description>&lt;h1&gt;
  
  
  Cybernetically Enhanced Web Apps: Svelte
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TiZbqdod--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://sveltejs.github.io/assets/banner.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TiZbqdod--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://sveltejs.github.io/assets/banner.png" alt="Svelte Banner" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="//LICENSE.md"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YQa_3JNu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://img.shields.io/npm/l/svelte.svg" alt="license" width="78" height="20"&gt;&lt;/a&gt; &lt;a href="https://svelte.dev/chat"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HCiPnrnx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://img.shields.io/discord/457912077277855764%3Flabel%3Dchat%26logo%3Ddiscord" alt="Chat" width="125" height="20"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the world of web development, innovation and efficiency are key factors in creating successful applications. Svelte, a cutting-edge technology, offers a fresh approach to building web applications. By utilizing a compiler, Svelte transforms declarative components into highly efficient JavaScript code that updates the Document Object Model (DOM) with surgical precision. This article explores the features, benefits, and contributions surrounding Svelte.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Svelte?
&lt;/h2&gt;

&lt;p&gt;Svelte is a revolutionary tool that simplifies the process of web application development. Unlike traditional frameworks or libraries that rely on runtime interpretation, Svelte takes a different approach. It compiles components into optimized JavaScript code during the build process, resulting in lightweight and performant applications.&lt;/p&gt;

&lt;p&gt;The fundamental principle of Svelte is that it shifts the heavy lifting from the browser to the build process. Instead of shipping a framework to the client and interpreting it at runtime, Svelte compiles the application code into small and efficient JavaScript files. This approach eliminates the need for a runtime framework, resulting in faster loading times and improved overall performance.&lt;/p&gt;

&lt;p&gt;To learn more about Svelte, visit the &lt;a href="https://svelte.dev"&gt;official Svelte website&lt;/a&gt; or join the vibrant &lt;a href="https://svelte.dev/chat"&gt;Discord chatroom&lt;/a&gt; where you can engage with the Svelte community.&lt;/p&gt;

&lt;h2&gt;
  
  
  Supporting Svelte
&lt;/h2&gt;

&lt;p&gt;Svelte is an open-source project released under the MIT license. The ongoing development and maintenance of Svelte are made possible by the hard work and dedication of its volunteers. If you appreciate the benefits of Svelte and would like to support its continued growth, consider becoming a backer on &lt;a href="https://opencollective.com/svelte"&gt;Open Collective&lt;/a&gt;. Donations made through Open Collective contribute to expenses such as hosting costs and may even directly support the development of Svelte.&lt;/p&gt;

&lt;h2&gt;
  
  
  Roadmap
&lt;/h2&gt;

&lt;p&gt;To gain insights into the future direction of Svelte, you can explore the project's &lt;a href="https://svelte.dev/roadmap"&gt;roadmap&lt;/a&gt;. The roadmap provides an overview of the features and enhancements planned for upcoming releases, offering an exciting glimpse into the evolution of Svelte.&lt;/p&gt;

&lt;h2&gt;
  
  
  Contributing to Svelte
&lt;/h2&gt;

&lt;p&gt;Svelte welcomes contributions from the community. If you're interested in getting involved, please refer to the &lt;a href="https://github.com/sveltejs/svelte/blob/main/CONTRIBUTING.md"&gt;Contributing Guide&lt;/a&gt; and the &lt;a href="https://github.com/sveltejs/svelte/tree/main/packages/svelte"&gt;svelte package&lt;/a&gt; for detailed information on how to contribute effectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  Development
&lt;/h3&gt;

&lt;p&gt;Svelte encourages pull requests and appreciates contributions from developers of all levels. You can make a positive impact on the project by tackling open issues. To get started with local development of Svelte, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the Svelte repository:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/sveltejs/svelte.git
cd svelte
pnpm install

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: Ensure you use &lt;code&gt;pnpm&lt;/code&gt; to install the dependencies, as specified in the &lt;code&gt;pnpm-lock.yaml&lt;/code&gt; file.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Build the compiler and other included modules:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pnpm build

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Continuously rebuild the package while watching for changes (useful when using &lt;a href="https://pnpm.io/cli/link"&gt;&lt;code&gt;pnpm link&lt;/code&gt;&lt;/a&gt; to test changes locally in a project):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pnpm dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Svelte compiler is primarily written in &lt;a href="https://www.typescriptlang.org/"&gt;TypeScript&lt;/a&gt;, a superset of JavaScript. Don't be intimidated if you're new to TypeScript: it is similar to JavaScript with the addition of type annotations. If you're using an editor other than &lt;a href="https://code.visualstudio.com/"&gt;Visual Studio Code&lt;/a&gt;, consider installing a plugin to enable syntax highlighting and code hints for TypeScript.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running Tests
&lt;/h3&gt;

&lt;p&gt;Svelte has a comprehensive test suite to ensure the stability and reliability of the project. You can run the tests using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pnpm test

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To filter specific tests, you can use the &lt;code&gt;-g&lt;/code&gt; or &lt;code&gt;--grep&lt;/code&gt; option. For example, if you only want to run tests related to transitions, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pnpm test -- -g transition

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  svelte.dev
&lt;/h3&gt;

&lt;p&gt;The source code for the official Svelte website, &lt;a href="https://svelte.dev"&gt;https://svelte.dev&lt;/a&gt;, resides in the &lt;a href="https://github.com/sveltejs/sites"&gt;sites&lt;/a&gt; repository. The website's documentation is located in the &lt;a href="https://github.com/sveltejs/sites"&gt;site/content&lt;/a&gt; directory. The Svelte website is built using &lt;a href="https://kit.svelte.dev"&gt;SvelteKit&lt;/a&gt;, an intuitive framework for building web applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;If you experience difficulty accessing any &lt;code&gt;.dev&lt;/code&gt; site, including &lt;a href="https://svelte.dev"&gt;https://svelte.dev&lt;/a&gt;, the cause may be a DNS or HTTPS configuration issue on your network. Refer to &lt;a href="https://superuser.com/q/1413402"&gt;this SuperUser question and answer&lt;/a&gt; for troubleshooting steps and solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  License
&lt;/h2&gt;

&lt;p&gt;Svelte is released under the &lt;a href="https://github.com/sveltejs/svelte/blob/main/LICENSE.md"&gt;MIT license&lt;/a&gt;, granting users the freedom to use, modify, and distribute the software for both personal and commercial purposes.&lt;/p&gt;




&lt;p&gt;In conclusion, Svelte offers a new paradigm for building web applications, empowering developers with a highly efficient and performant approach. With its focus on compilation and optimized JavaScript output, Svelte enables the creation of cybernetically enhanced web apps that deliver an exceptional user experience. Join the Svelte community today and be a part of shaping the future of web development.&lt;/p&gt;

</description>
      <category>svelte</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Generative Models by Stability AI</title>
      <dc:creator>StackFoss</dc:creator>
      <pubDate>Sat, 24 Jun 2023 13:38:03 +0000</pubDate>
      <link>https://dev.to/stackfoss/generative-models-by-stability-ai-10l</link>
      <guid>https://dev.to/stackfoss/generative-models-by-stability-ai-10l</guid>
      <description>&lt;h1&gt;
  
  
  Generative Models by Stability AI
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X6K3_4lN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/Stability-AI/generative-models/raw/main/assets/000.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X6K3_4lN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/Stability-AI/generative-models/raw/main/assets/000.jpg" alt="Generative Models by Stability AI" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  News
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;June 22, 2023&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We are releasing two new diffusion models for research purposes:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;SD-XL 0.9-base&lt;/code&gt;: The base model was trained on a variety of aspect ratios on images with resolution 1024^2. The base model uses &lt;a href="https://github.com/mlfoundations/open_clip"&gt;OpenCLIP-ViT/G&lt;/a&gt; and &lt;a href="https://github.com/openai/CLIP/tree/main"&gt;CLIP-ViT/L&lt;/a&gt; for text encoding whereas the refiner model only uses the OpenCLIP model.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SD-XL 0.9-refiner&lt;/code&gt;: The refiner has been trained to denoise small noise levels of high quality data and as such is not expected to work as a text-to-image model; instead, it should only be used as an image-to-image model.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you would like to access these models for your research, please apply using one of the following links:&lt;br&gt;&lt;br&gt;
&lt;a href="https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9"&gt;SDXL-0.9-Base model&lt;/a&gt;, and &lt;a href="https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9"&gt;SDXL-0.9-Refiner&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
This means you can apply via either link, and if you are granted access, you can use both models.&lt;br&gt;&lt;br&gt;
Please log in to your HuggingFace Account with your organization email to request access.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;We plan to do a full release soon (July).&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The codebase
&lt;/h2&gt;
&lt;h3&gt;
  
  
  General Philosophy
&lt;/h3&gt;

&lt;p&gt;Modularity is king. This repo implements a config-driven approach where we build and combine submodules by calling &lt;code&gt;instantiate_from_config()&lt;/code&gt; on objects defined in yaml configs. See &lt;code&gt;configs/&lt;/code&gt; for many examples.&lt;/p&gt;
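&lt;p&gt;As an illustration only (the keys and values below are hypothetical, not copied from &lt;code&gt;configs/&lt;/code&gt;), a config in this style names a class via a dotted &lt;code&gt;target&lt;/code&gt; path and passes constructor arguments under &lt;code&gt;params&lt;/code&gt;:&lt;/p&gt;

```yaml
model:
  # dotted import path resolved by instantiate_from_config()
  target: sgm.models.diffusion.DiffusionEngine
  params:
    # illustrative constructor kwargs; real configs define many more
    input_key: jpg
```

&lt;p&gt;Swapping one submodule for another then means editing the config rather than the code.&lt;/p&gt;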
&lt;h3&gt;
  
  
  Changelog from the old &lt;code&gt;ldm&lt;/code&gt; codebase
&lt;/h3&gt;

&lt;p&gt;For training, we use &lt;a href="https://www.pytorchlightning.ai/index.html"&gt;pytorch-lightning&lt;/a&gt;, but it should be easy to use other training wrappers around the base modules. The core diffusion model class (formerly &lt;code&gt;LatentDiffusion&lt;/code&gt;, now &lt;code&gt;DiffusionEngine&lt;/code&gt;) has been cleaned up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No more extensive subclassing! We now handle all types of conditioning inputs (vectors, sequences and spatial conditionings, and all combinations thereof) in a single class: &lt;code&gt;GeneralConditioner&lt;/code&gt;, see &lt;code&gt;sgm/modules/encoders/modules.py&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;We separate guiders (such as classifier-free guidance, see &lt;code&gt;sgm/modules/diffusionmodules/guiders.py&lt;/code&gt;) from the
samplers (&lt;code&gt;sgm/modules/diffusionmodules/sampling.py&lt;/code&gt;), and the samplers are independent of the model.&lt;/li&gt;
&lt;li&gt;We adopt the &lt;a href="https://arxiv.org/abs/2206.00364"&gt;"denoiser framework"&lt;/a&gt; for both training and inference (most notable change is probably now the option to train continuous time models):

&lt;ul&gt;
&lt;li&gt;Discrete times models (denoisers) are simply a special case of continuous time models (denoisers); see &lt;code&gt;sgm/modules/diffusionmodules/denoiser.py&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The following features are now independent: weighting of the diffusion loss function (&lt;code&gt;sgm/modules/diffusionmodules/denoiser_weighting.py&lt;/code&gt;), preconditioning of the network (&lt;code&gt;sgm/modules/diffusionmodules/denoiser_scaling.py&lt;/code&gt;), and sampling of noise levels during training (&lt;code&gt;sgm/modules/diffusionmodules/sigma_sampling.py&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Autoencoding models have also been cleaned up.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Installation:
&lt;/h2&gt;

&lt;h4&gt;
  
  
  1. Clone the repo
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@github.com:Stability-AI/generative

-models.git
cd generative-models

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  2. Setting up the virtualenv
&lt;/h4&gt;

&lt;p&gt;This is assuming you have navigated to the &lt;code&gt;generative-models&lt;/code&gt; root after cloning it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; This is tested under &lt;code&gt;python3.8&lt;/code&gt; and &lt;code&gt;python3.10&lt;/code&gt;. For other Python versions, you might encounter version conflicts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PyTorch 1.13&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# install required packages from pypi
python3 -m venv .pt1
source .pt1/bin/activate
pip3 install wheel
pip3 install -r requirements_pt13.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;PyTorch 2.0&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# install required packages from pypi
python3 -m venv .pt2
source .pt2/bin/activate
pip3 install wheel
pip3 install -r requirements_pt2.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Inference:
&lt;/h2&gt;

&lt;p&gt;We provide a &lt;a href="https://streamlit.io/"&gt;streamlit&lt;/a&gt; demo for text-to-image and image-to-image sampling in &lt;code&gt;scripts/demo/sampling.py&lt;/code&gt;. The following models are currently supported:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9"&gt;SD-XL 0.9-base&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9"&gt;SD-XL 0.9-refiner&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/stabilityai/stable-diffusion-2-1-base/blob/main/v2-1_512-ema-pruned.safetensors"&gt;SD 2.1-512&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.safetensors"&gt;SD 2.1-768&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weights for SDXL&lt;/strong&gt; :&lt;br&gt;&lt;br&gt;
If you would like to access these models for your research, please apply using one of the following links:&lt;br&gt;&lt;br&gt;
&lt;a href="https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9"&gt;SDXL-0.9-Base model&lt;/a&gt;, and &lt;a href="https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9"&gt;SDXL-0.9-Refiner&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
This means you can apply via either link, and if you are granted access, you can use both models.&lt;br&gt;&lt;br&gt;
Please log in to your HuggingFace Account with your organization email to request access.&lt;/p&gt;

&lt;p&gt;After obtaining the weights, place them into &lt;code&gt;checkpoints/&lt;/code&gt;. Next, start the demo using&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;streamlit run scripts/demo/sampling.py --server.port &amp;lt;your_port&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Invisible Watermark Detection
&lt;/h3&gt;

&lt;p&gt;Images generated with our code use the &lt;a href="https://github.com/ShieldMnt/invisible-watermark/"&gt;invisible-watermark&lt;/a&gt; library to embed an invisible watermark into the model output. We also provide a script to easily detect that watermark. Please note that this watermark is not the same as in previous Stable Diffusion 1.x/2.x versions.&lt;/p&gt;

&lt;p&gt;To run the script, you need either a working installation as above or an &lt;em&gt;experimental&lt;/em&gt; import using only a minimal set of packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m venv .detect
source .detect/bin/activate

pip install "numpy&amp;gt;=1.17" "PyWavelets&amp;gt;=1.1.1" "opencv-python&amp;gt;=4.1.0.25"
pip install --no-deps invisible-watermark

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With a working installation as above, the script is then usable (don't forget to activate your virtual environment first).&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>stabilityai</category>
      <category>oss</category>
    </item>
    <item>
      <title>Controlled File System Interaction and Script Execution with Kaguya: A ChatGPT Plugin</title>
      <dc:creator>StackFoss</dc:creator>
      <pubDate>Fri, 23 Jun 2023 10:01:47 +0000</pubDate>
      <link>https://dev.to/stackfoss/controlled-file-system-interaction-and-script-execution-with-kaguya-a-chatgpt-plugin-8dd</link>
      <guid>https://dev.to/stackfoss/controlled-file-system-interaction-and-script-execution-with-kaguya-a-chatgpt-plugin-8dd</guid>
      <description>&lt;h1&gt;
  
  
  Kaguya: Empowering Developers with Controlled File System Interaction
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L1oelEnd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1679083216051-aa510a1a2c0e%3Fixlib%3Drb-4.0.3%26ixid%3DM3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%253D%253D%26auto%3Dformat%26fit%3Dcrop%26w%3D1032%26q%3D80" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L1oelEnd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1679083216051-aa510a1a2c0e%3Fixlib%3Drb-4.0.3%26ixid%3DM3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%253D%253D%26auto%3Dformat%26fit%3Dcrop%26w%3D1032%26q%3D80" alt="Kaguya" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kaguya is a powerful ChatGPT plugin designed to provide developers with enhanced control over their local files. With Kaguya, you can seamlessly load and edit your files in a controlled manner, while also running Python, JavaScript, and bash scripts directly from within ChatGPT. This innovative tool revolutionizes the way developers interact with their file systems, enabling efficient collaboration and development workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  API Endpoints
&lt;/h2&gt;

&lt;p&gt;Kaguya offers a comprehensive set of API endpoints that allow you to interact with your file system seamlessly. The API is documented in the &lt;code&gt;openapi.yaml&lt;/code&gt; file and provides the following functionality:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;POST /api/executeCommand&lt;/code&gt;: Execute a shell command.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GET /api/listFilesInDirectory&lt;/code&gt;: List files and directories in the specified directory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GET /api/readFile&lt;/code&gt;: Read the content of a file in the user's directory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;POST /api/update&lt;/code&gt;: Update a file in the user's directory by performing a search-and-replace operation.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;POST /api/updateWholeFile&lt;/code&gt;: Replace the entire content of a file in the user's directory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;POST /api/createFile&lt;/code&gt;: Create a new file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;POST /api/deleteFile&lt;/code&gt;: Delete a file in the user's directory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;POST /api/renameFile&lt;/code&gt;: Rename a file in the user's directory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;POST /api/appendToFile&lt;/code&gt;: Append content to the end of an existing file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;POST /api/createDirectory&lt;/code&gt;: Create a new directory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;POST /api/deleteDirectory&lt;/code&gt;: Delete a directory and its contents.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;POST /api/readMultipleFiles&lt;/code&gt;: Read the content of multiple files.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These API endpoints empower developers with granular control over their files and directories, making it easy to perform a wide range of file system operations.&lt;/p&gt;
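&lt;p&gt;For instance, a search-and-replace call to &lt;code&gt;POST /api/update&lt;/code&gt; might carry a JSON body along these lines. The field names and values here are illustrative guesses only; the authoritative schema is the &lt;code&gt;openapi.yaml&lt;/code&gt; file mentioned above:&lt;/p&gt;

```json
{
  "path": "src/index.js",
  "search": "const port = 3000",
  "replace": "const port = 8080"
}
```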

&lt;h2&gt;
  
  
  Running the Project
&lt;/h2&gt;

&lt;p&gt;Getting started with Kaguya is straightforward. You can run the project using Docker by executing the &lt;code&gt;docker.sh&lt;/code&gt; script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the script is executed, you can access Kaguya through ChatGPT by using the appropriate localhost port.&lt;/p&gt;

&lt;h2&gt;
  
  
  Discover Kaguya in Action
&lt;/h2&gt;

&lt;p&gt;To get a firsthand look at Kaguya's capabilities, check out the demo videos shared on Twitter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://twitter.com/ykdojo/status/1645846044843077635"&gt;Demo Video&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://twitter.com/ykdojo/status/1670848611532562433"&gt;Second Demo Video&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These demos showcase how Kaguya seamlessly integrates with ChatGPT, empowering developers with efficient file system interaction and script execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Join the Kaguya Community on Discord
&lt;/h2&gt;

&lt;p&gt;For further engagement and support, we invite you to join our Discord server. Connect with like-minded developers, share your experiences, and stay up to date with the latest Kaguya developments. Join our Discord community &lt;a href="https://discord.com/invite/nNtVfKddDD"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Experience the power and convenience of Kaguya as it transforms the way developers work with their local files. Elevate your development workflow and unlock new possibilities with Kaguya today!&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>kaguya</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Linen: The Future of Community Chat</title>
      <dc:creator>StackFoss</dc:creator>
      <pubDate>Fri, 23 Jun 2023 09:23:12 +0000</pubDate>
      <link>https://dev.to/stackfoss/linen-the-future-of-community-chat-pan</link>
      <guid>https://dev.to/stackfoss/linen-the-future-of-community-chat-pan</guid>
      <description>&lt;h1&gt;
  
  
  Linen: The Future of Community Chat
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://camo.githubusercontent.com/f9c6750b798cb78611a227c4c73ac7c7c3e9206a3c18d346ba75d165d473d54b/68747470733a2f2f64326d753836613862656c7862672e636c6f756466726f6e742e6e65742f6c6f676f732f6c696e656e2d626c61636b2d6c6f676f2e737667" class="article-body-image-wrapper"&gt;&lt;img src="https://camo.githubusercontent.com/f9c6750b798cb78611a227c4c73ac7c7c3e9206a3c18d346ba75d165d473d54b/68747470733a2f2f64326d753836613862656c7862672e636c6f756466726f6e742e6e65742f6c6f676f732f6c696e656e2d626c61636b2d6c6f676f2e737667" alt="Linen" width="495" height="110"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Linen is a revolutionary community chat tool designed to provide an alternative to closed platforms like Slack and Discord. It offers the benefits of real-time communication while incorporating the organizational advantages of traditional forums. With Linen, communities can enjoy a structured and searchable environment, ensuring that valuable information is easily accessible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Philosophy
&lt;/h2&gt;

&lt;p&gt;Modern communities heavily rely on chat applications for seamless collaboration and communication. While tools like Slack and Discord excel in providing fast real-time responses, they often become overwhelming and chaotic repositories of information. In the past, communities thrived in forums, which offered better structure and search-engine friendliness. At Linen, we believe in a hybrid model that combines the advantages of real-time chat with the organizational benefits of a forum.&lt;/p&gt;

&lt;p&gt;Linen is committed to fostering better community interactions, and we provide our platform free of charge, ensuring unlimited message retention for all users. You can sign up and experience the Linen community at &lt;a href="https://linen.community"&gt;Linen.community&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the Linen Cloud edition, visit &lt;a href="https://linen.dev"&gt;linen.dev&lt;/a&gt;. Join our public community by following this link: &lt;a href="https://linen.dev/s/linen"&gt;linen.dev/s/linen&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Development Philosophy
&lt;/h3&gt;

&lt;p&gt;At Linen, our development philosophy revolves around delivering the smallest functioning features that significantly enhance users' lives and then iterating upon them. We strive for continuous improvement and actively incorporate user feedback into our development process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core Features
&lt;/h2&gt;

&lt;p&gt;Linen offers a range of powerful features that set it apart from traditional chat platforms. These features include:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Search Engine Friendly
&lt;/h3&gt;

&lt;p&gt;Unlike most chat applications that heavily rely on JavaScript, Linen communities prioritize search engine friendliness. With over 50,000 pages indexed on Google and more than 10,000,000 search impressions, Linen ensures that your community's content is easily discoverable. We achieve this by providing a sitemap, conditionally rendering a static version of our pages for search engines, and implementing cursor-based pagination to maintain consistency.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Customer Support Tooling
&lt;/h3&gt;

&lt;p&gt;Communities often become hubs for customer support, and Linen understands this dynamic. To streamline customer support processes, Linen incorporates a dedicated customer support tooling system. All threads have an open/close state, and we offer a feed that allows you to browse all open and closed conversations in one centralized location. This feature eliminates the need to worry about missed messages across different channels and conversations.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Async First Approach
&lt;/h3&gt;

&lt;p&gt;Recognizing that chat environments can quickly become noisy, especially within large communities, Linen employs an "async first" approach. By providing a feed of conversations in which you actively participate, you can stay informed without the fear of missing important messages. Additionally, we have reimagined @mentions as async notifications. Instead of interruptive notifications, they appear in your feed as !mentions. If urgency is required, Linen offers the option to send a push notification with !mention.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Import Communities
&lt;/h3&gt;

&lt;p&gt;Linen supports the seamless import of public conversations, attachments, emojis, and members from popular platforms like Slack and Discord. Transitioning to Linen has never been easier, ensuring a smooth migration process for your community.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Single Account Across Multiple Communities
&lt;/h3&gt;

&lt;p&gt;Say goodbye to managing multiple emails and passwords for different communities. With Linen, you can join multiple communities using a single login, simplifying your community engagement and collaboration experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Private Communities
&lt;/h3&gt;

&lt;p&gt;In addition to public communities, Linen offers support for private communities that require a password for access. This feature is ideal for internal team discussions or exclusive community spaces.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Thread and Message Management
&lt;/h3&gt;

&lt;p&gt;Linen empowers users with the ability to efficiently manage threads and messages. You can easily drag and drop messages, merging them into a single thread. Furthermore, moving threads between channels is a simple process, enabling streamlined organization within your community.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Discord Forum Support
&lt;/h3&gt;

&lt;p&gt;Linen synchronizes with Discord and enhances its search engine friendliness. This integration allows you to enjoy the benefits of Linen's structured community environment while leveraging the familiarity and features of Discord.&lt;/p&gt;

&lt;h3&gt;
  
  
  9. Private Channels
&lt;/h3&gt;

&lt;p&gt;Linen offers private channels that are invite-only within your community. This feature allows you to create exclusive spaces for specific groups or purposes, ensuring privacy and control over communication.&lt;/p&gt;

&lt;h3&gt;
  
  
  10. Direct Messages
&lt;/h3&gt;

&lt;p&gt;Engage in direct messages within your Linen community. Connect with individuals privately, fostering personalized and efficient communication channels.&lt;/p&gt;

&lt;h2&gt;
  
  
  Roadmap
&lt;/h2&gt;

&lt;p&gt;Linen is continually evolving, and we have an exciting roadmap of future developments. Here are some of the upcoming features and improvements we have planned:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. GitHub Integration
&lt;/h3&gt;

&lt;p&gt;We understand that many open-source communities rely on GitHub issues to manage their projects. Linen aims to streamline this process by allowing you to tag conversations with specific GitHub issues. This integration will automatically post messages when tickets are closed or updated, keeping your community informed about the progress of relevant issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Improved Search Functionality
&lt;/h3&gt;

&lt;p&gt;While our current search functionality utilizes full-text search with PostgreSQL, we are dedicated to further enhancing the search experience. We are considering hosting a separate search service to provide even more powerful and accurate search capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Desktop and Mobile Clients
&lt;/h3&gt;

&lt;p&gt;To ensure a seamless user experience, we are actively developing desktop and mobile clients for Linen. These dedicated applications will provide push notifications for urgent messages, allowing you to stay connected and informed across multiple devices.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Botting and Automation
&lt;/h3&gt;

&lt;p&gt;Linen aims to empower users with custom botting and automation capabilities. Soon, you will be able to build and add your own custom bots, further enhancing community interactions and automating routine tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feedback
&lt;/h2&gt;

&lt;p&gt;Linen is currently in its early stages of development, and we highly value user feedback. Your insights and suggestions play a crucial role in shaping the future of Linen. We encourage you to share your thoughts, ideas, and concerns with us, helping us improve and tailor Linen to your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Miscellaneous Features
&lt;/h2&gt;

&lt;p&gt;In addition to the core features outlined above, Linen offers several supplementary features to enhance your community experience. These features include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Markdown message support&lt;/li&gt;
&lt;li&gt;Custom community branding&lt;/li&gt;
&lt;li&gt;Custom domain hosting for Cloud edition&lt;/li&gt;
&lt;li&gt;Attachments support&lt;/li&gt;
&lt;li&gt;Emoji support&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Documentation
&lt;/h2&gt;

&lt;p&gt;To help you navigate and utilize Linen effectively, we have divided our documentation into several sections. These sections cover various aspects of Linen's functionality, allowing you to make the most out of the platform. Here are some key sections:&lt;/p&gt;

&lt;h3&gt;
  
  
  Developer Docs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="//./docs/getting-started.md"&gt;Getting Started&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Desktop Client Releases
&lt;/h2&gt;

&lt;p&gt;Stay up-to-date with the latest releases of Linen's desktop client by visiting our GitHub repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Linen-dev/desktop-client/releases"&gt;https://github.com/Linen-dev/desktop-client/releases&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Join us on this exciting journey as we revolutionize the way communities interact and collaborate. Together, let's embrace the future of community chat with Linen!&lt;/p&gt;

</description>
      <category>linen</category>
      <category>opensource</category>
      <category>slack</category>
    </item>
    <item>
      <title>Copybara: A Tool for Transforming and Moving Code between Repositories</title>
      <dc:creator>StackFoss</dc:creator>
      <pubDate>Thu, 22 Jun 2023 10:30:43 +0000</pubDate>
      <link>https://dev.to/stackfoss/copybara-a-tool-for-transforming-and-moving-code-between-repositories-1pg8</link>
      <guid>https://dev.to/stackfoss/copybara-a-tool-for-transforming-and-moving-code-between-repositories-1pg8</guid>
      <description>&lt;h1&gt;
  
  
  Copybara: A Tool for Transforming and Moving Code between Repositories
&lt;/h1&gt;

&lt;p&gt;Copybara is an internal tool developed and used by Google to facilitate the transformation and movement of code between repositories. It addresses the need for code to exist in multiple repositories and provides a solution for keeping them in sync. One common use case is maintaining a confidential repository alongside a public repository.&lt;/p&gt;

&lt;p&gt;To ensure there is always one source of truth, Copybara requires the selection of an authoritative repository. However, contributions can be made to any repository, and any repository can be used to cut a release. The tool simplifies the repetitive movement of code between repositories, and it can also be used for one-time code transfers to a new repository.&lt;/p&gt;

&lt;p&gt;The main features of Copybara include its stateless nature and the ability to store state in the destination repository as a label in the commit message. This allows multiple users or services to utilize Copybara with the same configuration and repositories, ensuring consistent results.&lt;/p&gt;

&lt;p&gt;Currently, Copybara supports Git repositories as the primary repository type. It also has experimental support for reading from Mercurial repositories. The extensible architecture of Copybara enables the addition of custom origins and destinations for various use cases. Official support for other repository types is planned for future releases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;

&lt;p&gt;Here is an example Copybara workflow, defined in a &lt;code&gt;copy.bara.sky&lt;/code&gt; configuration file (Starlark, a Python-like language):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;core.workflow(
    name = "default",
    origin = git.github_origin(
      url = "https://github.com/google/copybara.git",
      ref = "master",
    ),
    destination = git.destination(
        url = "file:///tmp/foo",
    ),

    # Copy everything but don't remove a README_INTERNAL.txt file if it exists.
    destination_files = glob(["third_party/copybara/**"], exclude = ["README_INTERNAL.txt"]),

    authoring = authoring.pass_thru("Default email &amp;lt;default@default.com&amp;gt;"),
    transformations = [
        core.replace(
                before = "//third_party/bazel/bashunit",
                after = "//another/path:bashunit",
                paths = glob(["**/BUILD"])),
        core.move("", "third_party/copybara")
    ],
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run the Copybara workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ (mkdir /tmp/foo ; cd /tmp/foo ; git init --bare)
$ copybara copy.bara.sky

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Getting Started with Copybara
&lt;/h2&gt;

&lt;p&gt;As Copybara doesn't have an official release process yet, you need to compile it from the latest source code. Follow these steps to get started:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install JDK 11 from the &lt;a href="https://www.oracle.com/java/technologies/downloads/#java11"&gt;Oracle website&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Install Bazel by following the &lt;a href="https://bazel.build/install"&gt;official installation guide&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Clone the Copybara source code repository locally using the command: &lt;code&gt;git clone https://github.com/google/copybara.git&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Build Copybara:

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;bazel build //java/com/google/copybara&lt;/code&gt; to build the project.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;bazel build //java/com/google/copybara:copybara_deploy.jar&lt;/code&gt; to create an executable uberjar.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Optionally, you can run tests to ensure the integrity of the codebase using the command: &lt;code&gt;bazel test //...&lt;/code&gt;. Note that some tests may require additional tools such as Mercurial or Quilt.&lt;/li&gt;
&lt;/ol&gt;
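&lt;p&gt;Once the uberjar from step 4 is built, it can be run directly with &lt;code&gt;java -jar&lt;/code&gt;. A minimal sketch (the output path assumes Bazel's default &lt;code&gt;bazel-bin&lt;/code&gt; layout):&lt;/p&gt;

```shell
# Run a Copybara config with the freshly built uberjar
java -jar bazel-bin/java/com/google/copybara/copybara_deploy.jar copy.bara.sky
```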

&lt;h3&gt;
  
  
  System Packages
&lt;/h3&gt;

&lt;p&gt;Alternatively, you can install Copybara itself as a system package using your operating system's package manager. Here's an example for Arch Linux using the AUR:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the package &lt;code&gt;copybara-git&lt;/code&gt; from the &lt;a href="https://aur.archlinux.org/packages/copybara-git"&gt;AUR&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Using Intellij with Bazel Plugin
&lt;/h3&gt;

&lt;p&gt;If you're using IntelliJ with the Bazel plugin, you can configure the project as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;directories:
  copybara/integration
  java/com/google/copybara
  javatests/com/google/copybara
  third_party

targets:
  //copybara/integration/...
  //java/com/google/copybara/...
  //javatests/com/google/copybara/...
  //third_party/...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that configuration files can be stored anywhere, but it is recommended to treat them as source code and store them in a version control system like Git.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building Copybara in an External Bazel Workspace
&lt;/h3&gt;

&lt;p&gt;To build Copybara in an external Bazel workspace, you can define convenience macros for its dependencies. Add the following code to your &lt;code&gt;WORKSPACE&lt;/code&gt; file, replacing &lt;code&gt;{{ sha256sum }}&lt;/code&gt; and &lt;code&gt;{{ commit }}&lt;/code&gt; with the appropriate values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http_archive(
  name = "com_github_google_copybara",
  sha256 = "{{ sha256sum }}",
  strip_prefix = "copybara-{{ commit }}",
  url = "https://github.com/google/copybara/archive/{{ commit }}.zip",
)

load("@com_github_google_copybara//:repositories.bzl", "copybara_repositories")
copybara_repositories()

load("@com_github_google_copybara//:repositories.maven.bzl", "copybara_maven_repositories")
copybara_maven_repositories()

load("@com_github_google_copybara//:repositories.go.bzl", "copybara_go_repositories")
copybara_go_repositories()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then build and run the Copybara tool within your workspace using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bazel run @com_github_google_copybara//java/com/google/copybara -- &amp;lt;args...&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Using Docker to Build and Run Copybara
&lt;/h3&gt;

&lt;p&gt;Please note that Docker usage with Copybara is currently experimental. To build the Copybara image, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build --rm -t copybara .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the build process is complete, you can run the Copybara image from the root of the code you want to use Copybara on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -it -v "$(pwd)":/usr/src/app copybara copybara

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Several environment variables are available to customize the execution of Copybara within the Docker container. Here's an example with custom options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run \
    -e COPYBARA_CONFIG='other.config.sky' \
    -e COPYBARA_SUBCOMMAND='validate' \
    -v "$(pwd)":/usr/src/app \
    -it copybara copybara

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Git Config and Credentials
&lt;/h4&gt;

&lt;p&gt;To share your Git configuration and SSH credentials with the Docker container, you can mount the necessary directories. Here's an example for macOS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run \
    -v ~/.ssh:/root/.ssh \
    -v ~/.gitconfig:/root/.gitconfig \
    -v "$(pwd)":/usr/src/app \
    -it copybara copybara

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Documentation
&lt;/h2&gt;

&lt;p&gt;While the official documentation for Copybara is still a work in progress, you can find some resources to get started:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="//docs/reference.md"&gt;Reference documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="//docs/examples.md"&gt;Examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.kubesimplify.com/moving-code-between-git-repositories-with-copybara"&gt;Tutorial on how to get started&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Contact
&lt;/h2&gt;

&lt;p&gt;If you have any questions about how Copybara works, you can reach out to the Copybara team through our &lt;a href="https://groups.google.com/forum/#!forum/copybara-discuss"&gt;mailing list&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optional Tips
&lt;/h2&gt;

&lt;p&gt;Here's an optional tip for Bazel users who want to see test errors directly in the Bazel output:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open your &lt;code&gt;~/.bazelrc&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;Add the following line:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test --test_output=streamed

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this configuration, test errors will be displayed directly in the Bazel output instead of having to manually inspect the logs.&lt;/p&gt;

&lt;p&gt;These instructions should help you get started with Copybara and explore its capabilities. If you have any further questions or need assistance, don't hesitate to reach out to the Copybara community or the developers working on the project.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>copybara</category>
    </item>
    <item>
      <title>Flowise - LangchainJS UI: Build Customized LLM Flows with Drag &amp;amp; Drop Interface</title>
      <dc:creator>StackFoss</dc:creator>
      <pubDate>Thu, 22 Jun 2023 10:10:08 +0000</pubDate>
      <link>https://dev.to/stackfoss/flowise-langchainjs-ui-build-customized-llm-flows-with-drag-amp-drop-interface-660</link>
      <guid>https://dev.to/stackfoss/flowise-langchainjs-ui-build-customized-llm-flows-with-drag-amp-drop-interface-660</guid>
      <description>&lt;h1&gt;
  
  
  Flowise - LangchainJS UI
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lQOyaYd3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.gif%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lQOyaYd3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.gif%3Fraw%3Dtrue" alt="Flowise" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Drag &amp;amp; drop UI to build your customized LLM flow using &lt;a href="https://github.com/hwchase17/langchainjs"&gt;LangchainJS&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0ib980rG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.stackfoss.com/assets/plugins/nodebb-plugin-emoji/emoji/android/26a1.png%3Fv%3D6nb76hjp8l2" alt="⚡" title="⚡" width="64" height="64"&gt; Quick Start
&lt;/h2&gt;

&lt;p&gt;Download and Install &lt;a href="https://nodejs.org/en/download"&gt;NodeJS&lt;/a&gt; &amp;gt;= 18.15.0&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install Flowise&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start Flowise&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open &lt;a href="http://localhost:3000"&gt;http://localhost:3000&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
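&lt;p&gt;The install and start steps above can be sketched as follows (commands as given in the Flowise README; assumes &lt;code&gt;npm&lt;/code&gt; from the NodeJS install is on your PATH):&lt;/p&gt;

```shell
# 1. Install Flowise globally
npm install -g flowise

# 2. Start Flowise (serves the UI on port 3000 by default)
npx flowise start
```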

&lt;h2&gt;
  
  
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VWlRGSoD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.stackfoss.com/assets/plugins/nodebb-plugin-emoji/emoji/android/1f433.png%3Fv%3D6nb76hjp8l2" alt="🐳" title="🐳" width="64" height="64"&gt; Docker
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Docker Compose
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Go to the &lt;code&gt;docker&lt;/code&gt; folder at the root of the project&lt;/li&gt;
&lt;li&gt;Create an &lt;code&gt;.env&lt;/code&gt; file and specify the &lt;code&gt;PORT&lt;/code&gt; (refer to &lt;code&gt;.env.example&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;docker-compose up -d&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Open &lt;a href="http://localhost:3000"&gt;http://localhost:3000&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You can bring the containers down by running &lt;code&gt;docker-compose stop&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Docker Image
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Build the image locally:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the image:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stop the image:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
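&lt;p&gt;A sketch of the three Docker image steps, assuming they are run from the repository root (the image/container name &lt;code&gt;flowise&lt;/code&gt; is an arbitrary choice):&lt;/p&gt;

```shell
# 1. Build the image locally
docker build --no-cache -t flowise .

# 2. Run the image, mapping the app's port 3000 to the host
docker run -d --name flowise -p 3000:3000 flowise

# 3. Stop the image (i.e. the running container)
docker stop flowise
```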

&lt;h2&gt;
  
  
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0aMcNKIr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.stackfoss.com/assets/plugins/nodebb-plugin-emoji/emoji/android/1f468-200d-1f4bb.png%3Fv%3D6nb76hjp8l2" alt="👨‍💻" title=":male-technologist:" width="64" height="64"&gt; Developers
&lt;/h2&gt;

&lt;p&gt;Flowise has three modules in a single monorepo.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;server&lt;/code&gt;: Node backend to serve API logic&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ui&lt;/code&gt;: React frontend&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;components&lt;/code&gt;: Langchain components&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisite
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Install &lt;a href="https://classic.yarnpkg.com/en/docs/install"&gt;Yarn v1&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Clone the repository&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go into the repository folder&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install all dependencies of all modules:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build all the code:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start the app:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For development build:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
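&lt;p&gt;The six setup steps above can be sketched with Yarn v1 (repository URL and script names follow the Flowise project; verify against the repo's README):&lt;/p&gt;

```shell
# 1. Clone the repository
git clone https://github.com/FlowiseAI/Flowise.git

# 2. Go into the repository folder
cd Flowise

# 3. Install all dependencies of all modules
yarn install

# 4. Build all the code
yarn build

# 5. Start the app
yarn start

# 6. For a development build with hot reload, use instead:
# yarn dev
```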

&lt;h2&gt;
  
  
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eGRaEvY---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.stackfoss.com/assets/plugins/nodebb-plugin-emoji/emoji/android/1f512.png%3Fv%3D6nb76hjp8l2" alt="🔒" title="🔒" width="64" height="64"&gt; Authentication
&lt;/h2&gt;

&lt;p&gt;To enable app-level authentication, add &lt;code&gt;FLOWISE_USERNAME&lt;/code&gt; and &lt;code&gt;FLOWISE_PASSWORD&lt;/code&gt; to the &lt;code&gt;.env&lt;/code&gt; file in &lt;code&gt;packages/server&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FLOWISE_USERNAME=user
FLOWISE_PASSWORD=1234

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DUmSlR82--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.stackfoss.com/assets/plugins/nodebb-plugin-emoji/emoji/android/1f4d6.png%3Fv%3D6nb76hjp8l2" alt="📖" title="📖" width="64" height="64"&gt; Documentation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.flowiseai.com/"&gt;Flowise Docs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Iu9SHZgb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.stackfoss.com/assets/plugins/nodebb-plugin-emoji/emoji/android/1f310.png%3Fv%3D6nb76hjp8l2" alt="🌐" title=":globe\_with\_meridians:" width="64" height="64"&gt; Self Host
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://docs.flowiseai.com/deployment/railway"&gt;Railway&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://railway.app/template/YK7J0v"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--keKIfrBd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://railway.app/button.svg" alt="Deploy on Railway" width="183" height="40"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://docs.flowiseai.com/deployment/render"&gt;Render&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.flowiseai.com/deployment/render"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y19pLV7p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://render.com/images/deploy-to-render-button.svg" alt="Deploy to Render" width="212" height="44"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://docs.flowiseai.com/deployment/aws"&gt;AWS&lt;/a&gt;
&lt;/h3&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://docs.flowiseai.com/deployment/azure"&gt;Azure&lt;/a&gt;
&lt;/h3&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://docs.flowiseai.com/deployment/digital-ocean"&gt;DigitalOcean&lt;/a&gt;
&lt;/h3&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://docs.flowiseai.com/deployment/gcp"&gt;GCP&lt;/a&gt;
&lt;/h3&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XOvtV7ip--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.stackfoss.com/assets/plugins/nodebb-plugin-emoji/emoji/android/1f4bb.png%3Fv%3D6nb76hjp8l2" alt="💻" title="💻" width="64" height="64"&gt; Cloud Hosted
&lt;/h2&gt;

&lt;p&gt;Coming soon&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GHvpmUA6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.stackfoss.com/assets/plugins/nodebb-plugin-emoji/emoji/android/1f64b.png%3Fv%3D6nb76hjp8l2" alt="🙋" title=":raising\_hand:" width="64" height="64"&gt; Support
&lt;/h2&gt;

&lt;p&gt;Feel free to ask questions, report problems, and request new features in the &lt;a href="https://github.com/FlowiseAI/Flowise/discussions"&gt;discussion&lt;/a&gt; section.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SeVH3yKn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.stackfoss.com/assets/plugins/nodebb-plugin-emoji/emoji/android/1f64c.png%3Fv%3D6nb76hjp8l2" alt="🙌" title=":raised\_hands:" width="64" height="64"&gt; Contributing
&lt;/h2&gt;

&lt;p&gt;See the &lt;a href="//CONTRIBUTING.md"&gt;contributing guide&lt;/a&gt;. Reach out to us on &lt;a href="https://discord.gg/jbaHfsRVBW"&gt;Discord&lt;/a&gt; if you have any questions or issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://star-history.com/#FlowiseAI/Flowise&amp;amp;Date"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BS2U5YvJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://api.star-history.com/svg%3Frepos%3DFlowiseAI/Flowise%26type%3DTimeline" alt="Star History Chart" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d6afkxbi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.stackfoss.com/assets/plugins/nodebb-plugin-emoji/emoji/android/1f4c4.png%3Fv%3D6nb76hjp8l2" alt="📄" title=":page\_facing\_up:" width="64" height="64"&gt; License
&lt;/h2&gt;

&lt;p&gt;Source code in this repository is made available under the &lt;a href="//LICENSE.md"&gt;MIT License&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>langchainjs</category>
      <category>llm</category>
      <category>stackfoss</category>
    </item>
    <item>
      <title>Tinygrad: A Simple Deep Learning Framework</title>
      <dc:creator>StackFoss</dc:creator>
      <pubDate>Thu, 22 Jun 2023 09:57:17 +0000</pubDate>
      <link>https://dev.to/stackfoss/tinygrad-a-simple-deep-learning-framework-33fa</link>
      <guid>https://dev.to/stackfoss/tinygrad-a-simple-deep-learning-framework-33fa</guid>
      <description>&lt;h1&gt;
  
  
  Introduction to tinygrad: A Simple Deep Learning Framework
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fgeohot%2Ftinygrad%2Fmaster%2Fdocs%2Flogo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fgeohot%2Ftinygrad%2Fmaster%2Fdocs%2Flogo.png" alt="logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;tinygrad is a deep learning framework that aims to provide a balance between the simplicity of &lt;a href="https://github.com/karpathy/micrograd" rel="noopener noreferrer"&gt;karpathy/micrograd&lt;/a&gt; and the functionality of &lt;a href="https://github.com/pytorch/pytorch" rel="noopener noreferrer"&gt;PyTorch&lt;/a&gt;. Maintained by tiny corp, tinygrad is designed to be an easy-to-use framework for adding new accelerators and supports both inference and training. While it may not be the most advanced deep learning framework available, it offers a straightforward and accessible solution for developing machine learning models. In this article, we will explore the features, architecture, and potential applications of tinygrad.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  LLaMA and Stable Diffusion
&lt;/h3&gt;

&lt;p&gt;One notable feature of tinygrad is its ability to run models such as LLaMA and Stable Diffusion. tinygrad provides a convenient environment for implementing them; to learn more, refer to the &lt;a href="///docs/showcase.md#llama"&gt;LLaMA showcase&lt;/a&gt; and the &lt;a href="///docs/showcase.md#stable-diffusion"&gt;Stable Diffusion showcase&lt;/a&gt; in the documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Laziness
&lt;/h3&gt;

&lt;p&gt;tinygrad leverages the power of laziness to optimize computations. For example, when performing a matrix multiplication (&lt;code&gt;matmul&lt;/code&gt;), the framework fuses the operation into a single kernel, resulting in efficient execution. Consider the following code snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DEBUG=3 python3 -c "from tinygrad.tensor import Tensor;
N = 1024; a, b = Tensor.rand(N, N), Tensor.rand(N, N);
c = (a.reshape(N, 1, N) * b.permute(1,0).reshape(1, N, N)).sum(axis=2);
print((c.numpy() - (a.numpy() @ b.numpy())).mean())"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By executing the code and setting the &lt;code&gt;DEBUG&lt;/code&gt; variable to &lt;code&gt;3&lt;/code&gt;, you can observe the optimization performed by tinygrad. Additionally, increasing the &lt;code&gt;DEBUG&lt;/code&gt; value to &lt;code&gt;4&lt;/code&gt; reveals the generated code. This laziness feature allows for efficient computation and optimization of deep learning models.&lt;/p&gt;
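&lt;p&gt;As an illustration (this is plain NumPy, not tinygrad code), the reshape/permute/sum formulation from the snippet above can be reproduced to show that it computes an ordinary matrix multiplication, which is why fusing it into a single kernel is a valid optimization:&lt;/p&gt;

```python
import numpy as np

# The broadcast-multiply-then-sum formulation from the tinygrad snippet above,
# reproduced in NumPy: (a.reshape(N,1,N) * b.T.reshape(1,N,N)).sum(axis=2)
# produces element [i,j] = sum_k a[i,k] * b[k,j], i.e. an ordinary matmul.
N = 8
a, b = np.random.rand(N, N), np.random.rand(N, N)
c = (a.reshape(N, 1, N) * b.T.reshape(1, N, N)).sum(axis=2)
print(abs(c - a @ b).mean())  # ~0.0 up to floating-point error
```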

&lt;h3&gt;
  
  
  Neural Networks
&lt;/h3&gt;

&lt;p&gt;tinygrad recognizes that a significant portion of building neural networks relies on a reliable autograd/tensor library. With tinygrad, you can easily construct neural networks using the available tensor operations and autograd capabilities. The following example demonstrates the construction and training of a neural network using tinygrad:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from tinygrad.tensor import Tensor
import tinygrad.nn.optim as optim

class TinyBobNet:
  def __init__(self):
    self.l1 = Tensor.uniform(784, 128)
    self.l2 = Tensor.uniform(128, 10)

  def forward(self, x):
    return x.dot(self.l1).relu().dot(self.l2).log_softmax()

model = TinyBobNet()
opt = optim.SGD([model.l1, model.l2], lr=0.001)  # avoid shadowing the optim module

# ... complete data loader here

out = model.forward(x)
loss = out.mul(y).mean()
opt.zero_grad()
loss.backward()
opt.step()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, a simple neural network, &lt;code&gt;TinyBobNet&lt;/code&gt;, is defined with two linear layers (&lt;code&gt;l1&lt;/code&gt; and &lt;code&gt;l2&lt;/code&gt;). The &lt;code&gt;forward&lt;/code&gt; method specifies the forward pass of the network. The example also demonstrates the usage of an optimizer, &lt;code&gt;SGD&lt;/code&gt;, and the typical training loop involving forward and backward passes.&lt;/p&gt;
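&lt;p&gt;The loss &lt;code&gt;out.mul(y).mean()&lt;/code&gt; assumes &lt;code&gt;y&lt;/code&gt; encodes the class labels as a mask over the log-probabilities. One plausible way to build such targets is sketched below in plain NumPy; the scaling shown is an assumption for illustration, not tinygrad's documented loss:&lt;/p&gt;

```python
import numpy as np

# Hedged sketch: turn integer class labels into targets compatible with
# loss = out.mul(y).mean() over log-probabilities. Each row places -num_classes
# at the label index, so mean() over all N*C entries equals the mean NLL.
def make_targets(labels, num_classes=10):
    y = np.zeros((len(labels), num_classes), dtype=np.float32)
    y[np.arange(len(labels)), labels] = -1.0 * num_classes
    return y

print(make_targets([3], num_classes=10)[0][3])  # -10.0
```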

&lt;h2&gt;
  
  
  Accelerators
&lt;/h2&gt;

&lt;p&gt;tinygrad supports various accelerators out of the box, including CPU, GPU (OpenCL), C Code (Clang), LLVM, METAL, CUDA, Triton, and even PyTorch. These accelerators provide hardware acceleration for the computations performed by tinygrad, improving the performance and efficiency of deep learning models. Adding support for additional accelerators is also straightforward: an accelerator only needs to implement a small set of low-level operations, 26 in total (optionally 27). For more information, consult the &lt;a href="///docs/adding_new_accelerators.md"&gt;documentation on adding new accelerators&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;To install tinygrad, the recommended method is to build it from source. Follow the steps below to install tinygrad on your system:&lt;/p&gt;

&lt;h3&gt;
  
  
  From Source
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Clone the tinygrad repository:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/geohot/tinygrad.git
cd tinygrad

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Install the package using pip:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 -m pip install -e . # or `py3 -m pip install -e .` if you are on windows

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember to include the &lt;code&gt;.&lt;/code&gt; at the end of the command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documentation
&lt;/h2&gt;

&lt;p&gt;The tinygrad documentation, including a quick start guide, can be found in the &lt;a href="https://dev.to/docs"&gt;docs/&lt;/a&gt; directory. The documentation provides detailed information on the various aspects of using tinygrad, such as tensors, autograd, optimizers, and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Quick Example comparing to PyTorch
&lt;/h3&gt;

&lt;p&gt;Here is a quick example that demonstrates the usage of tinygrad and compares it to the equivalent code in PyTorch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from tinygrad.tensor import Tensor

x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy()) # dz/dx
print(y.grad.numpy()) # dz/dy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The equivalent code in PyTorch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import torch

x = torch.eye(3, requires_grad=True)
y = torch.tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy()) # dz/dx
print(y.grad.numpy()) # dz/dy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example demonstrates the similarity between tinygrad and PyTorch in terms of tensor operations and autograd functionality.&lt;/p&gt;
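&lt;p&gt;The gradients both snippets print can also be checked by hand. Since &lt;code&gt;z = sum(y @ x)&lt;/code&gt; with &lt;code&gt;x&lt;/code&gt; the 3x3 identity, dz/dx[i][j] = y[i] and dz/dy[j] is the j-th row sum of x. A plain NumPy verification (an illustration, independent of either framework):&lt;/p&gt;

```python
import numpy as np

# Hand-check of the autograd results above: z = sum(y @ x) with x = eye(3)
x = np.eye(3)
y = np.array([[2.0, 0.0, -2.0]])
dz_dx = np.repeat(y.T, 3, axis=1)        # dz/dx[i][j] = y[i], constant per row
dz_dy = x.sum(axis=1, keepdims=True).T   # dz/dy[j] = row sums of x
print(dz_dx)  # [[2,2,2],[0,0,0],[-2,-2,-2]]
print(dz_dy)  # [[1,1,1]]
```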

&lt;h2&gt;
  
  
  Contributing
&lt;/h2&gt;

&lt;p&gt;tinygrad has received significant interest from the community, and contributions are welcome. If you're interested in contributing to the project, here are some guidelines to follow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bug fixes are highly appreciated and always welcome. If you encounter a bug, feel free to submit a fix.&lt;/li&gt;
&lt;li&gt;When modifying the code, make sure you understand the changes you're making.&lt;/li&gt;
&lt;li&gt;Code golf pull requests will be closed, but conceptual cleanups are encouraged.&lt;/li&gt;
&lt;li&gt;If you're adding new features, please include appropriate tests to ensure their correctness.&lt;/li&gt;
&lt;li&gt;Improving test coverage is highly beneficial. Reliable and non-brittle tests are encouraged.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more detailed guidelines, refer to the &lt;a href="///CONTRIBUTING.md"&gt;CONTRIBUTING.md&lt;/a&gt; file in the repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running Tests
&lt;/h3&gt;

&lt;p&gt;To run the full tinygrad test suite, use the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 -m pip install -e '.[testing]'
python3 -m pytest
python3 -m pytest -v -k TestTrain
python3 ./test/models/test_train.py TestTrain.test_efficientnet

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands install the necessary dependencies for testing and execute the tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;tinygrad offers a simple yet powerful framework for deep learning. Its ease of use, support for various accelerators, and compatibility with PyTorch make it an attractive option for developers and researchers. Whether you're building neural networks, implementing advanced models like LLaMA and Stable Diffusion, or exploring new accelerators, tinygrad provides a solid foundation. With ongoing development and community contributions, tinygrad is poised to become a valuable tool in the machine learning ecosystem.&lt;/p&gt;

&lt;p&gt;To learn more about tinygrad, visit the &lt;a href="https://github.com/geohot/tinygrad" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; and the &lt;a href="https://tinygrad.org" rel="noopener noreferrer"&gt;official website&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>tinygrad</category>
      <category>tensorflow</category>
      <category>pytorch</category>
    </item>
    <item>
      <title>OpenChat: Simplifying Chatbot Creation with Large Language Models</title>
      <dc:creator>StackFoss</dc:creator>
      <pubDate>Wed, 21 Jun 2023 18:54:14 +0000</pubDate>
      <link>https://dev.to/stackfoss/openchat-simplifying-chatbot-creation-with-large-language-models-25a4</link>
      <guid>https://dev.to/stackfoss/openchat-simplifying-chatbot-creation-with-large-language-models-25a4</guid>
      <description>&lt;h1&gt;
  
  
  OpenChat: Simplifying Chatbot Creation with Large Language Models
&lt;/h1&gt;




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;OpenChat is a chatbot console for everyday users that simplifies working with large language models. As AI continues to advance, installing and using these models has become overwhelming. OpenChat addresses this challenge with a two-step setup that yields a comprehensive chatbot console, serving as a central hub for managing multiple customized chatbots.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important disclaimer: This project is not production-ready and is meant for a local environment at this early stage. We quickly built this project to validate the idea, so please excuse any shortcomings in the code. You may come across several areas that require enhancements, and we truly appreciate your support by opening issues, submitting pull requests, and providing suggestions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Try OpenChat Now
&lt;/h2&gt;

&lt;p&gt;You can try OpenChat on &lt;a href="http://openchat.so/"&gt;openchat.so&lt;/a&gt;. Visit the website to explore the capabilities of OpenChat and experience the power of large language models in chatbot development.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J3zhycU2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/openchatai/OpenChat/assets/32633162/112a72a7-4314-474b-b7b5-91228558370c" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J3zhycU2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/openchatai/OpenChat/assets/32633162/112a72a7-4314-474b-b7b5-91228558370c" alt="OpenChat Demo" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Current Features
&lt;/h2&gt;

&lt;p&gt;OpenChat provides the following features to simplify chatbot creation and management:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create unlimited local chatbots based on GPT-3 (and GPT-4 if available).&lt;/li&gt;
&lt;li&gt;Customize your chatbots by providing PDF files, websites, and soon, integrations with platforms like Notion, Confluence, and Office 365.&lt;/li&gt;
&lt;li&gt;Each chatbot has unlimited memory capacity, enabling seamless interaction with large files such as a 400-page PDF.&lt;/li&gt;
&lt;li&gt;Embed chatbots as widgets on your website or internal company tools.&lt;/li&gt;
&lt;li&gt;Use your entire codebase as a data source for your chatbots (pair programming mode).&lt;/li&gt;
&lt;li&gt;And much more!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Roadmap
&lt;/h2&gt;

&lt;p&gt;OpenChat is continuously evolving and improving. Here is the roadmap for upcoming features and enhancements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Create unlimited chatbots&lt;/li&gt;
&lt;li&gt;
Share chatbots via URL&lt;/li&gt;
&lt;li&gt;
Integrate chatbots on any website using JS (as a widget on the bottom right corner)&lt;/li&gt;
&lt;li&gt;
Support GPT-3 models&lt;/li&gt;
&lt;li&gt;
Support vector database to provide chatbots with larger memory&lt;/li&gt;
&lt;li&gt;
Accept websites as a data source&lt;/li&gt;
&lt;li&gt;
Accept PDF files as a data source&lt;/li&gt;
&lt;li&gt;
Support multiple data sources per chatbot&lt;/li&gt;
&lt;li&gt;
Support ingesting an entire codebase using the GitHub API and using it as a data source with pair programming mode&lt;/li&gt;
&lt;li&gt;
Support pre-defined messages with a single click&lt;/li&gt;
&lt;li&gt;
Support Slack integration (allow users to connect chatbots with their Slack workspaces)&lt;/li&gt;
&lt;li&gt;
Support Intercom integration (enable users to sync chat conversations with Intercom)&lt;/li&gt;
&lt;li&gt;
Support offline open-source models (e.g., Alpaca, LLM drivers)&lt;/li&gt;
&lt;li&gt;
Support Vertex AI and Palm as LLMs&lt;/li&gt;
&lt;li&gt;
Support Confluence, Notion, Office 365, and Google Workspace&lt;/li&gt;
&lt;li&gt;
Refactor the codebase to be API ready&lt;/li&gt;
&lt;li&gt;
Create a new UI designer for website-embedded chatbots&lt;/li&gt;
&lt;li&gt;
Support custom input fields for chatbots&lt;/li&gt;
&lt;li&gt;
Support offline usage: a major feature under which OpenChat will run fully offline, with no internet connection (offline LLMs, offline vector DBs)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We welcome your feedback and ideas! If you have any suggestions or cool ideas, feel free to reach out to us.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;To get started with OpenChat, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Make sure you have Docker installed on your machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Clone the OpenChat Git repository by running the following command:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@github.com:openchatai/OpenChat.git

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Update the &lt;code&gt;common.env&lt;/code&gt; file with your API keys:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPENAI_API_KEY=# you can get it from your account in openai.com
PINECONE_API_KEY=# you can get from "API Keys" tab in pinecone
PINECONE_ENVIRONMENT=# you can get it after creating your index in pinecone
PINECONE_INDEX_NAME=# you can get it after creating your index in pinecone

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: For Pinecone DB, make sure that the dimension is equal to 1536.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the repository folder and run the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;make install

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the installation is complete, you can access the OpenChat console at &lt;a href="http://localhost:8000"&gt;http://localhost:8000&lt;/a&gt;.&lt;/p&gt;
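&lt;p&gt;The Pinecone note above (a dimension of 1536, matching OpenAI's text-embedding-ada-002 vectors) can be sanity-checked in code. The helper below is hypothetical and not part of OpenChat; it merely illustrates validating an embedding's dimensionality before upserting to the index:&lt;/p&gt;

```python
# Hypothetical helper (not part of OpenChat): check an embedding's dimension
# before upserting, since the Pinecone index must be created with dimension 1536.
EXPECTED_DIM = 1536  # dimensionality of OpenAI's text-embedding-ada-002 vectors

def validate_embedding(vector):
    if len(vector) != EXPECTED_DIM:
        raise ValueError(f"expected a {EXPECTED_DIM}-dim vector, got {len(vector)}")
    return vector

print(len(validate_embedding([0.0] * EXPECTED_DIM)))  # 1536
```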

&lt;h2&gt;
  
  
  Upgrade Guide
&lt;/h2&gt;

&lt;p&gt;OpenChat strives to avoid introducing breaking changes. To upgrade to the latest version, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;git pull&lt;/code&gt; to fetch the latest changes from the repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;make install&lt;/code&gt; to install the latest dependencies and update the project.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Acknowledgements
&lt;/h2&gt;

&lt;p&gt;We would like to express our gratitude to the following contributors for their valuable contributions to the OpenChat project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/eltociear"&gt;Ikko Eltociear Ashimine&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/jsindy"&gt;Joshua Sindy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/erjanmx"&gt;Erjan Kalybek&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://woahai.com/"&gt;WoahAI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project follows the &lt;a href="https://github.com/all-contributors/all-contributors"&gt;all-contributors&lt;/a&gt; specification. Contributions of any kind are welcome!&lt;/p&gt;

&lt;h2&gt;
  
  
  License
&lt;/h2&gt;

&lt;p&gt;OpenChat is licensed under the MIT License.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;OpenChat aims to simplify the creation and management of chatbots powered by large language models. With its user-friendly interface and extensive features, OpenChat enables users to leverage the power of AI in their chatbot projects. Whether you are a developer or a non-technical user, OpenChat provides a streamlined experience for building and deploying chatbots. We are continuously working on enhancing OpenChat and incorporating new features based on user feedback and requirements. Feel free to join our Discord community and contribute to the project. Together, let's unlock the potential of large language models in chatbot development!&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>programming</category>
      <category>stackfoss</category>
      <category>openchat</category>
    </item>
  </channel>
</rss>
