<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Adrian Brown</title>
    <description>The latest articles on DEV Community by Adrian Brown (@adrbrownx).</description>
    <link>https://dev.to/adrbrownx</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F670990%2Fca8aeced-5533-45d6-8179-8d205deb894c.png</url>
      <title>DEV Community: Adrian Brown</title>
      <link>https://dev.to/adrbrownx</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/adrbrownx"/>
    <language>en</language>
    <item>
      <title>Open-source collaboration</title>
      <dc:creator>Adrian Brown</dc:creator>
      <pubDate>Thu, 15 Jun 2023 04:58:44 +0000</pubDate>
      <link>https://dev.to/adrbrownx/open-source-collaboration-5aj5</link>
      <guid>https://dev.to/adrbrownx/open-source-collaboration-5aj5</guid>
      <description>&lt;p&gt;My name is Adrian and I'm an open-source maker. I just created &lt;a href="https://github.com/apolloapi/apolloapi"&gt;Apollo&lt;/a&gt;, an open-source LLM aggregation library for Python.&lt;/p&gt;

&lt;p&gt;I'm posting because what better way is there to share the news! We're currently planning a hackathon; if you're interested, feel free to sign up for the &lt;a href="https://apolloapi.io/"&gt;waitlist&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I'm looking to add support for a full API to manage in-app integrations, user subscription management, preferences, and more. I'm looking for feedback from people who have experience with any of these topics.&lt;/p&gt;

&lt;p&gt;BTW, we're looking for new contributors if you find this interesting :)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fwdxw4qru2q6wzuhrfv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fwdxw4qru2q6wzuhrfv.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>webdev</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How we reduced our review time by 90%</title>
      <dc:creator>Adrian Brown</dc:creator>
      <pubDate>Thu, 11 May 2023 21:30:47 +0000</pubDate>
      <link>https://dev.to/useapolloapi/how-we-reduced-our-review-time-by-90-3hnc</link>
      <guid>https://dev.to/useapolloapi/how-we-reduced-our-review-time-by-90-3hnc</guid>
      <description>&lt;p&gt;At &lt;a href="https://apolloapi.io"&gt;Apollo&lt;/a&gt;, we use a no-code automation platform to let users manage their integrations &amp;amp; tasks. There is plenty of debate over whether automating content moderation is a one-size-fits-all approach; it isn't. AI-powered content moderation has become table stakes for protecting and enhancing the user experience at scale, but for us, visibility into other areas of enterprise and personal decision making led us to tools like automation rules &amp;amp; workflow platforms. &lt;/p&gt;

&lt;h2&gt;
  
  
  TLDR;
&lt;/h2&gt;

&lt;p&gt;Here's a case study in content moderation from &lt;a href="https://yikyak.com/"&gt;YikYak&lt;/a&gt;, where we migrated off of &lt;a href="https://retool.com/"&gt;Retool&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24ry4zjl0f19sok7moqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24ry4zjl0f19sok7moqf.png" alt="Image description" width="800" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The bigger, the slower
&lt;/h2&gt;

&lt;p&gt;For all the advantages, there are a few drawbacks to using Retool and a standalone automation system for task monitoring and completion. We noticed a particular drawback when scaling the number of users and the amount of activity per user: the time it takes to accurately detect bad actors and then investigate. A typical investigation task would run anywhere from 3 minutes to 4 days, and that was for every automated flag or user report.&lt;/p&gt;

&lt;p&gt;On top of that, getting truly accurate data was a nightmare. Standalone automation systems (environments with a single AI model) would score content correctly only a fraction of the time.&lt;/p&gt;

&lt;p&gt;This time spent investigating and analyzing to make an informed decision reduced moderator efficiency and collectively wasted an enormous amount of talented people's time. For a social app dedicated to curating a safer community, this was unacceptable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Debugging the slowest tasks
&lt;/h2&gt;

&lt;p&gt;Inspecting a typical 4-day case/support ticket, it was clear that two specific steps took 70-80% of the overall time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gathering metadata from existing platform policy, account information, and community guidelines: ~4 hours&lt;/li&gt;
&lt;li&gt;Connecting historical data to recent events: ~1 day&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Migrating from a Standalone Automation System to an Integrated Automation System
&lt;/h2&gt;

&lt;p&gt;Retool coupled with our standalone automation system provided a fast, efficient infrastructure-as-a-service toolkit, but our benchmarks showed a massive improvement in active response time when we migrated from an independent automation system to an integrated automation system like &lt;a href="https://apolloapi.io"&gt;Apollo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Moving off of Retool + standalone automation took around 2 minutes; the migration to integrated automation (&lt;a href="https://apolloapi.io"&gt;Apollo&lt;/a&gt;) was effortless: just add metadata about your internal databases and click a button, that's all. The aggregation and integration for each database/API resource were synced in... wait for it... just milliseconds! And that's without any code at all! Once an integrated automation tool like &lt;a href="https://apolloapi.io"&gt;Apollo&lt;/a&gt; increases the accuracy of the scored content, it takes less than 40 seconds to deploy and integrate multiple AI models under one API.&lt;/p&gt;

&lt;p&gt;Cutting the standalone system's time for every investigation from ~3 minutes to 13 seconds, while improving accuracy for moderators, is a HUGE win. &lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Reducing our active response times from 3+ minutes to around 13 seconds significantly impacts the user experience of our community. It also reduces the overhead incurred by regulatory violations or non-compliance, which can run up a bill of as much as $400k a month! You can check out our beta in our GitHub repository.&lt;/p&gt;
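For the curious, a quick back-of-the-envelope check in Python (with the times rounded to whole seconds) shows where the headline number comes from:

```python
# Rough sanity check on the claimed ~90% reduction in active response time.
before_s = 180  # ~3 minutes per investigation with the standalone setup
after_s = 13    # ~13 seconds per investigation after migrating

reduction = (before_s - after_s) / before_s
print(f"{reduction:.1%}")  # prints "92.8%"
```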

</description>
      <category>opensource</category>
      <category>webdev</category>
      <category>python</category>
      <category>ai</category>
    </item>
    <item>
      <title>What in the Python?</title>
      <dc:creator>Adrian Brown</dc:creator>
      <pubDate>Fri, 05 May 2023 00:24:00 +0000</pubDate>
      <link>https://dev.to/adrbrownx/what-in-the-python-44da</link>
      <guid>https://dev.to/adrbrownx/what-in-the-python-44da</guid>
      <description>&lt;h2&gt;
  
  
  TLDR;
&lt;/h2&gt;

&lt;p&gt;This tutorial explains how to use the &lt;a href="https://github.com/apolloapi/apolloapi/"&gt;Apollo SDK&lt;/a&gt; Python package to connect to an external database, transform data using Pandas, and perform Natural Language Processing on the data. With just a few lines of code, this powerful SDK provides a flexible interface for working with data and integrating with other services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have a &lt;a href="https://supabase.com/"&gt;Supabase instance&lt;/a&gt; spun up with data &lt;/li&gt;
&lt;li&gt;Create an &lt;a href="https://docs.apolloapi.io/docs/api/authentication"&gt;API token&lt;/a&gt; with Apollo API&lt;/li&gt;
&lt;li&gt;A copy of your database connection string in URI format&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Quickstart
&lt;/h1&gt;

&lt;p&gt;Let's begin!&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up a Connection to Supabase
&lt;/h2&gt;

&lt;p&gt;First, let's install the Apollo SDK Python package using pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;pip&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;apollo&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;sdk&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the package is installed, we can use it to set up a connection to &lt;a href="https://supabase.com/"&gt;Supabase&lt;/a&gt;, a popular open-source backend platform. Here's an example code snippet that shows how to connect to Supabase using the Apollo SDK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# import the package
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;apollo.client&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Apollo&lt;/span&gt;

&lt;span class="c1"&gt;# sync data from your database instance (we support supabase at the current moment or postgresql via uri format)
&lt;/span&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgres://username:password@hostname:port/database_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# If you want to test out operation on your external connection
&lt;/span&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch_tables&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Query the table in desc or asc order
&lt;/span&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;desc&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;table&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;column&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code imports the Apollo module from the &lt;code&gt;apollo.client&lt;/code&gt; package and uses the &lt;code&gt;Apollo.connect()&lt;/code&gt; method to establish a connection to a Supabase database instance. Once the connection is established, we can use the &lt;code&gt;Apollo.query()&lt;/code&gt; method to run queries against the database and fetch data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Transforming Data Using Pandas
&lt;/h2&gt;

&lt;p&gt;Once we have fetched data from the external database, we can use Pandas to transform the data into a table. Here's an example code snippet that shows how to do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;

&lt;span class="c1"&gt;# the data returned by the Apollo.query() method
&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[...]&lt;/span&gt; &lt;span class="c1"&gt;# replace with the actual data
&lt;/span&gt;
&lt;span class="c1"&gt;# convert the data into a Pandas DataFrame
&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DataFrame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# return the first 5 rows of the DataFrame
&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;head&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# print the output to the console
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Example output of the DataFrame, using Reddit data&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hew2na831m39793avjv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hew2na831m39793avjv.png" alt="Image description" width="800" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This code uses the Pandas package to transform the data returned by the &lt;code&gt;Apollo.query()&lt;/code&gt; method into a table. We first import the pandas module and create a Pandas DataFrame from the data using the &lt;code&gt;pd.DataFrame()&lt;/code&gt; method. We then use the &lt;code&gt;head()&lt;/code&gt; method to return the first 5 rows of the DataFrame.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sending Data to Apollo's AI model
&lt;/h2&gt;

&lt;p&gt;Finally, we can use the Apollo SDK to send the data to a content API for further processing against an AI model. Here's an example code snippet that shows how to do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# import the necessary modules
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;apollo.client&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Apollo&lt;/span&gt;

&lt;span class="c1"&gt;# set up the connection to the content API
&lt;/span&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;apollo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_API_TOKEN_HERE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# select the 'text' column from the DataFrame
&lt;/span&gt;&lt;span class="n"&gt;texts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# loop through each text in the 'text' column and send it to the content API
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;texts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# send the text to the content API to detect any threats
&lt;/span&gt;    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detectText&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;contains&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Threats&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# print the result to the console
&lt;/span&gt;    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code sets up the connection to a content API using the &lt;code&gt;Apollo.use()&lt;/code&gt; method and provides the necessary API token. We then select the text column from the Pandas DataFrame and loop through each text in the column. We use the &lt;code&gt;Apollo.detectText()&lt;/code&gt; method to send each text to the content API and detect any potential threats.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example output&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    status: 200,
        data: {
            actionsTriggered: [
            {
                id: 'ks93-as3',
                name: 'Mute User'
            },
            {
                id: 'f2-xa03d',
                name: 'Escalate Content'
            }
        ]
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This tutorial explains how to use the Apollo SDK Python package to connect to an external database, transform data using Pandas, and utilize an AI model for NLP. With just a few lines of code, this powerful SDK provides a flexible interface for working with data and integrating with other services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Useful links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.apolloapi.io/"&gt;Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Collaborate with us on &lt;a href="https://github.com/apolloapi/"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Join our &lt;a href="https://discord.gg/ZUH7f7AzUY/"&gt;Discord&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Check out the &lt;a href="https://apolloapi.io"&gt;website&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Our &lt;a href="https://www.thebriefnewsletter.com"&gt;newsletter&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>tutorial</category>
      <category>python</category>
      <category>opensource</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Using APIs with AI</title>
      <dc:creator>Adrian Brown</dc:creator>
      <pubDate>Thu, 27 Apr 2023 23:18:15 +0000</pubDate>
      <link>https://dev.to/adrbrownx/how-to-work-w-ai-models-15ep</link>
      <guid>https://dev.to/adrbrownx/how-to-work-w-ai-models-15ep</guid>
      <description>&lt;h2&gt;
  
  
  TLDR;
&lt;/h2&gt;

&lt;h4&gt;
  
  
  What is Apollo-SDK?
&lt;/h4&gt;

&lt;p&gt;It's a CLI toolkit for developers to build their own automation pipelines. You can aggregate multiple LLMs in one place behind a single API and sync data across different sources. This lets you centralize the deployment of multiple models into one channel and build from there using one API. You can use your model, our model, or a combination of both. &lt;/p&gt;

&lt;p&gt;The Apollo user interface (UI) is designed for creating decision-making workflows. With the Apollo platform, you can perform no-code AI tasks using the best engines on the market. The Apollo SDK is looking for open-source contributors. The UI is coming soon!&lt;/p&gt;

&lt;h4&gt;
  
  
  What can I do with it?
&lt;/h4&gt;

&lt;p&gt;Check out our feature list &lt;a href="https://docs.apolloapi.io/docs/features"&gt;here&lt;/a&gt;. We offer models from many providers across image, speech, video, and text. &lt;/p&gt;

&lt;h2&gt;
  
  
  Quickstart
&lt;/h2&gt;

&lt;p&gt;Let's install the SDK first...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install apollo-sdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Let's set up your first integration!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It will pull from your local database (and keep it in sync).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# import the package
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;apollo.client&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Apollo&lt;/span&gt;

&lt;span class="c1"&gt;# sync data from your database instance
# (we support supabase at the current moment or postgresql via uri format)
&lt;/span&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgres://username:password@hostname:port/database_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# If you want to test out operation on your external connection
&lt;/span&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch_tables&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;desc&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;table&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;column&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;...and create a workflow with a simple command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# import the package
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;apollo.client&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Apollo&lt;/span&gt;

&lt;span class="c1"&gt;# Use our custom model to test building decisions
&lt;/span&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Apollo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# We support video, speech, image and text. Try text!
&lt;/span&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detectText&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Phrase1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;contains&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Threats&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In practice, you probably want to use our user interface (UI) so you don't have to write code. If so, ping us at &lt;a href="mailto:adrian@apolloapi.io"&gt;adrian@apolloapi.io&lt;/a&gt;!&lt;/p&gt;

&lt;h3&gt;
  
  
  🔌 Integrations
&lt;/h3&gt;

&lt;p&gt;We've pre-built integrations with providers like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supabase&lt;/li&gt;
&lt;li&gt;PostgreSQL&lt;/li&gt;
&lt;li&gt;Firebase&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;...We have a huge need for integrations with MongoDB, MySQL, and more.&lt;/p&gt;

&lt;p&gt;To get involved, check out some of our issues here!&lt;/p&gt;

&lt;h3&gt;
  
  
  🧪 Clients
&lt;/h3&gt;

&lt;p&gt;Apollo has built-in support for custom API calls via REST using the make_http_requests &amp;amp; make_http_request methods.&lt;/p&gt;
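The method names above come from the SDK, but the exact signatures aren't shown here, so this is only an illustrative stand-in sketching the kind of REST call such a helper wraps; the endpoint, parameters, and function body below are all hypothetical:

```python
# Illustrative sketch only: the real apollo-sdk signature may differ.
import json
import urllib.request

def make_http_request(url, method="GET", body=None, headers=None):
    # Build a JSON REST request; a real client would go on to send it.
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(
        url,
        data=data,
        method=method,
        headers={"Content-Type": "application/json", **(headers or {})},
    )

# Hypothetical endpoint and payload, purely for illustration.
req = make_http_request(
    "https://api.example.com/moderate",
    method="POST",
    body={"text": "Phrase1", "rule": "contains", "label": "Threats"},
)
print(req.get_method(), req.full_url)  # prints "POST https://api.example.com/moderate"
```

In a real client you would pass the prepared request to `urllib.request.urlopen` (or let the SDK send it for you).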

&lt;p&gt;We are building the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Graphql&lt;/li&gt;
&lt;li&gt;gRPC&lt;/li&gt;
&lt;li&gt;XML formats&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>opensource</category>
      <category>api</category>
    </item>
    <item>
      <title>How To Uninstall Core Audio Driver On Mac</title>
      <dc:creator>Adrian Brown</dc:creator>
      <pubDate>Thu, 27 Apr 2023 23:06:39 +0000</pubDate>
      <link>https://dev.to/adrbrownx/how-to-uninstall-core-audio-driver-on-mac-1j6n</link>
      <guid>https://dev.to/adrbrownx/how-to-uninstall-core-audio-driver-on-mac-1j6n</guid>
      <description>&lt;h2&gt;
  
  
  TLDR;
&lt;/h2&gt;

&lt;p&gt;In this guide, we will show you the steps to uninstall the Core Audio Driver (MSTeamsAudioDevice.driver) from your Mac. Ever since work from home became the new normal, users have flocked over to numerous video conferencing apps. Many users have been inclined towards Teams due to its handy integration with other Microsoft services.&lt;/p&gt;

&lt;p&gt;However, as of late, numerous Mac users have voiced concern that even after uninstalling Microsoft Teams, its core audio driver file MSTeamsAudioDevice.driver is still present on their system. Not only that, it continues to run in the background and ends up hogging valuable system resources as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Uninstall Core Audio Driver MSTeamsAudioDevice.driver on Mac
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Launch Finder and press Shift+Command+. to view hidden files&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfgqw0ulnhyaaq1bxlil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfgqw0ulnhyaaq1bxlil.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Then delete the file from the following directory:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/Library/Audio/Plug-Ins/HAL/MSTeamsAudioDevice.driver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="3"&gt;
&lt;li&gt;After that, delete the file from the following location as well:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/Library/Audio/Plug-Ins/HAL/MSTeamsAudioDevice.driver/Contents/MacOS/MSTeamsAudioDevice
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Now open Activity Monitor, select Core Audio Driver, and hit Quit one final time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1owzn5q85t8t5xd7ahxl.png" alt="Image description"&gt;&lt;/p&gt;

&lt;p&gt;That’s it. These were the steps to uninstall the Core Audio Driver (MSTeamsAudioDevice.driver) from your Mac.&lt;/p&gt;

&lt;h4&gt;
  
  
  Check out Apollo API
&lt;/h4&gt;

&lt;p&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3axczs55snbu6rg4jboo.png" alt="Image description"&gt;&lt;/p&gt;

&lt;p&gt;We're building an aggregation tool for top AI providers and are looking for contributors! &lt;a href="https://github.com/apolloapi/apolloapi/"&gt;Star&lt;/a&gt; our GitHub repo and connect with us on &lt;a href="https://discord.gg/ZUH7f7AzUY"&gt;Discord&lt;/a&gt;.&lt;/p&gt;
</description>
      <category>tutorial</category>
      <category>community</category>
      <category>todayilearned</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>LLMs using a single API</title>
      <dc:creator>Adrian Brown</dc:creator>
      <pubDate>Thu, 27 Apr 2023 05:54:34 +0000</pubDate>
      <link>https://dev.to/adrbrownx/llms-using-a-single-api-3lkf</link>
      <guid>https://dev.to/adrbrownx/llms-using-a-single-api-3lkf</guid>
      <description>&lt;p&gt;Use your model, &lt;a href="https://github.com/apolloapi/apolloapi"&gt;Apollo's&lt;/a&gt; model or a combination of both.&lt;/p&gt;

&lt;p&gt;Today we're talking about aggregating different AI models under one API. I'm an open-source maker and have worked on various projects, but none as fun as this! &lt;/p&gt;

&lt;p&gt;Apollo allows you to integrate with any existing provider and sync data across multiple resources. The user interface (UI) is designed for creating decision-making workflows. With the Apollo platform, you can perform no-code AI tasks using the best engines on the market. The Apollo SDK is a CLI toolkit for developers to build their own automation pipelines.&lt;/p&gt;

&lt;p&gt;...Let's take a look at the &lt;a href="https://github.com/apolloapi/apolloapi"&gt;SDK&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  TLDR;
&lt;/h2&gt;

&lt;p&gt;Skip to the &lt;a href="https://apolloapi-doc.vercel.app/docs/api/quickstart"&gt;quickstart&lt;/a&gt; to start playing around with the SDK!&lt;/p&gt;

&lt;h2&gt;
  
  
  Show me an example
&lt;/h2&gt;

&lt;p&gt;In your code you can write:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;postgres://username:password@hostnam...&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...)&lt;/span&gt; &lt;span class="c1"&gt;# Starts syncing content forever!
&lt;/span&gt;
&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;OpenAI&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;moderation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...)&lt;/span&gt; &lt;span class="c1"&gt;# Connect to existing providers!
&lt;/span&gt;
&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;rule&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Phrase1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0.8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Create custom rules!
&lt;/span&gt;
&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Apollo&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;violence&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...)&lt;/span&gt; &lt;span class="c1"&gt;# Connect with our internal models!
&lt;/span&gt;
&lt;span class="c1"&gt;# Detect bad actors at scale!
&lt;/span&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detectImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Image1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;contains&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;VERY_LIKELY&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Image Analysis/OCR
&lt;/span&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detectSpeech&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Audio1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;contains&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;UNLIKELY&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Audio Processing
&lt;/span&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detectVideo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Video1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;contains&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POSSIBLE&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Video Analysis
&lt;/span&gt;&lt;span class="n"&gt;Apollo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detectText&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Phrase1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;contains&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;UNKNOWN&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Text Analysis
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Apollo then takes care of:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Automated detection across image, video, audio, or text&lt;/li&gt;
&lt;li&gt;Connecting with any external resource&lt;/li&gt;
&lt;li&gt;Keeping your integration robust, so you never have to worry about stuck or stale data, or false positives&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Cool, what can I build with it?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Teams in companies use Apollo to build native in-app connections related to active response, content moderation, fraud detection, etc.&lt;/li&gt;
&lt;li&gt;Some automate their personal lives with Apollo by integrating with Discord communities, or do other things like intelligent search. Check out our &lt;a href="https://apolloapi-doc.vercel.app/docs/features"&gt;Features!&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Apollo can help you quickly build hobby projects&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Thanks for making it this far! If this interests you, feel free to star our repo or join the Discord: &lt;a href="https://github.com/apolloapi/apolloapi"&gt;https://github.com/apolloapi/apolloapi&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatgpt</category>
      <category>python</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Ansible Inventory Plugin</title>
      <dc:creator>Adrian Brown</dc:creator>
      <pubDate>Tue, 15 Nov 2022 15:13:48 +0000</pubDate>
      <link>https://dev.to/adrbrownx/aws-ec2-inventory-plugin-w-ansible-om3</link>
      <guid>https://dev.to/adrbrownx/aws-ec2-inventory-plugin-w-ansible-om3</guid>
      <description>&lt;h3&gt;
  
  
  Intro
&lt;/h3&gt;

&lt;p&gt;This &lt;a href="https://github.com/cloudguruab/ansible-plugin"&gt;repo&lt;/a&gt; is a small part of a larger effort: executing generic Ansible commands inside an Ansible execution environment, using ansible-runner to consume inventory from an external source rather than a static inventory file.&lt;/p&gt;

&lt;p&gt;Here is the repo that is used inside an execution env for those who are curious: &lt;a href="https://github.com/cloudguruab/edpm_plugin"&gt;https://github.com/cloudguruab/edpm_plugin&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have questions feel free to open an issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  Requirements &amp;amp; Dependencies
&lt;/h3&gt;

&lt;p&gt;I have specified all the requirements for this plugin &lt;a href="https://github.com/cloudguruab/ansible-plugin"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ansible
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install ansible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;AWS CLI
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install boto3
pip install awscli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;Be sure the desired collection is installed if you're pulling from a pre-built module&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-galaxy collection list # list installed plugins
ansible-galaxy collection install amazon.aws
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running &lt;code&gt;ansible-galaxy collection list&lt;/code&gt; you should see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Collection        Version
----------------- -------
amazon.aws        5.1.0
community.general 5.2.0
openstack.cloud   1.10.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done you need to navigate to your AWS console and obtain your &lt;code&gt;aws_access_key&lt;/code&gt; and &lt;code&gt;aws_secret_key&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#setting up auth 
bash-5.1$ aws configure
AWS Access Key ID [None]: key
AWS Secret Access Key [None]: key
Default region name [None]: us-east-1
Default output format [None]: json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Updating your project
&lt;/h3&gt;

&lt;p&gt;We need to update our inventory source with our &lt;code&gt;private_ip&lt;/code&gt; to connect to our instance. Our Ansible config file will also map to our inventory source and the plugin we are using. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;aws_ec2.yml&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  - key: tags.Tagname
filters:
  instance-state-name: running
compose:
  ansible_host: private_ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ansible.cfg&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[defaults]
enable_plugins=aws_ec2
inventory=aws_ec2.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testing
&lt;/h3&gt;

&lt;p&gt;Let's look for our host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-inventory -i aws_ec2.yml --list #look for host
ansible-inventory -i aws_ec2.yml --graph # graph view
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
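&lt;p&gt;As a quick sanity check (a sketch, assuming your AWS credentials are configured and the instances are reachable over SSH), you can run an ad-hoc ping against the discovered hosts:&lt;br&gt;
&lt;/p&gt;

```shell
# ping every host in the dynamic inventory
ansible all -i aws_ec2.yml -m ping

# inspect the variables the plugin composed for one host (replace HOSTNAME)
ansible-inventory -i aws_ec2.yml --host HOSTNAME
```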



&lt;h4&gt;
  
  
  Resources
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.ansible.com/ansible/latest/plugins/inventory.html#using-inventory-plugins"&gt;Ansible&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ansible-runner.readthedocs.io/en/stable/"&gt;Anisble-runner&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/"&gt;AWS&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ansible</category>
      <category>aws</category>
      <category>python</category>
      <category>opensource</category>
    </item>
    <item>
      <title>What is Section 230?</title>
      <dc:creator>Adrian Brown</dc:creator>
      <pubDate>Sat, 14 May 2022 13:08:33 +0000</pubDate>
      <link>https://dev.to/adrbrownx/what-is-section-230-1c70</link>
      <guid>https://dev.to/adrbrownx/what-is-section-230-1c70</guid>
      <description>&lt;p&gt;What to remember about section 230 of the Communications Decency Act of 1996: it impacts everyone. Section 230s objective is to promote the development of the world wide web by providing immunity to web providers and users alike. This immunity granted by section 230 covers granting online access to information created by third parties and acting in good faith to edit or restrict access to objectionable material, even if the material would be considered protected constitutionally. Its impact on big tech giants like Facebook, Twitter, and Google shows us how section 230 has shaped today’s social communities and how we begin to look at moderating content.&lt;/p&gt;

</description>
      <category>watercooler</category>
      <category>news</category>
    </item>
    <item>
      <title>Building an API</title>
      <dc:creator>Adrian Brown</dc:creator>
      <pubDate>Wed, 16 Feb 2022 22:28:13 +0000</pubDate>
      <link>https://dev.to/adrbrownx/building-an-api-2b29</link>
      <guid>https://dev.to/adrbrownx/building-an-api-2b29</guid>
      <description>&lt;h3&gt;
  
  
  🐴 Why
&lt;/h3&gt;

&lt;p&gt;This tutorial serves as an example of using the Supabase API to connect to your database instance and build a service that periodically caches and serves consumer credit data on client request. This project covers Redis as a caching mechanism, Supabase to support our Postgres instance, and FastAPI as our framework. &lt;/p&gt;

&lt;h3&gt;
  
  
  Before we start
&lt;/h3&gt;

&lt;p&gt;Set up the following, and use the code found at the last link to follow along. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://redis.io/topics/quickstart"&gt;Installing Redis&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/docs/reference"&gt;Setting up Supabase&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fastapi.tiangolo.com/tutorial/"&gt;Getting started with FastApi&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cloudguruab/supafast"&gt;The code&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ☂️ Setting up your environment
&lt;/h3&gt;

&lt;p&gt;Set up your virtual environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv &lt;span class="nb"&gt;env&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Activate your environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;source env&lt;/span&gt;/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the root directory run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  🤖 Starting Redis in development environment
&lt;/h3&gt;

&lt;p&gt;To begin working with Redis, run the following command; once the server is up, open a new terminal window.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;redis-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  👾 Activating your development server
&lt;/h3&gt;

&lt;p&gt;To start your local server, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvicorn main:app &lt;span class="nt"&gt;--reload&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On success of the command you should see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;INFO:     Uvicorn running on http://127.0.0.1:8000 &lt;span class="o"&gt;(&lt;/span&gt;Press CTRL+C to quit&lt;span class="o"&gt;)&lt;/span&gt;
INFO:     Started reloader process &lt;span class="o"&gt;[&lt;/span&gt;13385] using watchgod
INFO:     Started server process &lt;span class="o"&gt;[&lt;/span&gt;13387]
2022-02-11 19:32:12,509:INFO - Started server process &lt;span class="o"&gt;[&lt;/span&gt;13387]
INFO:     Waiting &lt;span class="k"&gt;for &lt;/span&gt;application startup.
2022-02-11 19:32:12,509:INFO - Waiting &lt;span class="k"&gt;for &lt;/span&gt;application startup.
2022-02-11 19:32:12,510:INFO -  02/11/2022 07:32:12 PM | CONNECT_BEGIN: Attempting to connect to Redis server...
2022-02-11 19:32:12,511:INFO -  02/11/2022 07:32:12 PM | CONNECT_SUCCESS: Redis client is connected to server.
INFO:     Application startup complete.
2022-02-11 19:32:12,511:INFO - Application startup complete.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  🎾 Endpoints
&lt;/h3&gt;

&lt;p&gt;Introduction to your application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;http &lt;span class="s2"&gt;"http://127.0.0.1:8000/"&lt;/span&gt;

HTTP/1.1 200 OK
content-length: 102
content-type: application/json
&lt;span class="nb"&gt;date&lt;/span&gt;: Wed, 16 Feb 2022 22:01:14 GMT
server: uvicorn

&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"👋 Hello"&lt;/span&gt;: &lt;span class="s2"&gt;"Please refer to the readme documentation for more or visit http://localhost:8000/docs"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Working with your Redis cache, the following call will pull data &lt;br&gt;
from your Supabase database and cache it. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;x-supafast-cache&lt;/code&gt; header field indicates that this response was found in the Redis cache (a.k.a. a Hit). &lt;/p&gt;

&lt;p&gt;The only other possible value for this field is Miss. The expires field and max-age value in the cache-control field indicate that this response will be considered fresh for 604321 seconds (about one week). This is expected since it was specified in the &lt;code&gt;@cache&lt;/code&gt; decorator.&lt;/p&gt;

&lt;p&gt;The etag field is an identifier that is created by converting the response data to a string and applying a hash function. If a request containing the if-none-match header is received, any etag value(s) included in the request will be used to determine if the data requested is the same as the data stored in the cache. If they are the same, a 304 NOT MODIFIED response will be sent. If they are not the same, the cached data will be sent with a 200 OK response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Command&lt;/span&gt;
http &lt;span class="s2"&gt;"http://127.0.0.1:8000/cachedResults"&lt;/span&gt;

&lt;span class="c"&gt;# Response Headers&lt;/span&gt;
HTTP/1.1 200 OK
cache-control: max-age&lt;span class="o"&gt;=&lt;/span&gt;604321
content-length: 894
content-type: application/json
&lt;span class="nb"&gt;date&lt;/span&gt;: Wed, 16 Feb 2022 21:53:56 GMT
etag: W/-9174636245072902018
expires: Wed, 23 Feb 2022 21:45:57 GMT
server: uvicorn
x-supafast-cache: Hit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
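&lt;p&gt;To make the etag behavior concrete, here is a minimal sketch of the scheme described above; the helper names are hypothetical, and the actual hash used by the caching library may differ:&lt;br&gt;
&lt;/p&gt;

```python
import hashlib
import json

def make_etag(response_data):
    # serialize the response data to a string and apply a hash function
    serialized = json.dumps(response_data, sort_keys=True)
    return 'W/' + hashlib.md5(serialized.encode()).hexdigest()

def status_for(if_none_match, cached_etag):
    # matching etags: the client's copy is current, so reply 304 NOT MODIFIED;
    # otherwise the cached data is sent with a 200 OK
    return 304 if if_none_match == cached_etag else 200
```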



</description>
      <category>python</category>
      <category>programming</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Automation and Multi-processing w/ Python</title>
      <dc:creator>Adrian Brown</dc:creator>
      <pubDate>Mon, 15 Nov 2021 20:05:01 +0000</pubDate>
      <link>https://dev.to/adrbrownx/automation-and-multi-processing-w-python-4d1k</link>
      <guid>https://dev.to/adrbrownx/automation-and-multi-processing-w-python-4d1k</guid>
      <description>&lt;h4&gt;
  
  
  TL;DR
&lt;/h4&gt;

&lt;p&gt;When it comes to running tasks, Python offers tools that let us automate them. Whether it's reads/writes or API calls, there are plenty of use cases where this is necessary.&lt;/p&gt;

&lt;p&gt;Let's start with understanding the what. So, what's a subprocess in Python? &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A subprocess in Python is a task that a python script delegates to the Operative system (OS) - Daniel Diaz, GeekFlare&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In short, a subprocess gives our Python script a direct line to the operating system, letting us delegate work the script couldn't do on its own.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;h4&gt;
  
  
  By the end of this tutorial, you will
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Understand the concept of subprocess&lt;/li&gt;
&lt;li&gt;Have learned the basics of the Python multiprocessing and time libraries&lt;/li&gt;
&lt;li&gt;Found meaningful examples/usecases where you can implement the subprocess/multiprocessing library&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Core Concepts
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;In this tutorial I am assuming you have some basic foundations in Python programming and data structures&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The nice part about the subprocess module is that it is part of the core utilities offered to us as Python developers. This means there is no need to install it as a dependency using pip or Pipenv; it's there for us already!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#calling the subprocess module into our environment
import subprocess
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Quick use cases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Program checker&lt;/li&gt;
&lt;li&gt;Setting up virtual environments &lt;/li&gt;
&lt;li&gt;Run other programming languages&lt;/li&gt;
&lt;li&gt;Open external programs &lt;/li&gt;
&lt;/ol&gt;
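&lt;p&gt;As a minimal sketch of the first use case (a program checker), the call below delegates a command to the OS and inspects the result:&lt;br&gt;
&lt;/p&gt;

```python
import subprocess
import sys

# ask the OS to run the current Python interpreter and report its version
result = subprocess.run(
    [sys.executable, "--version"],
    capture_output=True,  # collect stdout/stderr instead of printing them
    text=True,            # decode bytes to str
)

print(result.returncode)  # prints 0 when the program ran successfully
```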

&lt;h3&gt;
  
  
  Our Application
&lt;/h3&gt;

&lt;h6&gt;
  
  
&lt;em&gt;In this tutorial we will be looking at using the &lt;code&gt;Process.start()&lt;/code&gt; API to run background computations for Excel data.&lt;/em&gt;
&lt;/h6&gt;

&lt;h3&gt;
  
  
  Tutorial
&lt;/h3&gt;

&lt;p&gt;After learning at a high level what the subprocess package does, and what it looks like at the import level, you're ready to move on to this part of the tutorial. &lt;/p&gt;

&lt;p&gt;One quick distinction: we will be using the &lt;code&gt;Process.start()&lt;/code&gt; API from the &lt;code&gt;multiprocessing&lt;/code&gt; library offered by Python. &lt;/p&gt;

&lt;p&gt;So, why cover Subprocess's?&lt;/p&gt;

&lt;p&gt;I like to explain things backwards. For me, this helps build the full context of what I'm learning, especially when the material is reading-intensive rather than visual. &lt;/p&gt;

&lt;h6&gt;
  
  
  Recap: We know what subprocesses are; now think of multiprocessing as the parent of what a subprocess allows you to do. Instead of running one process in the background, we can now bring work to the "foreground" by running multiple processes at once.
&lt;/h6&gt;

&lt;h3&gt;
  
  
  Task
&lt;/h3&gt;

&lt;p&gt;Here we are going to take our data set and, using some basic arithmetic, sum all the entries in our list of data. Once&lt;br&gt;
this is complete we will set the task to run every 60 seconds to simulate a batched process. To draw from real-world scenarios, this data set could be healthcare or financial data for a company, and your job has been to extract and transform this data into a list. The company then needs a summary of all the data in this list to make biweekly decisions. &lt;/p&gt;

&lt;h5&gt;
  
  
  Our data set
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#lets assume this data is updated every half minute
data = [3, 4, 7, 10, 2, 32, 15, 8]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Code
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from multiprocessing import Process
from file_location import data

def process_manager(data):
    while True:
        results = sum(data)
        time.sleep(60)
        return results

if __name__ == '__main__':
    p = Process(target=process_manager(), args=(data))
    p.start()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To explain, we import &lt;code&gt;multiprocessing&lt;/code&gt; and our near-real-time data object into our code environment. We then define a function to govern the process that we want to run every minute. &lt;/p&gt;

&lt;p&gt;Here you can see how Process takes two arguments: the target function and its arguments. These work together to fork the process in our development environment and run it alongside our application. &lt;/p&gt;

&lt;h6&gt;
  
  
  &lt;strong&gt;NOTE: Python has been known to struggle with multiprocessing and concurrency compared to tools offered by other languages like Rust, so be sure to check the trade-offs when making time-cost decisions. Feel free to copy/paste and try this in your own IDE; if you run into issues, spend some time debugging and getting your code to run. This is a safe exercise to practice your debugging skills with, and you will gain more experience and understanding of Python!&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;&lt;a href="https://docs.python.org/3/library/multiprocessing.html"&gt;Link to multiprocess&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>webdev</category>
      <category>programming</category>
      <category>python</category>
    </item>
    <item>
      <title>Database Management Systems 5</title>
      <dc:creator>Adrian Brown</dc:creator>
      <pubDate>Mon, 08 Nov 2021 00:23:46 +0000</pubDate>
      <link>https://dev.to/adrbrownx/database-management-systems-5-4c3i</link>
      <guid>https://dev.to/adrbrownx/database-management-systems-5-4c3i</guid>
      <description>&lt;h4&gt;
  
  
  TL;DR
&lt;/h4&gt;

&lt;p&gt;Following the last post on OLTP vs. OLAP, indexing, and hashing, we will now cover something that sits outside the bounds of a database management system and follows naturally from OLAP. &lt;/p&gt;

&lt;p&gt;In this blog post we will start by covering data warehouses: what they are, the overall objective of a data warehouse, and its properties. &lt;/p&gt;

&lt;p&gt;In short, you will wrap up this series by learning the importance of data in our database environment and the difference between structured and unstructured data. &lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;h4&gt;
  
  
  By the end of this tutorial, you will understand
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Data Warehouse&lt;/li&gt;
&lt;li&gt;Data Mining &lt;/li&gt;
&lt;li&gt;Structured vs Unstructured Data&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Core Concepts
&lt;/h3&gt;

&lt;h5&gt;
  
  
  Data Warehouse
&lt;/h5&gt;

&lt;p&gt;A data warehouse (DWH) is separate from the DBMS as a whole; it is used to store huge amounts of data, typically collected from a variety of sources. &lt;/p&gt;

&lt;p&gt;An ordinary database system stores data for specific purposes related to transactional processes, whereas a DWH stores data with analytical intent. &lt;/p&gt;

&lt;p&gt;Also worth noting are data marts, a term that typically lives among the vocabulary one hears when discussing data warehouses. &lt;/p&gt;

&lt;p&gt;In short, a data mart is a subset of the data that makes up a data warehouse.&lt;/p&gt;

&lt;h5&gt;
  
  
  Data Mining
&lt;/h5&gt;

&lt;blockquote&gt;
&lt;p&gt;“Data Mining” can be referred to as knowledge mining from data, knowledge extraction, data/pattern analysis, data archaeology, and data dredging.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This process makes up an ETL pipeline, or parts of it. The sole purpose of mining the data from a DWH is to extract useful data in bulk. This data can help make decisions or reveal trends within a business. &lt;/p&gt;

&lt;h5&gt;
  
  
  Structured vs Unstructured Data
&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Structured data is data with addressable information, organized for effective analysis&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unstructured data is data without an organized or defined schema, making it a poor fit for a typical relational database. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
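&lt;p&gt;A tiny illustration of the difference (the field names are made up for the example):&lt;br&gt;
&lt;/p&gt;

```python
# structured: addressable fields with a defined schema - fits a relational table
structured_row = {"id": 1, "name": "Adrian", "joined": "2021-11-08"}

# unstructured: free-form content with no defined schema - a poor fit for
# a typical relational database
unstructured_blob = "Customer emailed this morning about last month's invoice..."

# analysis can address structured fields directly
joined = structured_row["joined"]
```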

&lt;p&gt;&lt;em&gt;This concludes my series on database management systems, I hope you enjoyed reading and found some stuff to be useful or maybe things I could improve on! If you want to reach out to me feel free to do so via &lt;a href="https://github.com/cloudguruab"&gt;github&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>tutorial</category>
      <category>codenewbie</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Database Management Systems 4</title>
      <dc:creator>Adrian Brown</dc:creator>
      <pubDate>Sun, 07 Nov 2021 23:43:37 +0000</pubDate>
      <link>https://dev.to/adrbrownx/database-management-systems-4-30d2</link>
      <guid>https://dev.to/adrbrownx/database-management-systems-4-30d2</guid>
      <description>&lt;h4&gt;
  
  
  TL;DR
&lt;/h4&gt;

&lt;p&gt;Last blog post we covered ACID properties, Concurrency, and Recovery in our database systems. &lt;/p&gt;

&lt;p&gt;This post I have dedicated time to cover Online Transaction Processing vs Online Analytical Processing as well as Indexing and Hashing in our databases. &lt;/p&gt;

&lt;p&gt;To note, OLTP vs. OLAP requires some fundamental knowledge of database systems and perhaps ETL (optional). Aside from that, indexing and hashing are ways to optimize our query performance. &lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;h4&gt;
  
  
  By the end of this tutorial, you will understand
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;OLTP vs OLAP&lt;/li&gt;
&lt;li&gt;Indexing vs Inverted Indexing&lt;/li&gt;
&lt;li&gt;Hashing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Core Concepts
&lt;/h3&gt;

&lt;h5&gt;
  
  
  OLTP
&lt;/h5&gt;

&lt;p&gt;Online transaction processing provides transaction-focused applications for our data. A clarifying example: ATMs are OLTP-based applications. &lt;/p&gt;

&lt;p&gt;This process deals with current data and supports day-to-day business tasks. It is composed of both read and write operations in our database implementations. &lt;/p&gt;

&lt;h5&gt;
  
  
  OLAP
&lt;/h5&gt;

&lt;p&gt;Online analytical processing provides software tools and applications focused on data analytics. An example to clarify would be Netflix's movie recommendation system. &lt;/p&gt;

&lt;p&gt;This process deals with historical data, used for planning, problem solving, and decision making in a business. It is primarily composed of read operations, with rarely any writes. &lt;/p&gt;

&lt;h5&gt;
  
  
  Side Note
&lt;/h5&gt;

&lt;p&gt;I really believe taking time to understand the Extract, Transform, Load (ETL) process can further clarify what OLTP vs. OLAP will provide in your system architecture, so definitely check that out. &lt;/p&gt;

&lt;h5&gt;
  
  
  Indexing vs Inverted Indexing
&lt;/h5&gt;

&lt;blockquote&gt;
&lt;p&gt;Indexing is a way to optimize the performance of a database by minimizing the number of disk accesses required when a query is processed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE INDEX index_name
ON table_name (column1, column2, ...);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;IMPORTANT&lt;/code&gt;: Updating a table with indexes takes more time than updating a table without (because the indexes also need an update). So, only create indexes on columns that will be frequently searched against.&lt;/p&gt;

&lt;p&gt;Inverted Indexing is a way of mapping content such as elements, data or objects to a particular data object or document. (Google Search: Inverted Indexing) &lt;/p&gt;

&lt;p&gt;In short, indexing is a way to highly optimize the retrieval of information and data through stored queries. &lt;/p&gt;
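&lt;p&gt;A toy inverted index can be built in a few lines; this sketch maps each term to the set of documents containing it:&lt;br&gt;
&lt;/p&gt;

```python
from collections import defaultdict

# a toy corpus of two documents
docs = {
    1: "data warehouse stores data",
    2: "hashing stores data in buckets",
}

# inverted index: term mapped to the set of document ids containing it
inverted = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        inverted[term].add(doc_id)

print(sorted(inverted["data"]))     # prints [1, 2]
print(sorted(inverted["buckets"]))  # prints [2]
```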

&lt;h5&gt;
  
  
  Hashing
&lt;/h5&gt;

&lt;blockquote&gt;
&lt;p&gt;Hashing is an efficient technique to directly search the location of desired data on the disk without using index structure.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Typically, searching through all the data points in hopes of quick retrieval can bog down or introduce inefficiency, even with indexes. Hashing solves this. &lt;/p&gt;

&lt;p&gt;To conclude, hashing stores data in blocks whose addresses are generated by a hash function. The address produced for a key is where that data lives, known as a data bucket. &lt;/p&gt;
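&lt;p&gt;The data-bucket idea can be sketched in a few lines of Python (the bucket count and record shape are made up for the example):&lt;br&gt;
&lt;/p&gt;

```python
NUM_BUCKETS = 8  # illustrative number of blocks on disk

def bucket_for(key):
    # the hash function turns a key directly into a bucket (block) address
    return hash(key) % NUM_BUCKETS

# each bucket plays the role of a disk block holding records
buckets = {b: [] for b in range(NUM_BUCKETS)}
buckets[bucket_for("user:42")].append({"id": "user:42", "balance": 100})

# retrieval hashes the key again and goes straight to one bucket - no index scan
hits = [r for r in buckets[bucket_for("user:42")] if r["id"] == "user:42"]
```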

&lt;p&gt;&lt;em&gt;This concludes part 4 of the series, I will post more on part 5 shortly.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
