<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Roy-Wanyoike</title>
    <description>The latest articles on DEV Community by Roy-Wanyoike (@roywanyoike).</description>
    <link>https://dev.to/roywanyoike</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F934568%2F05f7b678-e2a2-4cc9-8942-5185f53607e5.jpeg</url>
      <title>DEV Community: Roy-Wanyoike</title>
      <link>https://dev.to/roywanyoike</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/roywanyoike"/>
    <language>en</language>
    <item>
      <title>Data Science for Beginners: 2023 - 2024 Complete Road Map</title>
      <dc:creator>Roy-Wanyoike</dc:creator>
      <pubDate>Thu, 05 Oct 2023 07:05:11 +0000</pubDate>
      <link>https://dev.to/roywanyoike/data-science-for-beginners-2023-2024-complete-road-map-23i7</link>
      <guid>https://dev.to/roywanyoike/data-science-for-beginners-2023-2024-complete-road-map-23i7</guid>
      <description>&lt;p&gt;In the ever-evolving realm of technology, data science stands as a beacon of innovation, driving insights from vast datasets to influence decision-making and transform industries. For beginners embarking on the exhilarating journey into the world of data science in 2023-2024, a well-structured roadmap is indispensable. This comprehensive guide is tailored to equip aspiring data scientists with the foundational knowledge, technical skills, and ethical awareness necessary for success in this dynamic field.&lt;/p&gt;

&lt;p&gt;I. Building a Strong Foundation:&lt;/p&gt;

&lt;p&gt;The journey commences with a robust understanding of fundamental mathematical concepts, including linear algebra, calculus, and probability theory. Mastery of these principles provides the analytical groundwork essential for advanced data manipulation and analysis. Concurrently, proficiency in programming languages, particularly Python, is paramount. Python serves as the lingua franca of data science, enabling beginners to grasp essential programming constructs and dive into data manipulation libraries like NumPy and Pandas.&lt;/p&gt;

&lt;p&gt;II. Data Manipulation and Analysis:&lt;/p&gt;

&lt;p&gt;With the foundational knowledge in place, aspiring data scientists delve into data manipulation using libraries such as Pandas, which empowers them to clean, preprocess, and analyze datasets effectively. Skills in data cleaning, outlier detection, and feature engineering are honed, paving the way for insightful data analysis.&lt;/p&gt;
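&lt;p&gt;To make this concrete, here is a minimal sketch of that workflow, assuming Pandas is installed; the column names and values are invented for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import pandas as pd

# Hypothetical sales records with one missing price
df = pd.DataFrame({"price": [10.0, None, 30.0, 100.0],
                   "quantity": [2, 5, 1, 4]})

# Cleaning: fill the missing price with the column median
df["price"] = df["price"].fillna(df["price"].median())

# Feature engineering: derive a revenue column
df["revenue"] = df["price"] * df["quantity"]

print(df["revenue"].sum())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;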

&lt;p&gt;III. Crafting Visual Narratives:&lt;/p&gt;

&lt;p&gt;Data visualization emerges as a potent tool for communicating complex insights. Through Matplotlib, Seaborn, and interactive visualization tools like Plotly, beginners learn to craft compelling visual narratives. This stage emphasizes not only the creation of visualizations but also the art of data storytelling, enabling data scientists to convey their findings in a compelling manner.&lt;/p&gt;
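&lt;p&gt;As a small illustration of this stage, the sketch below (assuming Matplotlib is installed; the monthly figures are invented) draws a simple line chart and saves it to a file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import matplotlib
matplotlib.use("Agg")  # render off-screen so no display is required
import matplotlib.pyplot as plt

# Hypothetical monthly sales figures
months = ["Jan", "Feb", "Mar", "Apr"]
sales = [120, 135, 128, 150]

fig, ax = plt.subplots()
ax.plot(months, sales, marker="o")
ax.set_title("Monthly Sales")
ax.set_xlabel("Month")
ax.set_ylabel("Units sold")
fig.savefig("monthly_sales.png")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;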

&lt;p&gt;IV. Mastering Machine Learning:&lt;/p&gt;

&lt;p&gt;A solid understanding of supervised and unsupervised learning algorithms is crucial. Supervised learning techniques, including regression and classification, are explored alongside evaluation metrics such as accuracy and precision. Unsupervised learning methods like clustering and dimensionality reduction expand the data scientist's toolkit. Advanced topics like ensemble methods and deep learning are introduced, offering a glimpse into cutting-edge technologies shaping the field.&lt;/p&gt;
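&lt;p&gt;Evaluation metrics are easy to demystify with a few lines of code. The sketch below, using invented labels for a toy binary classifier, computes accuracy and precision by hand:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Toy labels and predictions from a hypothetical binary classifier
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]

pairs = list(zip(y_true, y_pred))
tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positives
correct = sum(1 for t, p in pairs if t == p)

accuracy = correct / len(pairs)  # fraction of all predictions that are right
precision = tp / (tp + fp)       # fraction of positive predictions that are right

print(accuracy, precision)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Libraries such as scikit-learn provide these metrics ready-made, but computing them once by hand clarifies what they measure.&lt;/p&gt;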

&lt;p&gt;V. Exploring Specializations:&lt;/p&gt;

&lt;p&gt;The roadmap further extends to specialized domains such as natural language processing (NLP) and reinforcement learning. In NLP, beginners learn text preprocessing techniques and delve into recurrent neural networks (RNNs) for text analysis. Reinforcement learning introduces the concept of agents learning from interactions, opening doors to applications in gaming and autonomous systems.&lt;/p&gt;

&lt;p&gt;VI. Ethical Considerations and Soft Skills:&lt;/p&gt;

&lt;p&gt;Ethical considerations in data science, including bias mitigation and responsible AI practices, are woven into the fabric of this roadmap. Additionally, the cultivation of soft skills, including effective communication, problem-solving, and critical thinking, enhances the holistic development of aspiring data scientists.&lt;/p&gt;

&lt;p&gt;VII. Continuous Learning and Application:&lt;/p&gt;

&lt;p&gt;The roadmap culminates in a capstone project, allowing learners to apply their acquired skills to real-world scenarios. Participation in online communities and platforms such as Kaggle fosters collaboration, offering opportunities to learn from peers and industry experts. Continuous learning, facilitated through engagement with research papers and staying abreast of industry trends, ensures that aspiring data scientists remain agile in the face of evolving technologies.&lt;/p&gt;

&lt;p&gt;In conclusion, the roadmap presented here provides a structured and holistic approach for beginners venturing into the dynamic realm of data science in 2023-2024. With a strong foundation, technical expertise, ethical acumen, and continuous learning, aspiring data scientists are well-equipped to navigate the complexities of this ever-expanding field, making meaningful contributions to the world of data-driven innovation.&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>dataengineering</category>
      <category>python</category>
    </item>
    <item>
      <title>Stored Procedures in SQL database</title>
      <dc:creator>Roy-Wanyoike</dc:creator>
      <pubDate>Thu, 20 Apr 2023 09:23:39 +0000</pubDate>
      <link>https://dev.to/roywanyoike/stored-procedures-in-sql-database-35jb</link>
      <guid>https://dev.to/roywanyoike/stored-procedures-in-sql-database-35jb</guid>
      <description>&lt;p&gt;Stored procedures are database objects that contain a set of SQL statements or code that can be executed on demand. They are used to encapsulate business logic, complex operations or calculations that can be reused by multiple applications, and also provide an additional level of security and data validation.&lt;/p&gt;

&lt;p&gt;Here are some advantages of using stored procedures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Improved performance:&lt;/strong&gt; The database server caches a stored procedure's execution plan, so repeated calls typically avoid the parsing and planning overhead of ad-hoc SQL statements.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reusability:&lt;/strong&gt; Once created, a stored procedure can be called by multiple applications, saving development time and effort.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Centralized code:&lt;/strong&gt; Stored procedures provide a central place to store and manage complex business logic, making the code easier to maintain and update.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security:&lt;/strong&gt; Permissions can be granted on a stored procedure separately from the underlying tables, adding a layer of access control over the data.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data validation:&lt;/strong&gt; Stored procedures can validate input before it reaches the tables, which helps prevent SQL injection attacks and keeps data consistent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, there are also some disadvantages of using stored procedures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Maintenance:&lt;/strong&gt; Stored procedures can be complex and hard to maintain, especially if they are poorly designed or documented.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Portability:&lt;/strong&gt; Stored procedures are written in vendor-specific dialects, which makes porting them to another database system difficult.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Debugging:&lt;/strong&gt; Debugging is harder because the code executes on the database server rather than in the application.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Versioning:&lt;/strong&gt; Changes to a stored procedure can break existing applications that rely on it, which can be difficult to manage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example of a stored procedure (T-SQL):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;OR&lt;/span&gt; &lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;PROCEDURE&lt;/span&gt; &lt;span class="n"&gt;spAddCars&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;CarId&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;Model&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;BodyType&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;Brand&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;Prices&lt;/span&gt; &lt;span class="nb"&gt;DECIMAL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;IsDeleted&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;AS&lt;/span&gt;

&lt;span class="k"&gt;BEGIN&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;Cars&lt;/span&gt;
     &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;carId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="n"&gt;bodyType&lt;/span&gt; &lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="n"&gt;brand&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;isDeleted&lt;/span&gt; &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;CarId&lt;/span&gt; &lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;Model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;BodyType&lt;/span&gt; &lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;Brand&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;Prices&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;IsDeleted&lt;/span&gt; 
     &lt;span class="p"&gt;)&lt;/span&gt; 
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;Cars&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;carId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;CarId&lt;/span&gt;  
&lt;span class="k"&gt;END&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 

&lt;span class="k"&gt;EXECUTE&lt;/span&gt; &lt;span class="n"&gt;spAddCars&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;Cars&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In summary, stored procedures can be a powerful tool for improving performance, reusability, security, and data validation in database applications. However, careful consideration should be given to their design and maintenance, as well as their impact on the overall application architecture.&lt;/p&gt;

</description>
      <category>database</category>
      <category>sql</category>
      <category>backend</category>
      <category>mongodb</category>
    </item>
    <item>
      <title>Understanding the Difference Between git merge and git rebase</title>
      <dc:creator>Roy-Wanyoike</dc:creator>
      <pubDate>Wed, 15 Mar 2023 14:09:39 +0000</pubDate>
      <link>https://dev.to/roywanyoike/understanding-difference-between-git-merge-and-git-rebase-2h2g</link>
      <guid>https://dev.to/roywanyoike/understanding-difference-between-git-merge-and-git-rebase-2h2g</guid>
      <description>&lt;p&gt;Git merge and git rebase are two ways to integrate changes from one branch into another. Here are the differences between the two:&lt;/p&gt;

&lt;p&gt;Git Merge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Merges changes from one branch into another&lt;/li&gt;
&lt;li&gt;Creates a new commit that combines the changes from the source branch and the target branch&lt;/li&gt;
&lt;li&gt;Preserves the history of both branches&lt;/li&gt;
&lt;li&gt;Can result in a "merge commit" that shows the point at which the two branches were merged&lt;/li&gt;
&lt;li&gt;Does not rewrite the history of either branch&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Git Rebase:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrates changes from one branch into another by replaying them as new commits on top of the target branch&lt;/li&gt;
&lt;li&gt;Rewrites the history of the branch being rebased&lt;/li&gt;
&lt;li&gt;Results in a linear history with a clear timeline of commits&lt;/li&gt;
&lt;li&gt;Does not create a merge commit&lt;/li&gt;
&lt;li&gt;Can cause conflicts and confusion if the rebased branch has already been shared with other people&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In general, if you want to preserve the history of both branches and record a merge commit, use git merge. If you want a linear history with a clear timeline of commits, use git rebase. However, consider the conflicts and coordination problems rebasing can cause, especially in a team with multiple contributors.&lt;/p&gt;

&lt;p&gt;Here are some examples to help illustrate the differences between git merge and git rebase.&lt;/p&gt;

&lt;p&gt;Let's say we have two branches, feature and master, and we want to integrate changes from feature into master.&lt;/p&gt;

&lt;p&gt;Git Merge example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout master
git merge feature

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This creates a new commit that combines the changes from feature and master. The resulting commit history would look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;*   Merge branch 'feature' into 'master'
|\  
| * Commit D (feature)
| * Commit C (feature)
| * Commit B (feature)
|/  
* Commit A (master)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the new "merge commit" that shows the point at which the two branches were merged.&lt;/p&gt;

&lt;p&gt;Git Rebase example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout feature
git rebase master
git checkout master
git merge feature

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This applies the changes from feature as new commits on top of master. The resulting commit history would look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* Commit D' (feature)
* Commit C' (feature)
* Commit B' (feature)
* Commit A (master)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that there is no merge commit and the commit history is linear, with a clear timeline of commits. However, it's important to note that if there were conflicts during the rebase process, they would need to be resolved before merging feature into master.&lt;/p&gt;

&lt;p&gt;I hope this helps clarify the differences between git merge and git rebase!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Cracking Git Rebase</title>
      <dc:creator>Roy-Wanyoike</dc:creator>
      <pubDate>Wed, 15 Mar 2023 13:59:17 +0000</pubDate>
      <link>https://dev.to/roywanyoike/cracking-git-rebase-2em6</link>
      <guid>https://dev.to/roywanyoike/cracking-git-rebase-2em6</guid>
      <description>&lt;p&gt;git rebase is a Git command that allows you to modify the history of a branch by moving the branch to a new base commit. This can be useful in situations where you want to incorporate changes from one branch into another branch, or to clean up the history of a branch by removing unnecessary commits.&lt;/p&gt;

&lt;p&gt;When you run git rebase, Git identifies the common ancestor commit between the current branch and the branch you want to rebase onto. It then replays all the changes made in the current branch after that common ancestor commit onto the new base commit. This results in a linear history, with all the changes from both branches combined in chronological order.&lt;/p&gt;

&lt;p&gt;The basic syntax for git rebase is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout &amp;lt;branch-to-rebase&amp;gt;
git rebase &amp;lt;new-base-branch&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will rebase the current branch onto new-base-branch. During the rebase, Git pauses and lets you resolve any conflicts that arise between the changes on the two branches.&lt;/p&gt;

&lt;p&gt;It's important to note that git rebase rewrites the history of a branch, so it should only be used on branches that have not yet been shared with others or pushed to a remote repository. If you rebase a branch that others have already based their work on, you will create conflicts and confusion for everyone involved.&lt;/p&gt;

</description>
      <category>git</category>
      <category>github</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Essential SQL Commands for Data Science</title>
      <dc:creator>Roy-Wanyoike</dc:creator>
      <pubDate>Tue, 07 Mar 2023 18:42:06 +0000</pubDate>
      <link>https://dev.to/roywanyoike/essential-sql-commands-for-data-science-9n7</link>
      <guid>https://dev.to/roywanyoike/essential-sql-commands-for-data-science-9n7</guid>
      <description>&lt;p&gt;SQL (Structured Query Language) is a powerful tool for managing and manipulating relational databases. It is an essential skill for data scientists, as it allows them to extract, clean, and analyze large datasets. Here are some essential SQL commands for data science:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;SELECT: This command is used to select data from one or more tables. The syntax is as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;column1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;column2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="k"&gt;table_name&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;WHERE: This command is used to filter data based on certain conditions. The syntax is as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT column1, column2, ... FROM table_name WHERE condition;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;GROUP BY: This command is used to group data based on one or more columns. The syntax is as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT column1, column2, ... FROM table_name GROUP BY column1, column2, ...;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;ORDER BY: This command is used to sort data based on one or more columns. The syntax is as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT column1, column2, ... FROM table_name ORDER BY column1, column2, ... [ASC | DESC];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;JOIN: This command is used to combine data from two or more tables based on a common column. The syntax is as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT column1, column2, ... FROM table1 JOIN table2 ON table1.column = table2.column;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="6"&gt;
&lt;li&gt;DISTINCT: This command is used to select unique values from a column. The syntax is as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT DISTINCT column1 FROM table_name;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="7"&gt;
&lt;li&gt;COUNT: This command is used to count the number of rows or non-null values in a column. The syntax is as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT COUNT(*) FROM table_name;
SELECT COUNT(column1) FROM table_name;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="8"&gt;
&lt;li&gt;SUM, AVG, MAX, MIN: These commands are used to perform mathematical operations on a column. The syntax is as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT SUM(column1) FROM table_name;
SELECT AVG(column1) FROM table_name;
SELECT MAX(column1) FROM table_name;
SELECT MIN(column1) FROM table_name;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="9"&gt;
&lt;li&gt;LIMIT: This command is used to limit the number of rows returned by a query. The syntax is as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT column1, column2, ... FROM table_name LIMIT n;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are just some of the essential SQL commands for data science. There are many more commands and functions available in SQL, but these should be enough to get you started with data analysis.&lt;/p&gt;
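&lt;p&gt;As a self-contained illustration, the sketch below combines several of these commands in one query, using Python's built-in sqlite3 module and a made-up sales table:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (product TEXT, region TEXT, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("widget", "east", 100.0), ("widget", "west", 150.0),
                 ("gadget", "east", 200.0), ("gadget", "east", 50.0)])

# WHERE, GROUP BY, ORDER BY, and an aggregate working together
cur.execute("SELECT product, SUM(amount) AS total FROM sales "
            "WHERE region = 'east' GROUP BY product ORDER BY total DESC")
rows = cur.fetchall()
print(rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;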

&lt;p&gt;Thanks for reading! Stay tuned for more from &lt;br&gt;
&lt;a href="https://twitter.com/WanyoikeRoy"&gt;Roy Wanyoike&lt;/a&gt;&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>database</category>
      <category>sql</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Getting Started with Google Cloud Platform (GCP)</title>
      <dc:creator>Roy-Wanyoike</dc:creator>
      <pubDate>Mon, 06 Mar 2023 10:35:44 +0000</pubDate>
      <link>https://dev.to/roywanyoike/getting-started-with-google-cloud-platform-gcp-h3f</link>
      <guid>https://dev.to/roywanyoike/getting-started-with-google-cloud-platform-gcp-h3f</guid>
      <description>&lt;p&gt;Getting started with Google Cloud Platform (GCP) can be a bit overwhelming at first, but there are some simple steps you can take to get started:&lt;/p&gt;

&lt;p&gt;Create a GCP account: The first step is to create a GCP account if you don't have one already. You can do this by going to the GCP website and signing up for a free trial account. You will need to provide your billing information, but you won't be charged unless you use resources that are not covered by the free tier.&lt;/p&gt;

&lt;p&gt;Navigate the GCP Console: Once you have created your account, you can access the GCP Console. The console is the central hub for managing your GCP resources. You can use it to create and manage virtual machines, storage buckets, databases, and more.&lt;/p&gt;

&lt;p&gt;Learn about GCP services: GCP offers a wide range of services, from computing and storage to machine learning and analytics. Take some time to explore the different services that are available and what they can do for you. The GCP documentation is a great resource for learning about these services.&lt;/p&gt;

&lt;p&gt;Start a project: To start using GCP, you will need to create a project. Projects are containers for resources that allow you to organize and manage your GCP resources. You can create multiple projects within your account.&lt;/p&gt;

&lt;p&gt;Try out GCP services: Once you have a project set up, you can start trying out GCP services. For example, you can create a virtual machine, deploy an application, or set up a database. There are also many tutorials available that can help you get started with specific services.&lt;/p&gt;

&lt;p&gt;Get help: If you get stuck or have questions, there are many resources available to help you. The GCP documentation is a great place to start, and there are also forums and community groups where you can ask for help. Google Cloud Support is also available if you need more personalized support.&lt;/p&gt;

&lt;p&gt;By following these steps, you can start using GCP and exploring its many capabilities.&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>devops</category>
      <category>datascience</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Getting Started with Exploratory Data Analysis (EDA)</title>
      <dc:creator>Roy-Wanyoike</dc:creator>
      <pubDate>Wed, 22 Feb 2023 09:34:44 +0000</pubDate>
      <link>https://dev.to/roywanyoike/getting-started-with-explanatory-data-analysiseda-404a</link>
      <guid>https://dev.to/roywanyoike/getting-started-with-explanatory-data-analysiseda-404a</guid>
      <description>&lt;p&gt;Explanatory Data Analysis (EDA) is the process of analyzing and visualizing data to extract meaningful insights and conclusions. It is a crucial step in the data science pipeline, as it helps to understand the data, identify patterns, and gain insights that can be used to make informed decisions.&lt;/p&gt;

&lt;p&gt;In this article, we will explore the steps involved in getting started with EDA.&lt;/p&gt;

&lt;p&gt;Gather the data&lt;br&gt;
The first step in EDA is to gather the data. The data can come from various sources, such as online repositories, databases, or web scraping. It is important to ensure that the data is reliable and accurate. The data should be stored in a format that can be easily analyzed, such as CSV, Excel, or JSON.&lt;/p&gt;

&lt;p&gt;Explore the data&lt;br&gt;
Once the data is collected, the next step is to explore it. Exploring the data involves looking at the basic statistics of the data, such as mean, median, and standard deviation, to get an idea of the central tendency and dispersion of the data. It is also important to plot the data using different visualizations such as histograms, box plots, scatter plots, and heatmaps to understand the distribution, trends, and relationships between different variables.&lt;/p&gt;

&lt;p&gt;For example, if we are analyzing sales data for a retail store, we can start by looking at the total sales for each day of the week and plotting it on a line graph. This will help us identify the days of the week when the store makes the most sales.&lt;/p&gt;

&lt;p&gt;Clean the data&lt;br&gt;
Data cleaning is an important step in EDA. It involves identifying and handling missing values, outliers, and anomalies in the data. Missing values can be handled by imputing them with a suitable value, such as the mean or median of the data. Outliers can be handled by removing them from the data or by transforming the data using techniques such as normalization or log transformation.&lt;/p&gt;
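&lt;p&gt;A minimal sketch of median imputation and a log transform, using invented readings that contain a gap and one extreme outlier:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import math
import statistics

# Hypothetical sensor readings with a missing value and an outlier
readings = [12.0, 15.0, None, 14.0, 13.0, 400.0]

# Impute the missing value with the median of the observed values
observed = [r for r in readings if r is not None]
filled = [statistics.median(observed) if r is None else r for r in readings]

# Log-transform to dampen the influence of the outlier
transformed = [math.log(r) for r in filled]
print(filled)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;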

&lt;p&gt;Analyze the data&lt;br&gt;
Once the data is cleaned, the next step is to analyze it. There are several statistical techniques that can be used to analyze the data, such as hypothesis testing, correlation analysis, and regression analysis.&lt;/p&gt;

&lt;p&gt;Hypothesis testing is used to test a hypothesis about the data. For example, we can test the hypothesis that the average sales on weekends are higher than the average sales on weekdays. Correlation analysis is used to identify the relationships between different variables. For example, we can analyze the correlation between the sales of different products in the store. Regression analysis is used to model the relationships between different variables. For example, we can model the relationship between the sales of a product and the price of the product.&lt;/p&gt;
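&lt;p&gt;For example, a Pearson correlation between price and units sold can be computed in one call, here with Python's statistics module (3.10+) and invented numbers; the strongly negative value suggests higher prices coincide with fewer sales:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import statistics

# Hypothetical paired observations: product price vs. units sold
price = [10.0, 12.0, 15.0, 20.0, 25.0]
units = [200, 180, 150, 120, 90]

r = statistics.correlation(price, units)  # Pearson's r
print(round(r, 3))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;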

&lt;p&gt;Communicate the results&lt;br&gt;
The final step in EDA is to communicate the results. It is important to present the findings in a clear and concise manner using appropriate visualizations and narratives that highlight the key insights and conclusions drawn from the analysis.&lt;/p&gt;

&lt;p&gt;For example, we can present the findings of our sales data analysis using a dashboard that shows the total sales for each day of the week, the sales of each product, and the correlation between the sales of different products.&lt;/p&gt;

&lt;p&gt;Some additional tips to consider when performing EDA:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Focus on the questions you want to answer and tailor your approach accordingly. Have a clear understanding of the problem you are trying to solve and the insights you want to extract from the data.&lt;/li&gt;
&lt;li&gt;Document your findings and the steps you took to arrive at them to ensure reproducibility and transparency. Keep a record of the data sources, cleaning steps, analysis techniques, and visualizations used.&lt;/li&gt;
&lt;li&gt;Continuously iterate and refine your analysis as you gain more insight into the data. EDA is an iterative process that involves updating the analysis as new insights emerge.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In conclusion, Exploratory Data Analysis is a critical step in the data science pipeline that helps uncover insights and patterns in the data. By following the steps outlined above, analysts can effectively explore, clean, and analyze their data and communicate their findings clearly and concisely.&lt;/p&gt;

&lt;p&gt;It is also important to note that EDA is not a one-time process. As new data becomes available or as the problem being studied evolves, the analyst may need to revisit and refine their analysis to ensure that the insights and conclusions are still relevant.&lt;/p&gt;

&lt;p&gt;Additionally, there are several tools and libraries available that can help streamline the EDA process. Python libraries such as Pandas, NumPy, and Matplotlib are commonly used for data cleaning, analysis, and visualization. Data visualization tools such as Tableau and Power BI can also be used to create interactive dashboards and visualizations that make it easy to communicate insights to stakeholders.&lt;/p&gt;

&lt;p&gt;In conclusion, EDA is an important step in the data science pipeline that helps to uncover insights and patterns in the data. By following the steps outlined in this article and using the appropriate tools and techniques, analysts can effectively explore, clean, and analyze their data and communicate their findings in a clear and concise manner.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>opensource</category>
      <category>discuss</category>
      <category>community</category>
    </item>
    <item>
      <title>SQL for data analysis</title>
      <dc:creator>Roy-Wanyoike</dc:creator>
      <pubDate>Sun, 19 Feb 2023 12:55:20 +0000</pubDate>
      <link>https://dev.to/roywanyoike/sql-for-data-analysis-cid</link>
      <guid>https://dev.to/roywanyoike/sql-for-data-analysis-cid</guid>
      <description>&lt;h2&gt;
  
  
  SQL FOR DATA ANALYSIS
&lt;/h2&gt;

&lt;p&gt;SQL is a crucial tool in any data career and one of the most popular languages for working with databases. Just as one needs a spoon to eat, one needs SQL to work with data. In this tutorial we will cover some of the crucial aspects of SQL: how to use it for data analysis and visualization, and how to use different queries for different functions. &lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;What is SQL&lt;/li&gt;
&lt;li&gt;Installation Instructions&lt;/li&gt;
&lt;li&gt;Getting Started&lt;/li&gt;
&lt;li&gt;Using SQL for data analysis and visualization&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;SQL is a structured query language used for communicating with databases. It is the standard query language for relational database management systems (RDBMS). In a relational database, data is stored in tabular format, that is, in rows and columns. Columns represent the data's attributes, rows represent individual records, and relationships link data values across tables. &lt;/p&gt;
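&lt;p&gt;As a minimal illustration (using Python's built-in sqlite3 module and a made-up employees table, not a schema from this tutorial), a basic analysis query looks like this:&lt;/p&gt;

```python
import sqlite3

# In-memory SQLite database as a stand-in for a real RDBMS
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A relational table: columns are attributes, rows are records
cur.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
cur.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Alice", "Data", 60000), ("Bob", "Data", 55000), ("Carol", "Sales", 50000)],
)

# A basic analysis query: average salary per department
cur.execute(
    "SELECT department, AVG(salary) FROM employees "
    "GROUP BY department ORDER BY department"
)
rows = cur.fetchall()
print(rows)  # [('Data', 57500.0), ('Sales', 50000.0)]
conn.close()
```

&lt;p&gt;The same SELECT/GROUP BY pattern works unchanged on larger RDBMS such as MySQL or PostgreSQL.&lt;/p&gt;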

&lt;h3&gt;
  
  
  SQL tools
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;MySQL&lt;/li&gt;
&lt;li&gt;PostgreSQL&lt;/li&gt;
&lt;li&gt;SQLite&lt;/li&gt;
&lt;li&gt;Microsoft SQL Server&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>career</category>
      <category>mentorship</category>
      <category>workplace</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Struggling to fix: Bluetooth connection failed: protocol not available?</title>
      <dc:creator>Roy-Wanyoike</dc:creator>
      <pubDate>Fri, 17 Feb 2023 19:03:15 +0000</pubDate>
      <link>https://dev.to/roywanyoike/struggling-to-fix-bluetooth-connection-failed-protocol-not-available-5be1</link>
      <guid>https://dev.to/roywanyoike/struggling-to-fix-bluetooth-connection-failed-protocol-not-available-5be1</guid>
      <description>&lt;p&gt;Well after upgrading my Linux version am using Parrot Electro Ara 5.1 i was not able to use my JBL headphones since bluetooth threw an error i couldn't catch lol!&lt;br&gt;
after a week of struggling with this issue i managed to solve it with just 5 lines of code. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ztBTkQT0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f84pvc4x2r0zythzkq4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ztBTkQT0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f84pvc4x2r0zythzkq4x.png" alt="Image description" width="222" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is the solution to the above problem. &lt;br&gt;
I preferred using Geany as my editor:&lt;br&gt;
sudo geany /etc/pulse/default.pa&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z5gbY5k6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ajpg5mvmxlc3g8m147zx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z5gbY5k6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ajpg5mvmxlc3g8m147zx.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(Replace geany with the editor of your preference.)&lt;/p&gt;

&lt;p&gt;Go to line 65 and comment out this line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;load-module module-bluetooth-discover
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--maxGd5qL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dusnj7i15sfn48uc2i1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--maxGd5qL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dusnj7i15sfn48uc2i1x.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;br&gt;
 After editing, it should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#load-module module-bluetooth-discover
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, edit the PulseAudio startup script:&lt;br&gt;
sudo geany /usr/bin/start-pulseaudio-x11&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tUb1F_bS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4thl26kri5va4vt8bc0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tUb1F_bS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4thl26kri5va4vt8bc0v.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to line 37:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7DumDsqb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xu3mt214m3d7kjicodb0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7DumDsqb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xu3mt214m3d7kjicodb0.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qIl_O2d5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lezf5q26t580zr2smza2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qIl_O2d5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lezf5q26t580zr2smza2.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add this line:&lt;br&gt;
/usr/bin/pactl load-module module-bluetooth-discover&lt;/p&gt;

&lt;p&gt;Save the script and you are good to reboot. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qPjnyRcu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qffawrujn2bnfyac4e5b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qPjnyRcu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qffawrujn2bnfyac4e5b.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enjoy &lt;/p&gt;

</description>
      <category>bluetooth</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
