<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Cristiano D. Silva</title>
    <description>The latest articles on DEV Community by Cristiano D. Silva (@mcloide).</description>
    <link>https://dev.to/mcloide</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F143095%2F4a0d475c-0357-4e2a-95e4-00c13826398d.png</url>
      <title>DEV Community: Cristiano D. Silva</title>
      <link>https://dev.to/mcloide</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mcloide"/>
    <language>en</language>
    <item>
      <title>SunshinePHP 2020</title>
      <dc:creator>Cristiano D. Silva</dc:creator>
      <pubDate>Tue, 20 Aug 2019 18:47:43 +0000</pubDate>
      <link>https://dev.to/mcloide/sunshinephp-2020-5710</link>
      <guid>https://dev.to/mcloide/sunshinephp-2020-5710</guid>
      <description>&lt;p&gt;SunshinePHP is by far one of the most iconic PHP conferences around the globe, and especially in the USA. I've attended every year since its first edition.&lt;/p&gt;

&lt;p&gt;It has been held in the warm city of Miami (while most of the US is frozen in February) for 7+ years, and, regardless of when you purchase tickets, every year there are high-quality talks worth listening to.&lt;/p&gt;

&lt;p&gt;Each year, a few months prior to the conference, the organizers release "blind" pricing. During this period, you pay to attend a conference about PHP without knowing who will be speaking, but the price is unbeatable, making it easy to justify attending even if you have to travel.&lt;/p&gt;

&lt;p&gt;As an example, the blind price of general admission plus tutorials is less than general admission alone purchased later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://2020.sunshinephp.com/register" rel="noopener noreferrer"&gt;https://2020.sunshinephp.com/register&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, if you want to get in for free and still help the community, how about becoming a speaker? All you need to do is pass the CFP selection process, which is now open.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cfp.sunshinephp.com/" rel="noopener noreferrer"&gt;https://cfp.sunshinephp.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See you there!&lt;/p&gt;

</description>
      <category>conference</category>
      <category>php</category>
      <category>mysql</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Software Quality as Perceived Risk</title>
      <dc:creator>Cristiano D. Silva</dc:creator>
      <pubDate>Tue, 20 Aug 2019 18:30:58 +0000</pubDate>
      <link>https://dev.to/mcloide/software-quality-as-perceived-risk-45dc</link>
      <guid>https://dev.to/mcloide/software-quality-as-perceived-risk-45dc</guid>
      <description>&lt;h2&gt;
  
  
  Part 1
&lt;/h2&gt;

&lt;p&gt;Last year I spent a good amount of time teaching about Software Engineering and the 3 core paths of Engineering that help with consistent and reliable software development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Engineering Cycle
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxxg516b4oede8bt9woz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxxg516b4oede8bt9woz.png" alt="engineering cycle" width="500" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Talking about software and engineering is talking about process, development cycle, and troubleshooting, which in the end have one goal: quality.&lt;/p&gt;

&lt;p&gt;There is a caveat, though: what is software quality, or, even better, how can software quality be measured? The truth is that there is no way to quantify the quality of software; by quantifying, I mean that there is no way to apply something such as Six Sigma to measure it.&lt;/p&gt;

&lt;p&gt;Without getting into many details, especially because I’m not a 6σ Black Belt, Six Sigma is a continuous improvement process built around one target: in a manufacturing process, only about 3.4 parts per million may contain defects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7l9zwjpiom7r6g13wjq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7l9zwjpiom7r6g13wjq.png" alt="6 sigma" width="400" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DMADV — &lt;a href="https://en.wikipedia.org/wiki/Six_Sigma" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Six_Sigma&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Considering any manufacturer, such as a nail manufacturer, it is possible to measure that, in a million nails produced, only a handful had defects. This means that manufacturing is optimized in a way that there are no real measured losses during the process.&lt;/p&gt;

&lt;p&gt;Along with 6σ there are other certifications and compliance standards recognized worldwide for quality, such as ISO 9001.&lt;/p&gt;

&lt;p&gt;Unfortunately, there isn’t a way to do the same with software, not as far as I know. Software quality has been measured as perceived quality ever since software development became a profession and a science.&lt;/p&gt;

&lt;p&gt;For a web application you can state that it has 70% unit-test coverage, that 70% of your audience accepts it, that QA successfully catches 99% of the logical bugs, and that the application follows the best coding practices and standards of our current time, but there is no way to truly quantify any of those numbers; there is no way to say that in a million lines of code only a handful had defects. Unfortunately, it doesn’t work that way.&lt;/p&gt;

&lt;p&gt;If you present your software idea to any person willing to invest in it, the logical question is how good the software is, so one can measure the risk of the investment and, subsequently, the return on the investment (a subject for another article).&lt;/p&gt;

&lt;p&gt;In manufacturing, a floor engineer can come to the CEO and affirm: we have ISO certifications, Six Sigma, and we haven’t had an accident in the past 300 days. Without presenting many numbers, it is the same as affirming that there aren’t any losses during the manufacturing process; therefore, the risk is small and profits can be high (not correlated).&lt;/p&gt;

&lt;p&gt;When the same is asked of software, the common reply is based on percentages, and although they give some basis for quantifying the quality of the software, they don’t give true assurance. What does it really mean that 70% of the code is covered by unit tests? Translating that for an investor or a business person is nearly impossible without first explaining what unit tests are.&lt;/p&gt;

&lt;p&gt;Facing that problem, the logical next step is to count successful deployments, count bugs, count critical or high-priority bugs, count hotfixes. There is always a need to quantify software quality, but there is still no single way to do it that is accepted worldwide.&lt;/p&gt;

&lt;h3&gt;
  
  
  Measuring software quality becomes a problem
&lt;/h3&gt;

&lt;p&gt;Over the many years I have worked as a software engineer, through all the ever-moving phases the field has gone through, software has increased in quality but still can’t be fully measured, which leads us to accepted, or perceived, risk.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.text911ct.org%2Fimages%2Fphone_en.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.text911ct.org%2Fimages%2Fphone_en.png" alt="911 text" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
911 text — Google Image Search&lt;/p&gt;

&lt;p&gt;Consider a team working on the software that will place calls to 911: how can the team report that the software will work all the time?&lt;/p&gt;

&lt;p&gt;Software quality can’t be quantified, percentages are fairly hard to explain, and magic as an explanation will just not work; the only thing left to present is perceived risk.&lt;/p&gt;

&lt;p&gt;Perceived risk is the possibility that an unexpected behavior can happen, and in software it is presented mostly at pre-deployment meetings.&lt;/p&gt;

&lt;p&gt;It is considered perceived risk for the same reason you can’t quantify software quality: to determine the quality of the software, you weigh all the information about its development and testing, whether it can generate an issue, and how high the probability of that happening is.&lt;/p&gt;

&lt;h3&gt;
  
  
  Risk Assessment
&lt;/h3&gt;

&lt;p&gt;For example, for software that is 70% unit tested, follows standards, and is fully tested by the quality team, the question is what the risk is in deploying that software to production; that is, what is the chance that an unexpected behavior (and note this is not necessarily a bug) can happen.&lt;/p&gt;

&lt;p&gt;The usual response for that is either minimal, considerable, or high.&lt;/p&gt;

&lt;p&gt;If we take into consideration the scenario presented about the 911 calls, since there is no way to quantify the quality of the software, the best that can be done is to determine the risk of it not working, or, better said, the perceived risk of it not working. In the end, it is important that all calls, regardless of what happens, reach the 911 dispatch. This is a real scenario from the telecom industry and, as anyone from that industry would say, a life is on the line, so that call had better reach dispatch.&lt;/p&gt;

&lt;p&gt;Until the day we get something like Six Sigma for software quality, assessment of risk, or perceived risk, might be the only good-enough measurement that software engineers can give regarding the quality of their software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 2
&lt;/h2&gt;

&lt;p&gt;Now that we know the problem, how do we fix it so we can finally quantify quality in a software application?&lt;/p&gt;

&lt;p&gt;To quantify quality, we first need to define what quality is, and a whole lot of perception comes into play here. For example, the Toyota Corolla and the Kia Optima are equivalent cars and therefore have similar quality, but Toyota is always perceived as having superior quality, for reasons that are usually hard to understand when explained by the customer versus the manufacturer.&lt;/p&gt;

&lt;p&gt;As a driver, I love my Kia Optima, and I know that I would have a hard time selling the Optima as a better car to a current or former Toyota owner. If both cars have similar quality, why is one perceived as a better car than the other? The answer is quite simple: humans aren’t machines that process binary logic; a lot of emotion goes into our decision making.&lt;/p&gt;

&lt;p&gt;Quality is a matter of perception, and it will be different for each of your audiences: product owners, clients, and investors. Although different messages must be given, they must, in the end, confirm that your software application has quality.&lt;/p&gt;

&lt;p&gt;The perception of quality for a product owner is about risk. If the application has a lower risk in production, it will have a lower cost of technical debt, giving more time for the development of new features. Although I made the association of risk and technical debt, the truth is that risk management has a lot of other factors to be accounted for: unit tests, technical debt, complexity, the likelihood of generating a bug, and so on.&lt;/p&gt;

&lt;p&gt;What is really important is the mathematical association: the higher the risk, the higher the odds of technical debt, and therefore the higher the cost.&lt;/p&gt;

&lt;p&gt;Measuring quality from a client perspective has to be done on a daily basis. That is done by counting the tech support calls the application generates and by tracking the churn rate (the rate at which customers stop subscribing to your application).&lt;/p&gt;

&lt;p&gt;An application that has a low churn rate and a low number of tech support calls has a high success rate and, therefore, high quality. Tech support calls consume time, and a large volume will consume more personnel, which means that money that could be used for a given improvement is now being directed to the support of production issues (technical debt). A high churn rate means that the application has something that is generating a high drop in average daily subscriptions, which has a direct effect on the revenue the application produces.&lt;/p&gt;

&lt;p&gt;The fun part is measuring quality for an investor. If you are familiar with TV shows such as Shark Tank, you will know that all of the investors ask the same thing: what is the risk and what is the current growth? That is asked so they can successfully measure the ROI (return on investment).&lt;/p&gt;

&lt;p&gt;When you are pitching your application to an investor so you can successfully grow it, you will provide mainly the churn rate and the current profit, and this is where it becomes hard.&lt;/p&gt;

&lt;p&gt;When the application is starting out, it is very hard to get a large investment or an investor to join (not saying that it is impossible), which makes the team responsible for it the 1st investors. But when attempting a secondary investment for growth, you must present data, and that means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How much gross revenue the application has done so far&lt;/li&gt;
&lt;li&gt;What is the overall client acquisition (subscriptions)&lt;/li&gt;
&lt;li&gt;What is the predicted client acquisition for the next 5 years&lt;/li&gt;
&lt;li&gt;What is the current churn rate (same churn rate that we measured with clients)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With all of this one can use the following as a way to quantify quality from the perspective outside of the engineering team:&lt;/p&gt;

&lt;pre&gt;
    Average Net profit vs Churn rate
&lt;/pre&gt;

&lt;p&gt;Where the average net profit is the average of gains with losses already discounted, and the churn rate is the percentage of losses you have on subscriptions.&lt;/p&gt;
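
&lt;p&gt;As a hypothetical worked example (the numbers here are made up), take an application with 1,000 subscribers:&lt;/p&gt;

&lt;pre&gt;
    Subscribers: 1,000; cancellations this month: 50
    Churn rate = 50 / 1,000 = 5%
    Average net profit = $20,000 revenue - $12,000 costs = $8,000
&lt;/pre&gt;

&lt;p&gt;A solid net profit against a low churn rate like this is what, under this measure, reads as a quality product.&lt;/p&gt;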

&lt;p&gt;A high average net profit with a low churn rate is the equivalent of a successful software application and that is the equivalent of a quality product.&lt;/p&gt;

&lt;p&gt;When it comes to the engineering team, all the points mentioned in the 1st part are the minimum necessary to ensure a sound quality product.&lt;/p&gt;

&lt;p&gt;These articles were first published at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://bit.ly/2rzaFsn" rel="noopener noreferrer"&gt;https://bit.ly/2rzaFsn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://bit.ly/2Zfu7dE" rel="noopener noreferrer"&gt;https://bit.ly/2Zfu7dE&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>software</category>
      <category>codequality</category>
      <category>engineering</category>
      <category>startup</category>
    </item>
    <item>
      <title>MySQL implicit commit</title>
      <dc:creator>Cristiano D. Silva</dc:creator>
      <pubDate>Tue, 20 Aug 2019 18:21:21 +0000</pubDate>
      <link>https://dev.to/mcloide/mysql-implicit-commit-48ep</link>
      <guid>https://dev.to/mcloide/mysql-implicit-commit-48ep</guid>
      <description>&lt;p&gt;MySQL implicit commit might be one of those features that can catch you by surprise if you don’t understand how it works. I’m not going to jump in and question why it was done like that, and I believe there is a very reasonable reason behind it, but, nonetheless, if you don’t understand this feature of MySQL it will act against you.&lt;/p&gt;

&lt;p&gt;The feature is simple: in certain situations, MySQL, regardless of its configuration, will implicitly commit the preceding statements.&lt;/p&gt;

&lt;p&gt;What this means is that if you have a bunch of inserts followed by a statement that causes an implicit commit, all those inserts, regardless of the result of the implicit-commit statement, will be committed, leaving the database in an odd state.&lt;/p&gt;

&lt;p&gt;Sometimes text doesn’t help, so let’s move to some code, or some form of it.&lt;/p&gt;

&lt;pre&gt;
START TRANSACTION
INSERT
INSERT
INSERT
INSERT
ALTER TABLE
...
&lt;/pre&gt;

&lt;p&gt;Regardless of the result of the ALTER TABLE, because of this feature of MySQL all the statements before it will be committed, and anything after the ALTER TABLE (if it doesn’t fail) will start a new transaction.&lt;/p&gt;

&lt;p&gt;This same behavior will occur with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;any structural changes&lt;/li&gt;
&lt;li&gt;another transaction start (note MySQL has the concept of SAVEPOINTS but not nested transactions)&lt;/li&gt;
&lt;li&gt;any BEGIN statement (there are 2 types)&lt;/li&gt;
&lt;/ul&gt;
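
&lt;p&gt;As a minimal sketch (the table names here are made up), a TRUNCATE, being a DDL statement, triggers the same behavior:&lt;/p&gt;

&lt;pre&gt;
START TRANSACTION;
INSERT INTO orders (id) VALUES (1);
INSERT INTO orders (id) VALUES (2);
-- TRUNCATE TABLE is DDL, so it implicitly commits the inserts above
TRUNCATE TABLE order_log;
ROLLBACK; -- too late: both inserts were already committed
&lt;/pre&gt;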

&lt;p&gt;So how can this affect you? Simply consider that you are restoring an SQL dump and a set of tables with their data are correctly created and seeded, but one is not. Everything up to that moment will be committed, and anything after will simply fail.&lt;/p&gt;

&lt;p&gt;Another scenario is data migration scripts or scripts that attempt to reconstruct/alter data. Not understanding the implicit commit will leave your database in a bad state.&lt;/p&gt;

&lt;p&gt;There is a lot more to say about it, but just give the documentation a read. It is worth a million.&lt;/p&gt;

&lt;p&gt;Ref: &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/implicit-commit.html" rel="noopener noreferrer"&gt;https://dev.mysql.com/doc/refman/5.7/en/implicit-commit.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This article was originally at: &lt;a href="https://bit.ly/2Z5EzF1" rel="noopener noreferrer"&gt;https://bit.ly/2Z5EzF1&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>dba</category>
      <category>development</category>
      <category>tip</category>
    </item>
    <item>
      <title>AWS Solutions Architect Associate</title>
      <dc:creator>Cristiano D. Silva</dc:creator>
      <pubDate>Tue, 20 Aug 2019 18:13:10 +0000</pubDate>
      <link>https://dev.to/mcloide/aws-solutions-architect-associate-2ml3</link>
      <guid>https://dev.to/mcloide/aws-solutions-architect-associate-2ml3</guid>
      <description>&lt;p&gt;This has got to be one of the moments I have felt most proud of in my career. I have built many things and accomplished many others when it comes to engineering web applications, but accomplishing something that is out of my comfort zone felt empowering.&lt;/p&gt;

&lt;p&gt;Achieving this certification wasn’t easy; it was quite a journey nonetheless. I know that I’m not the only one trailing this path; there are many other engineers attempting to get to this point, so I will share here what I have done and the best resources I have found to pass this certification exam.&lt;/p&gt;

&lt;p&gt;I started exploring AWS training and knowledge when facing the challenge of finding the best solutions for a new project that my company is pursuing. What I realized really quickly is that, although I knew Linux, web server configuration and maintenance, and even a great deal about databases, I didn’t know how any of this applied to AWS or, for that matter, what AWS would actually mean for our project.&lt;/p&gt;

&lt;p&gt;To gather more knowledge about AWS solutions, I took two trainings, both in person. The first covered the basics and the second covered the architectural level of the AWS services.&lt;/p&gt;

&lt;p&gt;You would expect that once you leave the training you would go schedule an exam and simply pass, because that is how it should work, right? No! It doesn’t work like that. You have to have some real understanding of how AWS culture and their environment work, because the nuances, the small differences, are what will make you answer a question wrongly.&lt;/p&gt;

&lt;p&gt;To pass the SAA exam I had to study for a couple of months and the following resources were essential to that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Training — Absolutely essential, from free digital training to paid training. They have some free content for the certification itself, so you can use that content as a reference for what you will get. Another important portion is that in the exam-readiness videos there is an ending section where the instructor goes into what you should look for on the exam and why it matters. That is essential for the exam.&lt;/li&gt;
&lt;li&gt;Linux Academy — I have navigated only a few resources from Linux Academy, but I can affirm that it has the best community out there for anyone pursuing AWS knowledge. What I learned from their Slack AWS channel and from being in touch with the instructors was essential for the exam. For the next certifications I will be exploring a lot more of their content. I have seen feedback from students affirming that without the hands-on labs and courses they wouldn’t have been able to pass the exam.&lt;/li&gt;
&lt;li&gt;Udemy — There are a few sets of certification practice questions that you can explore on Udemy. These questions help you prepare your mindset for how to approach the exam.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When it comes to the exam itself there are a few things that you must keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Time management is key, and if you know the content well you can finish the exam way ahead of time. You will have 130 minutes to answer 65 questions. That would give you 2 minutes per question, but in practice you have 1 minute per question, because you want some time to spare to go back to the questions you have flagged.&lt;/li&gt;
&lt;li&gt;If you aren’t sure of how to answer a question, answer it on your best guess, mark it as flagged and come back to it before completing the exam.&lt;/li&gt;
&lt;li&gt;If the answer seems overly complicated, it is very likely not the answer. AWS culture revolves around less expensive, less complicated, more decoupled, more performant, more available solutions that properly meet the client’s needs.&lt;/li&gt;
&lt;li&gt;Nuances will be the reason you answer a given question wrong. Little things, like why to choose S3 instead of S3 Infrequent Access when both answers seem correct, are key.&lt;/li&gt;
&lt;li&gt;Read the question 2x before answering. Every question will have a set of keywords that will lead you to the correct answer. This is extremely important especially to exclude answers that will be obviously wrong.&lt;/li&gt;
&lt;li&gt;The certificate has a lifecycle of 2 years, and that is because AWS changes considerably in the space of a month; imagine a period of 24 months.&lt;/li&gt;
&lt;li&gt;This is probably the best tip I can give because it serves as a check of your knowledge. For the SAA, AWS Training has a downloadable 10-question test that comes with answers. You should be getting 90% on this test, because it is a really easy one.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I won’t stop at this certification. Being a web developer, my next logical step is to explore the AWS Developer certification, which covers CloudFormation and, more specifically, how to code your infrastructure.&lt;/p&gt;

&lt;p&gt;To everyone that supported me to achieve this certification, thank you and if you are considering exploring this certification or taking it soon, good luck. I hope to see your comments here on how you did.&lt;/p&gt;

&lt;p&gt;This article was originally at: &lt;a href="https://bit.ly/2Z2qjgt" rel="noopener noreferrer"&gt;https://bit.ly/2Z2qjgt&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>exam</category>
      <category>architect</category>
    </item>
    <item>
      <title>Moving from MySQL 5.7 to MariaDB 10.1</title>
      <dc:creator>Cristiano D. Silva</dc:creator>
      <pubDate>Tue, 20 Aug 2019 14:56:08 +0000</pubDate>
      <link>https://dev.to/mcloide/moving-from-mysql-5-7-to-mariadb-10-1-3306</link>
      <guid>https://dev.to/mcloide/moving-from-mysql-5-7-to-mariadb-10-1-3306</guid>
      <description>&lt;h3&gt;
  
  
  Warning
&lt;/h3&gt;

&lt;pre&gt;
Please perform these tests first on a testing environment or a virtual machine.
&lt;/pre&gt;

&lt;p&gt;I never really needed to pay attention to MariaDB until it became a possible and viable option. The technical decision in place was obvious: moving to a newer version of MySQL and continuing with it and the unknowns of Oracle being the owner, OR moving to MariaDB, the much-talked-about community replacement for MySQL.&lt;/p&gt;

&lt;p&gt;After about an hour of reading, I decided to do the so-called magical move from MySQL to MariaDB, and this is what I have learned.&lt;/p&gt;

&lt;p&gt;The next steps were all performed on CentOS 7 with an installed version of MySQL 5.7.13 and used the following article as reference: &lt;a href="https://mariadb.com/kb/en/mariadb/yum/"&gt;https://mariadb.com/kb/en/mariadb/yum/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The 1st steps to be performed are the obvious ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;mysqldump of all your databases&lt;/li&gt;
&lt;li&gt;backup of /var/lib/mysql&lt;/li&gt;
&lt;li&gt;backup of /etc/my.cnf&lt;/li&gt;
&lt;/ul&gt;
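
&lt;p&gt;As a sketch, those backups can look something like this (adjust the user and destination paths to your setup):&lt;/p&gt;

&lt;pre&gt;
    mysqldump -u root -p --all-databases &gt; all-databases.sql
    sudo cp -a /var/lib/mysql /var/lib/mysql.bak
    sudo cp /etc/my.cnf /etc/my.cnf.bak
&lt;/pre&gt;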

&lt;p&gt;The next step is quite obnoxious given that it requires more than a simple yum remove.&lt;/p&gt;

&lt;pre&gt;
    sudo yum remove mysql mysql-server mysql-common
&lt;/pre&gt;

&lt;p&gt;That should remove all MySQL references by default but, because MariaDB won't pass the install check if there are still references, run the following command:&lt;/p&gt;

&lt;pre&gt;
    sudo yum list installed | grep -i mysql
&lt;/pre&gt;

&lt;p&gt;It will give you a list like this:&lt;/p&gt;

&lt;pre&gt;
    mysql-community-client.x86_64 5.7.13–1.el7 @mysql57-community
    mysql-community-common.x86_64 5.7.13–1.el7 @mysql57-community
    mysql-community-libs.x86_64 5.7.13–1.el7 @mysql57-community
    mysql-community-libs-compat.x86_64 5.7.13–1.el7 @mysql57-community
    mysql-community-server.x86_64 5.7.13–1.el7 @mysql57-community
    mysql57-community-release.noarch el7–7 @/mysql57-community-release-el7–7.noarch
&lt;/pre&gt;

&lt;p&gt;With the exception of libraries like the PHP-MySQL one, all of the remaining packages must be removed. For the list above, that can be done by running:&lt;/p&gt;

&lt;pre&gt;
    sudo yum remove mysql-community-client mysql-community-common mysql-community-libs mysql-community-libs-compat mysql-community-server mysql57-community-release
&lt;/pre&gt;

&lt;p&gt;If you have read any article that teaches how to perform the migration from MySQL to MariaDB, you will know that, up to MySQL 5.6 and MariaDB 10.0, the only thing you needed to do to make the migration fully successful was creating the yum repo and installing MariaDB. That option no longer works so well.&lt;/p&gt;

&lt;p&gt;The following steps are every single step I have performed to install MariaDB 10.1 properly for a MySQL 5.7 migration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 — Create the Yum repo
&lt;/h3&gt;

&lt;pre&gt;
    sudo vi /etc/yum.repos.d/MariaDB.repo
&lt;/pre&gt;

&lt;p&gt;with the following contents:&lt;/p&gt;

&lt;pre&gt;
    [mariadb]
    name = MariaDB
    baseurl = http://yum.mariadb.org/10.1/centos7-amd64
    gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
    gpgcheck=1
&lt;/pre&gt;

&lt;h3&gt;
  
  
  Step 2 — Install the rpm key
&lt;/h3&gt;

&lt;pre&gt;
    sudo rpm --import https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
&lt;/pre&gt;

&lt;h3&gt;
  
  
  Step 3 — Completely remove MySQL data
&lt;/h3&gt;

&lt;p&gt;Yes, you read it right: remove /var/lib/mysql; it is the only way to properly do it. If you didn’t make a backup when you started this, stop here and return to MySQL.&lt;/p&gt;

&lt;pre&gt;
    sudo rm -rf /var/lib/mysql
&lt;/pre&gt;

&lt;p&gt;If you install MariaDB 10.1 with that folder still there, you will get a noticeable InnoDB error that, no matter how many times you search, you won't find a good solution for. The following is a copy of the error in full:&lt;/p&gt;

&lt;pre&gt;
    [vagrant@localmachine ~]$ sudo journalctl -xe
    — Unit mariadb.service has failed.
    —
    — The result is failed.
    Jun 27 17:46:46 vagrant systemd[1]: Unit mariadb.service entered failed state.
    Jun 27 17:46:46 vagrant systemd[1]: mariadb.service failed.
    Jun 27 17:46:46 vagrant polkitd[656]: Unregistered Authentication Agent for unix-process:12731:154478 (system bus name :1.31, object path /org/freedesktop/PolicyKit1/AuthenticationAgent,
    Jun 27 17:47:25 vagrant sudo[12754]: vagrant : TTY=pts/0 ; PWD=/home/vagrant ; USER=root ; COMMAND=/bin/systemctl start mariadb.service
    Jun 27 17:47:25 vagrant polkitd[656]: Registered Authentication Agent for unix-process:12755:158423 (system bus name :1.33 [/usr/bin/pkttyagent — notify-fd 5 — fallback], object path /org
    Jun 27 17:47:25 vagrant systemd[1]: Starting MariaDB database server…
    — Subject: Unit mariadb.service has begun start-up
    — Defined-By: systemd
    — Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
    —
    — Unit mariadb.service has begun starting up.
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016–06–27 17:47:26 140668548266112 [Note] /usr/sbin/mysqld (mysqld 10.1.14-MariaDB) starting as process 12760 …
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016–06–27 17:47:26 140668548266112 [Note] InnoDB: Using mutexes to ref count buffer pool pages
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016–06–27 17:47:26 140668548266112 [Note] InnoDB: The InnoDB memory heap is disabled
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016–06–27 17:47:26 140668548266112 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016–06–27 17:47:26 140668548266112 [Note] InnoDB: Memory barrier is not used
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016–06–27 17:47:26 140668548266112 [Note] InnoDB: Compressed tables use zlib 1.2.7
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016–06–27 17:47:26 140668548266112 [Note] InnoDB: Using Linux native AIO
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016–06–27 17:47:26 140668548266112 [Note] InnoDB: Using SSE crc32 instructions
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016–06–27 17:47:26 140668548266112 [Note] InnoDB: Initializing buffer pool, size = 128.0M
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016–06–27 17:47:26 140668548266112 [Note] InnoDB: Completed initialization of buffer pool
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016–06–27 17:47:26 140668548266112 [Note] InnoDB: Highest supported file format is Barracuda.
    Jun 27 17:47:26 vagrant mysqld[12760]: InnoDB: No valid checkpoint found.
    Jun 27 17:47:26 vagrant mysqld[12760]: InnoDB: If you are attempting downgrade from MySQL 5.7.9 or later,
    Jun 27 17:47:26 vagrant mysqld[12760]: InnoDB: please refer to http://dev.mysql.com/doc/refman/5.6/en/upgrading-downgrading.html
    Jun 27 17:47:26 vagrant mysqld[12760]: InnoDB: If this error appears when you are creating an InnoDB database,
    Jun 27 17:47:26 vagrant mysqld[12760]: InnoDB: the problem may be that during an earlier attempt you managed
    Jun 27 17:47:26 vagrant mysqld[12760]: InnoDB: to create the InnoDB data files, but log file creation failed.
    Jun 27 17:47:26 vagrant mysqld[12760]: InnoDB: If that is the case, please refer to
    Jun 27 17:47:26 vagrant mysqld[12760]: InnoDB: http://dev.mysql.com/doc/refman/5.6/en/error-creating-innodb.html
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016-06-27 17:47:26 140668548266112 [ERROR] Plugin 'InnoDB' init function returned error.
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016-06-27 17:47:26 140668548266112 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016-06-27 17:47:26 140668548266112 [Note] Plugin 'FEEDBACK' is disabled.
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016-06-27 17:47:26 140668548266112 [ERROR] Could not open mysql.plugin table. Some plugins may be not loaded
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016-06-27 17:47:26 140668548266112 [ERROR] Unknown/unsupported storage engine: InnoDB
    Jun 27 17:47:26 vagrant mysqld[12760]: 2016-06-27 17:47:26 140668548266112 [ERROR] Aborting
    Jun 27 17:47:26 vagrant systemd[1]: mariadb.service: main process exited, code=exited, status=1/FAILURE
    Jun 27 17:47:26 vagrant systemd[1]: Failed to start MariaDB database server.
    -- Subject: Unit mariadb.service has failed
    -- Defined-By: systemd
    -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
    --
    -- Unit mariadb.service has failed.
    --
    -- The result is failed.
    Jun 27 17:47:26 vagrant systemd[1]: Unit mariadb.service entered failed state.
    Jun 27 17:47:26 vagrant systemd[1]: mariadb.service failed.
    Jun 27 17:47:26 vagrant polkitd[656]: Unregistered Authentication Agent for unix-process:12755:158423 (system bus name :1.33, object path /org/freedesktop/PolicyKit1/AuthenticationAgent,
    Jun 27 17:47:40 vagrant sudo[12775]: vagrant : TTY=pts/0 ; PWD=/home/vagrant ; USER=root ; COMMAND=/bin/systemctl status mariadb.service
    Jun 27 17:48:10 vagrant sudo[12777]: vagrant : TTY=pts/0 ; PWD=/home/vagrant ; USER=root ; COMMAND=/bin/journalctl -xe
&lt;/pre&gt;

&lt;h3&gt;
  
  
  Step 4 — Install MariaDB
&lt;/h3&gt;

&lt;p&gt;Now that the folder is removed and you have all your SQL dumps, install MariaDB:&lt;/p&gt;

&lt;pre&gt;
    sudo yum install MariaDB-server MariaDB-client
&lt;/pre&gt;

&lt;h3&gt;
  
  
  Final Step — Start the service and restore the DBs
&lt;/h3&gt;

&lt;p&gt;Now all that remains is to start the service and restore the SQL dumps.&lt;/p&gt;

&lt;pre&gt;
    sudo systemctl start mariadb
&lt;/pre&gt;
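&lt;p&gt;With the service running, the dumps can be loaded back. A minimal sketch, assuming the dumps were taken with mysqldump and named after each database (the database and file names here are hypothetical examples):&lt;/p&gt;

```shell
# Recreate each database and load its dump (names are examples)
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS mydb"
mysql -u root -p mydb -e "source mydb.sql"
```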

&lt;p&gt;It is not a pleasant solution, but it is a solution until they get the magical migration working just like it does for MariaDB 10.0.&lt;/p&gt;

&lt;p&gt;I spent a good couple of hours searching and testing before concluding that, at least at this moment, there is no direct, easy, magical migration path from MySQL 5.7 to MariaDB 10.1; there is, however, a path. Along the way I found that their IRC channel is very useful, because you won't find as many answers out there as you will for MySQL. To make matters harder, when searching for something to help troubleshoot, a lot of the results come back with MySQL answers.&lt;/p&gt;

&lt;p&gt;Hope this helps you in the same way that it helped me.&lt;/p&gt;

&lt;h4&gt;
  
  
  Notes:
&lt;/h4&gt;

&lt;p&gt;While doing all of these tests yesterday, I missed one important part of the process: &lt;a href="https://dev.mysql.com/doc/refman/5.5/en/upgrading.html"&gt;https://dev.mysql.com/doc/refman/5.5/en/upgrading.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will be testing again but, based on the results that I got from another test I did with MariaDB 10.0, I believe that following those steps before the process and running this:&lt;/p&gt;

&lt;pre&gt;
    mysql_upgrade -u  -p
&lt;/pre&gt;

&lt;p&gt;after the full install should fix all the issues described before, making the magic fully work.&lt;/p&gt;

&lt;p&gt;This post was originally at: &lt;a href="https://bit.ly/31SUCmu"&gt;https://bit.ly/31SUCmu&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>mariadb</category>
      <category>database</category>
      <category>innodb</category>
    </item>
    <item>
      <title>Setting up your AWS IoT Button</title>
      <dc:creator>Cristiano D. Silva</dc:creator>
      <pubDate>Tue, 20 Aug 2019 14:47:10 +0000</pubDate>
      <link>https://dev.to/mcloide/setting-up-your-aws-iot-button-5e8n</link>
      <guid>https://dev.to/mcloide/setting-up-your-aws-iot-button-5e8n</guid>
      <description>&lt;p&gt;As a fair warning for you the reader and as a fellow engineer I’m required to tell you that this guide was based on the AWS interface of 2019 so, if you can’t find the resources or directions based on what is said here, then, very likely you will need to adjust. This guide also assumes that you have some level of knowledge on how to set up certain things on AWS.&lt;/p&gt;

&lt;p&gt;If you are considering setting up your AWS IoT Gen 1 button, it means you also know the potential of IoT in the next technological revolution, and the sooner you learn it, the more opportunities you can open.&lt;/p&gt;

&lt;p&gt;First things first, you need AWS Console access, either as root or with a user that has enough permissions to configure and manage IoT devices. As an extended help, also create an IAM programmatic user to give you access to the AWS CLI. Just like the Console user, this user will also need access to the IoT resources. The CLI access is pretty handy for getting information from IoT, such as the endpoint (you will need this later).&lt;/p&gt;

&lt;p&gt;On the Console services search bar type IoT and select IoT Core. A new page will open and here you should click on Manage and then Things.&lt;/p&gt;

&lt;p&gt;Up to the point of creating the Policy you can follow the AWS documentation here. Just remember to download the certificate files during this process.&lt;/p&gt;

&lt;p&gt;During the Policy creation, it will ask you to add 2 statements, one for connecting and another for publishing. The connect statement you can keep exactly as it shows but, for the publish statement, instead of publishing to a specific topic, use * for now. You can change this later on. The goal right now is to connect and publish by all means.&lt;/p&gt;
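&lt;p&gt;For reference, the resulting policy document might look roughly like the sketch below. This is an illustration, not the exact policy AWS generates; in particular, the connect statement AWS creates for you may scope the resource to your client ARN instead of *:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "iot:Connect", "Resource": "*" },
    { "Effect": "Allow", "Action": "iot:Publish", "Resource": "*" }
  ]
}
```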

&lt;p&gt;After you have done this, continue following the documentation until you reach the configure-your-device step.&lt;/p&gt;

&lt;p&gt;Now comes the part for which you can't find any documentation whatsoever; all you are going to find are guides, like this one, with some reference to how this works. To make matters worse, the device you just got didn’t come with a manual showing how to perform the steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fgiphygifs.s3.amazonaws.com%2Fmedia%2FLAFShX32UwUj6%2Fgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fgiphygifs.s3.amazonaws.com%2Fmedia%2FLAFShX32UwUj6%2Fgiphy.gif" alt="sad" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yeah, pretty much that.&lt;/p&gt;

&lt;p&gt;Well, let's move past this disappointment and set up the device. First, grab a few pieces of information that you will need.&lt;/p&gt;

&lt;p&gt;If you have installed the AWS CLI tool, run the following command:&lt;/p&gt;

&lt;pre&gt;
aws iot describe-endpoint
&lt;/pre&gt;

&lt;p&gt;It will throw an error asking for a region, and that is because you haven’t configured the CLI yet. For that, just run this command:&lt;/p&gt;

&lt;pre&gt;
aws configure
&lt;/pre&gt;

&lt;p&gt;It will ask you for the information from the IAM user that you created at the beginning of this guide.&lt;/p&gt;

&lt;p&gt;Save the identifier from the URL it returned. It should have returned something in this format:&lt;/p&gt;

&lt;pre&gt;
.iot..amazonaws.com
&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This is an MQTT REST endpoint so, in theory, as long as the thing has a certificate and private key, anything can be a thing.&lt;/p&gt;

&lt;p&gt;The next piece of information you will need is the password for your WiFi network. Grab that and have it handy. Now hold the button on your device until it shows a &lt;strong&gt;blue&lt;/strong&gt; light. Just like you would do with a Bluetooth device.&lt;/p&gt;

&lt;p&gt;It will create a special network with “ConfigureMe” in its name. The password will be the last 8 characters of your device serial number, in uppercase. The serial number is available on both the box and the device itself.&lt;/p&gt;

&lt;p&gt;Once it is connected go to the configuration page &lt;a href="http://192.168.0.1/index.html" rel="noopener noreferrer"&gt;http://192.168.0.1/index.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure your WiFi network for this device&lt;/li&gt;
&lt;li&gt;Attach the private key and the certificate file to this device&lt;/li&gt;
&lt;li&gt;Add the endpoint identifier that you saved before&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once everything is filled in, hit the save button; the device will automatically disconnect from the WiFi and attempt a first message publish.&lt;/p&gt;

&lt;p&gt;If all goes well, you will get a green light. If not, well, that is when things get nasty and really painful to troubleshoot, since it will be up to visual interpretation. When something fails, it will show a red LED and, if that happens, you need to read the red-pattern portion of the troubleshooting guide that can be found here.&lt;/p&gt;

&lt;p&gt;It will not tell you directly why it is failing, but it will give you clues. Basically, it can fail on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Certificate not active or invalid&lt;/li&gt;
&lt;li&gt;Policy invalid or too restrictive&lt;/li&gt;
&lt;li&gt;Endpoint invalid&lt;/li&gt;
&lt;li&gt;WiFi not connected&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is important to remember that AWS works under a deny-all principle, and most of the issues can be related to this.&lt;/p&gt;

&lt;p&gt;Now that you have a green light, you need to find a way to see what it is publishing and the easiest way is to simply add debug logs to CloudWatch.&lt;/p&gt;

&lt;p&gt;Congratulations, your device successfully connects to AWS and publishes messages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25y4mi9ezo5yzelo7yoi.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25y4mi9ezo5yzelo7yoi.gif" alt="well done" width="480" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next steps are obviously doing something with these messages, but that I will leave to you.&lt;/p&gt;

&lt;p&gt;Good luck and don’t forget to have fun and share the knowledge.&lt;/p&gt;

&lt;p&gt;This article was originally posted at: &lt;a href="https://bit.ly/2KJK5V8" rel="noopener noreferrer"&gt;https://bit.ly/2KJK5V8&lt;/a&gt;&lt;/p&gt;

</description>
      <category>iot</category>
      <category>aws</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Query Optimization with MySQL</title>
      <dc:creator>Cristiano D. Silva</dc:creator>
      <pubDate>Tue, 20 Aug 2019 14:05:03 +0000</pubDate>
      <link>https://dev.to/mcloide/query-optimization-with-mysql-322i</link>
      <guid>https://dev.to/mcloide/query-optimization-with-mysql-322i</guid>
      <description>&lt;p&gt;There is a reason why DBA is a profession, the nuances of a database can lead anyone to insanity and MySQL nuances are no different.&lt;/p&gt;

&lt;p&gt;I don’t know these nuances by heart and, like many reading this article, I’m an engineer who works with MySQL, complex queries, and large amounts of data on a daily basis. Talking about performance requires placing a threshold to define what a poorly performing query is and, for that, I will use MySQL’s default parameter: 1 second.&lt;/p&gt;

&lt;p&gt;Even with larger datasets and a large number of rows, queries can perform in under 1 second. MySQL’s query optimization engine is really good and, as long as you know how to properly write a query, it will optimize it to run as fast as it can.&lt;/p&gt;

&lt;p&gt;When it comes to making a query perform faster, it is important to have at least some basic knowledge of how the optimization engine works, so you can leverage all of its potential and avoid its caveats. Unfortunately, unless you are a DBA, it is very unlikely that you will know those nuances by heart. Little things, like understanding that MySQL runs every portion of the query, or every query, like it would run a JOIN statement.&lt;/p&gt;

&lt;h3&gt;
  
  
  Query Construction
&lt;/h3&gt;

&lt;p&gt;To understand how to properly improve the performance of a query you must first understand how to properly break a query into portions. Every SELECT statement will have:&lt;/p&gt;

&lt;pre&gt;
SELECT
    columns
    sub-selects
    functions
    improper aggregations
-- main table
    joins
    sub-selects
-- where clauses
-- aggregation
    grouping, ordering, having and limiting
&lt;/pre&gt;

&lt;h3&gt;
  
  
  Columns
&lt;/h3&gt;

&lt;p&gt;Each of these portions can impact the performance of your query in one way or another, but the columns portion is usually the one with the least impact.&lt;/p&gt;

&lt;p&gt;The caveat on the columns portion will be if you have a sub-select or a function call. If the performance of that function is poor, then it will impact your query performance. The same can be said for the sub-selects.&lt;/p&gt;

&lt;p&gt;The last point that can impact this portion of the query is improper aggregation. There is a considerable number of MySQL aggregate functions that perform extremely fast, such as COUNT, MAX, MIN, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  Main-Table, Joins, and sub-selects
&lt;/h3&gt;

&lt;p&gt;The order that you place your joins, or the sequence in which the tables are read, will not impact performance much, but the number of rows in each table, or in the result of the sub-select, will. Indexes will surely help with this portion of the query, as long as the whole query can be executed in memory or the portion of the indexes needed for the query is available in memory. The moment that information is not available in memory, the database will create a temporary resource to store the results and then filter them out.&lt;/p&gt;

&lt;p&gt;The best way of handling this is to only bring the tables that you really need for the query. For example:&lt;/p&gt;

&lt;pre&gt;
SELECT f.id, f.name, f.description, f.type_id
  FROM foo f
  JOIN foo_type ft ON ft.id = f.type_id
 WHERE ft.id = 2;
&lt;/pre&gt;

&lt;p&gt;If you are only bringing from the foo table the rows that have type_id equal to 2, why join foo_type, especially if the type name is not even selected? Furthermore, if the type name were necessary, could it be pre-loaded information that is displayed at run time?&lt;/p&gt;

&lt;pre&gt;
SELECT f.id, f.name, f.description, f.type_id
  FROM foo f
 WHERE f.type_id = 2;
&lt;/pre&gt;

&lt;p&gt;The easy trick here is to ask the questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do I really need this table to achieve the result?&lt;/li&gt;
&lt;li&gt;Am I only bringing this table because I might need it?&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Where Clauses
&lt;/h3&gt;

&lt;p&gt;Where clauses are interesting when it comes to performance. Too little filtering will return more rows and therefore cause a slowdown, but too much filtering will also cause a slowdown. The best approach is to find a healthy medium.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;pre&gt;
select 
    distinct o.id as order_id
    , round(sum(oi.amount * oi.quantity), 2) as total
  from orders o
  join order_items oi on oi.order_id = o.id
  join order_type ot on ot.id = o.order_type_id
 where o.create_date between '2019-01-01 00:00:00' and '2019-01-01 23:59:59'
   and oi.type_id in (1,2,3)
   and o.type_id = 2
 group by oi.id
&lt;/pre&gt;

&lt;p&gt;That where clause has to filter the date range, the item type, and the order type. If all those (fictitious) tables had a large number of rows (consider over 2 million), that operation could be costly even if the indexes were properly created. With the exception of the range clause, which is expensive by itself, you can play with the other 2 conditions to see which one performs faster.&lt;/p&gt;

&lt;p&gt;For example, if you remove the condition to check for the order type and move it to the SELECT block, it can perform faster.&lt;/p&gt;

&lt;pre&gt;
select 
    distinct o.id as order_id
    , round(sum(if(o.type_id != 2, 0, oi.amount * oi.quantity)), 2) as total
  from orders o
  join order_items oi on oi.order_id = o.id
  join order_type ot on ot.id = o.order_type_id
 where o.create_date between '2019-01-01 00:00:00' and '2019-01-01 23:59:59'
   and oi.type_id in (1,2,3)
 group by oi.id
&lt;/pre&gt;

&lt;p&gt;The end result of the query above is the same as the end result of the first query, but it performs better because it is evaluating fewer conditions in the where clause.&lt;/p&gt;

&lt;p&gt;The reason why this happens is simple. MySQL will traverse all the required rows from orders, then match those to the order items, finally matching those to the order type. After this resultset is created, in a temporary table or not, it needs to apply each of the conditions. Fewer conditions mean faster execution, especially if you can add them as part of the column generation.&lt;/p&gt;

&lt;p&gt;The final query should look like this:&lt;/p&gt;

&lt;pre&gt;
select 
    distinct o.id as order_id
    , round(sum(if(o.type_id != 2, 0, oi.amount * oi.quantity)), 2) as total
  from orders o
  join order_items oi on oi.order_id = o.id
 where o.create_date between '2019-01-01 00:00:00' and '2019-01-01 23:59:59'
   and oi.type_id in (1,2,3)
 group by oi.id
&lt;/pre&gt;

&lt;p&gt;There is no need to keep the order type joined there, because it was doing nothing at all. And if the justification for joining a table is to ensure that the foreign keys are valid, then there is a whole new issue that needs to be addressed, but that is not query performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ordering, grouping, having and limiting
&lt;/h3&gt;

&lt;p&gt;This last portion of the query can impact performance in a few ways. Ordering can be slow if it is applied through filesort, the lack of grouping and limiting can bring back more results than required and, finally, the having clause can be really expensive, especially if it is doing a mathematical comparison such as&lt;/p&gt;

&lt;pre&gt;
having sum(oi.amount * oi.quantity) &amp;gt; 999.00
&lt;/pre&gt;

&lt;p&gt;Depending on the query, the having clause will need to run the calculation over all entries and only then filter out the result.&lt;/p&gt;

&lt;h3&gt;
  
  
  Saved by Explain
&lt;/h3&gt;

&lt;p&gt;I know that the default answer for everything when it comes to query optimization is adding indexes. Well, that is a lie. Indexes are one portion of the optimization that can be applied and, if you really want to optimize something, you must understand the results from explain or, even better, the results from explain extended.&lt;/p&gt;

&lt;p&gt;Executing explain is really simple: you just add it in front of the query and let its magic tell you what is wrong.&lt;/p&gt;

&lt;pre&gt;
explain extended select 
    distinct o.id as order_id
    , round(sum(if(o.type_id != 2, 0, oi.amount * oi.quantity)), 2) as total
  from orders o
  join order_items oi on oi.order_id = o.id
 where o.create_date between '2019-01-01 00:00:00' and '2019-01-01 23:59:59'
   and oi.type_id in (1,2,3)
 group by oi.id
&lt;/pre&gt;

&lt;p&gt;From the results that it presents, there are a few columns you need to pay attention to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;type: look for the word ALL. That means that it scanned the whole table.&lt;/li&gt;
&lt;li&gt;table: look for something like &amp;lt;derived2&amp;gt; on it, which indicates a derived (temporary) table&lt;/li&gt;
&lt;li&gt;rows: the number of rows traversed to get to the result&lt;/li&gt;
&lt;li&gt;filtered: the percentage of those rows that were filtered to reach the result.&lt;/li&gt;
&lt;li&gt;extra: this column can get a bit big, so make sure you read it completely, but here look for temporary and filesort.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It does provide a lot more information but these columns will be your direct indication of why the query is not performing well, or whether there is anything that can be done to improve it. Keep in mind that the number of rows and the percentage filtered should be read together, since one influences the other.&lt;/p&gt;

&lt;h3&gt;
  
  
  All options exhausted
&lt;/h3&gt;

&lt;p&gt;Suppose that you did all that you could to improve your query performance and all options are now exhausted. Aside from beefing up the server with more power, there is nothing left to do. Well, it will depend directly on the application you have, but there are options.&lt;/p&gt;

&lt;p&gt;The following are strategies that can be used when the query optimization is exhausted:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Break the query into smaller blocks and merge them together on the application layer.&lt;/li&gt;
&lt;li&gt;Return chunks of the result until there are no more chunks to be returned and merge them together on the application layer. Similar to the first, just a different approach.&lt;/li&gt;
&lt;li&gt;Create an ETL where the archived data is stored in a pre-defined format, so the query performs 2 operations: one to read from the ETL output and another to read live data.&lt;/li&gt;
&lt;li&gt;Application layer caching. Similar to the ETL, but storing the results on a Redis instance, for example.&lt;/li&gt;
&lt;/ul&gt;
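&lt;p&gt;The chunking strategy above can be sketched in a few lines. The example below uses Python and an in-memory SQLite database purely for illustration (the table and column names are made up); with MySQL the same keyset pagination works by filtering on the last id seen and limiting each chunk:&lt;/p&gt;

```python
import sqlite3

# Illustrative in-memory database (names are made up for this sketch)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (id, total) VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(1, 101)])

def fetch_in_chunks(conn, chunk_size=25):
    """Keyset pagination: fetch rows after the last seen id, one chunk at a time."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, total FROM orders WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, chunk_size),
        ).fetchall()
        if not rows:
            break
        yield rows
        last_id = rows[-1][0]  # resume after the last id of this chunk

# Merge the chunks back together on the application layer
all_rows = [row for chunk in fetch_in_chunks(conn) for row in chunk]
print(len(all_rows))  # 100 rows, fetched 25 at a time
```

&lt;p&gt;Keyset pagination is preferred over OFFSET here because OFFSET still forces the server to walk past all the skipped rows on every chunk.&lt;/p&gt;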

&lt;p&gt;The important takeaway here is that, if you can’t optimize the query, you must consider alternatives. There will be situations where the data is just too big to be handled in less than a second.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resources
&lt;/h3&gt;

&lt;p&gt;I could write until the end of time and probably still not cover everything there is to query optimization. This is just the very basics, and there is a lot more that can be achieved. The following resources I found essential for optimizing queries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;O’Reilly’s High Performance MySQL chapter on query optimization goes really deep into the nuances of query optimization and how the MySQL query optimizer works.&lt;/li&gt;
&lt;li&gt;MySQL Explain output documentation. It doesn’t matter how many times you have run that command, this is still essential information.&lt;/li&gt;
&lt;li&gt;MySQL Select optimization documentation page. Provides a lot of good information on how the MySQL optimization engine will execute your query and how you can take advantage of it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This post was originally written at &lt;a href="https://bit.ly/2KJwEED" rel="noopener noreferrer"&gt;https://bit.ly/2KJwEED&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>optimization</category>
      <category>development</category>
      <category>software</category>
    </item>
    <item>
      <title>Some of my best posts from Medium</title>
      <dc:creator>Cristiano D. Silva</dc:creator>
      <pubDate>Tue, 20 Aug 2019 13:32:20 +0000</pubDate>
      <link>https://dev.to/mcloide/some-of-my-best-posts-from-medium-2c37</link>
      <guid>https://dev.to/mcloide/some-of-my-best-posts-from-medium-2c37</guid>
      <description>&lt;p&gt;Dev.to has been growing as a large software community that contains some good posts about our trade.&lt;/p&gt;

&lt;p&gt;I have written some posts on Medium that are quite popular, and my next few posts will be clones of those here on dev.to.&lt;/p&gt;

&lt;p&gt;Hope you all like them.&lt;/p&gt;

</description>
      <category>introduction</category>
    </item>
  </channel>
</rss>
