<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: murleedas</title>
    <description>The latest articles on DEV Community by murleedas (@murleedas).</description>
    <link>https://dev.to/murleedas</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F807406%2Fc0706107-715d-4266-a4b8-501c8cb04919.jpg</url>
      <title>DEV Community: murleedas</title>
      <link>https://dev.to/murleedas</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/murleedas"/>
    <language>en</language>
    <item>
      <title>AWS Play Game</title>
      <dc:creator>murleedas</dc:creator>
      <pubDate>Wed, 16 Mar 2022 17:11:21 +0000</pubDate>
      <link>https://dev.to/murleedas/aws-play-game-45k5</link>
      <guid>https://dev.to/murleedas/aws-play-game-45k5</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnz5whqzt16iprzcraosb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnz5whqzt16iprzcraosb.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
I am really excited to share some news that left me awestruck! I still remember the fun and excitement I had playing GTA Vice City during my childhood. What if I told you that you can have the same fun and excitement while learning cloud?&lt;/p&gt;

&lt;p&gt;Of course, I am not kidding. AWS has launched a new free initiative: a &lt;strong&gt;game-based role-playing experience called AWS Cloud Quest: Cloud Practitioner&lt;/strong&gt;. This is going to be an absolute treat for new cloud learners. With AWS Cloud Quest, you can learn the fundamentals of cloud computing while playing a game. I expect more such initiatives at all levels, making learning simple and engaging.&lt;/p&gt;

&lt;p&gt;AWS also introduced yet another free initiative, an &lt;strong&gt;enhanced AWS Educate program&lt;/strong&gt; offering free hands-on learning. More details at &lt;a href="https://www.aboutamazon.com/news/aws/two-new-free-aws-initiatives-help-build-foundational-cloud-skills" rel="noopener noreferrer"&gt;https://www.aboutamazon.com/news/aws/two-new-free-aws-initiatives-help-build-foundational-cloud-skills&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I encourage all new and aspiring cloud learners to enrol and explore this cool game at &lt;a href="https://explore.skillbuilder.aws/learn/public/catalog/view/51" rel="noopener noreferrer"&gt;https://explore.skillbuilder.aws/learn/public/catalog/view/51&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learning</category>
      <category>cloud</category>
      <category>cloudquest</category>
    </item>
    <item>
      <title>Brainboard</title>
      <dc:creator>murleedas</dc:creator>
      <pubDate>Tue, 15 Mar 2022 15:55:25 +0000</pubDate>
      <link>https://dev.to/murleedas/brainboard-bhg</link>
      <guid>https://dev.to/murleedas/brainboard-bhg</guid>
      <description>&lt;p&gt;I am glad to share a new automation tool that I explored recently. I had the same excitement a few years back when I started using #Sourcetree, which made my life easy with Git operations.&lt;/p&gt;

&lt;p&gt;This time it is for Terraform, and the tool is &lt;strong&gt;Brainboard&lt;/strong&gt;. It amazed me in every aspect. What I like best is how quickly you can get started with Terraform and deploy resources. It has everything under one roof: a designer to create your architecture, a code editor, simple integration with all the popular source code repositories, and so on. Another outstanding feature is its versioning of the entire architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Brainboard&lt;/strong&gt; is bi-directional: we can either create an architecture and automatically generate the Terraform code, or import existing Terraform code and generate the corresponding architecture, which is the best part. It has vault integration to store and retrieve your credentials and access keys safely and securely. It has a beautiful UI to easily switch between multiple cloud providers, which makes multi-cloud infrastructure management easy. Effective team collaboration is yet another highlight, and it handles remote state management for all the major cloud providers with ease.&lt;/p&gt;

&lt;p&gt;I can see a clean ecosystem with &lt;strong&gt;Brainboard&lt;/strong&gt; that results in simple and consistent operations, and hence increased productivity. A good tool overall. It has a free trial and starter packages to get started with, and an enterprise edition that covers all the features.&lt;br&gt;
More details at &lt;a href="https://www.brainboard.co/"&gt;https://www.brainboard.co/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks to Chafik Belhaoues ☁ and Jeremy Albinet for creating such an awesome product.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>multiclouddeployment</category>
      <category>automationtool</category>
      <category>cloudnew</category>
    </item>
    <item>
      <title>AWS Migration Strategies</title>
      <dc:creator>murleedas</dc:creator>
      <pubDate>Wed, 02 Feb 2022 05:10:38 +0000</pubDate>
      <link>https://dev.to/murleedas/aws-migration-strategies-46io</link>
      <guid>https://dev.to/murleedas/aws-migration-strategies-46io</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iNT5SzEk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ab4nzsylv06moxie7024.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iNT5SzEk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ab4nzsylv06moxie7024.png" alt="Image description" width="599" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Application Migration Strategies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS describes seven common migration strategies, often called the seven R’s. Since each application is unique, enterprises often use multiple strategies across their applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rehost&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A customer’s application is brought to AWS without changes to the operating system or database management system (DBMS). It is simply moved to Amazon Elastic Compute Cloud (Amazon EC2) instances. It moves workloads “as is” to the cloud with minimal changes. Examples include servers running packaged software and applications without an active development roadmap.&lt;br&gt;
This is sometimes called lift and shift. Customers use lift and shift to migrate quickly and then focus on optimization. With this method, a migration can be fast, predictable and economical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Replatform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A customer’s application is brought to AWS but, as an example, uses Amazon Relational Database Service (Amazon RDS) rather than continuing to manage DBMS instances. It’s like changing an auto engine for higher performance or newer functionality. A cloud migration re-platform might involve upgrading the operating system, such as from Windows 2003 to Windows 2012, or upgrading an application to the latest release. Because of these types of changes, re-platform might trigger some application code changes. As such, more tests are required at the migration validation stage. This is sometimes called lift-tinker-shift.&lt;/p&gt;
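
&lt;p&gt;For illustration, here is a minimal sketch of what one replatform step might look like with the AWS SDK for Python (boto3): provisioning a managed RDS instance to take over from a self-managed DBMS. The identifiers, sizes and credentials below are placeholders, not values from any real migration.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

rds = boto3.client("rds")

# Provision a managed MySQL instance to replace a self-managed DBMS.
# All names, sizes and credentials here are illustrative placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    DBInstanceClass="db.t3.medium",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-immediately",
    AllocatedStorage=100,
)

# Wait until the instance is available, then point the application's
# connection string at the new RDS endpoint and run the validation tests.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="orders-db")
&lt;/code&gt;&lt;/pre&gt;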

&lt;p&gt;&lt;strong&gt;Repurchase&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A customer’s application is replaced with a software as a service (SaaS) product that covers all components of the application and takes over management of the application’s infrastructure. Think of this as moving to a different product.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Refactor&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This strategy involves redesigning the application architecture or rewriting the application, before migrating, to make it a cloud-native application. One example of refactoring is changing from virtual machines (VMs) to containers for microservices. Another example is transitioning enterprise databases to cloud-optimized Amazon Aurora or Amazon DynamoDB. These major changes towards cloud-native capabilities require time, resources, and skills, and are often driven by a need to add features, performance, or scale that the customer’s current resources cannot provide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retire&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;During a migration, customers may discover that an application is no longer necessary and can be decommissioned; they simply stop using the application, as with legacy databases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retain&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some applications might not be migrated due to licensing or other reasons. They are retained for now and revisited at a later date.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Relocate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Applications running on VMware and containerized applications can be quickly relocated to AWS using the tooling customers are already familiar with. Virtual machines and containers are copied to AWS and run there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are the major strategies to consider before migrating an application to AWS. By following them, we can reduce the burden and the challenges we are likely to face during an application migration to AWS.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudmigration</category>
      <category>architecture</category>
    </item>
    <item>
      <title>ISTIO</title>
      <dc:creator>murleedas</dc:creator>
      <pubDate>Wed, 02 Feb 2022 04:57:57 +0000</pubDate>
      <link>https://dev.to/murleedas/istio-1ai2</link>
      <guid>https://dev.to/murleedas/istio-1ai2</guid>
      <description>&lt;p&gt;&lt;strong&gt;Istio Service Mesh&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mAuxErMW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3gnyizf6uo6u76yo6y0y.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mAuxErMW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3gnyizf6uo6u76yo6y0y.jpg" alt="Image description" width="631" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Istio is an implementation of a service mesh. What is a service mesh, then? A service mesh is a solution that manages communication between microservices. How a service mesh differs from the existing ways microservices communicate is what we are going to discuss in this article.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some drawbacks with the existing microservices architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each microservice lacks a robust endpoint configuration for effective communication with the others. With Istio’s endpoint configuration, a web server, for example, can establish effective and robust communication with all of its related services.&lt;br&gt;
When it comes to security, we typically have firewalls and proxies configured outside the cluster to filter or restrict unwanted access to our microservices cluster. Inside the cluster, however, the services can communicate freely without any restrictions, which makes the cluster insecure.&lt;br&gt;
Microservices also need retry logic to automatically retry a connection when one of the microservices is unreachable. Another challenge is collecting metrics and traces, such as errors and request counts. Therefore, we may need to add monitoring logic to feed that data to monitoring solutions like Prometheus. All of this logic has to be added to each service, which takes effort and adds complexity to the services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Mesh in solving the problems&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To address all the challenges above, we can implement a service mesh with the sidecar pattern. We separate all the non-business logic from the microservices and keep it in a sidecar container, called the sidecar proxy.&lt;br&gt;
Since it is a third-party application, anyone who manages the cluster can easily configure it, and developers can focus and spend more time on the actual business logic. The service mesh has a control plane that automatically injects the proxy into every microservice pod.&lt;/p&gt;
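
&lt;p&gt;As a small, hedged example of that automatic injection (assuming Istio is already installed and you use the official Kubernetes Python client), labelling a namespace with istio-injection=enabled is enough for the control plane to add the sidecar proxy to every new pod created in it:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from kubernetes import client, config

# Load credentials from the local kubeconfig (adjust for in-cluster use)
config.load_kube_config()
core = client.CoreV1Api()

# Label the "default" namespace so istiod auto-injects the Envoy sidecar
# into every pod created there from now on.
core.patch_namespace(
    "default",
    {"metadata": {"labels": {"istio-injection": "enabled"}}},
)
&lt;/code&gt;&lt;/pre&gt;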

&lt;p&gt;&lt;strong&gt;Traffic Split feature&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Concisely, traffic split is very similar to a canary deployment. When you make changes to your application and release a new version, it may still contain bugs that could break the production application when deployed.&lt;br&gt;
Even if you have tested the new version thoroughly but are still unsure it works flawlessly, you can use traffic split to send only 10% of the application requests to the new version, while the old, stable version serves the remaining 90% of the requests.&lt;/p&gt;
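
&lt;p&gt;To make the 90/10 split concrete, here is a rough sketch (not a production configuration) of an Istio VirtualService created through the Kubernetes Python client. It assumes a service and a DestinationRule named my-app with subsets v1 and v2 already exist; those names are hypothetical.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# A VirtualService that keeps 90% of traffic on the stable v1 subset
# and sends 10% to the new v2 subset (hypothetical names).
virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "my-app"},
    "spec": {
        "hosts": ["my-app"],
        "http": [{
            "route": [
                {"destination": {"host": "my-app", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "my-app", "subset": "v2"}, "weight": 10},
            ],
        }],
    },
}

api.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)
&lt;/code&gt;&lt;/pre&gt;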

&lt;p&gt;&lt;strong&gt;Working of Istio&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Service mesh is just a pattern or paradigm, and Istio is one of its implementations. Istio uses Envoy proxies (which service mesh implementations generally use). Istio’s control plane component is istiod, which manages and injects the Envoy proxies into each microservice pod. In earlier versions of Istio (prior to v1.5), components like Pilot, Galley, Citadel and Mixer were separate; from v1.5 onwards they are embedded inside the istiod control plane.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some other features of Istio&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Configuration&lt;br&gt;
• Service discovery&lt;br&gt;
• Certificate management&lt;br&gt;
• Telemetry data collection&lt;/p&gt;

&lt;p&gt;Istio has an internal registry of services and their endpoints. Moreover, the endpoint configuration is dynamic, so new microservices register automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Whenever a user sends a request, it hits the Istio gateway, which evaluates the virtual service rules and routes the traffic to the microservice endpoint: the request is forwarded to the Envoy proxy, which then routes the traffic to the service, and the response follows the same path back. This is, at a high level, how traffic flows through Istio.&lt;/p&gt;

</description>
      <category>istio</category>
      <category>kubernetes</category>
      <category>servicemesh</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Data Lake</title>
      <dc:creator>murleedas</dc:creator>
      <pubDate>Wed, 02 Feb 2022 04:50:04 +0000</pubDate>
      <link>https://dev.to/murleedas/data-lake-hma</link>
      <guid>https://dev.to/murleedas/data-lake-hma</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is Data Lake?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A data lake is a place where all sorts of data are stored or ingested in order to perform any kind of data analytics, business intelligence, machine learning, data warehousing, and so on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why should we use Data lake?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For the past few years, data has been growing exponentially, and we receive humongous volumes from different sources like mobile devices and, increasingly, IoT. Most of that data is diverse and in open formats. Hence we use a data lake as a central catalog for managing this increasingly diverse data in different formats, such as tables, images and videos. That data can be ingested into the data lake and then managed and used for the different purposes stated earlier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are the different tools available for Data lake?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While we have powerful tools in the market for data analytics, like Power BI, Python, KNIME, Tableau and so on, we also have robust tools for data lakes, mainly in the cloud. A couple of the best data lake offerings in the cloud are AWS Lake Formation and Azure Data Lake Storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What AWS has to offer for Data lake?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The simplicity of setting up a data lake makes AWS a strong player in the cloud competition. AWS has S3, which is robust and resilient and can store even years-old data inexpensively. It includes many different storage tiers, which makes it more flexible and lets you use data more effectively. Also, as mentioned earlier, AWS offers the Lake Formation solution, which integrates well with services like S3, Redshift, Athena, Kinesis, EMR and so on.&lt;/p&gt;
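
&lt;p&gt;As a tiny, illustrative sketch of that S3 tiering (the bucket name and key layout below are made up for the example), ingesting a raw file into a data-lake bucket with boto3 and choosing a cheaper storage class for older data looks roughly like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

s3 = boto3.client("s3")

# Ingest a raw CSV extract into a hypothetical data-lake bucket,
# using an infrequent-access storage class to keep older data cheap.
s3.upload_file(
    "sales_2021.csv",                   # local file to ingest
    "my-company-data-lake-raw",         # placeholder bucket name
    "sales/year=2021/sales_2021.csv",   # partition-style key layout
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)
&lt;/code&gt;&lt;/pre&gt;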

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Data is everywhere nowadays, and the need to manage it effectively has become vital. It started with traditional databases, where we simply managed tables and focused mainly on business transactions. Then came the data warehousing concept, with which we ingested important business data for analysing and improving the business. In the modern day, as technology has moved to a different level, data processing for automation, machine learning and AI keeps growing, and so the demand for data lakes is increasing enormously.&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>dataanalytics</category>
      <category>datalake</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Containers</title>
      <dc:creator>murleedas</dc:creator>
      <pubDate>Wed, 02 Feb 2022 04:45:03 +0000</pubDate>
      <link>https://dev.to/murleedas/containers-3i93</link>
      <guid>https://dev.to/murleedas/containers-3i93</guid>
      <description>&lt;p&gt;&lt;strong&gt;Why Containerization?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this article, we will try to understand the need for containerization in simple terms. Unlike the traditional method of provisioning and managing resources, technology advancements have brought us to a situation where companies are moving towards, or are already, managing their infrastructure in different places, be it on premises or with cloud service providers like AWS, Azure, Google Cloud and so on. Multi-infrastructure management comes with some major challenges, such as migration from one place to another; we know the pain of migrating all our workloads from one infrastructure to the other. The use of containers makes it easy to build, ship and run anywhere, saving a lot of time and effort.&lt;/p&gt;

&lt;p&gt;Nowadays most companies are transforming their applications from a monolithic architecture to a microservices architecture. Just think of the cost involved in provisioning hardware or servers for each service in a microservices setup. Containers eliminate this challenge as well by running on a single server while remaining isolated from each other.&lt;br&gt;
Another major advantage of containers is using them along with CI/CD pipelines. Containerization in CI/CD pipelines saves us a lot of time and delivers high performance. Containers are lightweight, isolated, easily scalable, and provide seamless elasticity. One containerization tool that has been popular in the market for a long time is Docker.&lt;br&gt;
Why did Docker become so popular?&lt;/p&gt;

&lt;p&gt;Docker has attained popularity due to its simple, interactive CLI. It has its own DSL (domain-specific language), and its commands feel similar to Linux. It is not only easy for Linux users to learn but is also simple for new users. Docker is operated mainly through the command line, whether the host is a Linux VM or a Windows machine. Docker needs only the Docker engine to be available and works the same way across operating systems (with the same CLI), which makes it platform independent.&lt;/p&gt;
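
&lt;p&gt;For a quick feel of that simplicity, here is a minimal sketch using the Docker SDK for Python against a running Docker engine; the image name and port are placeholders for your own application.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import docker

# Talk to the local Docker engine through its API
client = docker.from_env()

# Build an image from a Dockerfile in the current directory
# and run it as an isolated, port-mapped container.
image, _ = client.images.build(path=".", tag="my-app:latest")
container = client.containers.run(
    "my-app:latest",
    detach=True,
    ports={"8080/tcp": 8080},
)
print(container.short_id, container.status)
&lt;/code&gt;&lt;/pre&gt;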

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Containers are everywhere; many of the services in Microsoft Azure nowadays run on containers. Although containers eliminate many of the challenges of VMs, in some cases VMs still stand out. Hence, the better approach is to use both effectively, considering the use case and requirements of your project or infrastructure, to deploy and manage your applications optimally.&lt;/p&gt;

</description>
      <category>containers</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>ELK Basics</title>
      <dc:creator>murleedas</dc:creator>
      <pubDate>Tue, 01 Feb 2022 14:08:23 +0000</pubDate>
      <link>https://dev.to/murleedas/elk-basics-2155</link>
      <guid>https://dev.to/murleedas/elk-basics-2155</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is Log analysis? Why we need it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this article, we are going to see an overview of the ELK stack: what exactly the ELK stack and its components are, and what log analysis means.&lt;/p&gt;

&lt;p&gt;Let us say you support hundreds of servers in production. Debugging becomes hard, especially when you are facing an issue and need to find out which server is the culprit. Narrowing down the actual error message among hundreds of servers is a really tedious task. To solve this problem, we have the ELK stack, which comprises Elasticsearch, Logstash and Kibana.&lt;/p&gt;

&lt;p&gt;Normally, when you have hundreds of web servers, you would like to see the web service logs, which give you specific information about those servers, but it is hard to log in to each server and check the logs. A centralized mechanism to visualize the logs is far more helpful. That is one scenario. The second scenario is performance analysis: assume your application is getting slow and you want to check what is going on; in that case, you need to analyse the logs.&lt;/p&gt;

&lt;p&gt;These are some cases where you need log analysis, and for such cases we have the ELK stack in place. Logs come in different formats. For example, consider an environment with different types of servers, such as database servers and application servers. Each kind of server produces a different type of log, and from these log sources we need to build a structured and meaningful report. We need a centralized place to look at all the logs in one shot.&lt;br&gt;
Efficiency is the key here. Therefore, the process involves collecting the data, cleaning it up, converting it into a structured format, and analysing it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Elasticsearch&lt;/strong&gt; is a search engine, or search server, and a NoSQL database that uses indexes to search, which is a very powerful mechanism for providing search functionality.&lt;/p&gt;
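
&lt;p&gt;As a hedged, minimal sketch of that search functionality (assuming the official Elasticsearch Python client, version 8.x, and a local node; the index name and log fields are made up for the example), indexing a log event and then searching it looks roughly like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from datetime import datetime, timezone
from elasticsearch import Elasticsearch

# Connect to a local Elasticsearch node; adjust the URL and auth for your setup.
es = Elasticsearch("http://localhost:9200")

# Index one log event into an example "web-logs" index.
es.index(index="web-logs", document={
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "host": "web-01",
    "level": "ERROR",
    "message": "upstream timed out while reading response header",
})

# Full-text search for the error message across all shipped logs.
result = es.search(index="web-logs", query={"match": {"message": "timed out"}})
for hit in result["hits"]["hits"]:
    print(hit["_source"]["host"], hit["_source"]["message"])
&lt;/code&gt;&lt;/pre&gt;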

&lt;p&gt;Then we have &lt;strong&gt;Kibana&lt;/strong&gt; to visualize the data stored in Elasticsearch. You can create and manage dashboards using Kibana and customise graphs according to your business requirements. It lets you visualize the health of the whole environment in one place.&lt;/p&gt;

&lt;p&gt;We have yet another component called &lt;strong&gt;Logstash&lt;/strong&gt; that allows you to ingest unstructured data from a variety of data sources, including system logs, website logs and application server logs. It also offers pre-built filters, so you can readily transform common data types, index them in Elasticsearch, and start querying without having to build custom data transformation pipelines. So briefly, the benefit of using Logstash is that you can easily load unstructured data from various sources and use the pre-built filters to do the transformations that are needed.&lt;/p&gt;

&lt;p&gt;In addition to that, we have one another component used in ELK stack that is &lt;strong&gt;Filebeat&lt;/strong&gt;. It is one of the best log file shippers out there today. It is lightweight, supports SSL and TLS encryption, supports back pressure with a good built-in recovery mechanism and it is extremely reliable.&lt;/p&gt;

&lt;p&gt;Filebeat is the log shipper most commonly used in live environments, and Logstash then aggregates the logs by pulling data from various sources before pushing it down the pipeline, usually into Elasticsearch.&lt;br&gt;
Therefore, we need to remember that Filebeat and Logstash are used in conjunction.&lt;/p&gt;

&lt;p&gt;So let us look at the typical pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aXpgZJQF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o4xu1qcw0zgi2tm2ied8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aXpgZJQF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o4xu1qcw0zgi2tm2ied8.jpg" alt="Image description" width="880" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the left side is Filebeat, which runs on each server and ships the logs to a specific location, which is Logstash. Filebeat and other sources ship data to Logstash, which processes and transforms it in the pipeline and forwards it to Elasticsearch. Elasticsearch receives the data from Logstash, indexes it for faster searching, and stores it in its NoSQL store. Once the data is indexed and stored in Elasticsearch, we use Kibana to visualize it in dashboards.&lt;/p&gt;

&lt;p&gt;Below is a sample Kibana dashboard and you can create your own dashboards similar to this as per your business requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--whEHFLxW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wb1upl28yzlckmp92je6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--whEHFLxW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wb1upl28yzlckmp92je6.jpg" alt="Image description" width="880" height="421"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Here you can see the dashboard for the logs of many servers, and the IP addresses that have issues, categorized in a pie chart.&lt;/p&gt;

</description>
      <category>elasticsearch</category>
      <category>elk</category>
      <category>elkstack</category>
      <category>loganalysis</category>
    </item>
  </channel>
</rss>
