<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: BPB Online</title>
    <description>The latest articles on DEV Community by BPB Online (@bpb_online).</description>
    <link>https://dev.to/bpb_online</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F84419%2F96454823-f121-4aa2-a137-474f7d39e76f.jpg</url>
      <title>DEV Community: BPB Online</title>
      <link>https://dev.to/bpb_online</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bpb_online"/>
    <language>en</language>
    <item>
      <title>Understanding Xamarin and Xamarin.Forms</title>
      <dc:creator>BPB Online</dc:creator>
      <pubDate>Sun, 21 Aug 2022 22:21:37 +0000</pubDate>
      <link>https://dev.to/bpb_online/understanding-xamarin-and-xamarinforms-bpp</link>
      <guid>https://dev.to/bpb_online/understanding-xamarin-and-xamarinforms-bpp</guid>
      <description>&lt;p&gt;&lt;a href="https://bpbonline.com/products/xamarin-with-visual-studio?_pos=1&amp;amp;_sid=cbe7972eb&amp;amp;_ss=r"&gt;Xamarin&lt;/a&gt; is a development technology that makes it possible for developers skilled in C# and the Microsoft stack to build applications for several operating systems.&lt;/p&gt;

&lt;p&gt;More specifically, Xamarin groups the following development platforms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Xamarin.Android, a set of &lt;a href="//bpbonline.com/search?q=.NET&amp;amp;type=product"&gt;.NET&lt;/a&gt; libraries and tools that allow you to run C# code on Android devices. This is possible because Xamarin.Android translates, behind the scenes, all the work done in C# into the Java equivalent and invokes the proper Java and Google tools to create the application binaries. As a consequence, Xamarin.Android requires the Java Software Development Kit (SDK) and tools like the Android SDK Manager and the Android Device Manager.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Xamarin.iOS, a set of .NET libraries and tools that allow you to run C# code on iPhone and iPad devices. Behind the scenes, Xamarin.iOS translates all the work done in C# into the Objective-C equivalent and invokes the Apple developer tools to create the application binaries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Xamarin.Mac, a set of .NET libraries and tools that allow you to run C# code on macOS machines, building desktop applications. Like Xamarin.iOS, it translates all the work done into the Objective-C equivalent and invokes the Apple developer tools to create the application binaries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Xamarin.Forms, one library that makes it possible to target multiple platforms from a single, shared codebase. Behind the scenes, Xamarin.Forms invokes both Xamarin.Android and Xamarin.iOS, and then these two platforms do the job of invoking the appropriate native tools to create the application binaries.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Xamarin.Android and Xamarin.iOS make it possible to access the native API of the operating systems they target, whereas Xamarin.Forms needs to pass through them. This is also why you will often hear developers talk about Xamarin native when referring to Xamarin.Android and Xamarin.iOS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages of Xamarin.Forms&lt;/strong&gt;&lt;br&gt;
Xamarin native platforms require you to know the operating systems' APIs in detail. Moreover, you will still need two different projects: one for Android and one for iOS. You will be able to share some logic between the two, but the entire user interface and all access to device features need separate work and effort.&lt;/p&gt;

&lt;p&gt;With Xamarin.Forms, you have the following advantages instead:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You do not really need to know the native API in detail (though it is always recommended) because you have code that is common to all the platforms.&lt;/li&gt;
&lt;li&gt;You can target multiple platforms from one codebase.&lt;/li&gt;
&lt;li&gt;Xamarin.Forms can also target the Universal Windows Platform from the same codebase, allowing you to make your code run on Windows 10 as well.&lt;/li&gt;
&lt;li&gt;You write, debug, and maintain code once, not twice, with a reduced effort.&lt;/li&gt;
&lt;li&gt;You will still publish two different applications, but this is also what you would do with Xamarin.Android and Xamarin.iOS. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hope this was helpful.&lt;/p&gt;

</description>
      <category>xamarin</category>
      <category>xamarinforms</category>
      <category>mobile</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AI vs ML/DL</title>
      <dc:creator>BPB Online</dc:creator>
      <pubDate>Sun, 21 Aug 2022 20:33:00 +0000</pubDate>
      <link>https://dev.to/bpb_online/ai-vs-mldl-1055</link>
      <guid>https://dev.to/bpb_online/ai-vs-mldl-1055</guid>
<description>&lt;p&gt;&lt;a href="//bpbonline.com/collections/artificial-intelligence"&gt;Artificial Intelligence&lt;/a&gt;, &lt;a href="https://bpbonline.com/collections/machine-learning"&gt;Machine Learning&lt;/a&gt;, and &lt;a href="https://bpbonline.com/collections/deep-learning"&gt;Deep Learning&lt;/a&gt; have always been confusing buzzwords, which are often used interchangeably. It is important to study the various branches within AI, as this will help us choose the right framework to solve a real-world problem. Deep Learning and ML are subfields of AI.&lt;/p&gt;

&lt;p&gt;Let us understand and differentiate these concepts under this topic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI&lt;/strong&gt;: AI is the big picture and an umbrella term for building machines that can accomplish tasks requiring human intelligence. AI does not imply learning. AI falls into one of the three stages, which we have discussed in the preceding topics about AI concepts and types of AI based on their capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Machine Learning&lt;/strong&gt;: This field is a subset of AI which deals with making machines learn from past data without being explicitly programmed. But how can machines learn? A machine can learn just the way humans do. Humans learn through communication, past experiences, analyzing the situation, or decision-making.&lt;/p&gt;

&lt;p&gt;A machine can learn the same way with the help of data and algorithms. The algorithm finds the hidden patterns in the data and helps us make future predictions or infer knowledge from the data. The more data you give the model, the more it improves, leading to better accuracy. ML automates repetitive learning. ML is broadly categorized into three types:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Supervised learning&lt;/li&gt;
&lt;li&gt;Unsupervised learning&lt;/li&gt;
&lt;li&gt;Reinforcement learning&lt;/li&gt;
&lt;/ol&gt;
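&lt;p&gt;The idea of learning from labelled past data, rather than from hand-written rules, can be sketched in a few lines of Python. This toy nearest-neighbour classifier is only an illustration: the fruit data and the predict helper are invented for this example and are not tied to any particular ML library.&lt;/p&gt;

```python
# Minimal sketch of supervised learning: a 1-nearest-neighbour classifier
# that "learns" from labelled examples instead of explicit rules.
# Illustrative only; real systems use libraries such as scikit-learn.

def predict(training_data, sample):
    """Return the label of the training point closest to `sample`."""
    def distance(point):
        features, _label = point
        return sum((f - s) ** 2 for f, s in zip(features, sample))
    _features, label = min(training_data, key=distance)
    return label

# Labelled past data: (features, label). Features are (weight_g, diameter_cm).
fruits = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((120, 6.0), "orange"),
    ((110, 5.8), "orange"),
]

print(predict(fruits, (160, 7.2)))  # closest examples are apples
```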

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VUuzrVzi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ncjqn1dwmw6hb5ml04n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VUuzrVzi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ncjqn1dwmw6hb5ml04n.png" alt="Image description" width="639" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deep Learning (DL)&lt;/strong&gt;: It is a subset of ML which mimics the neurons of the human brain while processing data, for tasks such as object recognition, language translation, decision-making, and so on. Geoffrey Hinton, with his fellow researchers, triggered the success of Deep Learning. Just as neurons are the basic unit of the nervous system in the human brain, DL uses neural network architectures to solve the given problem without human intervention. The input data passes through multiple layers, which progressively classify the information. Unlike ML, it requires a huge amount of data for learning, and it solves complex machine learning problems. An example is a self-driving car that uses Deep Learning to detect obstacles while driving.&lt;/p&gt;
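&lt;p&gt;As a rough illustration of data passing through multiple layers, here is a hand-wired forward pass through two tiny sigmoid layers in Python. The weights and inputs are made up for the example; a real network would learn its weights from large amounts of data.&lt;/p&gt;

```python
import math

# Minimal sketch of the core DL idea: inputs flowing through layers of
# neurons. Weights are fixed here for illustration; real networks learn
# them via backpropagation.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: each neuron is a weighted sum passed through sigmoid."""
    return [
        sigmoid(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]

x = [0.5, -1.2]                                        # input features
h = layer(x, [[0.8, -0.4], [0.3, 0.9]], [0.1, -0.2])   # hidden layer
y = layer(h, [[1.5, -1.1]], [0.0])                     # output layer
print(y)  # a single activation between 0 and 1
```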

&lt;p&gt;Hope this was helpful.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Securing Internet of Things using a Blockchain</title>
      <dc:creator>BPB Online</dc:creator>
      <pubDate>Sun, 21 Aug 2022 20:19:09 +0000</pubDate>
      <link>https://dev.to/bpb_online/securing-internet-of-things-using-a-blockchain-53e2</link>
      <guid>https://dev.to/bpb_online/securing-internet-of-things-using-a-blockchain-53e2</guid>
<description>&lt;p&gt;IoT frameworks create, store, and process information and send this data over the Internet, producing large volumes of data to be used by different providers. Alongside these central benefits, critical issues related to security can emerge. Blockchain is expected to play an essential role in the coming wave of decentralized applications that will run on billions of devices. Understanding how and when this technology can be used to provide security and privacy is a challenge, and several scholars highlight these issues. The authors examined the suitability of combining blockchain and IoT specifically with respect to the following issues.&lt;/p&gt;

&lt;p&gt;A few of the characteristics of IoT devices using cloud-based applications are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Typical IoT devices have only negligible capabilities.&lt;/li&gt;
&lt;li&gt;Transaction costs can constrain interactions.&lt;/li&gt;
&lt;li&gt;Sleepy (intermittently connected) IoT endpoints are also common.&lt;/li&gt;
&lt;li&gt;IoT-generated data should be kept confidential.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hence, it is important to examine whether all developments can be applied appropriately. &lt;/p&gt;

&lt;p&gt;In this regard, the literature addresses the following:&lt;br&gt;
(i) A cost-effective blockchain suitable for low-capacity devices&lt;br&gt;
(ii) Micropayments between data-producing sensors&lt;br&gt;
(iii) Computing on, and extracting information from, sensitive data&lt;br&gt;
(iv) Integration into smart homes, smart cities, or shared economies.&lt;/p&gt;

&lt;p&gt;In the current world, the IoT comprises more than five billion interconnected nodes and devices across the globe, with every device generating large amounts of data that leave footprints on the Internet.&lt;/p&gt;

&lt;p&gt;In fact, the most challenging task of IoT is the distributed framework that is required to ensure resilience against attacks like DDoS, malware attacks, and so on. Similarly, one of the other major challenges often exposed in the field of IoT is with respect to efficient integration of data because the flow of data is essentially from numerous devices, sensors, and gadgets that have been interconnected in a particular network. This is exactly where a blockchain can come to the rescue and prove its worth because the main objective of this technology is to ensure trust and reliable services which are safe and secure.&lt;/p&gt;
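&lt;p&gt;A minimal Python sketch can show why a chain of hashed blocks gives tamper-evident storage for IoT data. The sensor payloads and helper functions here are invented for illustration; a real blockchain also needs consensus, networking, and key management.&lt;/p&gt;

```python
import hashlib
import json
import time

# Each block stores the hash of the previous block, so editing any recorded
# sensor reading breaks every later link in the chain.

def make_block(prev_hash, payload):
    block = {"prev": prev_hash, "ts": time.time(), "data": payload}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain):
    """Check that every block links to the hash of its predecessor."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != prev["hash"]:
            return False
    return True

genesis = make_block("0" * 64, {"sensor": "boot"})
b1 = make_block(genesis["hash"], {"sensor": "temp", "value": 21.5})
b2 = make_block(b1["hash"], {"sensor": "temp", "value": 21.7})
chain = [genesis, b1, b2]
print(chain_is_valid(chain))   # True

b1["data"]["value"] = 99.9     # an attacker edits a reading...
b1["hash"] = "forged"          # ...and even re-stamps the hash
print(chain_is_valid(chain))   # False: b2 no longer links to b1
```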

&lt;p&gt;Hope this was helpful.&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>iot</category>
      <category>internetofthing</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How does DAG work in Spark?</title>
      <dc:creator>BPB Online</dc:creator>
      <pubDate>Sun, 21 Aug 2022 18:43:00 +0000</pubDate>
      <link>https://dev.to/bpb_online/how-does-dag-work-in-spark-4cm9</link>
      <guid>https://dev.to/bpb_online/how-does-dag-work-in-spark-4cm9</guid>
<description>&lt;p&gt;A Directed Acyclic Graph (DAG) is a finite directed graph with no directed cycles: there are finitely many vertices and edges, and each edge is directed from one vertex to another. A DAG therefore contains a finite set of vertices and edges arranged in a sequence, with every edge directed from an earlier vertex to a later one in that sequence.&lt;/p&gt;

&lt;p&gt;This is why it is great for generating multistage scheduling layers that implement stage-based scheduling. As the number of stages can be more than two (though still finite), it is more flexible and better optimized than the older Map and Reduce programs, which run in two stages only: Map and Reduce.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Working of the DAG Scheduler&lt;/strong&gt;&lt;br&gt;
The interpreter is the first layer: using a Scala interpreter, Spark interprets your code with some modifications and creates an operator graph when you enter code in the Spark console. At a high level, when you call an action on a Spark RDD, Spark submits the operator graph to the DAG scheduler.&lt;/p&gt;

&lt;p&gt;The DAG scheduler divides the operators into stages of tasks. A stage contains tasks based on the partitions of the input data. The DAG scheduler pipelines operators together; for example, several map operators can be scheduled in a single stage. The stages are then passed on to the task scheduler, which launches the tasks through the cluster manager. The task scheduler does not know about the dependencies between stages.&lt;/p&gt;
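&lt;p&gt;The stage ordering itself is just a topological sort of the DAG. The following sketch is not Spark code: it uses Python's standard-library graphlib, and the stage names are made up, simply to show how an acyclic dependency graph yields a valid execution order.&lt;/p&gt;

```python
from graphlib import TopologicalSorter

# Each key is a stage; its list holds the stages it depends on.
# A topological order runs every stage after all of its dependencies,
# which is only possible because the graph has no cycles.
stages = {
    "stage_read":    [],                 # load input partitions
    "stage_map":     ["stage_read"],     # pipelined map operators
    "stage_shuffle": ["stage_map"],      # wide dependency breaks the stage
    "stage_reduce":  ["stage_shuffle"],
}

order = list(TopologicalSorter(stages).static_order())
print(order)
```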

&lt;p&gt;The workers execute the tasks on the slave nodes. The following diagram briefly describes the steps of how a DAG works in Spark job execution:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wxT6SDIz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lbx5qa0i177hjtkavbbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wxT6SDIz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lbx5qa0i177hjtkavbbg.png" alt="Image description" width="647" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Driver program:&lt;/strong&gt; The Apache Spark engine calls the main program of an application and creates the Spark Context. The Spark Context provides all the basic functionalities, and RDDs are created in the Spark Context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spark Driver:&lt;/strong&gt; It contains various other components, such as DAG scheduler, task scheduler, backend scheduler, and block manager, which are responsible for translating the user-written code into jobs that are actually executed on the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster manager:&lt;/strong&gt; Cluster manager does the resource allocating work.&lt;/p&gt;

&lt;p&gt;Hope this was helpful.&lt;/p&gt;

</description>
      <category>spark</category>
      <category>computerscience</category>
      <category>mathematics</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Best Practices for Successful Data Quality</title>
      <dc:creator>BPB Online</dc:creator>
      <pubDate>Sun, 19 Jun 2022 21:00:00 +0000</pubDate>
      <link>https://dev.to/bpb_online/best-practices-for-successful-data-quality-dif</link>
      <guid>https://dev.to/bpb_online/best-practices-for-successful-data-quality-dif</guid>
      <description>&lt;p&gt;According to the Data Warehouse Institute, "Data Quality problems cost US businesses more than 600 billion US dollars a year." And this is where data quality best practices come into play; they help to minimize the negative impact of poor data quality. Data quality best practices help in maximizing data quality by ensuring that the data is maintained in such a way that it helps organizations to meet their goals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Profiling:&lt;/strong&gt; Also known as data assessment or data discovery, profiling is a set of methods and tools used for collecting statistics and findings related to data quality. Profiling tools collect these statistics by assessing various aspects of data, such as structure, content, and so on. With their help, organizations can pinpoint problems and challenges related to data quality. There are several profiling options, such as pattern analysis, range analysis, and completeness analysis, that help to improve data quality.&lt;/p&gt;
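&lt;p&gt;As a small illustration of one profiling option, completeness analysis, the following Python sketch measures the share of records with a usable value per field. The customer records are invented for the example.&lt;/p&gt;

```python
# Completeness analysis: for each field, what fraction of records
# actually carries a non-missing value?

def completeness(records, fields):
    total = len(records)
    report = {}
    for field in fields:
        filled = sum(1 for r in records if r.get(field) not in (None, ""))
        report[field] = filled / total
    return report

customers = [
    {"id": 1, "email": "a@example.com", "phone": "555-0100"},
    {"id": 2, "email": "",              "phone": "555-0101"},
    {"id": 3, "email": "c@example.com", "phone": None},
    {"id": 4, "email": "d@example.com", "phone": "555-0103"},
]

print(completeness(customers, ["id", "email", "phone"]))
# id is 100% complete; email and phone are each 75% complete
```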

&lt;p&gt;&lt;strong&gt;Buy-in:&lt;/strong&gt; Getting approval and buy-in from all stakeholders is a must. Making the aspect of data quality an integral part of the corporate culture is needed to ensure each and everyone involved in the process is accountable for doing their part perfectly, and each and everyone will be equally responsible for data hygiene and quality successes and failures. Such a strategy will prevent stakeholders from playing games such as finger-pointing, passing the buck, and so on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Stewards:&lt;/strong&gt; A data Steward's main aim is to preserve data quality and integrity. Usually, data stewards are assigned to data sets that they maintain in terms of quality and integrity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compliance:&lt;/strong&gt; Using data quality monitoring tools and auditing processes can help companies meet not only compliance standards and mandates but also ensure data quality standards and safeguards against potential data leaks are put in place. Frequent, incremental audits are critical to capture data quality anomalies in a timely manner. They help to pinpoint inconsistency, incompleteness, inaccuracy, and so on in the datasets in a timely manner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Eliminating duplicate data:&lt;/strong&gt; Duplicate data identification tools need to be used to bring about better consistency and accuracy of data. The concept of master data is very important in minimizing the duplication of data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Metrics:&lt;/strong&gt; There need to be clear and comprehensive metrics for evaluating the whole data quality paradigm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Governance:&lt;/strong&gt; Data governance is a framework of policies, guidelines, and standards that needs to be made a part of the corporate culture to establish data quality standards as an integral part of the workplace DNA.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Training and certifications:&lt;/strong&gt; These aspects are important to understand the deeper dynamics of data quality in terms of tools, processes, techniques, principles, and practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud computing:&lt;/strong&gt; It helps to integrate multiple data streams in a seamless manner, which means fewer errors in the data. By adopting a cloud-native solution and moving all data quality tools into it, it becomes easier for organizations to adopt these tools and implement centralized reusable rules management and preloaded templates across all data sources. These tools help in building integrated data pipelines so that patterns, insights, trends, and so on can be obtained from the cloud itself. Newer hybrid cloud technologies such as cloud containers, data warehouses, and so on pinpoint, correct, and monitor data quality problems in an efficient and effective way, thereby introducing better data quality standards and practices.&lt;/p&gt;

&lt;p&gt;Did you know - According to Gartner's 2016 data quality market survey, the impact of poor quality data on the average annual financial costs of organizations worldwide increased by 10% in 2016. It rose from 8.8 million US dollars in 2015 to 9.8 million US dollars in 2016.&lt;/p&gt;

&lt;p&gt;Hope this was helpful.&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>beginners</category>
      <category>bigdata</category>
    </item>
    <item>
      <title>Popular ETL Tools Available In The Market</title>
      <dc:creator>BPB Online</dc:creator>
      <pubDate>Fri, 17 Jun 2022 21:51:18 +0000</pubDate>
      <link>https://dev.to/bpb_online/popular-etl-tools-available-in-the-market-58lh</link>
      <guid>https://dev.to/bpb_online/popular-etl-tools-available-in-the-market-58lh</guid>
<description>&lt;p&gt;A &lt;a href="//bpbonline.com/collections/data-mining-warehousing"&gt;Data Warehouse (DW)&lt;/a&gt; is a database that collects data and information from different sources. It is subject-oriented and stores large amounts of data. It also stores a series of snapshots of an organization’s operational data generated over a period of time. The DW represents the flow of data through time. Data is periodically uploaded, and then the time-dependent data is recomputed.&lt;/p&gt;

&lt;p&gt;&lt;a href="//bpbonline.com/collections/business-intelligence"&gt;Business Intelligence&lt;/a&gt; and Data Warehouse solutions and tools help to improve the availability of data and information. They significantly simplify the data analytics architecture. They also provide data security and business continuity. These tools and solutions impact the business bottom-line positively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ETL and other tools available in the market&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most organizations have their data stored in a variety of locations, right from in-house databases to external sources like cloud storage services, BI tools, and so on, and they do not want to construct and maintain a separate data pipeline; hence, they use ETL tools. Extract, transform, and load (ETL) is a data warehousing process that extracts and blends raw data from various sources, then transforms the data and eventually loads into a DW. One of the major aims of ETL is to reduce data complexity. &lt;/p&gt;
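&lt;p&gt;A toy extract-transform-load pipeline in Python might look as follows. The source rows and field names are invented, and an in-memory SQLite database stands in for the warehouse.&lt;/p&gt;

```python
import sqlite3

# Minimal ETL sketch: extract raw rows, transform them into a clean,
# consistent shape, and load them into a warehouse table.

def extract():
    # Raw data from imagined sources, in inconsistent shapes.
    return [
        {"name": "  Alice ", "amount": "120.50", "currency": "usd"},
        {"name": "Bob", "amount": "80", "currency": "USD"},
    ]

def transform(rows):
    # Blend and clean: trim names, cast amounts, normalise currency codes.
    return [
        (r["name"].strip(), float(r["amount"]), r["currency"].upper())
        for r in rows
    ]

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")   # stand-in for the DW
load(transform(extract()), conn)
print(conn.execute("SELECT name, amount, currency FROM sales ORDER BY name").fetchall())
# [('Alice', 120.5, 'USD'), ('Bob', 80.0, 'USD')]
```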

&lt;p&gt;Some of the popular ETL tools are as follows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Oracle data integrator (ODI):&lt;/strong&gt; ODI has great ETL capabilities that also leverage the advantages of the database. But still, it does not provide the full spectrum of ETL features. Oracle does have features that can support other ETL tools and solutions. ODI works in tandem with Oracle Warehouse Builder to handle the entire DW business workflow dynamics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skyvia:&lt;/strong&gt; It is an all-in-one, easy-to-use cloud data platform. It facilitates easy data integration, management, and visualization. Its data integration module allows organizations to integrate data with other cloud applications and databases like Amazon Redshift, Salesforce, Zendesk, Shopify, and so on, without the need for coding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Voracity:&lt;/strong&gt; It is a fast, affordable, all-in-one data management platform. Apart from ETL, it has other robust features like data governance and analytics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Xplenty:&lt;/strong&gt; It is an easy-to-use, point-and-click platform. It has powerful transformation tools to transform data into an analysis-friendly form. It has excellent customer support features. It has a user-friendly graphical user interface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CloverDX:&lt;/strong&gt; It is a fully customizable robust, lightweight, and flexible data platform. While other platforms offer low code or no code features, CloverDX goes one step further by allowing any aspect of the platform to be customizable, and this is possible using a simple built-in scripting language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Redshift:&lt;/strong&gt; It is a secure, scalable, and cost-effective cloud-based DW solution that is part of the Amazon Web Services cloud computing platform. It can handle large-scale data storage, data migration, and so on; it is all about Big Data. It uses a massively parallel processing (MPP) architecture, which loads data at super-fast speed.&lt;/p&gt;

&lt;p&gt;Hope this was helpful.&lt;/p&gt;

</description>
      <category>datawarehouse</category>
      <category>datascience</category>
      <category>bigdata</category>
      <category>database</category>
    </item>
    <item>
      <title>Key challenges that MLOps addresses</title>
      <dc:creator>BPB Online</dc:creator>
      <pubDate>Sat, 11 Jun 2022 22:12:37 +0000</pubDate>
      <link>https://dev.to/bpb_online/key-challenges-that-mlops-addresses-5ad7</link>
      <guid>https://dev.to/bpb_online/key-challenges-that-mlops-addresses-5ad7</guid>
<description>&lt;p&gt;MLOps is an engineering discipline that aims to unify &lt;a href="//bpbonline.com/collections/machine-learning"&gt;Machine Learning&lt;/a&gt; systems development (dev) and ML systems deployment (ops) in order to standardize and streamline the continuous delivery of high-performing models in production.&lt;/p&gt;

&lt;p&gt;Today, we embed decision automation in a wide range of applications, which generates many technical challenges in building and deploying ML-based systems.&lt;/p&gt;

&lt;p&gt;The ML systems lifecycle involves different teams of a data-driven organization.&lt;/p&gt;

&lt;p&gt;From start to finish, the following teams are involved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Business development or product team—defining business objectives with KPIs&lt;/li&gt;
&lt;li&gt;Data engineering — data acquisition and preparation&lt;/li&gt;
&lt;li&gt;
&lt;a href="//bpbonline.com/collections/data-science"&gt;Data science&lt;/a&gt; — architecting ML solutions and developing models&lt;/li&gt;
&lt;li&gt;IT or &lt;a href="//bpbonline.com/search?q=DevOps&amp;amp;type=product"&gt;DevOps&lt;/a&gt;—complete deployment setup, monitoring alongside scientists&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are several bottlenecks to be taken care of, and it is not an easy task to manage such systems at scale. The following are the key challenges these teams face:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There is a shortage of Data Scientists who are good at developing and deploying scalable web apps. The profile of ML Engineers aims to serve this need. It is at the intersection of Data Science and DevOps.&lt;/li&gt;
&lt;li&gt;Reflecting changing business objectives in the model: with the data continuously changing, performance standards to maintain, and AI governance to ensure, there are many dependencies. It is hard to keep up with continuous model training and evolving business objectives.&lt;/li&gt;
&lt;li&gt;The communication gap between technical and business teams, with a hard-to-find common language for collaboration. This gap is most often the reason several projects fail.&lt;/li&gt;
&lt;li&gt;Risk assessment: there is a lot of debate concerning the black-box nature of ML systems. Models often tend to drift away from what they were initially intended to do. Assessing the risk and cost of such failures is a very important and meticulous step.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hope this was helpful.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Comparing DevOps and MLOps</title>
      <dc:creator>BPB Online</dc:creator>
      <pubDate>Sat, 11 Jun 2022 22:05:49 +0000</pubDate>
      <link>https://dev.to/bpb_online/comparing-devops-and-mlops-396a</link>
      <guid>https://dev.to/bpb_online/comparing-devops-and-mlops-396a</guid>
<description>&lt;p&gt;A popular practice in developing and operating large-scale software systems is &lt;a href="//bpbonline.com/search?q=DevOps&amp;amp;type=product"&gt;DevOps&lt;/a&gt;, which provides benefits such as shortened development cycles, increased deployment velocity, and dependable releases.&lt;/p&gt;

&lt;p&gt;Because a &lt;a href="//bpbonline.com/collections/machine-learning"&gt;Machine Learning&lt;/a&gt; system is a software system, similar practices apply and help guarantee that you can reliably build and operate ML systems at scale.&lt;/p&gt;

&lt;p&gt;The following, however, are the ways in which ML systems differ from other software systems:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skills to work in a team:&lt;/strong&gt; In an ML project, the team, which includes data scientists or ML researchers, focuses on exploratory data analysis, model development, and experimentation, and these team members cannot build production-class services because they are not experienced software engineers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application development:&lt;/strong&gt; Because ML is experimental in nature, you should attempt as many features, algorithms, modeling methodologies, and parameter settings as possible to identify what works best for the problem. The challenge is keeping track of what worked and what did not while maximizing code reusability and maintaining reproducibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application testing:&lt;/strong&gt; Compared with testing other software systems, an ML system is more involved. You need data validation, trained model quality evaluation, and model validation in addition to typical unit and integration tests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application deployment:&lt;/strong&gt; In ML systems, deployment is not as simple as deploying an offline-trained ML model as a prediction service. You may need a multi-step pipeline to automatically retrain and deploy the model. This pipeline adds complexity because it requires you to automate steps that data scientists performed manually, before deployment, to train and validate new models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application production:&lt;/strong&gt; ML models may lose performance due to constantly evolving data profiles as well as suboptimal coding. The models can decay in more ways than conventional software systems can, and you need to account for this degradation. Therefore, you need to track summary statistics of your data and monitor the online performance of your model so that you can send notifications or roll back when values deviate from your expectations.&lt;/p&gt;
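&lt;p&gt;Tracking summary statistics can be as simple as comparing the mean of recent live data against the training baseline. This Python sketch is illustrative only: the three-sigma threshold and the sample values are arbitrary choices, not a recommendation.&lt;/p&gt;

```python
import statistics

# Flag drift when the mean of live values deviates from the training
# baseline by more than n standard errors.

def drift_alert(baseline, live_values, n_sigmas=3.0):
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    live_mean = statistics.mean(live_values)
    return abs(live_mean - mean) > n_sigmas * sigma / len(live_values) ** 0.5

training_ages = [34, 29, 41, 38, 30, 36, 33, 40, 31, 37]
print(drift_alert(training_ages, [35, 32, 38, 36]))   # similar profile
print(drift_alert(training_ages, [61, 58, 64, 66]))   # data has drifted
```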

&lt;p&gt;ML and other software systems are similar in the continuous integration of source control, unit testing, integration testing, and continuous delivery of the software module or package. In ML, however, there are a few notable differences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CI is not only about testing and validating code and components but also about testing and validating data, data schemas, and models.&lt;/li&gt;
&lt;li&gt;CD is not only about a single software package or a service but also a system (an ML training pipeline) that should automatically deploy another service (model prediction service).&lt;/li&gt;
&lt;li&gt;CT is concerned with automatically retraining and serving the models, and it is a new property that is unique to ML systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hope this was helpful.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>machinelearning</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Key Principles of a DataOps Ecosystem</title>
      <dc:creator>BPB Online</dc:creator>
      <pubDate>Sat, 11 Jun 2022 21:52:54 +0000</pubDate>
      <link>https://dev.to/bpb_online/principles-of-dataops-2feb</link>
      <guid>https://dev.to/bpb_online/principles-of-dataops-2feb</guid>
      <description>&lt;p&gt;DataOps is a set of practices, procedures, and technologies that combines an integrated and process-oriented approach to data with automation and agile software engineering approaches to increase quality, speed, and collaboration while also encouraging a culture of continuous improvement.&lt;/p&gt;

&lt;p&gt;In short, DataOps (Data + Operations) is a set of methods that adds speed and agility to end-to-end data pipeline processes, from collection to delivery.&lt;/p&gt;

&lt;p&gt;Let's look at the principles of DataOps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Satisfy your customer continually:&lt;/strong&gt; Your top goal should be the early and continual release of useful analytic insights, on timescales from a few minutes to weeks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Value working analytics:&lt;/strong&gt; The primary metric of data analytics performance is the degree to which insightful analytics are delivered, combining accurate data atop robust frameworks and systems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Embrace change:&lt;/strong&gt; You welcome, and in fact embrace, changing client needs in order to gain a competitive advantage. You believe that face-to-face communication with clients is the most efficient, effective, and agile mode of communication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It is a team sport:&lt;/strong&gt; The roles, abilities, favorite tools, and titles on analytic teams will always be diverse.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Daily interactions:&lt;/strong&gt; Customers, analytic teams, and operations must work together daily throughout the project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Self-organize:&lt;/strong&gt; Your belief is that the best analytic insight, algorithms, architectures, requirements, and designs emerge from self-organizing teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduce heroism:&lt;/strong&gt; As the need for analytic insights grows in speed and breadth, you believe that analytic teams should seek to build sustainable and scalable data analytic teams and procedures.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reflect:&lt;/strong&gt; Analytic teams can fine-tune their operational performance by self-reflecting on feedback from clients, themselves, and operational information at regular intervals.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Analytics is code:&lt;/strong&gt; To acquire, integrate, model, and show data, the analytic teams employ a number of individual technologies. Each of these tools, which essentially generates code and configuration, describes the activities conducted on data to offer insight.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Orchestrate:&lt;/strong&gt; The beginning-to-end orchestration of data, tools, code, environments, and the analytic team’s work is a key driver of analytic success.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Make it reproducible:&lt;/strong&gt; You version everything: data, low-level hardware and software configurations, and the code and configuration unique to each tool in the toolchain because reproducible results are essential.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disposable environments:&lt;/strong&gt; You feel it is critical to reduce the cost of experimentation for analytic team members by providing them with easy-to-create, isolated, safe, and disposable technical environments that mirror the production environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplicity:&lt;/strong&gt; Continuous attention to technical excellence and good design improves agility; simplicity, the art of maximizing the amount of work not done, is equally important.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Analytics is manufacturing:&lt;/strong&gt; Analytic pipelines are similar to lean manufacturing lines. DataOps is defined by an emphasis on process-thinking in order to achieve continual efficiencies in the production of analytic insight.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Quality is paramount:&lt;/strong&gt; Analytic pipelines should be built on a foundation capable of automatically detecting abnormalities and security issues in code, configuration, and data, and should provide continuous feedback to operators for error avoidance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Quality and performance monitoring:&lt;/strong&gt; The purpose is to monitor performance, security, and quality indicators continuously in order to detect unanticipated variation and generate operational statistics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reuse:&lt;/strong&gt; Avoiding the repetition of previous work by an individual or team is a core feature of efficient analytic insight manufacturing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improve cycle times:&lt;/strong&gt; You should strive to reduce the time and effort it takes to turn a customer need into an analytic idea, develop it, deploy it as a repeatable production process, and then refactor and reuse it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hope this was helpful.&lt;/p&gt;

</description>
      <category>dataops</category>
      <category>devops</category>
      <category>agile</category>
      <category>beginners</category>
    </item>
    <item>
      <title>The need for Kubernetes and what it does</title>
      <dc:creator>BPB Online</dc:creator>
      <pubDate>Sat, 11 Jun 2022 21:38:49 +0000</pubDate>
      <link>https://dev.to/bpb_online/the-need-for-kubernetes-and-what-it-does-5een</link>
      <guid>https://dev.to/bpb_online/the-need-for-kubernetes-and-what-it-does-5een</guid>
      <description>&lt;p&gt;Containers are a good way to bundle and run your applications and to ensure that there is no downtime in a production environment, you need to manage the containers that run the applications. As an example, if a container goes down, another container needs to start. The question is, would it not be easier if a system handles this behavior?&lt;/p&gt;

&lt;p&gt;Here, &lt;a href="//bpbonline.com/search?q=Kubernetes&amp;amp;type=product"&gt;Kubernetes&lt;/a&gt; comes to the rescue! Kubernetes provides you with a framework that takes care of scaling and failover for your application and offers deployment patterns for running distributed systems resiliently. For example, Kubernetes can easily manage a canary deployment for your system.&lt;/p&gt;

&lt;p&gt;With Kubernetes you are provided with:&lt;br&gt;
&lt;strong&gt;Service discovery and load balancing&lt;/strong&gt;&lt;br&gt;
Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.&lt;/p&gt;
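
&lt;p&gt;For illustration, a Deployment can be exposed behind a load-balancing Service with a single command (the deployment name and ports here are hypothetical):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# Expose the my-app Deployment as a LoadBalancer Service&lt;/code&gt;&lt;br&gt;
&lt;code&gt;$ kubectl expose deployment my-app --port=80 --target-port=8080 --type=LoadBalancer&lt;/code&gt;&lt;/p&gt;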

&lt;p&gt;&lt;strong&gt;Storage Orchestration&lt;/strong&gt;&lt;br&gt;
Kubernetes allows you to automatically mount a storage system of your choice, such as local storage and public cloud providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated rollouts and rollbacks&lt;/strong&gt;&lt;br&gt;
You can describe the desired state for your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources into the new containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automatic bin packing&lt;/strong&gt;&lt;br&gt;
You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs, and Kubernetes fits containers onto your nodes to make the best use of your resources.&lt;/p&gt;
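
&lt;p&gt;As a sketch, CPU and memory requests can be set on an existing workload from the command line (the deployment name and the values are hypothetical):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# Request 250 millicores of CPU and 64 MiB of memory per container&lt;/code&gt;&lt;br&gt;
&lt;code&gt;$ kubectl set resources deployment my-app --requests=cpu=250m,memory=64Mi&lt;/code&gt;&lt;/p&gt;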

&lt;p&gt;&lt;strong&gt;Self-healing&lt;/strong&gt;&lt;br&gt;
Kubernetes restarts containers that fail, replaces containers, and kills containers that do not respond to your user-defined health check, and it does not advertise them to clients until they are ready to serve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secret and configuration management&lt;/strong&gt;&lt;br&gt;
Kubernetes can be used to store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images and without exposing secrets in your stack configuration.&lt;/p&gt;
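
&lt;p&gt;For example, a Secret can be created from literal values without baking them into an image (the names and values here are purely illustrative):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# Store database credentials as a Kubernetes Secret&lt;/code&gt;&lt;br&gt;
&lt;code&gt;$ kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=S3curePass&lt;/code&gt;&lt;/p&gt;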

&lt;p&gt;Hope this was helpful.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>softwaredevelopment</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Popular Tools For DevOps</title>
      <dc:creator>BPB Online</dc:creator>
      <pubDate>Sat, 11 Jun 2022 18:46:05 +0000</pubDate>
      <link>https://dev.to/bpb_online/popular-tools-for-devops-2hfd</link>
      <guid>https://dev.to/bpb_online/popular-tools-for-devops-2hfd</guid>
      <description>&lt;p&gt;&lt;a href="//bpbonline.com/search?q=DevOps&amp;amp;type=product"&gt;DevOps&lt;/a&gt;, which is a software development strategy, bridges the gap between the developers and operations team or teams responsible for the deployment, using which organizations can release small features very quickly and incorporate the feedback that they receive very quickly into their application. To achieve automation at various stages are Puppet, Jenkins, GIT, Chef, Docker, Selenium, and &lt;a href="//bpbonline.com/search?q=AWS&amp;amp;type=product"&gt;AWS&lt;/a&gt;, which help in achieving continuous development, continuous integration, continuous testing, continuous deployment, and continuous monitoring, which expedites and actualizes the DevOps process apart from culturally accepting it to deliver quality software to the customer at a very fast pace.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Git and GitHub enable version control for source code management&lt;/li&gt;
&lt;li&gt;Jenkins enables server automation and plug-ins for building CI/CD pipelines&lt;/li&gt;
&lt;li&gt;
&lt;a href="//bpbonline.com/search?q=Selenium&amp;amp;type=product"&gt;Selenium&lt;/a&gt; enables automated testing&lt;/li&gt;
&lt;li&gt;
&lt;a href="//bpbonline.com/search?q=Docker&amp;amp;type=product"&gt;Docker&lt;/a&gt; is the platform for software containerization&lt;/li&gt;
&lt;li&gt;
&lt;a href="//bpbonline.com/search?q=Kubernetes&amp;amp;type=product"&gt;Kubernetes&lt;/a&gt; is the tool for container Orchestration&lt;/li&gt;
&lt;li&gt;Puppet enables configuration management and deployment used for deploying, configuring, and managing services.&lt;/li&gt;
&lt;li&gt;Chef enables configuration management and deployment, handling more complex infrastructure with less effort; it is an open-source framework supporting Linux, Unix variants, and Windows.&lt;/li&gt;
&lt;li&gt;
&lt;a href="//bpbonline.com/search?q=Ansible&amp;amp;type=product"&gt;Ansible&lt;/a&gt; also enables configuration management and deployment.&lt;/li&gt;
&lt;li&gt;Nagios enables continuous monitoring and is an open-source computer-system and network monitoring application used for troubleshooting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools can be categorized by DevOps stage as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Eclipse, Jira, Git, and Subversion are used for code development&lt;/li&gt;
&lt;li&gt;Maven, Gradle, and Ant are used for code build&lt;/li&gt;
&lt;li&gt;Puppet, Chef, Ansible, and RANCID are used for configuration management&lt;/li&gt;
&lt;li&gt;Selenium and JUnit are used for testing&lt;/li&gt;
&lt;li&gt;
&lt;a href="//bpbonline.com/search?q=Jenkins&amp;amp;type=product"&gt;Jenkins&lt;/a&gt;, Maven, and Ant are used for testing and building systems&lt;/li&gt;
&lt;li&gt;Bamboo, Jenkins, and Hudson are used for release&lt;/li&gt;
&lt;li&gt;Capistrano, SaltStack, Ansible, Chef, Puppet, Docker, and Vagrant are used for application deployment&lt;/li&gt;
&lt;li&gt;ActiveMQ, RabbitMQ, and Memcached enable queues, caches, and so on&lt;/li&gt;
&lt;li&gt;New Relic, Nagios, Graphite, Ganglia, Cacti, PagerDuty, Splunk, and Sensu are used for monitoring, alerting, and trending&lt;/li&gt;
&lt;li&gt;PaperTrail, Logstash, Loggly, and Splunk enable logging&lt;/li&gt;
&lt;li&gt;Monit, Runit, Supervisor, and God are process supervisors&lt;/li&gt;
&lt;li&gt;Snorby Threat Stack, Tripwire, and Snort are used for security&lt;/li&gt;
&lt;li&gt;Multihost SSH Wrapper and Code Climate are miscellaneous tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Which one is your preferred tool? Let us know in the comments section.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>beginners</category>
      <category>softwaredevelopment</category>
      <category>developers</category>
    </item>
    <item>
      <title>Terraform Workflow: Simplified</title>
      <dc:creator>BPB Online</dc:creator>
      <pubDate>Fri, 10 Jun 2022 20:20:23 +0000</pubDate>
      <link>https://dev.to/bpb_online/terraform-workflow-simplified-2037</link>
      <guid>https://dev.to/bpb_online/terraform-workflow-simplified-2037</guid>
      <description>&lt;p&gt;&lt;a href="//bpbonline.com/products/infrastructure-automation-with-terraform?_pos=1&amp;amp;_sid=670ffeb27&amp;amp;_ss=r"&gt;Terraform&lt;/a&gt; is an open-source "Infrastructure as Code" tool by HashiCorp and is written in the &lt;a href="//bpbonline.com/search?q=GO&amp;amp;type=product"&gt;GO language&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Terraform is an infrastructure automation tool that uses a high-level configuration language, the HashiCorp Configuration Language (HCL), to describe the desired state of the infrastructure across multiple cloud or on-premises environments. From this configuration, Terraform generates a plan to reach the desired state and then executes the plan to provision the infrastructure.&lt;/p&gt;

&lt;p&gt;Currently, Terraform is one of the most popular open-source, cloud-agnostic Infrastructure-as-Code (IaC) tools because it uses a simple syntax, provisions infrastructure across multiple clouds and on-premises environments, and safely re-provisions infrastructure after any configuration change. Terraform is mostly used by DevOps engineers to automate infrastructure creation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform workflow&lt;/strong&gt;&lt;br&gt;
The core Terraform workflow consists of the lifecycle stages init, plan, apply, and destroy. Each stage is executed as a command that performs the corresponding operation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpbs4k6zsvdgsyskitnq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpbs4k6zsvdgsyskitnq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s the description and command for the execution of these stages in the Terraform workflow:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform init&lt;/strong&gt; runs a number of initialization tasks to enable Terraform for use in the current working directory. This is usually executed only once per session.&lt;br&gt;
&lt;code&gt;Command:&lt;/code&gt;&lt;br&gt;
&lt;code&gt;# Initialize Terraform&lt;/code&gt;&lt;br&gt;
&lt;code&gt;$ terraform init&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform plan&lt;/strong&gt; compares the desired Terraform state with the current state in the cloud and builds and displays an execution plan for infrastructure creation or destruction. This command only creates a plan for you and your teammates to verify and review; it does not change the deployment. If the plan does not match your requirements, you can update the Terraform configuration and plan again.&lt;br&gt;
&lt;code&gt;Command:&lt;br&gt;
$ terraform plan&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform apply&lt;/strong&gt; executes the plan created by comparing the current and desired states. This will create or destroy your resources, which means it potentially changes the deployment by adding or removing resources according to the plan. By default, terraform apply runs the plan step automatically and applies the most recent changes; however, you can tell it to execute a specific plan by providing a saved plan file.&lt;br&gt;
&lt;code&gt;Command:&lt;br&gt;
$ terraform apply&lt;/code&gt;&lt;/p&gt;
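
&lt;p&gt;For example, the plan can be saved to a file, reviewed, and then applied exactly as reviewed (the file name tfplan is just a convention):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# Save the plan, inspect it, then apply exactly that plan&lt;/code&gt;&lt;br&gt;
&lt;code&gt;$ terraform plan -out=tfplan&lt;/code&gt;&lt;br&gt;
&lt;code&gt;$ terraform show tfplan&lt;/code&gt;&lt;br&gt;
&lt;code&gt;$ terraform apply tfplan&lt;/code&gt;&lt;/p&gt;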

&lt;p&gt;&lt;strong&gt;Terraform destroy&lt;/strong&gt; will delete all resources that are created/managed by the Terraform environment you are working in.&lt;br&gt;
&lt;code&gt;Command:&lt;br&gt;
$ terraform destroy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;These are important commands that you cannot work without, and they depend on one another in the sequence shown in the figure above.&lt;/p&gt;

&lt;p&gt;Hope this was helpful.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>iac</category>
      <category>infrastructureascode</category>
      <category>infrastructure</category>
    </item>
  </channel>
</rss>
