<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ava Parker</title>
    <description>The latest articles on DEV Community by Ava Parker (@parkerava).</description>
    <link>https://dev.to/parkerava</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2049711%2Fe9136362-28f4-4bd0-b31a-74abcae071e1.png</url>
      <title>DEV Community: Ava Parker</title>
      <link>https://dev.to/parkerava</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/parkerava"/>
    <language>en</language>
    <item>
      <title>Boost Testing Speed: Master Sanity vs Regression Today!</title>
      <dc:creator>Ava Parker</dc:creator>
      <pubDate>Sun, 13 Oct 2024 21:42:14 +0000</pubDate>
      <link>https://dev.to/parkerava/boost-testing-speed-master-sanity-vs-regression-today-2c42</link>
      <guid>https://dev.to/parkerava/boost-testing-speed-master-sanity-vs-regression-today-2c42</guid>
      <description>&lt;p&gt;Have you ever heard of "sanity" in software testing? What does it mean, and why is it crucial? How did it originate in relation to regression testing?&lt;/p&gt;

&lt;p&gt;As a project manager or team lead, it's essential to understand not only how to perform sanity testing but also when to apply it. Sanity testing and regression testing are closely related tools with overlapping goals, which is precisely why project managers should understand the distinctions between them: knowing which method fits the situation lets you deploy the test team effectively and equip them with the appropriate tools, thereby avoiding project delays and budget overruns.&lt;/p&gt;

&lt;h2&gt;Unlocking Sanity Testing: Its Essence and Guidelines for Effective Implementation&lt;/h2&gt;

&lt;p&gt;This method assesses the product's quality to determine its readiness for further testing. As a narrow subset of regression testing, it examines only selected areas of the system. The primary objective is to evaluate the program's behavior after functionality enhancements and changes. The focus is not on detecting specific errors but on verifying that the system behaves correctly after previously identified errors have been fixed.&lt;/p&gt;

&lt;p&gt;This testing approach saves time by alerting developers to subpar product quality, thereby reducing unnecessary testing efforts. To learn more about the benefits of sanity testing, check out this article on &lt;a href="https://t8tech.com/it/architecture/unlock-4x-faster-testing-sanity-vs-regression-testing-what-you-need-to-know/" rel="noopener noreferrer"&gt;t8tech&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Although it may seem straightforward at first glance, sanity testing has its unique challenges, like any other testing type. QA experts share their insights on how to perform this testing with maximum efficiency:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid creating test cases, as the testing process is largely intuitive;&lt;/li&gt;
&lt;li&gt;Identify new functional elements, amendments, or fixed bugs;&lt;/li&gt;
&lt;li&gt;Verify that newly applied changes do not compromise the program's proper performance;&lt;/li&gt;
&lt;li&gt;Conduct random checks of related functions to examine their operation;&lt;/li&gt;
&lt;li&gt;After completing the previous steps, proceed to planned testing.&lt;/li&gt;
&lt;/ul&gt;
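&lt;p&gt;The gating idea behind these steps can be sketched in a few lines of Java (all names here are hypothetical, purely illustrative): a build is promoted to the full, planned test run only if a handful of quick checks on the changed functionality pass first.&lt;/p&gt;

```java
import java.util.List;
import java.util.function.BooleanSupplier;

/** Illustrative sketch: run quick sanity checks before committing to full regression testing. */
public class SanityGate {

    /** Returns true only if every quick check passes; otherwise the build is rejected early. */
    public static boolean passesSanity(List<BooleanSupplier> checks) {
        for (BooleanSupplier check : checks) {
            if (!check.getAsBoolean()) {
                return false; // fail fast: no point running the expensive regression suite
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Hypothetical stand-ins for "new feature responds" and "fixed bug stays fixed"
        List<BooleanSupplier> checks = List.of(
                () -> "Hello".length() == 5,
                () -> 2 + 2 == 4
        );
        System.out.println(passesSanity(checks) ? "PROCEED_TO_REGRESSION" : "REJECT_BUILD");
    }
}
```

&lt;p&gt;The point is not the checks themselves but the early exit: a failed sanity pass sends the build back to developers before any regression effort is spent.&lt;/p&gt;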

&lt;h2&gt;The Rationale Behind Sanity Testing&lt;/h2&gt;

&lt;p&gt;What if modified code affects the entire system's functionality? What if failures arise with each subsequent amendment?&lt;/p&gt;

&lt;p&gt;This is where sanity testing comes into play, saving you time and effort. Here are the most compelling reasons to utilize this testing type:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time-Saving.&lt;/strong&gt; Sanity testing saves time by identifying issues early, allowing for quick fixes and reducing unnecessary testing efforts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Effort Reduction.&lt;/strong&gt; Sanity testing prevents unnecessary actions by determining whether additional tests are required, eliminating extra efforts and providing more time for the test team.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;External Issue Identification.&lt;/strong&gt; Sanity testing reveals deployment-related problems, such as inaccurate user interfaces or missing critical features.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rapid Decision-Making.&lt;/strong&gt; Sanity testing quickly determines the product's status, predicting further steps and enabling rapid decision-making.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>regression</category>
      <category>testing</category>
      <category>sanity</category>
      <category>check</category>
    </item>
    <item>
      <title>Streamline Dev in 10 Steps: Publish Artifacts with Jenkins &amp; Nexus</title>
      <dc:creator>Ava Parker</dc:creator>
      <pubDate>Sun, 13 Oct 2024 14:48:18 +0000</pubDate>
      <link>https://dev.to/parkerava/streamline-dev-in-10-steps-publish-artifacts-with-jenkins-nexus-2k3a</link>
      <guid>https://dev.to/parkerava/streamline-dev-in-10-steps-publish-artifacts-with-jenkins-nexus-2k3a</guid>
      <description>&lt;p&gt;In this in-depth tutorial, we'll harness the power of Jenkins as a continuous integration hub and Nexus as a centralized build repository, revolutionizing the way we develop software.&lt;/p&gt;

&lt;p&gt;By integrating these two tools, we can significantly enhance our development workflow, ensuring faster and more reliable builds. To learn more about optimizing your development process, check out our expert insights on &lt;a href="https://computerstechnicians.com/it/testing-deployment/automate-your-builds-how-to-publish-artifacts-to-sonatype-nexus-using-jenkins-pipelines-in-10-steps/" rel="noopener noreferrer"&gt;computerstechnicians.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>artifact</category>
      <category>uml</category>
      <category>continuous</category>
      <category>integrationdeployment</category>
    </item>
    <item>
      <title>Unlock 4 Essential Quartz Scheduler Plugins to Skyrocket Productivity!</title>
      <dc:creator>Ava Parker</dc:creator>
      <pubDate>Sun, 13 Oct 2024 09:38:15 +0000</pubDate>
      <link>https://dev.to/parkerava/unlock-4-essential-quartz-scheduler-plugins-to-skyrocket-productivity-4fod</link>
      <guid>https://dev.to/parkerava/unlock-4-essential-quartz-scheduler-plugins-to-skyrocket-productivity-4fod</guid>
      <description>&lt;p&gt;Unlock the Power of Quartz Plugins: 4 Hidden Gems to Boost Productivity &lt;a href="https://carsnewstoday.com/programming/testing/unlock-4-hidden-quartz-scheduler-plugins-to-boost-productivity/" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;a href="https://carsnewstoday.com" rel="noopener noreferrer"&gt;https://carsnewstoday.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Quartz plugins are often overlooked, despite their immense potential to streamline your workflow. In this article, we'll delve into the existing plugins that come bundled with Quartz, exploring their capabilities and benefits.&lt;/p&gt;

&lt;h4&gt;LoggingTriggerHistoryPlugin: Uncover the Secrets of Job Execution&lt;/h4&gt;

&lt;p&gt;To understand the significance of plugins, let's first consider the two primary abstractions in Quartz: jobs and triggers. A job represents a piece of code that you wish to schedule, while a trigger dictates when this code should be executed. You can associate multiple triggers with a single job, making it a flexible and powerful tool.&lt;/p&gt;

&lt;p&gt;Surprisingly, Quartz doesn't provide built-in logging or monitoring of executed jobs and triggers by default. Although an API is available, no logging is implemented, making it difficult to track job execution. To address this, you can add the following lines to your quartz.properties file:&lt;/p&gt;

&lt;pre&gt;org.quartz.plugin.triggerHistory.class=org.quartz.plugins.history.LoggingTriggerHistoryPlugin
org.quartz.plugin.triggerHistory.triggerFiredMessage=Trigger [{1}.{0}] fired job [{6}.{5}] scheduled at: {2, date, dd-MM-yyyy HH:mm:ss.SSS}, next scheduled at: {3, date, dd-MM-yyyy HH:mm:ss.SSS}
org.quartz.plugin.triggerHistory.triggerCompleteMessage=Trigger [{1}.{0}] completed firing job [{6}.{5}] with resulting trigger instruction code: {9}. Next scheduled at: {3, date, dd-MM-yyyy HH:mm:ss.SSS}
org.quartz.plugin.triggerHistory.triggerMisfiredMessage=Trigger [{1}.{0}] misfired job [{6}.{5}]. Should have fired at: {3, date, dd-MM-yyyy HH:mm:ss.SSS}&lt;/pre&gt;
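&lt;p&gt;The curly-brace placeholders in these messages are ordinary java.text.MessageFormat indices that the plugin fills with trigger and job metadata; the exact meaning of each index is listed in the LoggingTriggerHistoryPlugin JavaDoc. A self-contained sketch of the mechanics, using hypothetical values:&lt;/p&gt;

```java
import java.text.MessageFormat;
import java.util.Date;

public class TriggerMessageDemo {
    public static void main(String[] args) {
        // Same pattern style as the properties above: {0}=trigger name, {1}=trigger group,
        // {5}=job name, {6}=job group, {2}=scheduled fire time (index meanings per the plugin JavaDoc).
        String pattern = "Trigger [{1}.{0}] fired job [{6}.{5}] scheduled at: {2,date,dd-MM-yyyy HH:mm:ss.SSS}";
        String msg = MessageFormat.format(pattern,
                "Every-few-seconds", "Demo", new Date(), null, null, "Print-message", "Demo");
        System.out.println(msg);
    }
}
```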

&lt;p&gt;The first line loads the plugin class LoggingTriggerHistoryPlugin, while the remaining lines configure the plugin, customizing the logging messages. By adding these few extra lines, you can make debugging and monitoring significantly easier, as demonstrated in the example below:&lt;/p&gt;

&lt;pre&gt;LoggingTriggerHistoryPlugin | Trigger [Demo.Every-few-seconds] fired job [Demo.Print-message] scheduled at:  04-04-2012 23:23:47.036, next scheduled at:  04-04-2012 23:23:51.036
//...job output
LoggingTriggerHistoryPlugin | Trigger [Demo.Every-few-seconds] completed firing job [Demo.Print-message] with resulting trigger instruction code: DO NOTHING. Next scheduled at:  04-04-2012 23:23:51.036&lt;/pre&gt;

&lt;p&gt;You now grasp the significance of assigning descriptive names to your triggers (Demo.Every-few-seconds) and jobs (Demo.Print-message), which greatly facilitates their identification.&lt;/p&gt;
&lt;h4&gt;LoggingJobHistoryPlugin&lt;/h4&gt;
&lt;p&gt;Another valuable plugin related to logging is worth exploring:&lt;/p&gt;
&lt;pre&gt;org.quartz.plugin.jobHistory.class=org.quartz.plugins.history.LoggingJobHistoryPlugin
org.quartz.plugin.jobHistory.jobToBeFiredMessage=Job [{1}.{0}] scheduled to be fired by trigger [{4}.{3}], re-fire: {7}
org.quartz.plugin.jobHistory.jobSuccessMessage=Job [{1}.{0}] execution completed successfully and reports: {8}
org.quartz.plugin.jobHistory.jobFailedMessage=Job [{1}.{0}] execution failed with exception: {8}
org.quartz.plugin.jobHistory.jobWasVetoedMessage=Job [{1}.{0}] was vetoed. It was to be fired by trigger [{4}.{3}] at: {2, date, dd-MM-yyyy HH:mm:ss.SSS}&lt;/pre&gt;
&lt;p&gt;The underlying principle is the same – plugin + extra configuration. For more details and possible placeholders, refer to the JavaDoc of LoggingJobHistoryPlugin. A quick glance at the logs reveals very descriptive output:&lt;/p&gt;
&lt;pre&gt;Trigger [Demo.Every-few-seconds] fired job [Demo.Print-message] scheduled at:  04-04-2012 23:34:53.739, next scheduled at:  04-04-2012 23:34:57.739
Job [Demo.Print-message] to be fired by trigger [Demo.Every-few-seconds], re-fire: 0
//...job output
Job [Demo.Print-message] execution complete and reports: null
Trigger [Demo.Every-few-seconds] completed firing job [Demo.Print-message] with resulting trigger instruction code: DO NOTHING. Next scheduled at:  04-04-2012 23:34:57.739&lt;/pre&gt;
&lt;p&gt;I find it puzzling that these plugins aren’t enabled by default. After all, if you don’t want such a verbose output, you can simply turn it off in your logging framework. Nevertheless, I believe it’s a good idea to have them in place when troubleshooting Quartz execution.&lt;/p&gt;
&lt;h4&gt;XMLSchedulingDataProcessorPlugin&lt;/h4&gt;
&lt;p&gt;This comprehensive plugin is particularly handy. It reads an XML file (by default named quartz_data.xml) containing job and trigger definitions and adds them to the scheduler. This is especially useful when you have a global job that needs to be added only once. The plugin can either update the existing jobs/triggers or ignore the XML file if they already exist – very useful when JDBCJobStore is used.&lt;/p&gt;
&lt;pre&gt;org.quartz.plugin.xmlScheduling.class=org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin&lt;/pre&gt;
&lt;p&gt;
In the aforementioned article, we manually added a job to the scheduler:&lt;/p&gt;
&lt;pre&gt;val trigger = newTrigger().
        withIdentity("Every-few-seconds", "Demo").
        withSchedule(
            simpleSchedule().
                    withIntervalInSeconds(4).
                    repeatForever()
        ).
        build()

val job = newJob(classOf[PrintMessageJob]).
        withIdentity("Print-message", "Demo").
        usingJobData("msg", "Hello, world!").
        build()

scheduler.scheduleJob(job, trigger)&lt;/pre&gt;

&lt;p&gt;
An alternative approach to achieve the same outcome is by configuring XML, which involves placing the quartz_data.xml file in your CLASSPATH as follows:
&lt;/p&gt;
&lt;pre&gt;&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;
&amp;lt;job-scheduling-data xmlns="http://www.quartz-scheduler.org/xml/JobSchedulingData"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.quartz-scheduler.org/xml/JobSchedulingData http://www.quartz-scheduler.org/xml/job_scheduling_data_2_0.xsd"
    version="2.0"&amp;gt;

    &amp;lt;processing-directives&amp;gt;
        &amp;lt;overwrite-existing-data&amp;gt;false&amp;lt;/overwrite-existing-data&amp;gt;
        &amp;lt;ignore-duplicates&amp;gt;true&amp;lt;/ignore-duplicates&amp;gt;
    &amp;lt;/processing-directives&amp;gt;

    &amp;lt;schedule&amp;gt;
        &amp;lt;trigger&amp;gt;
            &amp;lt;simple&amp;gt;
                &amp;lt;name&amp;gt;Every-few-seconds&amp;lt;/name&amp;gt;
                &amp;lt;group&amp;gt;Demo&amp;lt;/group&amp;gt;
                &amp;lt;job-name&amp;gt;Print-message&amp;lt;/job-name&amp;gt;
                &amp;lt;job-group&amp;gt;Demo&amp;lt;/job-group&amp;gt;
                &amp;lt;repeat-count&amp;gt;-1&amp;lt;/repeat-count&amp;gt;
                &amp;lt;repeat-interval&amp;gt;4000&amp;lt;/repeat-interval&amp;gt;
            &amp;lt;/simple&amp;gt;
        &amp;lt;/trigger&amp;gt;

        &amp;lt;job&amp;gt;
            &amp;lt;name&amp;gt;Print-message&amp;lt;/name&amp;gt;
            &amp;lt;group&amp;gt;Demo&amp;lt;/group&amp;gt;
            &amp;lt;job-class&amp;gt;com.blogspot.nurkiewicz.quartz.demo.PrintMessageJob&amp;lt;/job-class&amp;gt;
            &amp;lt;job-data-map&amp;gt;
                &amp;lt;entry&amp;gt;
                    &amp;lt;key&amp;gt;msg&amp;lt;/key&amp;gt;
                    &amp;lt;value&amp;gt;Hello, World!&amp;lt;/value&amp;gt;
                &amp;lt;/entry&amp;gt;
            &amp;lt;/job-data-map&amp;gt;
        &amp;lt;/job&amp;gt;
    &amp;lt;/schedule&amp;gt;
&amp;lt;/job-scheduling-data&amp;gt;&lt;/pre&gt;

&lt;p&gt;
This XML file supports both simple and CRON triggers, and its structure is thoroughly documented using XML Schema.
&lt;/p&gt;

&lt;p&gt;
Additionally, it is possible to reference XML files located in the file system and periodically scan them for changes (note the use of XMLSchedulingDataProcessorPlugin.setScanInterval()). Interestingly, Quartz utilizes its own scheduling mechanism for periodic scanning.
&lt;/p&gt;

&lt;pre&gt;org.quartz.plugin.xmlScheduling.fileNames=/etc/quartz/system-jobs.xml,/home/johnny/my-jobs.xml
org.quartz.plugin.xmlScheduling.scanInterval=60&lt;/pre&gt;
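&lt;p&gt;Conceptually, periodic scanning boils down to re-checking a file's last-modified timestamp on a schedule and reloading when it advances. A minimal, self-contained sketch of that idea (not Quartz's actual internals; all names are hypothetical):&lt;/p&gt;

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

/** Sketch: detect a changed definitions file by polling lastModified, as a scanInterval would. */
public class FileScanSketch {

    /** Returns true if the file changed since the previously recorded timestamp. */
    public static boolean changedSince(Path file, long lastSeenMillis) {
        try {
            return Files.getLastModifiedTime(file).toMillis() > lastSeenMillis;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("jobs", ".xml");
        long seen = Files.getLastModifiedTime(f).toMillis();
        // ...a scheduler would re-run this check every scanInterval seconds...
        Files.writeString(f, "<job-scheduling-data/>");
        Files.setLastModifiedTime(f, FileTime.fromMillis(seen + 1000));
        System.out.println(changedSince(f, seen) ? "RELOAD" : "NO_CHANGE");
    }
}
```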

&lt;h4&gt;ShutdownHookPlugin&lt;/h4&gt;

&lt;p&gt;
Lastly, there is the ShutdownHookPlugin, a compact yet useful plugin that registers a shutdown hook in the JVM, enabling a gentle stop of the scheduler. However, it is advisable to disable cleanShutdown – if the system is already attempting to abruptly terminate the application (typically, scheduler shutdown is triggered by Spring via SchedulerFactoryBean) or the user presses Ctrl+C, waiting for currently running jobs seems ill-advised. After all, perhaps we are terminating the application because some jobs are running for too long or hanging?
&lt;/p&gt;

&lt;pre&gt;org.quartz.plugin.shutdownHook.class=org.quartz.plugins.management.ShutdownHookPlugin
org.quartz.plugin.shutdownHook.cleanShutdown=false&lt;/pre&gt;
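&lt;p&gt;Under the hood this plugin simply registers a JVM shutdown hook via Runtime.getRuntime().addShutdownHook() that calls the scheduler's shutdown method. A self-contained sketch of the pattern, using a hypothetical stand-in for the scheduler (the real Quartz call is scheduler.shutdown(waitForJobsToComplete)):&lt;/p&gt;

```java
/** Sketch of what a shutdown-hook plugin boils down to (the Scheduler stand-in is hypothetical). */
public class ShutdownHookSketch {

    interface Scheduler {                 // minimal stand-in for the real scheduler API
        void shutdown(boolean waitForJobsToComplete);
    }

    static String stop(Scheduler scheduler, boolean cleanShutdown) {
        // cleanShutdown=false: don't wait for running jobs when the JVM is already going down
        scheduler.shutdown(cleanShutdown);
        return cleanShutdown ? "WAITED_FOR_JOBS" : "STOPPED_IMMEDIATELY";
    }

    public static void main(String[] args) {
        Scheduler scheduler = wait -> System.out.println("shutdown(waitForJobs=" + wait + ")");
        // Register the hook once, at startup; it fires when the JVM begins terminating:
        Runtime.getRuntime().addShutdownHook(new Thread(() -> stop(scheduler, false)));
    }
}
```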

&lt;p&gt;Evidently, Quartz ships with an array of useful plugins. Although they lack comprehensive documentation, they work exceptionally well and prove to be a valuable addition to the scheduler.&lt;/p&gt;

&lt;p&gt;The source code, incorporating these plugins, is accessible on GitHub.&lt;/p&gt;

</description>
      <category>job</category>
      <category>scheduling</category>
      <category>quartz</category>
      <category>scheduler</category>
    </item>
    <item>
      <title>94% of Workloads Will Be in Cloud by 2021: Expert Insights to Avoid Costly Mistakes</title>
      <dc:creator>Ava Parker</dc:creator>
      <pubDate>Sat, 12 Oct 2024 12:45:16 +0000</pubDate>
      <link>https://dev.to/parkerava/94-of-workloads-will-be-in-cloud-by-2021-expert-insights-to-avoid-costly-mistakes-1a2b</link>
      <guid>https://dev.to/parkerava/94-of-workloads-will-be-in-cloud-by-2021-expert-insights-to-avoid-costly-mistakes-1a2b</guid>
      <description>&lt;p&gt;As the world becomes increasingly digital, businesses are rapidly embracing cloud technology to stay ahead of the curve. The driving forces behind this trend are clear: enhanced flexibility, scalability, reliability, and cost-effectiveness. These benefits are revolutionizing the way companies operate, enabling them to scale their computing resources in line with their growth and ensuring operational excellence. By leveraging cloud computing, businesses can meet the evolving demands of their customers and stay competitive in today's fast-paced market. For expert insights on best practices and common pitfalls to avoid, visit &lt;a href="https://computerstechnicians.com/it/architecture/unlock-the-power-of-cloud-computing-expert-insights-on-best-practices-and-common-pitfalls-to-avoid/" rel="noopener noreferrer"&gt;computerstechnicians.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;According to Cisco's statistical projections, a staggering 94 percent of workloads and compute instances will be processed in cloud data centers by 2021. Whether you're considering migrating your business operations to the cloud or not, it's crucial to conduct thorough research. Having a comprehensive understanding of the benefits and potential pitfalls will empower you to make an informed decision. With the right approach, you can unlock the full potential of cloud computing and take your business to the next level.&lt;/p&gt;

&lt;h2&gt;Best Practices for a Smooth Cloud Migration&lt;/h2&gt;

&lt;h3&gt;Develop a Comprehensive Cloud Migration Strategy&lt;/h3&gt;

&lt;p&gt;Before making the move to the cloud, it's essential to devise a detailed strategy. Conduct extensive research to gain a deeper understanding of the cloud environment, security, and computing services. Moreover, you need to have a thorough grasp of your existing business challenges and future objectives. By doing so, you can create a tailored strategy that meets your unique needs and sets you up for success.&lt;/p&gt; 

&lt;p&gt;Formulate a strategy to maximize the benefits of cloud adoption by tracking the rapidly evolving standards in cloud computing. Additionally, it's vital to analyze various migration strategies and consider the financial implications. Before making the transition, crunch the numbers and compare cloud computing expenses with in-house IT expenditures. This will help you determine the most suitable solution for your business and ensure a seamless migration.&lt;/p&gt;

&lt;h3&gt;Evaluate Cloud Computational Models to Find the Best Fit&lt;/h3&gt;

&lt;p&gt;Each business has unique requirements, so it's crucial to assess different cloud computing models in detail and determine which one best suits your business needs. There are four cloud deployment models to choose from – public, private, hybrid, and community. Furthermore, there are three fundamental models of cloud service used for different types of computing – Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). By understanding the key differences between various cloud environments and service models, you can make an informed decision for your organization.&lt;/p&gt;

&lt;h3&gt;Tap into the Expertise of Seasoned IT Professionals&lt;/h3&gt;

&lt;p&gt;Streamlining your business operations and processes doesn't necessitate any hardware installation, which is why many companies opt to undertake the procedure in-house. Nevertheless, enlisting the services of seasoned IT experts can be a significant boon. Ensure you select a Cloud Service Provider (CSP) that aligns perfectly with your business, operational, security, and compliance requirements. With a plethora of CSPs available, it can be a daunting task to identify the ideal one. In addition to the prominent players like Microsoft, Amazon, and Google, the cloud service sphere is also populated by smaller niche players offering tailored services. Make sure to conduct thorough research, as it empowers you to evaluate potential cloud partners accurately.&lt;/p&gt;
&lt;h3&gt;Consider the Consequences: Risk and Compliance Factors&lt;/h3&gt;
&lt;p&gt;Introducing new technology inevitably brings inherent risks, and it's your duty to mitigate these factors and ensure adherence to industry regulations. Before transitioning to the cloud, it's vital to have an in-depth understanding of potential risks and regulatory compliance issues. If your business operates in a heavily regulated industry, such as healthcare, legal, or e-commerce, which handles sensitive user information, then it's crucial to maintain compliance with regulations and standards. Therefore, make it a point to evaluate the terms and conditions of your cloud vendor beforehand.&lt;/p&gt;

&lt;h2&gt;Pitfalls to Avoid&lt;/h2&gt;

&lt;h3&gt;Don't Overlook Critical Elements&lt;/h3&gt;

&lt;p&gt;Migrating your business operations to the cloud environment is a significant step. However, you need to pay attention to every minute detail and have a comprehensive understanding of each factor. Gathering more knowledge not only helps in devising better strategies and making informed decisions but also makes the transition process seamless and hassle-free.&lt;/p&gt;

&lt;h3&gt;Security Should Never Be Compromised&lt;/h3&gt;

&lt;p&gt;Whether you're moving your entire business operations to the cloud or only a part of it, security should always be the top priority. Performing automated testing before the cloud migration will help in assessing and reporting performance issues. Since security is an integral part of automated testing, it's essential to extend it to the DevOps tools and organization as well. How does the cloud service provider address your security concerns? Do they provide a guarantee in terms of the safety of authentication and authorization? Make sure to discuss all the security and compliance requirements with them in detail, before making the move.&lt;/p&gt;

&lt;h3&gt;Avoid the "One-Size-Fits-All" Approach&lt;/h3&gt;

&lt;p&gt;It's a common mistake for companies to attempt to migrate their entire application portfolio to the cloud simultaneously. However, some programs and files may still require traditional data center storage for security reasons. Moreover, certain applications built on legacy technology may need significant modifications to integrate with the cloud. Therefore, it's crucial to assess these applications to determine the time and effort required for necessary changes. Prioritize applications based on their business value derived from cloud migration, and consider seeking the expertise of a chief technology officer to guide your migration strategy.&lt;/p&gt;
&lt;h3&gt;The Pivotal Function of Governance Frameworks&lt;/h3&gt;
&lt;p&gt;Establishing a robust governance infrastructure is often overlooked by many organizations, yet its absence can have far-reaching consequences, including stalled initiatives, regulatory fines, budget excesses, and more. According to Forrester, governance encompasses "the capacity to provide strategic guidance, monitor performance, allocate resources, and implement adjustments to ensure that organizational goals are achieved, while adhering to risk tolerance and compliance requirements." A well-crafted governance framework empowers organizations to effectively track, safeguard, and manage their services and resources.&lt;/p&gt;

&lt;h2&gt;Key Takeaways&lt;/h2&gt;

&lt;p&gt;As businesses leverage the potential of cloud computing, they must meticulously evaluate, plan, implement, and refine their adoption strategy. By adhering to the guidelines outlined here, companies can mitigate disruption, ensure regulatory adherence, and optimize costs.&lt;/p&gt;

</description>
      <category>adoption</category>
      <category>application</category>
      <category>cloud</category>
      <category>computing</category>
    </item>
    <item>
      <title>Unlock Scalable Messaging: 4-Step Guide to Fault-Tolerant Concurrency with RabbitMQ &amp; Spring Boot</title>
      <dc:creator>Ava Parker</dc:creator>
      <pubDate>Thu, 10 Oct 2024 16:49:26 +0000</pubDate>
      <link>https://dev.to/parkerava/unlock-scalable-messaging-4-step-guide-to-fault-tolerant-concurrency-with-rabbitmq-spring-boot-nph</link>
      <guid>https://dev.to/parkerava/unlock-scalable-messaging-4-step-guide-to-fault-tolerant-concurrency-with-rabbitmq-spring-boot-nph</guid>
      <description>&lt;p&gt;Building on my previous discussion on harnessing the power of asynchronous messaging with RabbitMQ, this article delves into the practical implementation of RabbitMQ in a real-world setting using Java and Spring Boot. For a thorough understanding of the fundamental concepts of async messaging and RabbitMQ, I recommend checking out my previous article at &lt;a href="https://computerstechnicians.com/it/coding/unlock-scalable-messaging-integrate-rabbitmq-with-spring-boot-for-fault-tolerant-concurrency/" rel="noopener noreferrer"&gt;computerstechnicians.com&lt;/a&gt;. This article will focus on the following key aspects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Seamless integration of RabbitMQ into a Spring Boot application&lt;/li&gt;
&lt;li&gt;Creating efficient producers and consumers to send and receive messages in various formats, including String, JSON, and Java Objects&lt;/li&gt;
&lt;li&gt;Implementing robust fault tolerance mechanisms&lt;/li&gt;
&lt;li&gt;Providing scalable concurrency support&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;&lt;strong&gt;Getting Started with RabbitMQ in Your Spring Boot Project&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;The first step in unlocking the full potential of RabbitMQ is to set up a Spring Boot project and add the necessary RabbitMQ dependencies. If you already have a Spring Boot application, you can simply add the dependencies without creating a new project. However, creating a separate project for RabbitMQ offers the advantage of keeping your message queue-related code separate from your main application, making it easily shareable or pluggable into other applications if needed.&lt;/p&gt;

&lt;p&gt;You can create a Spring Boot project using Spring Initializr and import it into your IDE, or create one directly from the Spring Tool Suite IDE (if you’re using it). To integrate RabbitMQ, add the spring-boot-starter-amqp dependency to your pom.xml file.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-boot-starter-amqp&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Configure and Initialize Application Properties&lt;/h2&gt;
&lt;p&gt;These configuration parameters are intuitively named, and their purpose is to facilitate the exchange of data within the application, including specifying exchange names, queue names, and binding configurations.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Message Queue specific configurations for app1
app1.exchange.identifier=app1-exchange
app1.queue.identifier=app1-queue
app1.routing.identifier=app1-routing-key

# Message Queue specific configurations for app2
app2.exchange.identifier=app2-exchange
app2.queue.identifier=app2-queue
app2.routing.identifier=app2-routing-key

#AMQP RabbitMQ configuration 
spring.rabbitmq.host=localhost
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest

# Additional RabbitMQ properties
spring.rabbitmq.listener.simple.concurrency=4
spring.rabbitmq.listener.simple.max-concurrency=8
spring.rabbitmq.listener.simple.retry.initial-interval=5000

&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Develop a Properties File Reader Class&lt;/h2&gt;

&lt;p&gt;Now that we have created the properties file, let's design a class to read these properties and make them accessible within the application.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;package com.dpk.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;

@Configuration
@PropertySource("classpath:application.properties")
public class ApplicationConfigReader {

    @Value("${app1.exchange.identifier}")
    private String app1Exchange;

    @Value("${app1.queue.identifier}")
    private String app1Queue;

    @Value("${app1.routing.identifier}")
    private String app1RoutingKey;

    @Value("${app2.exchange.identifier}")
    private String app2Exchange;

    @Value("${app2.queue.identifier}")
    private String app2Queue;

    @Value("${app2.routing.identifier}")
    private String app2RoutingKey;

    // All getters and setters

}
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;&lt;strong&gt;Lay the Groundwork for Queue, Exchange, Routing Key, and Binding Configurations&lt;/strong&gt;&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;package com.dpk;

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.amqp.rabbit.annotation.EnableRabbit;
import org.springframework.amqp.rabbit.annotation.RabbitListenerConfigurer;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistrar;
import org.springframework.amqp.support.converter.Jackson2JsonMessageConverter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.converter.MappingJackson2MessageConverter;
import org.springframework.messaging.handler.annotation.support.DefaultMessageHandlerMethodFactory;

import com.dpk.config.ApplicationConfigReader;


@EnableRabbit@SpringBootApplicationpublic class MsgqApplication extends SpringBootServletInitializer implements RabbitListenerConfigurer {

@Autowiredprivate ApplicationConfigReader applicationConfig;

public ApplicationConfigReader getApplicationConfig() {
return applicationConfig;
}

public void setApplicationConfig(ApplicationConfigReader applicationConfig) {
this.applicationConfig = applicationConfig;
}

public static void main(String[] args) {
SpringApplication.run(MsgqApplication.class, args);
}

protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
return application.sources(MsgqApplication.class);
}

/* This bean is to read the properties file configs */
@Beanpublic ApplicationConfigReader applicationConfig() {
return new ApplicationConfigReader();
}

/* Creating a bean for the Message queue Exchange */
@Beanpublic TopicExchange getApp1Exchange() {
return new TopicExchange(getApplicationConfig().getApp1Exchange());
}

/* Creating a bean for the Message queue */
@Beanpublic Queue getApp1Queue() {
return new Queue(getApplicationConfig().getApp1Queue());
}

/* Binding between Exchange and Queue using routing key */
@Beanpublic Binding declareBindingApp1() {
return BindingBuilder.bind(getApp1Queue()).to(getApp1Exchange()).with(getApplicationConfig().getApp1RoutingKey());
}

/* Creating a bean for the Message queue Exchange */
@Beanpublic TopicExchange getApp2Exchange() {
return new TopicExchange(getApplicationConfig().getApp2Exchange());
}

/* Creating a bean for the Message queue */
@Beanpublic Queue getApp2Queue() {
return new Queue(getApplicationConfig().getApp2Queue());
}

/* Binding between Exchange and Queue using routing key */
@Beanpublic Binding declareBindingApp2() {
return BindingBuilder.bind(getApp2Queue()).to(getApp2Exchange()).with(getApplicationConfig().getApp2RoutingKey());
}

/* Bean for rabbitTemplate */
@Beanpublic RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
return rabbitTemplate;
}

@Beanpublic Jackson2JsonMessageConverter producerJackson2MessageConverter() {
return new Jackson2JsonMessageConverter();
}

@Beanpublic MappingJackson2MessageConverter consumerJackson2MessageConverter() {
return new MappingJackson2MessageConverter();
}

@Beanpublic DefaultMessageHandlerMethodFactory messageHandlerMethodFactory() {
DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
factory.setMessageConverter(consumerJackson2MessageConverter());
return factory;
}

@Override
public void configureRabbitListeners(final RabbitListenerEndpointRegistrar registrar) {
registrar.setMessageHandlerMethodFactory(messageHandlerMethodFactory());
}

}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;&lt;strong&gt;Architect a Message Broker&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;The MessageSender class is strikingly simple. It harnesses the convertAndSend() method of the RabbitTemplate to channel messages to a queue, defining the exchange, routing key, and data payload.&lt;/p&gt;


&lt;pre&gt;&lt;code&gt;package com.dpk;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

/**
 * A message broker responsible for relaying messages to a queue via an exchange.
 */
@Component
public class MessageSender {

    private static final Logger log = LoggerFactory.getLogger(MessageSender.class);

    /**
     * @param rabbitTemplate
     * @param exchange
     * @param routingKey
     * @param data
     */
    public void sendMessage(RabbitTemplate rabbitTemplate, String exchange, String routingKey, Object data) {
        log.info("Routing message to the queue using routingKey {}. Message= {}", routingKey, data);
        rabbitTemplate.convertAndSend(exchange, routingKey, data);
        log.info("The message has been successfully relayed to the queue.");
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;&lt;strong&gt;Architecting Message Handlers&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Constructing a message handler can be a multifaceted task, as it entails addressing diverse scenarios, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically converting messages into Java objects&lt;/li&gt;
&lt;li&gt;Managing REST call failures to inaccessible APIs or errors occurring during request processing&lt;/li&gt;
&lt;li&gt;Enabling multiple handlers to concurrently retrieve and process messages from queues&lt;/li&gt;
&lt;li&gt;Determining when and how to re-queue messages in the event of failure&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Java Object Deserialization&lt;/h3&gt;

&lt;p&gt;Spring provides the &lt;code&gt;@RabbitListener&lt;/code&gt; annotation, which simplifies message reception from queues and offers automatic Java object deserialization. The following example demonstrates this feature.&lt;/p&gt;

&lt;h3&gt;Error Handling and Message Re-Queuing in Handlers&lt;/h3&gt;


&lt;p&gt;A handler can fail in two common ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The handler attempts to call an unreachable API to process the request&lt;/li&gt;
&lt;li&gt;The handler calls the API, but an error occurs during request processing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In such situations, depending on your business requirements, you may choose not to re-queue the message or re-queue it with a maximum number of retry options to process it up to a limit.&lt;/p&gt;

&lt;p&gt;To prevent re-queuing the message, you can throw the &lt;code&gt;AmqpRejectAndDontRequeueException&lt;/code&gt;. For maximum retry handling, you can add an additional parameter to the message, setting the maximum number of retries and incrementing its value while receiving the message, ensuring the total number of retries does not exceed the limit.&lt;/p&gt;
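&lt;p&gt;The header-based retry counting described above can be sketched in plain Java. This is a minimal sketch, not code from this project: the header name &lt;code&gt;x-retry-count&lt;/code&gt;, the limit, and the &lt;code&gt;RetryPolicy&lt;/code&gt; class are all illustrative, and the map stands in for &lt;code&gt;MessageProperties.getHeaders()&lt;/code&gt; on a received message.&lt;/p&gt;

```java
import java.util.Map;

// Minimal sketch of header-based retry counting. The header name, limit,
// and class name are illustrative; the headers map stands in for
// MessageProperties.getHeaders() on a received AMQP message.
public class RetryPolicy {
    static final String RETRY_HEADER = "x-retry-count";
    static final int MAX_RETRIES = 3;

    /** Increment the retry counter; report whether the message may be re-queued. */
    public static boolean shouldRequeue(Map<String, Object> headers) {
        int attempts = (int) headers.getOrDefault(RETRY_HEADER, 0);
        if (attempts >= MAX_RETRIES) {
            return false; // limit reached: reject without re-queuing
        }
        headers.put(RETRY_HEADER, attempts + 1);
        return true;
    }
}
```

&lt;p&gt;A listener would consult this before throwing: re-throw a requeueing exception while &lt;code&gt;shouldRequeue&lt;/code&gt; returns true, and throw &lt;code&gt;AmqpRejectAndDontRequeueException&lt;/code&gt; once it returns false.&lt;/p&gt;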

&lt;p&gt;An alternative approach is to add these properties to the &lt;code&gt;application.properties&lt;/code&gt; file, specifying the maximum number of attempts:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;spring.rabbitmq.listener.simple.retry.max-attempts=3&lt;/code&gt;&lt;/p&gt;
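&lt;p&gt;If you rely on Spring Boot's built-in listener retry, the related properties can be tuned together. This is a sketch with illustrative values, assuming the spring-boot-starter-amqp dependency:&lt;/p&gt;

```properties
# Enable retry for the simple listener container and cap the attempts
spring.rabbitmq.listener.simple.retry.enabled=true
spring.rabbitmq.listener.simple.retry.max-attempts=3
# Back off between attempts: start at 2s, double each time, cap at 10s
spring.rabbitmq.listener.simple.retry.initial-interval=2000
spring.rabbitmq.listener.simple.retry.multiplier=2.0
spring.rabbitmq.listener.simple.retry.max-interval=10000
```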

&lt;h2&gt;Concurrency Capabilities&lt;/h2&gt;

&lt;p&gt;Concurrency can be achieved in two ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a thread pool with a specified maximum number of threads and using a thread executor (such as Java's ExecutorService) to call the methods/APIs for request processing.&lt;/li&gt;
&lt;li&gt;Leveraging the built-in concurrency feature, which requires setting two properties in the application.properties file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can adjust the values of these properties according to your application’s scalability requirements.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;spring.rabbitmq.listener.simple.concurrency=4&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;spring.rabbitmq.listener.simple.max-concurrency=8&lt;/code&gt;&lt;/p&gt;
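&lt;p&gt;The first concurrency option above, an explicit thread pool, can be sketched with a plain &lt;code&gt;ExecutorService&lt;/code&gt;. This is an illustrative sketch, not project code; the counter increment stands in for the REST call a real handler would make per message.&lt;/p&gt;

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the thread-pool option: a fixed pool with a bounded number of
// threads drains work items concurrently. Names are illustrative.
public class PooledHandler {
    public static int processAll(int messages, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger processed = new AtomicInteger();
        for (int i = 0; i < messages; i++) {
            // Stand-in for the API call made while processing one message
            pool.submit(() -> { processed.incrementAndGet(); });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }
}
```

&lt;p&gt;The built-in option delegates the same idea to the listener container, which keeps between &lt;code&gt;concurrency&lt;/code&gt; and &lt;code&gt;max-concurrency&lt;/code&gt; consumer threads per listener.&lt;/p&gt;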
&lt;pre&gt;&lt;code&gt;package com.dpk;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.AmqpRejectAndDontRequeueException;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Service;
import org.springframework.web.client.HttpClientErrorException;
import com.dpk.config.ApplicationConfigReader;
import com.dpk.dto.UserDetails;
import com.dpk.util.ApplicationConstant;

/**
 * Message Listener for RabbitMQ
 */

@Service
public class MessageListener {

    private static final Logger log = LoggerFactory.getLogger(MessageListener.class);

    @Autowired
    ApplicationConfigReader applicationConfigReader;

    /**
     * Message listener for app1.
     * @param data a user-defined object used for deserialization of the message
     */
    @RabbitListener(queues = "${app1.queue.name}")
    public void receiveMessageForApp1(final UserDetails data) {
        log.info("Received message: {} from app1 queue.", data);

        try {
            log.info("Making REST call to the API");
            // TODO: Code to make REST call
            log.info("&amp;lt;&amp;lt; Exiting receiveMessageForApp1() after API call.");
        } catch (HttpClientErrorException ex) {
            if (ex.getStatusCode() == HttpStatus.NOT_FOUND) {
                log.info("Delay...");
                try {
                    Thread.sleep(ApplicationConstant.MESSAGE_RETRY_DELAY);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                log.info("Throwing exception so that the message will be re-queued.");
                // Note: Typically an application-specific exception should be thrown below
                throw new RuntimeException();
            } else {
                throw new AmqpRejectAndDontRequeueException(ex);
            }
        } catch (Exception e) {
            log.error("Internal server error occurred in API call. Bypassing message requeue.", e);
            throw new AmqpRejectAndDontRequeueException(e);
        }
    }

    /**
     * Message listener for app2.
     */
    @RabbitListener(queues = "${app2.queue.name}")
    public void receiveMessageForApp2(String reqObj) {
        log.info("Received message: {} from app2 queue.", reqObj);

        try {
            log.info("Making REST call to the API");
            // TODO: Code to make REST call
            log.info("&amp;lt;&amp;lt; Exiting receiveMessageForApp2() after API call.");
        } catch (HttpClientErrorException ex) {
            if (ex.getStatusCode() == HttpStatus.NOT_FOUND) {
                log.info("Delay...");
                try {
                    Thread.sleep(ApplicationConstant.MESSAGE_RETRY_DELAY);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                log.info("Throwing exception so that the message will be re-queued.");
                // Note: Typically an application-specific exception can be thrown below
                throw new RuntimeException();
            } else {
                throw new AmqpRejectAndDontRequeueException(ex);
            }
        } catch (Exception e) {
            log.error("Internal server error occurred in API call. Bypassing message requeue.", e);
            throw new AmqpRejectAndDontRequeueException(e);
        }
    }

}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Set Up the Service Interface&lt;/h2&gt;
&lt;p&gt;Finally, create the service class that exposes the REST endpoint the user will call. This class uses the MessageSender to forward the message to the queue.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;package com.dpk;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import com.dpk.config.ApplicationConfigReader;
import com.dpk.dto.UserDetails;
import com.dpk.util.ApplicationConstant;


@RestController
@RequestMapping(path = "/userservice")
public class UserService {

private static final Logger log = LoggerFactory.getLogger(UserService.class);

private final RabbitTemplate rabbitTemplate;
private ApplicationConfigReader applicationConfig;
private MessageSender messageSender;

public ApplicationConfigReader getApplicationConfig() {
return applicationConfig;
}

@Autowired
public void setApplicationConfig(ApplicationConfigReader applicationConfig) {
this.applicationConfig = applicationConfig;
}

@Autowired
public UserService(final RabbitTemplate rabbitTemplate) {
this.rabbitTemplate = rabbitTemplate;
}

public MessageSender getMessageSender() {
return messageSender;
}

@Autowired
public void setMessageSender(MessageSender messageSender) {
this.messageSender = messageSender;
}


@RequestMapping(path = "/add", method = RequestMethod.POST, produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity&amp;lt;?&amp;gt; sendMessage(@RequestBody UserDetails user) {

String exchange = getApplicationConfig().getApp1Exchange();
String routingKey = getApplicationConfig().getApp1RoutingKey();

/* Sending to Message Queue */
try {
messageSender.sendMessage(rabbitTemplate, exchange, routingKey, user);
return new ResponseEntity&amp;lt;String&amp;gt;(ApplicationConstant.IN_QUEUE, HttpStatus.OK);

} catch (Exception ex) {
log.error("Exception occurred while sending message to the queue.", ex);
return new ResponseEntity&amp;lt;String&amp;gt;(ApplicationConstant.MESSAGE_QUEUE_SEND_ERROR,
HttpStatus.INTERNAL_SERVER_ERROR);
}

}

}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The complete code is available on GitHub, where you can review it at your convenience. If you have any questions or feedback, please share your thoughts in the comments section below. Thank you!&lt;/p&gt;

</description>
      <category>fault</category>
      <category>technology</category>
      <category>tolerance</category>
      <category>integration</category>
    </item>
    <item>
      <title>Unlock Efficient Data Exchange: Import &amp; Export Excel in ASP.NET Core</title>
      <dc:creator>Ava Parker</dc:creator>
      <pubDate>Thu, 10 Oct 2024 00:51:09 +0000</pubDate>
      <link>https://dev.to/parkerava/unlock-efficient-data-exchange-import-export-excel-in-aspnet-core-484f</link>
      <guid>https://dev.to/parkerava/unlock-efficient-data-exchange-import-export-excel-in-aspnet-core-484f</guid>
      <description>&lt;p&gt;This article will guide you through the process of seamlessly importing and exporting Excel files in ASP.NET Core 3.1 Razor Pages. We will cover the following essential topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Importing an Excel file in .NET Core and previewing the uploaded file&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Exporting the Excel file&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;NPOI Package: A Powerful Tool for Excel File Management&lt;/h2&gt;

&lt;p&gt;NPOI is a versatile, open-source tool that supports the xls, xlsx, and docx extensions. As the .NET port of the POI Java project (http://poi.apache.org/), NPOI enables you to read and write XLS, DOC, and PPT files. It offers a wide range of features, including styling, formatting, data formulas, and image extraction. A significant advantage of NPOI is that it does not require &lt;strong&gt;Microsoft Office to be installed on the server&lt;/strong&gt;, so you can use it anywhere. For more information on how to unlock efficient data exchange, check out &lt;a href="https://computerstechnicians.com/it/coding/unlock-efficient-data-exchange-seamless-import-and-export-of-excel-files-in-asp-net-core-3-1-razor-pages/" rel="noopener noreferrer"&gt;computerstechnicians&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This post demonstrates the fundamental functionalities of NPOI, but you can accomplish much more with NPOI, such as styling individual cells or rows, creating Excel formulas, and other tasks. The NPOI package supports both “xls” and “xlsx” extensions, utilizing HSSFWorkbook and XSSFWorkbook classes. The HSSFWorkbook class is for “xls”, whereas the other is for “xlsx”. To learn more about NPOI, refer to the official documentation.&lt;/p&gt;

&lt;h2&gt;Streamlining Excel Import and Export in ASP.NET Core 3.1 Razor Pages&lt;/h2&gt;

&lt;p&gt;Let’s create a .NET Core web application with .NET Core 3.1. Open VS 2019 ➠ Select ASP.NET Core web application&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcomputerstechnicians.com%2Fwp-content%2Fuploads%2F2024%2F09%2Fimport-and-export-excel-file-in-asp-net-core-31-ra_img_0.png" class="article-body-image-wrapper"&gt;&lt;img alt="Creating an ASP.NET Core application with Excel import and export functionality" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcomputerstechnicians.com%2Fwp-content%2Fuploads%2F2024%2F09%2Fimport-and-export-excel-file-in-asp-net-core-31-ra_img_0.png" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, we create an input file for uploading the Excel file, and after uploading it, append the Excel data into a DIV. Then, we download that Excel data and also create an Excel file using dummy data. The design is shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcomputerstechnicians.com%2Fwp-content%2Fuploads%2F2024%2F09%2Fimport-and-export-excel-file-in-asp-net-core-31-ra_img_1.png" class="article-body-image-wrapper"&gt;&lt;img alt="Application design for seamless Excel import and export" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcomputerstechnicians.com%2Fwp-content%2Fuploads%2F2024%2F09%2Fimport-and-export-excel-file-in-asp-net-core-31-ra_img_1.png" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Import and Export Excel in ASP.NET Core 3.1 Razor Pages &lt;/h3&gt;
    
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            &amp;lt;a href="@Url.Action("&amp;gt;Retrieve File&amp;lt;/a&amp;gt;





     
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;Code Fragment Breakdown&lt;/h2&gt;

&lt;p&gt;This HTML code fragment integrates a file upload interface and a corresponding upload button. The accompanying jQuery script enables the asynchronous upload of Excel files, while also conducting client-side validation to verify file selection and extension accuracy.&lt;/p&gt;

&lt;p&gt;Upon successful completion of the request, the server response is appended to the HTML. For a detailed understanding of handling Ajax requests with ASP.NET Core 3.1 razor pages, refer to this article, and for uploading files in ASP.NET Core 3.1 Razor Pages, consult this resource.&lt;/p&gt;

&lt;p&gt;To facilitate file import, create a post method in the &lt;strong&gt;Homecontroller.cs&lt;/strong&gt; file, saving the uploaded file to the &lt;strong&gt;wwwroot&lt;/strong&gt; folder, and subsequently append it to the div.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public ActionResult Import()
        {
            IFormFile file = Request.Form.Files[0];
            string folderName = "UploadExcel";
            string webRootPath = _hostingEnvironment.WebRootPath;
            string newPath = Path.Combine(webRootPath, folderName);
            StringBuilder sb = new StringBuilder();
            if (!Directory.Exists(newPath))
            {
                Directory.CreateDirectory(newPath);
            }
            if (file.Length &amp;gt; 0)
            {
                string sFileExtension = Path.GetExtension(file.FileName).ToLower();
                ISheet sheet;
                string fullPath = Path.Combine(newPath, file.FileName);
                using (var stream = new FileStream(fullPath, FileMode.Create))
                {
                    file.CopyTo(stream);
                    stream.Position = 0;
                    if (sFileExtension == ".xls")
                    {
                        HSSFWorkbook hssfwb = new HSSFWorkbook(stream); //This will read the Excel 97-2000 formats
                        sheet = hssfwb.GetSheetAt(0); //get first sheet from workbook
                    }
                    else
                    {
                        XSSFWorkbook hssfwb = new XSSFWorkbook(stream); //This will read 2007 Excel format
                        sheet = hssfwb.GetSheetAt(0); //get first sheet from workbook
                    }
                    IRow headerRow = sheet.GetRow(0); //Get Header Row
                    int cellCount = headerRow.LastCellNum;
                    sb.Append("&amp;lt;table class='table table-bordered'&amp;gt;&amp;lt;tr&amp;gt;");
                    for (int j = 0; j &amp;lt; cellCount; j++)
                    {
                        NPOI.SS.UserModel.ICell cell = headerRow.GetCell(j);
                        if (cell == null || string.IsNullOrWhiteSpace(cell.ToString())) continue;
                        sb.Append("&amp;lt;th&amp;gt;" + cell.ToString() + "&amp;lt;/th&amp;gt;");
                    }
                    sb.Append("&amp;lt;/tr&amp;gt;");
                    sb.AppendLine("&amp;lt;tr&amp;gt;");
                    for (int i = (sheet.FirstRowNum + 1); i &amp;lt;= sheet.LastRowNum; i++) //Read Excel File
                    {
                        IRow row = sheet.GetRow(i);
                        if (row == null) continue;
                        if (row.Cells.All(d =&amp;gt; d.CellType == CellType.Blank)) continue;
                        for (int j = row.FirstCellNum; j &amp;lt; cellCount; j++)
                        {
                            if (row.GetCell(j) != null)
                                sb.Append("&amp;lt;td&amp;gt;" + row.GetCell(j).ToString() + "&amp;lt;/td&amp;gt;");
                        }
                        sb.AppendLine("&amp;lt;/tr&amp;gt;");
                    }
                    sb.Append("&amp;lt;/table&amp;gt;");
                }
            }
            return this.Content(sb.ToString());
        }&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To access the file, follow the instructions outlined in the code snippet below:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public ActionResult Download()
        {
            string filePath = "wwwroot/UploadExcel/CoreProgramm_ExcelImport.xlsx";
            byte[] fileBytes = System.IO.File.ReadAllBytes(filePath);
            return File(fileBytes, System.Net.Mime.MediaTypeNames.Application.Octet, "employee.xlsx");
        }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To create and export the file, let’s leverage sample employee data and export it in a corresponding manner.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public async Task&amp;lt;IActionResult&amp;gt; Export()
        {
            string sWebRootFolder = _hostingEnvironment.WebRootPath;
            string sFileName = @"Employees.xlsx";            string URL = string.Format("{0}://{1}/{2}", Request.Scheme, Request.Host, sFileName);
            FileInfo file = new FileInfo(Path.Combine(sWebRootFolder, sFileName));
            var memory = new MemoryStream();
            using (var fs = new FileStream(Path.Combine(sWebRootFolder, sFileName), FileMode.Create, FileAccess.Write))
            {
                IWorkbook workbook;
                workbook = new XSSFWorkbook();
                ISheet excelSheet = workbook.CreateSheet("employee");
                IRow row = excelSheet.CreateRow(0);

                row.CreateCell(0).SetCellValue("EmployeeId");
                row.CreateCell(1).SetCellValue("EmployeeName");
                row.CreateCell(2).SetCellValue("Age");
                row.CreateCell(3).SetCellValue("Sex");
                row.CreateCell(4).SetCellValue("Designation");

                row = excelSheet.CreateRow(1);
                row.CreateCell(0).SetCellValue(1);
                row.CreateCell(1).SetCellValue("Jack Supreu");
                row.CreateCell(2).SetCellValue(45);
                row.CreateCell(3).SetCellValue("Male");
                row.CreateCell(4).SetCellValue("Solution Architect");

                row = excelSheet.CreateRow(2);
                row.CreateCell(0).SetCellValue(2);
                row.CreateCell(1).SetCellValue("Steve khan");
                row.CreateCell(2).SetCellValue(33);
                row.CreateCell(3).SetCellValue("Male");
                row.CreateCell(4).SetCellValue("Software Engineer");

                row = excelSheet.CreateRow(3);
                row.CreateCell(0).SetCellValue(3);
                row.CreateCell(1).SetCellValue("Romi gill");
                row.CreateCell(2).SetCellValue(25);
                row.CreateCell(3).SetCellValue("Female");
                row.CreateCell(4).SetCellValue("Junior Consultant");

                row = excelSheet.CreateRow(4);
                row.CreateCell(0).SetCellValue(4);
                row.CreateCell(1).SetCellValue("Hider Ali");
                row.CreateCell(2).SetCellValue(34);
                row.CreateCell(3).SetCellValue("Male");
                row.CreateCell(4).SetCellValue("Accountant");

                row = excelSheet.CreateRow(5);
                row.CreateCell(0).SetCellValue(5);
                row.CreateCell(1).SetCellValue("Mathew");
                row.CreateCell(2).SetCellValue(48);
                row.CreateCell(3).SetCellValue("Male");
                row.CreateCell(4).SetCellValue("Human Resource");

                workbook.Write(fs);
            }
            using (var stream = new FileStream(Path.Combine(sWebRootFolder, sFileName), FileMode.Open))
            {
                await stream.CopyToAsync(memory);
            }
            memory.Position = 0;
            return File(memory, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", sFileName);
        }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let's initiate the application and examine the results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2F1.bp.blogspot.com%2F-t4ABUbaXHp8%2FXiyaclPDoqI%2FAAAAAAAABP0%2FKBB_tyTZuCYGrQVud_K3Or34oendzqmGQCLcBGAsYHQ%2Fs1600%2FCoreProgramm_importexcel_dotnetcore_5.gif" class="article-body-image-wrapper"&gt;&lt;img alt="Application output screenshot" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2F1.bp.blogspot.com%2F-t4ABUbaXHp8%2FXiyaclPDoqI%2FAAAAAAAABP0%2FKBB_tyTZuCYGrQVud_K3Or34oendzqmGQCLcBGAsYHQ%2Fs1600%2FCoreProgramm_importexcel_dotnetcore_5.gif" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Access the Source Code at Github.com/CoreProgramm/.&lt;/p&gt;
&lt;p&gt;&lt;span&gt;&lt;span&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcomputerstechnicians.com%2Fwp-content%2Fuploads%2F2024%2F09%2Fimport-and-export-excel-file-in-asp-net-core-31-ra_img_3.png" class="article-body-image-wrapper"&gt;&lt;img alt="Project file structure" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcomputerstechnicians.com%2Fwp-content%2Fuploads%2F2024%2F09%2Fimport-and-export-excel-file-in-asp-net-core-31-ra_img_3.png" width="432" height="307"&gt;&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Upon uploading the file, it is saved to the specified path. For a more in-depth look, visit the following link:&lt;br&gt;
https://www.coreprogramm.com/2020/01/import-and-export-excel-file-in-dotnet-core.html&lt;/p&gt;















</description>
      <category>net</category>
      <category>active</category>
      <category>server</category>
      <category>pages</category>
    </item>
    <item>
      <title>Unlock 6-Figure Data Science Career in 4 Proven Steps</title>
      <dc:creator>Ava Parker</dc:creator>
      <pubDate>Wed, 09 Oct 2024 23:04:31 +0000</pubDate>
      <link>https://dev.to/parkerava/unlock-6-figure-data-science-career-in-4-proven-steps-538b</link>
      <guid>https://dev.to/parkerava/unlock-6-figure-data-science-career-in-4-proven-steps-538b</guid>
      <description>&lt;p&gt;In today's data-driven era, businesses are hungry for valuable insights from their vast data reserves. Consequently, the demand for data scientists has soared, making them a vital component of corporate strategy.&lt;/p&gt;

&lt;p&gt;This surge in demand has paved the way for an exciting career path in data science, with opportunities to progress from junior to senior levels, including principal data scientist and director, in just a few years.&lt;/p&gt;

&lt;p&gt;If you're new to data science and aspire to reach the top, it's crucial to understand the career progression and the skills required to succeed. According to &lt;a href="https://computerstechnicians.com/it/data/unlock-a-6-figure-data-science-career-from-junior-to-director-in-4-proven-steps/" rel="noopener noreferrer"&gt;computerstechnicians&lt;/a&gt;, a well-planned approach can help you unlock a 6-figure data science career.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Navigating the Data Scientist Career Path&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;To excel as a data scientist, you need a solid foundation in data science, statistics, and engineering. Here's a typical career path to follow:&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Associate/Junior Data Scientist: Level 1.0&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;As a junior data scientist, your role involves testing new ideas, debugging, and refining existing models. You'll also be expected to collaborate with your team, pitch innovative ideas, and take ownership of improving code quality and impact.&lt;/p&gt;

&lt;p&gt;If you're still in college, you can get a head start by developing skills in programming languages like Python, Java, R, and SQL/MySQL, while refreshing your knowledge of applied mathematics and statistics. Early exposure to the field will help you determine if a data science career is right for you.&lt;/p&gt;

&lt;p&gt;Focus on subjects like computer science, information technology, mathematics, statistics, and data science to increase your chances of success. You should possess skills in data science, machine learning, Python, R, research, SQL, data analysis, analytical skills, teamwork, and communication skills.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Data Scientists Mid-Level-I Roles: Level 2.0 &lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;With one to three years of work experience, you can take your career to the next level as a senior data scientist or machine learning and AI engineer, if AI is your area of interest. At this stage, certifications in data science can give you a competitive edge, so it's recommended to earn one or two relevant certifications.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Senior Data Scientist&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;As a senior data scientist, you're expected to develop well-designed products that meet the highest standards. Seasoned professionals in this field avoid rookie mistakes, eliminate logical flaws in models, and refine high-performing systems. They write reusable code, build robust data pipelines in the cloud, and prepare impeccable data. Additionally, they mentor junior colleagues and provide insightful answers to business questions for top management.&lt;/p&gt;

&lt;p&gt;Many seasoned data scientists boast a Master's degree, while others have even earned a Ph.D. and obtained senior data scientist certification, underscoring their expertise.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;AI/Machine Learning Engineer&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Data scientists must tap into the vast potential of Machine Learning and Artificial Intelligence (AI), which are rapidly advancing fields. As machine learning becomes a core mission for organizations, data scientists must develop end-to-end machine learning solutions. This involves designing, creating, evaluating, and deploying models to production, monitoring and logging decisions, and visualizing data to drive business outcomes.&lt;/p&gt;

&lt;p&gt;To excel, you'll need to possess in-depth knowledge and skills in Artificial Intelligence, Deep Learning, Machine Learning, Natural Language Processing, Data Science, Python, C++, SQL, Java, and software engineering. Obtaining Machine Learning or AI engineering certifications, in addition to top-tier data scientist certifications, is highly prized.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Data Scientists Mid-Level-II Roles: Level 3.0 &lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;In this mid-tier level, soft skills take center stage. You should be both tech-savvy and business-savvy, with a profound understanding of business and various data analytics technologies. You'll need to apply methods to validate data, prevent fraud, and manage budgets. It's essential to understand parallelization, scalability, and complexity analysis. You'll shape data products to align with corporate strategy and provide data insights that inform business decisions, driving growth and revenue.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Principal Data Scientist&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;The Principal Data Scientist is the most seasoned member of the data science team, with over 5 years of experience and expertise in data science models. They focus on high-impact business projects and often hold a Ph.D. and principal data scientist certification. The Principal Data Scientist (PDS) leverages their machine learning expertise to provide strategic direction at scale, driving business transformation.&lt;/p&gt;

&lt;p&gt;They're expected to understand challenges across multiple business domains, identify new business opportunities, and demonstrate leadership excellence in data science methodologies. They must also possess scientific and industrial maturity, delivering designs and algorithms that make and quantify cross-organization trade-offs, driving business value.&lt;/p&gt;

&lt;p&gt;In addition to their core responsibilities, Principal Data Scientists mentor junior colleagues, serve as technical advisors to product managers, and are highly valued in any data science project, driving business success.&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;Executive Data Science Positions&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;This high-level role demands proficiency in both database management systems and programming languages. Data Science Leaders/Architects spearhead teams, establish priorities, and present insights to senior management.&lt;/p&gt;

&lt;p&gt;Many possess certifications such as Microsoft Certified Professional, Certified Analytics Professional, or SAS/SQL certified practitioner, depending on their organization’s requirements. A Master’s in Business Administration is often recommended for this role, which involves team leadership and project management.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Senior Data Science Positions: Level 4.0&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;To attain this level, professionals must demonstrate the ability to mentor teams, oversee strategic data analysis, and stay current with the latest technologies. Directing an organization’s entire data science operations is a fulfilling challenge that requires the right blend of skills. The director’s decisions have a profound impact on the organization’s success or failure.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Core Insights&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Pursuing a data scientist career is an exciting, challenging, and rewarding journey. To thrive, acquire in-depth knowledge to become a top associate. Be prepared to deploy models into production to become a senior. Continuously assess and upgrade your skills, and strive to make data work for you and your organization.&lt;/p&gt;

</description>
      <category>career</category>
      <category>data</category>
      <category>science</category>
      <category>machine</category>
    </item>
    <item>
      <title>Unlock 100s of Microservices in 5 Easy Steps with Spring Cloud Config &amp; Kotlin!</title>
      <dc:creator>Ava Parker</dc:creator>
      <pubDate>Mon, 07 Oct 2024 22:17:10 +0000</pubDate>
      <link>https://dev.to/parkerava/unlock-100s-of-microservices-in-5-easy-steps-with-spring-cloud-config-kotlin-209a</link>
      <guid>https://dev.to/parkerava/unlock-100s-of-microservices-in-5-easy-steps-with-spring-cloud-config-kotlin-209a</guid>
      <description>&lt;p&gt;Building a microservice architecture with Java and Spring Boot has gained immense popularity in recent years. However, managing hundreds of services for each profile can be a daunting task. In this article, we will delve into the world of Spring Cloud Config Server using Kotlin, a powerful tool that simplifies this process.&lt;/p&gt;

&lt;p&gt;Spring Boot has revolutionized Spring projects, offering a much-needed boost in terms of efficiency and productivity.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Spring Cloud Config provides a &lt;strong&gt;unified&lt;/strong&gt; and &lt;strong&gt;flexible&lt;/strong&gt; approach to &lt;strong&gt;external configuration&lt;/strong&gt; in a &lt;strong&gt;distributed environment&lt;/strong&gt;. With the &lt;strong&gt;Config Server&lt;/strong&gt;, you have a single point to manage external properties for applications across all environments, making it an essential tool for any microservice setup.&lt;/p&gt; 
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcarsnewstoday.com%2Fwp-content%2Fuploads%2F2024%2F09%2Fconfiguration-as-a-service-spring-cloud-config-usi_img_0.jpeg" class="article-body-image-wrapper"&gt;&lt;img alt="spring cloud config" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcarsnewstoday.com%2Fwp-content%2Fuploads%2F2024%2F09%2Fconfiguration-as-a-service-spring-cloud-config-usi_img_0.jpeg" width="500" height="500"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;As illustrated in the diagram above, managing configuration as a central service in distributed systems can be a complex task. Spring Cloud Config provides a client-server architecture mechanism to simplify this process, making it an ideal solution for managing hundreds of services.&lt;/p&gt;

&lt;p&gt;Let’s visit &lt;a href="https://start.spring.io/" rel="noopener noreferrer"&gt;https://start.spring.io/&lt;/a&gt;, a popular platform for building Spring-based applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcarsnewstoday.com%2Fwp-content%2Fuploads%2F2024%2F09%2Fconfiguration-as-a-service-spring-cloud-config-usi_img_1.png" class="article-body-image-wrapper"&gt;&lt;img alt="spring initializer" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcarsnewstoday.com%2Fwp-content%2Fuploads%2F2024%2F09%2Fconfiguration-as-a-service-spring-cloud-config-usi_img_1.png" width="800" height="435"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Whenever we make changes to any service, we need to restart the services to apply the changes, which can be time-consuming and inefficient.&lt;/p&gt;

&lt;p&gt;Let’s create a Git repository to manage our configuration, a crucial step in building a microservice architecture. This repository will serve as the central hub for our configuration files.&lt;/p&gt;

&lt;p&gt;So, we’ll create a small Spring Boot microservice called “&lt;strong&gt;springbootclient&lt;/strong&gt;” to read the username from the Spring Cloud Config central configuration server system, which is our Git repository. This approach allows us to manage our configuration in a centralized and efficient manner.&lt;/p&gt;

&lt;p&gt;We have created three distinct properties files, one for each environment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;springbootclient.properties&lt;/li&gt;
&lt;li&gt;springbootclient-dev.properties&lt;/li&gt;
&lt;li&gt;springbootclient-prod.properties&lt;/li&gt;
&lt;/ol&gt;
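&lt;p&gt;For illustration, the dev-profile file only needs to hold the properties the client will read. A minimal &lt;em&gt;springbootclient-dev.properties&lt;/em&gt; consistent with the &lt;em&gt;/whoami&lt;/em&gt; example later in this article would be (the value itself is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# springbootclient-dev.properties (illustrative value)
app.adminusername=DevUser&lt;/code&gt;&lt;/pre&gt;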

&lt;p&gt;For a more detailed example, you can visit &lt;a href="https://carsnewstoday.com/programming/software-design/unlock-microservice-architecture-manage-100s-of-services-with-spring-cloud-config-kotlin-in-5-easy-steps/" rel="noopener noreferrer"&gt;carsnewstoday&lt;/a&gt;, which provides a comprehensive guide to building a microservice architecture using Spring Cloud Config and Kotlin.&lt;/p&gt;

&lt;p&gt;https://github.com/maheshwarLigade/cloud-common-config-server&lt;/p&gt;

&lt;p&gt;Here, you can find our Spring Cloud Config properties. You can clone or use this repository directly, which will give you a head start in building your microservice architecture.&lt;/p&gt;

&lt;p&gt;Now that we have created a Spring Config Server application using Spring Starter, let’s download and import the project into your preferred IDE or editor. The Git repository is used to store our configuration, and the Spring Cloud Config Server application serves these properties to the client, making it an essential tool for managing hundreds of services.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In essence, Git operates as a central data storage, whereas Spring Cloud Config Server assumes the role of a server-side application, providing configuration support to multiple microservices.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Having established our Git data storage, we have developed a sample client application, dubbed &lt;em&gt;springbootclient&lt;/em&gt;, within this repository. In our forthcoming microservice article, we will utilize the same Spring Cloud Config as a configuration server.&lt;/p&gt;

&lt;p&gt;Let us delve into the code base for the client application.&lt;/p&gt;

&lt;p&gt;This is a sample &lt;em&gt;application.properties&lt;/em&gt; file:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;server.port=8888
logging.level.org.springframework.cloud.config=DEBUG
spring.cloud.config.server.git.uri=https://github.com/maheshwarLigade/cloud-common-config-server.git
spring.cloud.config.server.git.clone-on-start=true
spring.cloud.config.server.git.searchPaths=springbootclient&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Example Code for &lt;em&gt;SpringCloudConfigServerexApplication.kt&lt;/em&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.cloud.config.server.EnableConfigServer

@SpringBootApplication
@EnableConfigServer
class SpringCloudConfigServerexApplication

fun main(args: Array&amp;lt;String&amp;gt;) {
   runApplication&amp;lt;SpringCloudConfigServerexApplication&amp;gt;(*args)
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, start the Spring Cloud Config server and verify the following URL:&lt;/p&gt;

&lt;p&gt;http://localhost:8888/springbootclient/dev/master&lt;/p&gt;
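&lt;p&gt;The exact payload depends on your repository contents, but the Config Server's environment endpoint generally responds with JSON of roughly this shape (the property value shown here follows the illustrative &lt;em&gt;dev&lt;/em&gt; setup, and the property-source name is abbreviated):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "name": "springbootclient",
  "profiles": ["dev"],
  "label": "master",
  "propertySources": [
    {
      "name": ".../springbootclient/springbootclient-dev.properties",
      "source": {
        "app.adminusername": "DevUser"
      }
    }
  ]
}&lt;/code&gt;&lt;/pre&gt;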

&lt;h2&gt;Configuring a Spring Boot Client Application&lt;/h2&gt;

&lt;p&gt;Let’s develop a compact microservice that fetches configuration from the spring cloud config server and exposes the property value via a REST endpoint.&lt;/p&gt;

&lt;p&gt;Visit https://start.spring.io/ and create a spring boot client microservice using Kotlin.&lt;/p&gt;

&lt;p&gt;Sample &lt;strong&gt;pom.xml&lt;/strong&gt; dependencies:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
   &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
   &amp;lt;artifactId&amp;gt;spring-boot-starter-web&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
   &amp;lt;groupId&amp;gt;com.fasterxml.jackson.module&amp;lt;/groupId&amp;gt;
   &amp;lt;artifactId&amp;gt;jackson-module-kotlin&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
   &amp;lt;groupId&amp;gt;org.jetbrains.kotlin&amp;lt;/groupId&amp;gt;
   &amp;lt;artifactId&amp;gt;kotlin-reflect&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
   &amp;lt;groupId&amp;gt;org.jetbrains.kotlin&amp;lt;/groupId&amp;gt;
   &amp;lt;artifactId&amp;gt;kotlin-stdlib-jdk8&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
   &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
   &amp;lt;artifactId&amp;gt;spring-cloud-starter-config&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now check the &lt;em&gt;SpringCloudClientAppApplication.kt &lt;/em&gt;code&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication

@SpringBootApplication
class SpringCloudClientAppApplication

fun main(args: Array&amp;lt;String&amp;gt;) {
    runApplication&amp;lt;SpringCloudClientAppApplication&amp;gt;(*args)
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now create a sample REST controller to serve REST requests. We want to verify that the &lt;em&gt;/whoami&lt;/em&gt; endpoint returns the user configured for the active profile (dev, prod, etc.).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;UserController.kt&lt;/em&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import org.springframework.beans.factory.annotation.Value
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController


@RestController
class UserController {

    // Injected from the config server; "Test" is only a local fallback value
    @Value("\${app.adminusername}")
    var username = "Test"

    // Serve GET requests
    @GetMapping("/whoami")
    fun whoami() = "I am a " + username
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create a &lt;em&gt;bootstrap.properties&lt;/em&gt; file to configure the Spring Cloud Config server settings, specifying the Git branch and active profile, such as development, local, production, and more.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;spring.application.name=springbootclient
spring.profiles.active=dev
spring.cloud.config.uri=http://localhost:8888
spring.cloud.config.fail-fast=true
spring.cloud.config.label=master&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Each property is intuitively named, clearly indicating its purpose and function.&lt;/p&gt;

&lt;p&gt;When you access the URL http://localhost:9080/whoami with the &lt;em&gt;dev&lt;/em&gt; profile active:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Response: &lt;/strong&gt;I am a DevUser&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Github repository link:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Config Server repository: &lt;/strong&gt;https://github.com/maheshwarLigade/cloud-common-config-server&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code repository:&lt;/strong&gt; https://github.com/maheshwarLigade/spring-cloud-config-kotlin-ex&lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>programming</category>
      <category>language</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Unlock 100% Persistent Storage for Docker Containers in 5 Steps</title>
      <dc:creator>Ava Parker</dc:creator>
      <pubDate>Mon, 07 Oct 2024 20:26:59 +0000</pubDate>
      <link>https://dev.to/parkerava/unlock-100-persistent-storage-for-docker-containers-in-5-steps-3h0k</link>
      <guid>https://dev.to/parkerava/unlock-100-persistent-storage-for-docker-containers-in-5-steps-3h0k</guid>
      <description>&lt;p&gt;In an ideal scenario, Docker containers should be ephemeral and independent of external storage. This is achievable in microservice architecture when services connect to external databases, queues, and other services. However, certain services like Jenkins, Prometheus, or Postgres require persistent storage, making it essential to set up a reliable storage solution.&lt;/p&gt;

&lt;p&gt;Fortunately, attaching EBS volumes to ECS Tasks using Docker volume drivers is now a straightforward process. This article will guide you through the step-by-step process of setting up persistent storage for your Docker containers, ensuring that your ECS Task automatically detaches and reattaches when it restarts. For more information on Docker container storage, visit &lt;a href="https://computerstechnicians.com/it/architecture/unlock-persistent-storage-for-your-docker-containers-a-step-by-step-guide-to-attaching-aws-ebs-volumes-to-ecs-tasks/" rel="noopener noreferrer"&gt;https://computerstechnicians.com&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Understanding Persistent Storage in ECS&lt;/h2&gt;

&lt;p&gt;By default, ECS Tasks have a temporary storage area on the host instance, known as the ECS Container Instance, which is essentially an EC2 instance. While this is suitable for temporary data, it's not ideal for persistent storage. To overcome this limitation, we need to connect to external storage solutions like AWS EBS or AWS EFS.&lt;/p&gt;

&lt;p&gt;Docker volume plugins, such as REX-Ray, enable us to achieve persistent storage. The REX-Ray plugin can configure AWS services, including creating volumes and attaching them to EC2 instances.&lt;/p&gt;

&lt;p&gt;As shown in the diagram below, when an ECS Task runs on an EC2 Instance, the volume (e.g., EBS) needs to be attached to that instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcomputerstechnicians.com%2Fwp-content%2Fuploads%2F2024%2F10%2Fhow-to-attach-an-aws-ebs-storage-volume-to-your-do_img_0.png" class="article-body-image-wrapper"&gt;&lt;img alt="virtual private cloud" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcomputerstechnicians.com%2Fwp-content%2Fuploads%2F2024%2F10%2Fhow-to-attach-an-aws-ebs-storage-volume-to-your-do_img_0.png" width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;REX-Ray simplifies the process by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;creating the volume if it doesn’t already exist, including configuring volume type and size&lt;/li&gt;
&lt;li&gt;ensuring our Docker container/ECS Task is mounted with the volume&lt;/li&gt;
&lt;li&gt;detaching and re-attaching the volume when the ECS Task moves from one EC2 instance to another&lt;/li&gt;
&lt;/ul&gt;
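&lt;p&gt;With the REX-Ray plugin installed on the container instances, the volume is declared in the ECS task definition rather than created by hand. As a sketch (the volume name, size, and volume type here are illustrative, not prescriptive), the task definition's &lt;em&gt;volumes&lt;/em&gt; section would look roughly like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;"volumes": [
  {
    "name": "service-data",
    "dockerVolumeConfiguration": {
      "driver": "rexray/ebs",
      "scope": "shared",
      "autoprovision": true,
      "driverOpts": {
        "volumetype": "gp2",
        "size": "20"
      }
    }
  }
]&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The container then references this volume through a standard &lt;em&gt;mountPoints&lt;/em&gt; entry, and REX-Ray handles the attach/detach lifecycle described above.&lt;/p&gt;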

&lt;h4&gt;ECS Launch Types&lt;/h4&gt;

&lt;p&gt;ECS offers two launch types: &lt;em&gt;EC2&lt;/em&gt; and &lt;em&gt;Fargate&lt;/em&gt;. With EC2, you are responsible for provisioning the underlying EC2 instances on which your ECS Tasks will be deployed. With Fargate, you only need to specify the CPU and memory requirements, and AWS provisions everything needed to run your ECS Task.&lt;/p&gt;

&lt;p&gt;It's essential to note that persistent storage is only compatible with the EC2 launch type, not with Fargate. Therefore, in this article, we will focus solely on the EC2 launch type.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>software</category>
      <category>entity</category>
    </item>
    <item>
      <title>Fix Nulls in 2 Mins: Cleaner JSON Data with Jolt Code</title>
      <dc:creator>Ava Parker</dc:creator>
      <pubDate>Sun, 06 Oct 2024 23:37:15 +0000</pubDate>
      <link>https://dev.to/parkerava/fix-nulls-in-2-mins-cleaner-json-data-with-jolt-code-e50</link>
      <guid>https://dev.to/parkerava/fix-nulls-in-2-mins-cleaner-json-data-with-jolt-code-e50</guid>
      <description>&lt;h2&gt;Simplifying Big Data Processing: A 2-Minute Jolt Code Fix for Cleaner JSON Data&lt;/h2&gt;

&lt;p&gt;When working with big data streams, null values can hinder efficient and accurate data processing. In this article, we'll explore the benefits of using Jolt code to eliminate these null values, ensuring a smoother data processing experience.&lt;/p&gt;

&lt;p&gt;Jolt is well suited to big data streams, and it's surprisingly easy to use. For instance, the following spec replaces null values with sensible defaults (an empty string, &lt;em&gt;false&lt;/em&gt;, or 0) and then passes every key through unchanged:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[
  {
    "operation": "default",
    "spec": {
      "address": "",
      "somesensordata[]": {
        "*": {
          "sensor1": false
        }
      },
      "startTime": "",
      "onStartTime": "",
      "markId": "",
      "markName": "",
      "stoppedTime": "",
      "startTime2": "",
      "powerSetting": "false",
      "speed": 0,
      "id": 0,
      "city": "",
      "state": ""
    }
  },
  {
    "operation": "shift",
    "spec": {
      "*": "&amp;amp;"
    }
  }
]&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To put this concept into perspective, let's consider an example of source data:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "address" : "2000 Electric Avenue",
  "somesensordata" : [ {
    "sensor1" : null
  } ],
  "city" : "hightstown",
  "deviceId" : 5454545,
  "dateTime" : "2017-08-07 14:56:09",
  "id" : 6831491,
  "idle" : false,
  "startTime" : null,
  "onStartTime" : null,
  "markId" : null,
  "markName" : null,
  "zipCode" : "08520"
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;By applying Jolt code to remove null values, you can significantly improve the efficiency and accuracy of your big data processing. For more insights on streamlining big data, check out &lt;a href="https://t8tech.com/it/data/remove-nulls-in-big-data-streams-a-2-minute-jolt-code-fix-for-cleaner-json-data/" rel="noopener noreferrer"&gt;t8tech&lt;/a&gt;.&lt;/p&gt;
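&lt;p&gt;Jolt expresses this cleanup declaratively. To make the intended result concrete, here is a simplified sketch (plain Python rather than Jolt, and limited to top-level keys) of what the "default" operation is being used for here:&lt;/p&gt;

```python
def apply_defaults(doc, defaults):
    """Return a copy of `doc` in which null (None) values, as well as
    keys missing entirely, are replaced by the value from `defaults`."""
    out = dict(doc)
    for key, default in defaults.items():
        if out.get(key) is None:
            out[key] = default
    return out

# Mirrors a few entries of the spec and source document above
source = {"address": "2000 Electric Avenue", "startTime": None, "markId": None}
defaults = {"startTime": "", "markId": "", "speed": 0}
cleaned = apply_defaults(source, defaults)
# cleaned == {"address": "2000 Electric Avenue", "startTime": "", "markId": "", "speed": 0}
```

&lt;p&gt;The real spec then applies a "shift" pass (&lt;em&gt;"*": "&amp;amp;"&lt;/em&gt;) that copies every key through to the output unchanged.&lt;/p&gt;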

</description>
      <category>big</category>
      <category>data</category>
      <category>stream</category>
      <category>computing</category>
    </item>
    <item>
      <title>Unlock Mule Flows: Send SOAP &amp; Transport Headers Like a Pro</title>
      <dc:creator>Ava Parker</dc:creator>
      <pubDate>Sun, 06 Oct 2024 23:19:28 +0000</pubDate>
      <link>https://dev.to/parkerava/unlock-mule-flows-send-soap-transport-headers-like-a-pro-2334</link>
      <guid>https://dev.to/parkerava/unlock-mule-flows-send-soap-transport-headers-like-a-pro-2334</guid>
      <description>&lt;p&gt;Unlock the Power of Mule Flows: Learn How to Send SOAP and Transport Headers Like a Pro&lt;/p&gt;

&lt;p&gt;As a seasoned MuleSoft expert, I recently led API implementations for a prominent client using MuleSoft's CloudHub. One common requirement that emerged was the need to integrate an external web service into Mule flows. The MuleSoft HTTP Request Connector and Web Service Consumer are the go-to connectors for developers seeking to tap into external web services from Mule flows. When making HTTP requests to an external web service, these connectors require configuration with all requisite parameters, including endpoint URL, HTTP method/operations, headers, and authentications.&lt;/p&gt;

&lt;p&gt;In this article, I will explore the process of sending headers (SOAP headers and transport headers) when invoking external web services from Mule flows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Sending SOAP Headers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When consuming external SOAP services from a Mule flow, we can transmit SOAP headers to the external web service by leveraging the MuleSoft property transformer. To do this, we need to establish outbound properties with the prefix "soap." using the MuleSoft property transformer. Outbound properties that commence with a "soap." prefix will be treated as SOAP headers and disregarded by the transport. Conversely, all properties that aren't named with a "soap." prefix will be treated as HTTP transport headers (by default, the WSC employs the HTTP transport).&lt;/p&gt;

&lt;p&gt;For example, we can use the MuleSoft property transformer to create a SOAP header named "cccmoiheaders." The value of these headers can be static or dynamic (by reading from flowVars or property placeholders).&lt;/p&gt;

&lt;p&gt;To learn more about sending SOAP and transport headers in Mule flows, check out this in-depth guide: &lt;a href="https://t8tech.com/it/architecture/unlock-the-power-of-mule-flows-learn-how-to-send-soap-and-transport-headers-like-a-pro/" rel="noopener noreferrer"&gt;https://t8tech.com/it/architecture/unlock-the-power-of-mule-flows-learn-how-to-send-soap-and-transport-headers-like-a-pro/&lt;/a&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;set-property propertyName="soap.cccmoiheaders" value="&amp;amp;lt;wsse:Security&lt;br&gt;
xmlns:soapenv=&amp;amp;quot;&lt;a href="http://schemas.xmlsoap.org/soap/envelope/&amp;amp;quot" rel="noopener noreferrer"&gt;http://schemas.xmlsoap.org/soap/envelope/&amp;amp;amp;quot&lt;/a&gt;;&lt;br&gt;
xmlns:wsu=&amp;amp;quot;&lt;a href="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;amp;quot" rel="noopener noreferrer"&gt;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;amp;amp;quot&lt;/a&gt;;&lt;br&gt;
xmlns:wsse=&amp;amp;quot;&lt;a href="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;amp;quot" rel="noopener noreferrer"&gt;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;amp;amp;quot&lt;/a&gt;; soapenv:mustUnderstand=&amp;amp;quot;1&amp;amp;quot;&amp;amp;gt;&amp;amp;lt;&lt;br&gt;
    wsse:UsernameToken wsu:Id=&amp;amp;quot;UsernameToken-03184DA938DBE5406314344062579892&amp;amp;quot;&amp;amp;gt;&amp;amp;lt;wsse:Username&amp;amp;gt;${BW.UserName}&amp;amp;lt;/wsse:Username&amp;amp;gt;&amp;amp;lt;&lt;br&gt;
    wsse:Password Type=&amp;amp;quot;&lt;a href="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText&amp;amp;quot;&amp;amp;gt;$%7BBW.Password%7D&amp;amp;lt;/wsse:Password&amp;amp;gt;&amp;amp;lt" rel="noopener noreferrer"&gt;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText&amp;amp;amp;quot;&amp;amp;amp;gt;${BW.Password}&amp;amp;amp;lt;/wsse:Password&amp;amp;amp;gt;&amp;amp;amp;lt&lt;/a&gt;;&lt;br&gt;
    /wsse:UsernameToken&amp;amp;gt;&amp;amp;lt;/wsse:Security&amp;amp;gt;" &lt;br&gt;
doc:name="BW_Headers"/&amp;gt;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The output of this code snippet will generate a SOAP header named “&lt;strong&gt;cccmoiheaders&lt;/strong&gt;” with the value below.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;wsse:Security
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
    xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" soapenv:mustUnderstand="1"&amp;gt;
    &amp;lt;wsse:UsernameToken wsu:Id="UsernameToken-03184DA938DBE5406314344062579892"&amp;gt;
        &amp;lt;wsse:Username&amp;gt;abcddddd&amp;lt;/wsse:Username&amp;gt;
        &amp;lt;wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText"&amp;gt;absbbsbs&amp;lt;/wsse:Password&amp;gt;
    &amp;lt;/wsse:UsernameToken&amp;gt;
&amp;lt;/wsse:Security&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;2. Customizing Transport Headers&lt;/strong&gt;: When integrating Mule flows with external web services (REST via HTTP Connector or SOAP via Web Service Consumer), various options are available for sending transport headers. Let's delve into the process of transmitting transport-level headers from Mule flows when utilizing these connectors to invoke external web services.&lt;/p&gt;
&lt;p&gt; &lt;strong&gt;2.1 Utilizing the Web Service Consumer:&lt;/strong&gt; To transmit transport-level headers while consuming an external SOAP service using the WS Consumer, MuleSoft's property transformer is the optimal solution. Here, we need to establish outbound properties (excluding the “&lt;strong&gt;soap.&lt;/strong&gt;” prefix) using the MuleSoft property transformer prior to the WS Consumer. These outbound properties will be treated as transport headers (by default, the WSC employs the HTTP transport).&lt;/p&gt;
&lt;p&gt;The following code snippet will create a transport header named &lt;strong&gt;“Content-Type”&lt;/strong&gt; with the value &lt;strong&gt;“application/json”&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;     &amp;lt;set-property propertyName="Content-Type" value="application/json" doc:name="Property"/&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;2.2 Harnessing the Power of the HTTP Request Connector:&lt;/strong&gt; When integrating external web services via the HTTP Request Connector, you can transmit transport-level headers by leveraging the MuleSoft property transformer (as outlined in section 2.1) or by explicitly defining the transport headers within the HTTP Request Connector's configuration settings. Outbound properties present in the Mule message that reaches the HTTP Request Connector are automatically appended as HTTP request headers. The screenshot below illustrates the header configuration in the HTTP Requester Configuration, where the header values are static, but can be dynamically retrieved from flowVars or a property placeholder.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ft8tech.com%2Fwp-content%2Fuploads%2F2024%2F09%2Fworking-with-headers-in-mule-flows_img_1.png" class="article-body-image-wrapper"&gt;&lt;img alt="Configuring HTTP Request Headers" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ft8tech.com%2Fwp-content%2Fuploads%2F2024%2F09%2Fworking-with-headers-in-mule-flows_img_1.png" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The approaches outlined above play a crucial role in sending headers when invoking external web services from a Mule flow. Furthermore, in Mule flows, you can replicate all existing properties from the inbound scope to the outbound scope of the message using the “&lt;strong&gt;copy-properties&lt;/strong&gt;” feature. Please refer to the code snippet below for configuring copy-properties in Mule.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;copy-properties propertyName="http.*" doc:name="Copy All HTTP Headers"/&amp;gt;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let’s share our expertise to enrich our MuleSoft community.&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;



</description>
      <category>connector</category>
      <category>mathematics</category>
      <category>flow</category>
      <category>web</category>
    </item>
    <item>
      <title>Boost Your iOS App's Speed: 7 Essential Performance Testing Tips</title>
      <dc:creator>Ava Parker</dc:creator>
      <pubDate>Sun, 06 Oct 2024 23:08:17 +0000</pubDate>
      <link>https://dev.to/parkerava/boost-your-ios-apps-speed-7-essential-performance-testing-tips-52l6</link>
      <guid>https://dev.to/parkerava/boost-your-ios-apps-speed-7-essential-performance-testing-tips-52l6</guid>
      <description>&lt;p&gt;In the cutthroat world of iOS app development, speed is the name of the game. With users having no patience for slow and unresponsive apps, it's crucial to evaluate an app's performance before its release. A single misstep can lead to a poor user experience, ultimately affecting an app's success.&lt;/p&gt;

&lt;p&gt;At Apple's 2014 Worldwide Developers Conference (WWDC), the tech giant introduced performance testing support for the XCTest framework. This framework allows developers to measure code performance within unit or UI tests. In this article, we'll explore the process of conducting automated performance testing using the XCTest framework, a game-changer in the world of iOS development.&lt;/p&gt;

&lt;h2&gt;The Evolution of Performance Testing in iOS&lt;/h2&gt;

&lt;p&gt;Apple provides a range of developer tools to assess the performance of iOS apps, including Instruments, Leaks, Profiler, and Allocations. These tools help detect performance issues, memory leaks, and other bottlenecks. However, traditional manual monitoring and execution of these checks are no longer viable in modern iOS development. With a single code change capable of rendering an app unresponsive, unstable, and unusable, an automated approach is necessary to regularly evaluate an app's performance. Fortunately, there are ways to achieve automated performance testing of iOS apps, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scheduling periodic Instruments checks.&lt;/li&gt;
&lt;li&gt;Implementing automated performance tests using the XCTest framework, a more efficient approach.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, we'll delve into automated performance tests using the XCTest framework, as regular Instruments checks are not a suitable solution due to the repetitive nature of the task. For more information on ongoing quality assurance of iOS applications, visit &lt;a href="https://carsnewstoday.com/programming/testing/ongoing-quality-assurance-of-ios-applications-via-xctest-driven-performance-evaluation/" rel="noopener noreferrer"&gt;https://carsnewstoday.com&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Streamlining Performance Testing with XCTest&lt;/h2&gt;

&lt;p&gt;The XCTest framework features measure blocks that can be applied to any code within a test method to evaluate its performance. Here's how it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The measure block executes the code multiple times, depending on the set number of invocations – the default being ten.&lt;/li&gt;
&lt;li&gt;Upon completion, it calculates the average value, which can be used as a baseline for future test executions. This baseline value can also be set manually.&lt;/li&gt;
&lt;li&gt;The performance test will fail if the average time exceeds the baseline.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Applying Performance Tests at the UI Level&lt;/h2&gt;

&lt;p&gt;Identifying suitable candidates for performance testing is a team-based decision. While XCTest documentation suggests that performance tests can be applied to any XCTest, regardless of the test level, we'll focus on XCUItests for this demo. This approach enables communication with all application endpoints, providing an opportunity to analyze which service or API responds faster or slower. Applying performance tests to UI tests has its advantages and disadvantages. While XCUItests cover diverse areas, the performance test may become exceedingly slow. In contrast, applying performance tests at the unit level yields quick results.&lt;/p&gt;
&lt;h2&gt;Assessing App Efficiency with Customizable Measurement Tools&lt;/h2&gt;
&lt;p&gt;To demonstrate this concept in a real-world scenario, let's develop a demo application, XCUItest-Performance, featuring a UI test target. Begin by adding a button to the main storyboard, assigning it the accessibility identifier "press" without linking it to any IBAction. At this point, we will have a functional demo app with a button. In the XCUItest template, create a test specifically designed for performance assessment, incorporating a measure block. Within this block, we can initiate the app launch sequence and simulate a button press. A typical test would resemble the following:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;func testAppPerformance() {
    self.measure {
        XCUIApplication().launch()
        XCUIApplication().buttons["press"].tap()
    }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We can execute the test from Xcode and observe that it will iterate the code within the block ten times. During the initial test run, it will fail, prompting the developer to establish an appropriate benchmark value for future test executions. The test will subsequently pass or fail based on the comparison of the average value and the benchmark value.&lt;/p&gt;

&lt;p&gt;Observe this in the GIF below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdz2cdn1.dzone.com%2Fstorage%2Ftemp%2F14018258-preformace_baseline-1.gif" class="article-body-image-wrapper"&gt;&lt;img alt="" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdz2cdn1.dzone.com%2Fstorage%2Ftemp%2F14018258-preformace_baseline-1.gif" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have established the benchmark value for our future performance tests, these tests will run in tandem with our normal unit tests, failing if the average value exceeds the benchmark. This enables us to automatically test the performance of our code alongside unit test execution.&lt;/p&gt;

&lt;h2&gt;Refined Performance Tests with Measure Metrics&lt;/h2&gt;

&lt;p&gt;In the preceding test, we measured the average time from app launch until the user presses the button. However, we can go deeper and specify the exact points at which measurement starts and stops. This lets us apply performance testing to specific portions of the test code via the startMeasuring/stopMeasuring calls available through the measureMetrics method.&lt;/p&gt;

&lt;p&gt;Let's apply performance measurement solely to the button press. We launch the app normally and then use measureMetrics to start and stop measurement manually. We can add the test as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;func testPerformanceMeasureMetrics() {
   XCUIApplication().launch()
   self.measureMetrics([XCTPerformanceMetric.wallClockTime], automaticallyStartMeasuring: false) {
      startMeasuring()
      XCUIApplication().buttons["press"].tap()
      stopMeasuring()
   }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Upon executing this test, we can observe that the application launches only once, while the measure block still repeats its body the usual number of times, timing only the button press in each iteration.&lt;/p&gt;

&lt;p&gt;The measureMetrics method offers a convenient way to optionally set start and end points for performance measurement during test execution, providing more targeted insights.&lt;/p&gt;

&lt;h2&gt;Performance Test Result Analysis&lt;/h2&gt;

&lt;p&gt;Xcode presents the results of performance tests visually. The results dialog reports the average, baseline, and max standard deviation values; in this example, it indicates that the performance test is 0.779% worse than the previous execution. These results can also be read from the console output when performance tests run on a continuous integration server.&lt;/p&gt;

&lt;h2&gt;Pros and Cons of XCTest as a Performance Test Tool&lt;/h2&gt;

&lt;p&gt;XCTest, Apple's unit- and UI-level functional testing tool, can also be used for performance testing of iOS modules. However, there are certain advantages and disadvantages to using XCTest as a performance testing tool.&lt;/p&gt;

&lt;h3&gt;Advantages&lt;/h3&gt;

&lt;p&gt;Developers can benefit from using XCTest for performance testing in several ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As a core Apple framework, XCTest eliminates the need to add third-party dependencies for performance testing, thereby streamlining the process.&lt;/li&gt;
&lt;li&gt;Unit tests written using XCTest can be easily extended to performance tests by adding measure blocks around code snippets, eliminating the need to learn additional performance test tools.&lt;/li&gt;
&lt;li&gt;Performance tests can be written using Swift, eliminating the need to learn additional languages.&lt;/li&gt;
&lt;li&gt;XCTest plugs easily into any continuous integration server and can be executed with xcodebuild or Fastlane's scan.&lt;/li&gt;
&lt;li&gt;Baselines are stored per device configuration, so results are always compared against a valid data set.&lt;/li&gt;
&lt;li&gt;XCTest provides the ability to utilize the Xcode IDE for performance testing, making it a convenient option.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Disadvantages&lt;/h3&gt;

&lt;p&gt;Despite its benefits, using XCTest for performance testing also has some drawbacks:&lt;/p&gt;


&lt;ul&gt;

&lt;li&gt;The performance test results generated by XCTest are not easily interpretable by non-technical individuals, and cannot be exported to a readable format.&lt;/li&gt;

&lt;li&gt;The performance test results cannot be shared or hosted easily, unlike other performance test tools.&lt;/li&gt;

&lt;li&gt;As the number of performance tests increases, the test suite becomes slower, necessitating separate execution from the development workflow.&lt;/li&gt;

&lt;li&gt;XCTest is incapable of simulating a large number of users to check the performance of an iOS app.&lt;/li&gt;

&lt;li&gt;XCTest cannot alter hardware device settings like CPU or memory to cover complex behaviors for performance tests.&lt;/li&gt;

&lt;/ul&gt;
&lt;h2&gt;Mastering Performance Testing: Expert Techniques for Optimal Results&lt;/h2&gt;

&lt;p&gt;In Xcode, performance tests are seamlessly integrated with unit and UI tests via the XCTest framework. Consequently, developers must ensure that their tests are properly configured and functioning as intended. When setting up performance tests, several crucial factors come into play:&lt;/p&gt;

&lt;h3&gt;Establish Clear Testing Objectives&lt;/h3&gt;

&lt;p&gt;Before initiating performance testing, it's vital to define what and how specific code will be evaluated for performance. Identifying the testing level – whether unit, integration, or UI – is also essential. Fortunately, the XCTest framework allows for testing at any level.&lt;/p&gt;

&lt;h3&gt;Prevent Interference Between Test Iterations&lt;/h3&gt;

&lt;p&gt;Performance tests involve executing multiple iterations of the same code block. Ensuring that each iteration is self-contained and independent, with minimal deviation, is critical. Xcode will flag any significant deviations as errors.&lt;/p&gt;
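&lt;p&gt;One way to keep iterations independent is to rebuild the input before the clock starts in each iteration, using manual start/stop measurement. A minimal sketch with invented names, not from the demo app:&lt;/p&gt;

```swift
import XCTest

// Illustrative helper: the work whose timing we want to isolate.
func fillBuffer(count: Int) -> [Int] {
    var buffer: [Int] = []
    buffer.reserveCapacity(count)
    for i in 0..<count { buffer.append(i) }
    return buffer
}

final class IndependentIterationTests: XCTestCase {
    func testFillPerformance() {
        // Manual start/stop keeps any per-iteration setup out of the
        // timing, so each repeated run starts from the same clean state.
        measureMetrics([XCTPerformanceMetric.wallClockTime],
                       automaticallyStartMeasuring: false) {
            startMeasuring()
            _ = fillBuffer(count: 10_000)
            stopMeasuring()
        }
    }
}
```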

&lt;h3&gt;Optimize Test Data for Reliable Results&lt;/h3&gt;

&lt;p&gt;Predefined test data is more likely to yield consistent results compared to random data when running performance tests. Random data can produce erratic results, which are of limited value.&lt;/p&gt;
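&lt;p&gt;Swift's standard library has no seedable random generator, so one way to get stable "random" inputs is a small hand-rolled one. The SplitMix64 mixer below is a common choice; everything here is illustrative rather than part of XCTest:&lt;/p&gt;

```swift
// Seeded generator (SplitMix64) so every performance run sees the
// same pseudo-random input data.
struct SeededGenerator: RandomNumberGenerator {
    private var state: UInt64
    init(seed: UInt64) { self.state = seed }
    mutating func next() -> UInt64 {
        state &+= 0x9E3779B97F4A7C15
        var z = state
        z = (z ^ (z >> 30)) &* 0xBF58476D1CE4E5B9
        z = (z ^ (z >> 27)) &* 0x94D049BB133111EB
        return z ^ (z >> 31)
    }
}

// Same seed, same data set: results stay comparable across runs.
func makeTestData(seed: UInt64, count: Int) -> [Int] {
    var rng = SeededGenerator(seed: seed)
    return (0..<count).map { _ in Int.random(in: 0..<1_000, using: &rng) }
}
```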

&lt;h3&gt;Execute Performance Tests in a Dedicated Scheme&lt;/h3&gt;

&lt;p&gt;As the number of performance tests grows, test suite execution time increases substantially. Configuring a separate scheme to run only performance tests, without impacting unit and UI tests, is a practical solution.&lt;/p&gt;

&lt;h3&gt;Integrate Performance Tests into Your CI Workflow&lt;/h3&gt;

&lt;p&gt;With a dedicated scheme in place, integrating performance tests into your continuous integration server is relatively straightforward. If you're using the Xcode server, the process is even simpler. Create a separate CI job to run performance tests regularly, such as daily or overnight.&lt;/p&gt;
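&lt;p&gt;Such a CI job could invoke just the dedicated scheme with xcodebuild; the project, scheme, and destination names below are placeholders for your own setup:&lt;/p&gt;

```shell
# Run only the performance-test scheme, e.g. as a nightly CI job.
# Scheme and destination names are placeholders.
xcodebuild test \
  -project XCUItest-Performance.xcodeproj \
  -scheme "XCUItest-Performance-Perf" \
  -destination 'platform=iOS Simulator,name=iPhone 15'
```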

&lt;h2&gt;Access the Demo Source Code&lt;/h2&gt;

&lt;p&gt;The source code for the performance test discussed in this article is available on GitHub. Clone the repository and try running performance tests yourself.&lt;/p&gt;

&lt;p&gt;GitHub repo: xctest-performance&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Apple's extension of the XCTest framework for performance testing enables developers to identify performance bottlenecks early in the development cycle. With the latest Xcode, getting started with performance testing is easier than ever. We encourage you to try this approach and integrate performance tests into your iOS apps soon!&lt;/p&gt;

</description>
      <category>app</category>
      <category>unit</category>
      <category>test</category>
    </item>
  </channel>
</rss>
