<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: NIX United</title>
    <description>The latest articles on DEV Community by NIX United (@nix_united).</description>
    <link>https://dev.to/nix_united</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F374566%2Ffe04bba0-16c1-46d9-b344-bf83757ae450.jpg</url>
      <title>DEV Community: NIX United</title>
      <link>https://dev.to/nix_united</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nix_united"/>
    <language>en</language>
    <item>
      <title>Pagination of an Infinite List of Records in Salesforce.</title>
      <dc:creator>NIX United</dc:creator>
      <pubDate>Thu, 02 Nov 2023 10:56:15 +0000</pubDate>
      <link>https://dev.to/nix_united/pagination-of-an-infinite-list-of-records-in-salesforce-2dmo</link>
      <guid>https://dev.to/nix_united/pagination-of-an-infinite-list-of-records-in-salesforce-2dmo</guid>
      <description>&lt;p&gt;Ievgen Kyselov, Salesforce developer at &lt;a href="https://nix-united.com/?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=pagination"&gt;NIX&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This pagination approach is not new or uncommon, but it's rarely discussed in detail. What I am showcasing differs from the methods you might find through a simple Google search.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Pagination?
&lt;/h2&gt;

&lt;p&gt;Put simply, it's page-by-page navigation. It's a way to display a large amount of homogeneous information by dividing the content into pages. Many Salesforce developers, myself included, often encounter pagination when displaying a significant volume of data on the user interface. On one of my projects, we were presenting phone numbers in a data table. However, in certain cases, the data wouldn't display, as the information retrieval took too long. Users were unable to access any data. So, why did this happen?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;1. Contacts were selected through several nested database queries.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We were dealing with multiple levels of parent-child relationships between Contacts and child objects. Due to business logic requirements, we needed to filter contacts based on criteria applied both to the contacts themselves and to their child objects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoo0cuwmnm9gs7x8duik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoo0cuwmnm9gs7x8duik.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example of Parent-Child Relationship&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;2. A vast number of records in Contacts (several hundred thousand) and their child objects.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To explain why I chose this pagination approach, I'll list and compare four other methods offered by the Salesforce platform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pagination using the List Controller for Visualforce pages. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pagination using the Database.getQueryLocator class.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pagination employing SOQL query and the OFFSET operator. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pagination via Apex code that retrieves all parent records into a single list using a SOQL query; the records needed for each page are then selected from this list.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first three tools didn't suit me for the following reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The List Controller for Visualforce pages is not applicable for LWC components and has a limited number of records it can process – 10,000 records.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;getQueryLocator also has a limit of 10,000 records and isn't compatible with the task's requirements. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The OFFSET clause in a SOQL query is capped at 2,000 rows, so it can't be used for pagination over a large amount of data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
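&lt;p&gt;As a hedged illustration of that OFFSET ceiling (a minimal JavaScript sketch, not part of the original project; the 50-record page size is an assumption carried over from later in the article):&lt;/p&gt;

```javascript
// Illustrative sketch: how far OFFSET-based pagination can reach in SOQL.
// The OFFSET clause is capped at 2,000 rows, so the offset for page n,
// (n - 1) * pageSize, must not exceed 2,000.
const OFFSET_CAP = 2000;

function maxReachablePage(pageSize) {
  // Largest n such that (n - 1) * pageSize <= OFFSET_CAP
  return Math.floor(OFFSET_CAP / pageSize) + 1;
}

function offsetQuery(pageSize, page) {
  const offset = (page - 1) * pageSize;
  if (offset > OFFSET_CAP) {
    throw new Error('OFFSET ' + offset + ' exceeds the 2,000-row SOQL cap');
  }
  return 'SELECT Id FROM Contact ORDER BY Id LIMIT ' + pageSize + ' OFFSET ' + offset;
}
```

&lt;p&gt;With a page size of 50, only the first 41 pages (2,050 records) are reachable, which is why OFFSET alone cannot paginate hundreds of thousands of contacts.&lt;/p&gt;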

&lt;p&gt;Let's take a closer look at the fourth pagination method, highlighting an important detail upfront: we can't fetch all the data, because there's a limit of 50,000 records that we can retrieve across all queries within a single transaction.&lt;/p&gt;

&lt;p&gt;What does this lead to? If we query child records, based on which we subsequently query contacts, we might obtain, for example, 47,000 child records in total. That leaves room to retrieve only 3,000 contacts, even if there are actually more, say 6,000. Essentially, we're knowingly providing the user with incomplete data. They won't know how many records they could actually get in the data table. They won't see part of the Contacts and won't interact with them, assuming the data they see is all there is. I call this the "User's Data Iceberg."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkzwd6y50alkz9oghjur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkzwd6y50alkz9oghjur.png" alt="Image description" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;User Data Iceberg&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this way, the user receives distorted and incomplete information, which negatively impacts their work with the data.&lt;/p&gt;

&lt;p&gt;The second point is that aggregating data into lists from nested database queries takes a lot of time. This leads to exceeding the CPU Time limit. As a result, the user doesn't receive any data at all. We can't reduce the processing time for nested queries or overcome the limits on records in a single transaction. We are constrained by the limitations imposed by the Salesforce database (I will explain how these limits can be circumvented later). Therefore, I decided to reduce the number of records in the query for the contacts themselves to at least shorten the time for data retrieval and processing. So, if I previously wrote a query for contacts:&lt;/p&gt;

&lt;p&gt;SELECT Id FROM Contact WHERE Id IN :ids ORDER BY Next_Contact_Date_Time__c &lt;strong&gt;LIMIT 50000&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;where &lt;strong&gt;ids&lt;/strong&gt; is a list including the Ids of child records from the nested query, now I need to write:&lt;/p&gt;

&lt;p&gt;SELECT Id FROM Contact WHERE Id IN :ids ORDER BY Next_Contact_Date_Time__c &lt;strong&gt;LIMIT 50&lt;/strong&gt;, limiting the number of contacts per page to 50.&lt;/p&gt;

&lt;p&gt;This allowed me to reduce the overall time the code works on retrieving the necessary records for the user. It also allows me to fetch either all Contact records or a significantly larger portion than in the first query.&lt;/p&gt;

&lt;p&gt;However, this is only the first 50 contacts, and I need all contacts for the data table…&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;Some resources suggest ordering records by a specific field, such as ID, and then paginating by comparing that field's value with the value of the last or first record on the previous page, depending on the direction of pagination.&lt;/p&gt;

&lt;p&gt;The ideas I came across were not sufficiently elaborated for more general application. Moreover, such a method (as far as I can judge from personal experience) is quite rarely used: the majority of recommendations online concern one of the four pagination tools mentioned earlier, which, in my opinion, is unjustified. Therefore, I took this idea into consideration and developed it practically for many cases of code implementation.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;For "paging" from the previous to the next page, it is necessary to order the records in each database query by one of the fields (e.g., ID) in ascending order. In the query condition, it is specified that the value of this field for N records on the next page should be greater than the value of the field for the last record on the previous page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For "paging" back, it is necessary to order the records in each database query by one of the fields in descending order. Now, in the query condition, the value of the field for N records on the next page should be smaller than the value of the field for the first record on the previous page.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first and last pages deviate somewhat from this concept: the first page has no previous page, the last page has no next page, and the last page almost always contains fewer records than the others.&lt;/p&gt;
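&lt;p&gt;The two rules above can be sketched as a small query builder. This is a minimal JavaScript illustration (the Contact object, the Id field, and the parameter names are assumptions for the example), not the article's Apex code:&lt;/p&gt;

```javascript
// Illustrative sketch of keyset pagination over a field with unique,
// comparable values (Id here). 'next' compares against the last record of
// the previous page; 'previous' compares against the first record of it.
function buildPageQuery(direction, boundaryId, pageSize) {
  const forward = direction === 'next';
  // Paging forward: ascending order, Ids greater than the boundary value.
  // Paging backward: descending order, Ids smaller than the boundary value.
  const sign = forward ? '>' : '<';
  const order = forward ? 'ASC' : 'DESC';
  let query = 'SELECT Id FROM Contact';
  if (boundaryId !== null) {
    query += " WHERE Id " + sign + " '" + boundaryId + "'";
  }
  return query + ' ORDER BY Id ' + order + ' LIMIT ' + pageSize;
}
```

&lt;p&gt;Note that a backward query returns the page in reverse order, so the records still need to be re-sorted before they are shown to the user.&lt;/p&gt;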

&lt;h2&gt;
  
  
  How does this concept look in code?
&lt;/h2&gt;

&lt;p&gt;I'll provide an example below. Please note that the methods below have a fixed page size of 50 records. If you need a different size, you can replace 50 with the value you need. Alternatively, you can introduce an additional parameter into the methods where you pass the required page size.&lt;/p&gt;

&lt;p&gt;I used two methods related to an LWC component that contains a data table. The filters intended for querying contacts from the database were stored in records of a separate object. However, you can use a JSON object generated by you in the LWC component's code instead.&lt;/p&gt;

&lt;p&gt;The first method 'getFirstPage' of the LWC component is designed to retrieve records for the first page during the initial load of the table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/** LWC component method that initially retrieves the data for the first page */
getFirstPage(event){
    /** Method imported into the component from @salesforce/apex/ApexClassName.getContacts
     * Parameters:
     * recId: filterRecordId - ID of the record that contains the filters for retrieving contacts
     * pageRecords: null - array of record IDs from the previous page; NULL on the initial load because there is no previous page yet
     * comparingSign: null - the sign for comparing records by the field used for ordering the records (contacts); NULL on the initial load
     * order: the ordering direction, either 'ASC' or 'DESC'
     */
    /* The variable that defines whether records should be sorted in ASC or DESC order. You can use a drop-down menu on the UI for selecting it. */
    let currentSorting = event.detail.sortingOrder;
    getContacts({recId: this.filterRecordId, pageRecords: null, comparingSign: null, order: currentSorting, currentPage: 1, sortingOrder: currentSorting})
    .then((result) =&amp;gt; {
        if(result.contacts.length &amp;gt; 0){
            //the first page of contacts
            this.contacts = [...result.contacts];
            //the total page count
            this.totalPages = result.totalPages;
            /** The returned number of the first page.
             * You can modify this and assign the first page directly if you want. */
            this.currentPage = result.currentPage;
            /* Your code that processes the data */
        }else{
            this.contacts = [];
            this.totalPages = 0;
            this.currentPage = 0;
        }
    })
    .catch((error) =&amp;gt; {
        console.error('Error during the request: ' + JSON.stringify(error));
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The provided method of the component calls the Apex class method getContacts, which returns an object with contacts for the initial page.&lt;/p&gt;

&lt;p&gt;The second method, handlePageChange, of the LWC component is intended for event handling. Specifically, it handles user clicks on control buttons to navigate to the next, previous, last, and first pages after the initial first page has been retrieved.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/** LWC method that retrieves the data when one of the pagination buttons (next page/previous page/last page/first page) is clicked */
handlePageChange = (message) =&amp;gt; {
    /* The variable that defines whether records should be sorted in ASC or DESC order. You can use a drop-down menu on the UI for selecting it. */
    let currentSorting = message.sortingOrder;
    /* Combining the record IDs from the current page to pass them into the Apex method as the previous page IDs */
    let contIds = [];
    this.contacts.forEach(cont =&amp;gt; {
        contIds.push(cont.contactId);
    });
    let conditions = {};
    conditions.recId = this.dialListId;

    /** Selecting the comparison sign and ordering direction depending on the pagination direction, and checking whether the target page is the first/last page */
    if(message.currentPage == 1 || message.currentPage == this.totalPages){
        conditions.pageRecords = null;
        conditions.comparingSign = null;
        if(currentSorting === 'ASC'){
            conditions.order = message.currentPage == 1 ? 'ASC' : 'DESC';
        }else{
            conditions.order = message.currentPage == 1 ? 'DESC' : 'ASC';
        }
    }else{
        conditions.pageRecords = contIds;
        if(currentSorting === 'ASC'){
            conditions.comparingSign = this.currentPage &amp;gt; message.currentPage ? '&amp;lt;' : '&amp;gt;';
            conditions.order = this.currentPage &amp;gt; message.currentPage ? 'DESC' : 'ASC';
        }else{
            conditions.comparingSign = this.currentPage &amp;gt; message.currentPage ? '&amp;gt;' : '&amp;lt;';
            conditions.order = this.currentPage &amp;gt; message.currentPage ? 'ASC' : 'DESC';
        }
    }
    this.currentPage = message.currentPage;

    /* Method imported into the component from @salesforce/apex/ApexClassName.getContacts */
    getContacts({recId: conditions.recId, pageRecords: conditions.pageRecords,
        comparingSign: conditions.comparingSign, order: conditions.order,
        currentPage: message.currentPage, sortingOrder: currentSorting})
    .then((result) =&amp;gt; {
        this.contacts = this.formatContacts(result.contacts);
        this.totalPages = result.totalPages;
        /* &amp;lt; Your code that processes the data &amp;gt; */
    })
    .catch((error) =&amp;gt; {
        console.error('Error during the request: ' + JSON.stringify(error));
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The handlePageChange method of the component also calls the Apex class method getContacts, which returns an object with the contacts for the requested page.&lt;/p&gt;

&lt;p&gt;It's worth noting that, due to the existing code and to maintain similarity between different interfaces, I used two methods. However, you can slightly modify handlePageChange and use it alone to load the initial page as well.&lt;/p&gt;

&lt;p&gt;The Apex class mentioned in the code of the LWC component as ApexClassName has the following methods:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. getContacts:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static Integer totalRecordsCount;
public static String CONTACT_FIELDS = 'FirstName, LastName';

/** getContacts is the method that returns the contacts to front-end logic in LWC 
* If you want you can combine getContacts and getData methods into one method */
@AuraEnabled(cacheable=true)
public static PayLoad getContacts(String recId, List&amp;lt;String&amp;gt; pageRecords, String comparingSign, String order, Integer currentPage, String sortingOrder){
PayLoad payloadResult = new PayLoad();
/* Sending the parameters to the getData method that returns contacts for the current page */
List&amp;lt;Contact&amp;gt; contacts = getData(recId, pageRecords, comparingSign, order, currentPage, sortingOrder);
Double pagesCount = Double.valueOf(totalRecordsCount);
Double totalPages = Decimal.valueOf(pagesCount/50).round(System.RoundingMode.UP); 
// The returned total pages number
payloadResult.contacts = contacts;
payloadResult.totalPages = Integer.valueOf(totalPages);
payloadResult.totalContacts = totalRecordsCount;
payloadResult.currentPage = currentPage;
return payloadResult;
    }
    /** The returned type of data to LWC component */
    public class PayLoad{
        @AuraEnabled public Integer totalPages;
        @AuraEnabled public Integer totalContacts;
        @AuraEnabled public Integer currentPage;
        @AuraEnabled public List&amp;lt;Contact&amp;gt; contacts;
    } 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This method simply passes parameters to the getData method and returns the processed result to the LWC component as an object of the PayLoad class.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. getData:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/** The method that retrieves the IDs of the child records and prepares the part of the input parameters for the method getRecords that retrieves contacts */
public static List&amp;lt;Contact&amp;gt; getData(String recId, List&amp;lt;String&amp;gt; pageRecords, String comparingSign, String order, Integer pageNumber, String sortingOrder){
List &amp;lt;Id&amp;gt; ids = new List &amp;lt;Id&amp;gt;();
/* Some custom code that retrieves the IDs of the child records using the filtering logic saved in the record with the id='recId' and adds them to the 'ids' variable*/
/** Instead of the 'recId' and saved logic in the record of some SObject you can pass filtering logic inside JSON object for example. It depends on how you want to build your application */
String idAsString = idSetAsSting(ids); // Conversion of the list of IDs into a string
/* The countQuery string defines the total scope of the contacts corresponding to our condition. Here you can use your own condition for determining the total record count */
String countQuery = 'SELECT count() FROM Contact WHERE Id IN ' + idAsString;
totalRecordsCount = Database.countQuery(countQuery);
/** The queryLimit is the required parameter for specifying the number of records per page. This is required because the last page may have a different quantity of records than the other pages have */
Integer queryLimit = findCurrentLimit(totalRecordsCount, pageNumber);
String query = 'SELECT Id, ' + CONTACT_FIELDS + ' FROM Contact WHERE Id IN ' + idAsString;
/** The previous page Contacts are required to compare the last or the first record ID depending on the pagination direction */
String queryPreviousPage = 'SELECT Id, Next_Contact_Date_Time__c FROM Contact WHERE Id IN :pageRecords ORDER BY Id ' + sortingOrder;
List&amp;lt;Contact&amp;gt; previousContacts = Database.query(queryPreviousPage);
/** The next string is the contacts retrieved for the page */
List&amp;lt;Contact&amp;gt; contacts = getRecords(previousContacts, comparingSign, order, queryLimit, query, 'Id', sortingOrder);
return contacts;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The getData method is used for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Executing all nested queries in the database (if necessary, along with associated logic).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generating a list of contact IDs that satisfy the search results within the executed nested queries (this list is denoted by the variable 'ids').&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Formulating parameters for retrieving records of the current page, namely: queryLimit — the number of records displayed per page; previousContacts — a list of records (in my case, a list of contacts).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Obtaining the number of records displayed per page is necessary to limit the records for the last page. This ensures that the sequence of records is not disrupted when paging from the previous to the next.&lt;/p&gt;
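&lt;p&gt;The arithmetic behind the last-page limit is straightforward. Here is a minimal JavaScript sketch of it (illustrative only; the article's Apex version of this logic is the findCurrentLimit method shown later):&lt;/p&gt;

```javascript
// Illustrative sketch: every page holds pageSize records except possibly the
// last one, which holds whatever remainder is left.
function pageLimit(totalRecords, pageNumber, pageSize) {
  const totalPages = Math.ceil(totalRecords / pageSize);
  return pageNumber === totalPages
    ? totalRecords - (totalPages - 1) * pageSize
    : pageSize;
}
```

&lt;p&gt;For example, 123 records with a page size of 50 give three pages holding 50, 50, and 23 records.&lt;/p&gt;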

&lt;p&gt;Obtaining records of the previous page, previousContacts, through a SOQL query is not mandatory. This is convenient when you are working with relatively static data that doesn't change too frequently. Additionally, this slightly reduces the amount of information transmitted to the server for further processing. In other cases, it's better to directly pass the data list from the page. It's important to consider the possibility of changing the position of records on the page or moving records to other pages when modifying data within the records. By the way, this consideration also applies to pagination using other methods.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. getRecords:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/** The method that retrieves contacts for the current page.
* Its input parameters are:
* pageRecords - the list of the records from the previous page
* comparingSign - one of the signs '&amp;gt;' or '&amp;lt;'
* order - the order in which the records are sorted in this particular request
* newLimit - the quantity of records for the current page
* query - the query with filters that will be modified to get the records for the current page
* orderBy - the name of the object field by which the records are ordered
* sortingOrder - the order in which the records are sorted across the pages
*/
public static List&amp;lt;SObject&amp;gt; getRecords(List&amp;lt;SObject&amp;gt; pageRecords, 
String comparingSign, String order, Integer newLimit, String query, String orderBy, String sortingOrder){
String lastId; //the variable that stores the ID that will be used in the query for comparison
String orderByString = orderBy; //the necessity of the orderByString variable will be explained further 
String firstQuery = query; //the necessity of the firstQuery variable will be explained further 
if(pageRecords != null &amp;amp;&amp;amp; !pageRecords.isEmpty()){
if(order == sortingOrder){
//if records are sorted in ascending order, lastId equals the ID of the last record from the previous page
lastId = String.valueOf(pageRecords[pageRecords.size() - 1].get(orderByString));
}else{
//if records are sorted in descending order, lastId equals the ID of the first record from the previous page
lastId = String.valueOf(pageRecords[0].get(orderByString));
}
lastId = '\'' + lastId + '\'';
}
//if the current page is not the first or the last then we need to add a comparison substring to the query
if(lastId != null &amp;amp;&amp;amp; comparingSign != null){
//but first we need to check that the query contains the keyword WHERE
if(query.toLowerCase().substringAfterLast('from').contains('where')){
query = query + ' AND ' + orderByString + ' ' + comparingSign + ' ' + lastId;
}else{
query = query + ' WHERE ' + orderByString + ' ' + comparingSign + ' ' + lastId;
}
} 
//adding the ordering by the field to the query
query = query + ' ORDER BY ' + orderByString + ' ' + order + ' LIMIT ' + newLimit;
//querying the records
Map&amp;lt;Id, SObject&amp;gt; records = new Map&amp;lt;Id,SObject&amp;gt;((List&amp;lt;SObject&amp;gt;)Database.query(query));
List&amp;lt;SObject&amp;gt; recordsToReturn = new List&amp;lt;SObject&amp;gt;();
//if any records were queried, sort them according to the requested order
if(records.size() &amp;gt; 0) recordsToReturn.addAll(sortByIdAndSortingOrder(records, orderByString, sortingOrder));
return recordsToReturn; //the returned records
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The getRecords method fetches the records of one page and presents them in the order specified by the user. For the first and last pages, the variable 'lastId' is ignored: the query performs no comparison and simply returns the first N records in ascending (ASC) or descending (DESC) order. Because such a query (and any backward query) may return the page in reverse order, the 'sortByIdAndSortingOrder' method is invoked to deliver the records in the desired order.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. sortByIdAndSortingOrder:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/** The method that guarantees the records are returned ordered by the orderBy field in the specified sorting order */
public static List&amp;lt;SObject&amp;gt; sortByIdAndSortingOrder(Map&amp;lt;Id, SObject&amp;gt; pageRecords, String orderBy, String sortingOrder){
Set&amp;lt;Id&amp;gt; idSet = pageRecords.keySet();
String sObjName = pageRecords.values()[0].Id.getSObjectType().getDescribe().getName();
String rightOrderQuery = 'SELECT Id FROM ' + sObjName + ' WHERE Id in :idSet ORDER BY ' + orderBy + ' ' + sortingOrder;
List&amp;lt;SObject&amp;gt; records = Database.query(rightOrderQuery);
List&amp;lt;SObject&amp;gt; recordsToReturn = new List&amp;lt;SObject&amp;gt;();
for (SObject obj : records) {
recordsToReturn.add(pageRecords.get(obj.Id));
}
return recordsToReturn;
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The 'sortByIdAndSortingOrder' method is purely utilitarian. Its purpose is to ensure the delivery of records gathered in the required order.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. idSetAsSting&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/** The method transforms the list of IDs into a string with quotes and brackets */
public static String idSetAsSting(List&amp;lt;String&amp;gt; ids){
String stringSet = '(';
if(!ids.isEmpty()){
for(String id : ids) {
stringSet = stringSet + '\'' + id + '\'' + ',';
}            
}else{
stringSet = stringSet + '\'' + '\'';
}
stringSet = stringSet.removeEnd(',') + ')';
return stringSet;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This method is also utilitarian. It is necessary to convert the list of IDs into a string for querying the database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. findCurrentLimit:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/** The method defines the limit of the records for the current page request */
public static Integer findCurrentLimit(Integer totalRecords, Integer pageNum){
Double pagesCount = Double.valueOf(totalRecords);
Double totalPages = Decimal.valueOf(pagesCount/50).round(System.RoundingMode.UP);
Integer queryLimit = pageNum == Integer.valueOf(totalPages) ? totalRecords - ((Integer.valueOf(totalPages) - 1) * 50): 50;
return queryLimit;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The 'findCurrentLimit' method returns the record limit for the requested page: the standard page size for every page except the last, and the remaining record count for the last page.&lt;/p&gt;

&lt;p&gt;The approach described uses the sorting of records by ID. In my case, as in most others, records are sorted by a specific field. For my example, this is the 'Next Contact Date Time' field with a datetime data type.&lt;/p&gt;

&lt;p&gt;The issue with such non-system fields is that, in practice, many records have NULL values in them, so the field alone cannot order the records reliably. When sorting by a non-system field in ascending order, records with a NULL value in this field appear first in the overall list of all records in the database.&lt;/p&gt;

&lt;p&gt;When sorting records by a non-system field, it's worth splitting all records into two parts: records with a field value equal to NULL, and records with a field value that is not equal to NULL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5v8pwltkcbc1a13wf99k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5v8pwltkcbc1a13wf99k.png" alt="Image description" width="800" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For each of the two segments within the overall list of object records, we will apply the aforementioned mechanism separately. For records where the field value is NULL, we can use sorting by ID, and for the second case, we will sort by that field.&lt;/p&gt;

&lt;p&gt;A distinct case is a page that contains both records with filled field values and records with NULL values. In most scenarios, such a page appears when dividing all records into pagination pages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6fox1vpnbjthpzizkb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6fox1vpnbjthpzizkb1.png" alt="Image description" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The entire list of retrievable records from the database is divided into three types of pages based on the values of the non-system field used for sorting the records:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pages where all records have only the NULL value in the field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pages where all records have a non-NULL field value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A page where some records have the NULL value in the field, while the rest have different values.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
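&lt;p&gt;With NULL-valued records sorted first, the page type can be determined from the count of NULL records alone. Here is a minimal JavaScript sketch (the page size and the classification labels are assumptions for the example):&lt;/p&gt;

```javascript
// Illustrative sketch: classify a page as 'all-null', 'mixed' or 'non-null',
// assuming the records with a NULL sorting field come first in the full list.
function classifyPage(pageNumber, pageSize, nullRecordsCount) {
  const firstIndex = (pageNumber - 1) * pageSize; // zero-based index of the first record on the page
  const lastIndex = pageNumber * pageSize;        // exclusive end index of the page
  if (lastIndex <= nullRecordsCount) return 'all-null';
  if (firstIndex >= nullRecordsCount) return 'non-null';
  return 'mixed';
}
```

&lt;p&gt;For example, with 120 NULL records and a page size of 50, pages 1 and 2 contain only NULL values, page 3 is mixed, and page 4 onward contains only filled values.&lt;/p&gt;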

&lt;p&gt;Below, I will provide the changes in the code for the getData and getRecords methods that take into account these described nuances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;getData method:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/** The method that retrieves the IDs of the child records and prepares the part of the input parameters for the method getRecords that retrieves contacts */
public static List&amp;lt;Contact&amp;gt; getData(String recId, List&amp;lt;String&amp;gt; pageRecords, String comparingSign, String order, Integer pageNumber, String sortingOrder){
List &amp;lt;Id&amp;gt; ids = new List &amp;lt;Id&amp;gt;();
/* Some custom code that retrieves the IDs of the child records using the filtering logic saved in the record with the id='recId' and adds them to the 'ids' variable*/
/** Instead of the 'recId' and saved logic in the record of some SObject you can pass filtering logic inside JSON object for example. It depends on how you want to build your application */
String idAsString = idSetAsSting(ids); // Conversion of the list of IDs into string
/* The countQuery string is to define the total scope of the contacts that are corresponding to our condition. Here you can use your own condition for the definition of the records total count */
String countQuery = 'SELECT count() FROM  Contact WHERE Id IN ' + idAsString;
totalRecordsCount = database.countQuery(countQuery);
/** The queryLimit is the required parameter for specifying the number of records per page. This is required because the last page may have a different quantity of records than the other pages have */
Integer queryLimit = findCurrentLimit(totalRecordsCount, pageNumber);
String query = 'SELECT Id, ' + CONTACT_FIELDS + ' FROM Contact' + ' WHERE Id IN ' + idAsString;

/*Counting the number of records with the NULL values according to the applied filters*/
String countNullQuery = 'SELECT count() FROM  Contact WHERE Id IN ' + idAsString + ' AND Next_Contact_Date_Time__c = null';
Integer pageRecordsCount = totalRecordsCount &amp;gt; pageNumber*50 ? pageNumber*50 : totalRecordsCount;
Integer nullRecordsCount = database.countQuery(countNullQuery);

/** The previous page Contacts are required to compare the last or the first record ID depending on pagination direction. The difference is in double-select action. This is necessary for dividing the previous page into two parts in the case when the page contains records with a NULL value for the field Next_Contact_Date_Time__c and records with a non-null value for the field Next_Contact_Date_Time__c */

String queryPreviousPageNull = 'SELECT ID,Next_Contact_Date_Time__c FROM Contact WHERE ID IN :pageRecords AND Next_Contact_Date_Time__c = null ORDER BY Id ' + sortingOrder;
List&amp;lt;Contact&amp;gt; previousContacts = database.query(queryPreviousPageNull);
String queryPreviousPageNotNull = 'SELECT ID,Next_Contact_Date_Time__c FROM Contact WHERE ID IN :pageRecords AND Next_Contact_Date_Time__c != null ORDER BY Next_Contact_Date_Time__c ' + sortingOrder;
previousContacts.addAll(database.query(queryPreviousPageNotNull));
List&amp;lt;Contact&amp;gt; contacts;
/** If there are null and non-null values on the current page then we call the getRecords method twice. Once with the 'Next_Contact_Date_Time__c' field, second time with the 'ID' field */
if(pageRecordsCount &amp;gt; nullRecordsCount &amp;amp;&amp;amp; (pageRecordsCount - 50) &amp;lt;= nullRecordsCount){
contacts = getRecords(previousContacts, comparingSign, order, queryLimit, query, 'BOTH', sortingOrder);
}
/** When the current page is in the 'Next_Contact_Date_Time__c' field null value area we call the getRecords with the 'ID' field */
if(pageRecordsCount &amp;lt;= nullRecordsCount){
contacts = getRecords(previousContacts, comparingSign, order, queryLimit, query, 'Id', sortingOrder);
}
/** When the current page is in the 'Next_Contact_Date_Time__c' field non-null value area we call the getRecords with the 'Next_Contact_Date_Time__c' field */  if((pageRecordsCount - 50) &amp;gt; nullRecordsCount){
contacts = getRecords(previousContacts, comparingSign, order, queryLimit, query, 'Next_Contact_Date_Time__c', sortingOrder);
}
return contacts;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;getRecords method:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static List&amp;lt;SObject&amp;gt; getRecords(List&amp;lt;SObject&amp;gt; pageRecords, String comparingSign, String order, Integer newLimit, String query, String orderBy, String sortingOrder){
String lastId;
String orderByString = orderBy;
/** For  the second call of the getRecords method when the current page is in NULL and non-null areas we save the initial query into variable firstQuery */
String firstQuery = query;
/** When  the current page is in NULL and non-null areas we select the field for the first time query; it depends on the direction of the pagination and on the ordering of the records for the data table (ASC or DESC) that we want to see on the screen */
if(orderBy == 'BOTH' &amp;amp;&amp;amp; order == sortingOrder){
orderByString = 'Id';
}else if(orderBy == 'BOTH' &amp;amp;&amp;amp; order != sortingOrder){
orderByString = 'Next_Contact_Date_Time__c';
}
if(pageRecords != null &amp;amp;&amp;amp; !pageRecords.isEmpty()){
if(orderByString.toLowerCase() == 'id'){
if(order == sortingOrder){
//if records are ordered in ascending order the lastId equals to ID of the last record from the previous page
lastId = String.valueOf(pageRecords[pageRecords.size() - 1].get(orderByString));
}else{
//if records are ordered in descending order the lastId equals to ID of the first record from the previous page
lastId = String.valueOf(pageRecords[0].get(orderByString));
}
lastId = '\'' + lastId + '\'';
}else if(orderByString.toLowerCase() == 'Next_Contact_Date_Time__c'){
if(order == sortingOrder){
//if records are ordered in ascending order the lastId equals to field of the last record from the previous page
lastId = Datetime.valueOfGmt(String.valueOf(pageRecords[pageRecords.size() - 1].get(orderByString))).formatGMT('yyyy-MM-dd\'T\'HH:mm:ss.SSSZ');
}else{
//if records are ordered in descending order the lastId equals to field of the first record from the previous page
lastId = Datetime.valueOfGmt(String.valueOf(pageRecords[0].get(orderByString))).formatGMT('yyyy-MM-dd\'T\'HH:mm:ss.SSSZ');
}
}
}
//if the current page is not the first or the last then we need to add a comparison substring to the query
if(lastId != null &amp;amp;&amp;amp; comparingSign != null){
//but first we need to check that query contains keyword WHERE
if(query.toLowerCase().substringAfterLast('from').contains('where')){
query = query + ' AND ' + orderByString + ' ' + comparingSign + ' ' + lastId;
}else{
query = query + ' WHERE ' + orderByString + ' ' + comparingSign + ' ' + lastId;
}
}
String nextContactDateTimeCondition;
//selecting the field filtering equation
if(orderByString.toLowerCase() == 'id'){
/** if I have orderByString variable equals to 'id' it means that the current page inside the NULL value area and I have to select the records with the field with the NULL values only */
nextContactDateTimeCondition = 'Next_Contact_Date_Time__c = null';
}else {
/** If  I have an orderByString variable equal  to 'id' it means that the current page is inside the non-null value area and I have to select the records with the field using the custom filter for this field. If there are no filter s you can use something like 'Next_Contact_Date_Time__c != null' */
nextContactDateTimeCondition = 'Next_Contact_Date_Time__c &amp;lt; '+ getDateTimeString(datetime.now());
}
//adding the field filtering equation to the query
if(query.toLowerCase().substringAfterLast('from').contains('where')){
query = query + ' AND ' + nextContactDateTimeCondition;
}else{
query = query + ' WHERE ' + nextContactDateTimeCondition;
}
//adding the ordering by the field to the query
query = query + ' ORDER BY ' + orderByString + ' ' + order + ' LIMIT ' + newLimit;
//querying the records
Map&amp;lt;Id, SObject&amp;gt; records = new Map&amp;lt;Id, SObject&amp;gt;((List&amp;lt;SObject&amp;gt;)Database.query(query));
List&amp;lt;SObject&amp;gt; recordsToReturn = new List&amp;lt;SObject&amp;gt;();
//if there are queried records then sort  them in ascending order
if(records.size() &amp;gt; 0) recordsToReturn.addAll(sortByIdAndSortingOrder(records, orderByString, sortingOrder));
/** If the current page contains null and non-null areas then depending on the pagination direction and on the accepted order for pages we call the getRecords method for the second time for the Next_Contact_Date_Time__c field or for the ID field using the initial query marked as firstQuery variable */
if(orderBy == 'BOTH' &amp;amp;&amp;amp; order == sortingOrder){
Integer nextQueryLimit = newLimit - recordsToReturn.size();
List&amp;lt;SObject&amp;gt; recordsToAdd = getRecords(pageRecords, comparingSign, order, nextQueryLimit, firstQuery, 'Next_Contact_Date_Time__c', sortingOrder);
recordsToReturn.addAll(recordsToAdd);
}else if(orderBy == 'BOTH' &amp;amp;&amp;amp; order != sortingOrder){
Integer nextQueryLimit = newLimit - recordsToReturn.size();
List&amp;lt;SObject&amp;gt; recordsToAdd = getRecords(pageRecords, comparingSign, order, nextQueryLimit, firstQuery, 'Id', sortingOrder);
recordsToReturn.addAll(recordsToAdd);
}
return recordsToReturn; //the returned records
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
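&lt;p&gt;Stripped of the Salesforce specifics, the core idea in getRecords is keyset (cursor) pagination: instead of an OFFSET, remember the sort value of the last record on the previous page and query only for records beyond it. A minimal JavaScript sketch over an in-memory array:&lt;/p&gt;

```javascript
// Keyset pagination over an array already sorted ascending by sortKey.
// lastSeen is the sort value of the last record on the previous page,
// or null when requesting the first page.
function nextPage(records, lastSeen, pageSize) {
  return records
    .filter(r => lastSeen === null || r.sortKey > lastSeen)
    .slice(0, pageSize);
}
```

&lt;p&gt;In SOQL terms this corresponds to appending a condition like &lt;code&gt;sortField &amp;gt; :lastValue ... LIMIT pageSize&lt;/code&gt;, which stays fast no matter how deep the user pages.&lt;/p&gt;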



&lt;p&gt;In this article, I have tried to systematize my practices and describe a solution to the pagination problem using a rarely discussed technique based on comparing records by one of their fields.&lt;/p&gt;

&lt;h2&gt;
  
  
  In my opinion, this approach has the following advantages:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fast query processing when navigating to a page. This helps prevent exceeding the CPU Time limit in many cases. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Additionally, when there are no nested queries retrieving related records, these limits can be avoided entirely.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When queries do not have nested sub-queries, it allows viewing all records without being restricted by Salesforce's 50,000-record limit. Personally, I've viewed around 400,000 records, but even more is possible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enhanced reliability of transmitted data. Users receive comprehensive information about the number of accessible records according to selected filters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The method can be applied to both LWC and Visualforce.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  During the course of development, I identified some drawbacks as well:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If nested queries are present, you still need to limit the total number of records to 50,000.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For records obtained through nested queries, it's advisable to set a limit with a margin based on the number of main object records displayed on a page. This prevents exceeding the 50,000-record limit per transaction. It's recommended to use an additional variable in the methods representing the number of records per page, which would be subtracted from the 50,000-record limit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Inaccuracy in data volume due to nested queries.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This follows from the previous point. If the number of records retrieved through nested queries is restricted, it limits the search zone for the object that interests the user.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
More complex logic compared to traditional pagination methods. This includes the need to handle NULL values in non-system fields.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An even more complex alternative is a chained sequence of Apex Batchable classes whose overall result would be the records retrieved for a single page. Meanwhile, the client-side (browser) code would periodically poll the server for these records. Until the server-side code completes and the requested records become available, the user's page might be blocked from further actions or navigation. Similar logic could also be implemented on top of the platform's event technology (Platform Events).&lt;/p&gt;

&lt;p&gt;However, this is not the end. I will delve into the implementation of this batch-based pagination method in one of the upcoming articles. For now, I hope everyone enjoys applying the practices I've described!&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>salesforce</category>
      <category>productivity</category>
    </item>
    <item>
      <title>WebSockets in Python: Security Risks and Possible Alternatives</title>
      <dc:creator>NIX United</dc:creator>
      <pubDate>Wed, 30 Aug 2023 13:29:07 +0000</pubDate>
      <link>https://dev.to/nix_united/websockets-in-python-security-risks-and-possible-alternatives-5dd1</link>
      <guid>https://dev.to/nix_united/websockets-in-python-security-risks-and-possible-alternatives-5dd1</guid>
      <description>&lt;p&gt;&lt;em&gt;Victoria Yelenska, Python Web Developer at &lt;a href="https://nix-united.com/?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=websockets_in_python"&gt;NIX&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Today, applications that allow instant messaging and real-time news tracking have become indispensable for us. WebSockets are one of the tools that developers use to implement such applications.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What are WebSockets
&lt;/h2&gt;

&lt;p&gt;WebSocket is a bidirectional full-duplex communication protocol between a client and a server. What does this mean? Unlike the HTTP protocol, which works on the principle of "client request - server response," in WebSockets, both the server and the client can send messages to each other. Each communication party is capable of simultaneously receiving and sending data.&lt;/p&gt;

&lt;p&gt;In WebSockets, message exchange takes place through a single communication channel. It remains open throughout the entire communication, and if necessary, either party can close it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Differences from the HTTP protocol
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw2kw82oplnqrwpxbsrk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw2kw82oplnqrwpxbsrk.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Structure of WebSocket protocol
&lt;/h2&gt;

&lt;p&gt;The WebSocket protocol exists as an overlay on TCP. The specification defines two URI schemes for websockets: ws:// for unencrypted connections and wss:// for encrypted ones. The protocol consists of an initial handshake and the direct exchange of data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Handshake&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the code block below, you can see how the handshake looks from the client's side. The header Connection: Upgrade is present here. It's also visible which upgrade is being proposed — Upgrade: websocket:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;GET /chat HTTP/1.1&lt;br&gt;
Host: server.example.com&lt;br&gt;
Upgrade: websocket&lt;br&gt;
Connection: Upgrade&lt;br&gt;
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==&lt;br&gt;
Sec-WebSocket-Origin: http://example.com&lt;br&gt;
Sec-WebSocket-Protocol: chat, superchat&lt;br&gt;
Sec-WebSocket-Version: 8&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The server confirms the handshake with a status code 101 — Switching Protocols, and also sends details about the new connection:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;HTTP/1.1 101 Switching Protocols&lt;br&gt;
Upgrade: websocket&lt;br&gt;
Connection: Upgrade&lt;br&gt;
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+x0o=&lt;br&gt;
Sec-WebSocket-Protocol: chat&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data exchange&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Data is sent in the form of frames with specified types. Each message can consist of one or more frames, all of which must be of the same type. These types can be text, binary data, and control frames meant not for data transmission but for protocol-level control signals. For example, signaling that the connection needs to be closed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Socket.IO&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is one of the most popular tools for working with the WebSocket protocol. It consists of a WebSocket server and a client library. Initially, they were implemented in JavaScript, but later implementations appeared in many other programming languages. Socket.IO provides additional functionality that is not present in pure WebSocket. For instance, automatic reconnection if the connection is lost, or fallback to HTTP long polling if the WebSocket protocol is not supported, as well as the implementation of namespaces and rooms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Namespaces&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They are necessary for separating responsibilities within a single application. Socket.IO allows you to create multiple namespaces that behave as separate communication channels. Under the hood, they will still use the same connection. Organizing into namespaces can be logical based on modules in the application or, for example, based on shared permissions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rooms&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the second level of hierarchy. Within each namespace, including the default one, you can create individual channels called rooms, to which clients can join and leave. This way, you can broadcast messages to a "room," and all clients that have joined it will receive the message. This can be convenient for simultaneously sending messages to a group of users or collecting messages from multiple devices for a single user.&lt;/p&gt;
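&lt;p&gt;To illustrate the idea, here is a toy in-memory model (not the Socket.IO API itself): a room is just a named set of clients, and a broadcast reaches every member of that set:&lt;/p&gt;

```javascript
// Minimal model of rooms: clients join named rooms, and a broadcast
// to a room is delivered to every client that has joined it.
class Rooms {
  constructor() {
    this.rooms = new Map(); // room name -> Set of clients
  }
  join(room, client) {
    if (!this.rooms.has(room)) this.rooms.set(room, new Set());
    this.rooms.get(room).add(client);
  }
  leave(room, client) {
    const members = this.rooms.get(room);
    if (members) members.delete(client);
  }
  broadcast(room, message) {
    for (const client of this.rooms.get(room) || []) {
      client.receive(message);
    }
  }
}
```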

&lt;h2&gt;
  
  
  Alternatives to sockets
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;HTTP Polling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If we constantly need to monitor updates of information on the server, in some cases, this can be implemented using the HTTP protocol in the form of HTTP Polling.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;There are several types of HTTP Polling:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Short polling: This is a simple approach but is considered bad practice. The client constantly sends requests to the server, asking if the requested information is ready. The server processes the requests as soon as they arrive and responds with an empty response if the data is not ready. In this case, a large number of requests can overload the server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Long polling: The client sends a single request to the server and waits for a response. The server, in turn, holds onto the request until the necessary data becomes available or a specified timeout is reached. Under optimal conditions, we receive a response as soon as the data changes on the server, and we don't create as much traffic as with short polling. However, in practice, it's quite challenging to configure requests so that they are neither too frequent (and often receive no response) nor too slow (resulting in long delays on the server and wasted resources).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a result, HTTP polling is not a very convenient approach if we need to receive real-time information.&lt;/p&gt;
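&lt;p&gt;The long-polling loop can be sketched as follows. This is a simplified callback-based model in which &lt;code&gt;request(cb)&lt;/code&gt; stands in for an HTTP request that the server holds open until data is ready or a timeout fires:&lt;/p&gt;

```javascript
// Simplified long polling: issue a request, wait for the server to respond
// (with data, or null when the timeout elapsed), then immediately re-issue it.
function longPoll(request, onData, rounds) {
  function cycle(remaining) {
    if (remaining === 0) return;
    request(data => {
      if (data !== null) onData(data); // null means the request timed out empty
      cycle(remaining - 1);            // reconnect right away
    });
  }
  cycle(rounds);
}
```

&lt;p&gt;A real client would use an HTTP request with a long server-side timeout and add some backoff on errors, which is exactly the tuning difficulty described above.&lt;/p&gt;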

&lt;p&gt;&lt;strong&gt;HTTP Streaming &amp;amp; Server-sent Events&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A good option for subscribe-only scenarios. For example, subscribing to news feed updates or receiving notifications in a browser. The client makes a single HTTP request to establish a connection, and the server responds with a series of responses as relevant data becomes available. Responses will be sent until the client closes the connection. Thus, there's no need to open and close connections for each request-response pair.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Drawbacks of this approach:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It doesn't guarantee instant message delivery: the connection can be interrupted, and the request might be queued behind other HTTP requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The client can't send data to the server, which prevents using this approach in applications that require true interactivity. In other words, in cases where the client and server need to send data to each other without additional intermediate requests.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Using WebSockets
&lt;/h2&gt;

&lt;p&gt;Since WebSockets support bidirectional communication, they are ideal for situations that require fast two-way data exchange. This includes online games, chats, financial applications, news services, and data exchange with IoT devices.&lt;/p&gt;

&lt;p&gt;WebSockets might not be suitable when interactivity isn't needed, there's no constant bidirectional traffic, and there are high security requirements. Prolonged open connections introduce several risks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Like any network communication, communication using the WebSocket protocol has vulnerabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Potential attack vectors include:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cross-Site Scripting (XSS) and SQL injections — injecting malicious code into messages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Man-in-the-Middle attacks — intercepting information from the communication channel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Denial of Service (DoS) — sending a large number of requests to make the resource unavailable to users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unauthorized access to information.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How to protect yourself:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Validate data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use encrypted connections (wss://).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implement rate limiting — restricting the number of messages per unit of time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add authentication during the handshake. One approach is to use ticket-based authentication. This is when the client contacts the HTTP server before connection upgrade, which generates a ticket with necessary user information. Then, the client sends this ticket to the WebSocket server, which validates it and only then grants connection permission.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
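&lt;p&gt;Rate limiting, for example, can be as simple as a fixed-window counter per connection. This is a sketch independent of any WebSocket library; the caller passes in the current timestamp:&lt;/p&gt;

```javascript
// Fixed-window rate limiter: allow at most `limit` messages
// per window of `windowMs` milliseconds.
class RateLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.count = 0;
    this.windowStart = 0;
  }
  allow(now) {
    if (now - this.windowStart >= this.windowMs) {
      this.windowStart = now; // start a new window
      this.count = 0;
    }
    return ++this.count <= this.limit;
  }
}
```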

&lt;h2&gt;
  
  
  WebSockets in Python
&lt;/h2&gt;

&lt;p&gt;Among the actively developed libraries for WebSockets in Python, the following can be mentioned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://python-socketio.readthedocs.io/en/latest/"&gt;python-socketio&lt;/a&gt; — a framework-independent implementation of Socket.IO for Python, offering synchronous and asynchronous variants.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://flask-socketio.readthedocs.io/en/latest/"&gt;Flask-SocketIO&lt;/a&gt; — integration of Socket.IO with Flask.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://channels.readthedocs.io/en/latest/"&gt;Django Channels&lt;/a&gt; — an extension for the Django framework that adds WebSocket protocol support, HTTP long polling, MQTT, and allows choosing between synchronous and asynchronous implementation.&lt;br&gt;
&lt;a href="https://autobahn.readthedocs.io/en/latest/"&gt;Autobahn|Python&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a WebSocket protocol and WAMP (web application messaging protocol) implementation on Twisted and asyncio.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://websockets.readthedocs.io/en/stable/"&gt;websockets&lt;/a&gt; — a library for creating WebSocket servers and clients based on the WebSocket protocol. The default implementation is built on asyncio but allows optional threading.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://websocket-client.readthedocs.io/en/latest/"&gt;websocket-client&lt;/a&gt; — a low-level WebSocket client for Python based on the raw WebSocket protocol.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The WebSocket protocol enables interactive communication between the client and server without the need to constantly ping the server. This saves time, speeds up application performance, and allows real-time updates.&lt;/p&gt;

&lt;p&gt;As with any technology, WebSockets are invaluable where appropriate, but their application is not always necessary. Sometimes, their use introduces additional risks and requires specific knowledge for proper and secure implementation.&lt;/p&gt;

</description>
      <category>python</category>
      <category>webdev</category>
      <category>security</category>
      <category>programming</category>
    </item>
    <item>
      <title>Design patterns for frontend and pizza - what do they have in common?</title>
      <dc:creator>NIX United</dc:creator>
      <pubDate>Fri, 26 May 2023 10:46:08 +0000</pubDate>
      <link>https://dev.to/nix_united/design-patterns-for-frontend-and-pizza-what-do-they-have-in-common-1nl7</link>
      <guid>https://dev.to/nix_united/design-patterns-for-frontend-and-pizza-what-do-they-have-in-common-1nl7</guid>
      <description>&lt;p&gt;&lt;em&gt;Victoria Tsukan, Frontend Developer at &lt;a href="https://nix-united.com/?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=pizza_patterns"&gt;NIX&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You might not have noticed, but in everyday tasks, we often use various patterns. Like a developer's tool, they make our work easier and more efficient, allowing us to write higher-quality code. How exactly? I'll explain further.&lt;/p&gt;

&lt;p&gt;In this article, I want to introduce you to common patterns for front-end development and situations where they should be used.&lt;/p&gt;

&lt;h2&gt;
  
  
  Patterns in IT - what are they?
&lt;/h2&gt;

&lt;p&gt;The word "pattern" refers to a design pattern, a simple solution to a common problem. Patterns can be used at the level of functions, object creation, or architectural design. In a way, these tools resemble mathematical formulas for problem-solving.&lt;/p&gt;

&lt;p&gt;The concept of patterns was introduced by architect Christopher Alexander. He noticed that after renovating and arranging homes, his clients often made further adjustments to suit their preferences. By investigating what didn't satisfy people, he identified several patterns - the most suitable positioning of windows and walls, ceiling height, etc. All his findings were compiled in his seminal book, &lt;a href="https://www.amazon.com/Pattern-Language-Buildings-Construction-Environmental/dp/0195019199"&gt;"A Pattern Language."&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some time later, four programmers, inspired by Alexander's book, wrote their own. They were Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, collectively known as the Gang of Four. Their book, "Design Patterns," describes design patterns for object-oriented systems. The authors identified foundational patterns from which others are derived. It is these patterns that I will explore in the article.&lt;/p&gt;

&lt;p&gt;Experts have identified three categories of patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Creational&lt;/em&gt; - responsible for object creation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Structural&lt;/em&gt; - designed to describe hierarchy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Behavioral&lt;/em&gt; - define the interaction between elements.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To help you better understand the principle of using patterns, I suggest examining them through the example of... a pizzeria operation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creational Patterns
&lt;/h2&gt;

&lt;p&gt;These patterns help create different objects without resorting to copy-pasting while providing flexibility and reusability. There are many patterns in this category, but I'll mention the main ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prototype&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This pattern defines a template for creating necessary objects. The prototype allows for the creation of reusable code. In other words, multiple independent objects can be created based on a single template, reducing the required code.&lt;/p&gt;

&lt;p&gt;Drawing a parallel with the operation of a pizzeria, this pattern would help automate the pizza-baking process. It would be challenging to manually make a hundred "Margherita" pizzas. However, one could define a formula with ingredients and a pizza recipe, hand it over to a machine, and it would produce 100 copies based on the template.&lt;/p&gt;

&lt;p&gt;Implementing this pattern in code is relatively straightforward. The example below demonstrates a pizza prototype object with pizza information. Necessary copies are created using &lt;code&gt;Object.create()&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const pizzaPrototype = {
name: "Margarita",

bake: function () {
console.log( "Smells appetizing!" );
}
1

const pizzal
const pizza2
const pizza3d

Object.create( pizzaPrototype );
Object.create( pizzaPrototype );
Object.create( pizzaPrototype );

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Factory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Provides an interface for creating objects. It can be compared to a factory with its conveyor belt. When you order a batch of products from a factory, the workers know what components are needed, the assembly process, and the desired outcome. You receive the finished products in the required quantity.&lt;/p&gt;

&lt;p&gt;This pattern has several advantages. First and foremost, it maintains independence between the Factory and the objects it produces. It also adheres to the single responsibility principle, where all the logic of object creation resides within the class and is not controlled externally. Additionally, it follows the open-closed principle, meaning that all objects are independent of each other. However, there is a significant drawback: this pattern becomes large and complex when there is a lot of logic involved. It becomes challenging to maintain, especially when creating many objects.&lt;/p&gt;

&lt;p&gt;Using the example of pizza, this pattern is quite illustrative. The client chooses their favorite pizza from the menu. The order is then passed on to the kitchen staff, who prepare the dish. The kitchen staff possesses extensive knowledge of how to prepare each type of pizza, ensuring the desired result is delivered.&lt;/p&gt;

&lt;p&gt;In the code example, two objects are in the first lines: "Margherita" and "Carbonara" pizzas. They contain specific data such as the pizza's ingredients, size, etc. Next, the Factory is defined with the &lt;code&gt;pizzaClass&lt;/code&gt; information, which serves as the pizza configuration. Then, the &lt;code&gt;createPizza&lt;/code&gt; method allows you to specify the desired pizza. The Factory associates the pizza name with a specific object or entity that contains all the necessary information. As a result, by passing the pizza name to the &lt;code&gt;createPizza&lt;/code&gt; method, the Factory returns the desired object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function Margarita() {...} // Morgorita dato
function Carbonara() {...} // Corbonara dato

class PizzaFactory {
pizzaClass = Margarita; // Defoult volue

createPizza(pizzaType) {
switch (pizzaType) {
case “margarita”: this.pizzaClass = Margarita; break;
case "carbonara": this.pizzaClass = Carbonara; break;
}
return new this.pizzaClass();
| H

const pizzaFactory = new PizzaFactory();
const margarita = pizzaFactory.createPizza("margarita®);
const carbonara = pizzaFactory.createPizza("carbonara");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Builder&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Allows for step-by-step creation of objects, providing control over the process. It also adheres to the single responsibility principle. Although we control the object creation, the logic resides within the pattern. The drawback of the Builder pattern is similar to that of the Factory: if a complex and versatile object is required, maintaining this pattern becomes challenging due to the many methods involved.&lt;/p&gt;

&lt;p&gt;In a pizzeria, this pattern would be useful when a customer wants to order a pizza based on their own recipe rather than choosing from the menu. In the code, the &lt;code&gt;PizzaBuilder&lt;/code&gt; class is used for this purpose, containing information about the pizza stored in the &lt;code&gt;_pizza&lt;/code&gt; variable. Some methods can update this information. In our case, it allows changing the ingredients and the pizza's name.&lt;/p&gt;

&lt;p&gt;The illustration demonstrates the implementation of this pattern. We invoke the &lt;code&gt;changeName&lt;/code&gt; method and add ingredients one by one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export class PizzaBuilder {
private _pizza: Pizza = { name: "", ingridients: [] };

addIngridient(ingridient) {
this._pizza.ingridients.push(ingridient);
}

changeName (name) {
this._pizza.name = name;
}
)

const myPizza = new PizzaBuilder();
myPizza.changeName( name “Margarita");
myPizza.addIngridient( ingridient: "tomatoes");
myPizza.addIngridient( ingridient: "cheese");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
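
&lt;p&gt;A runnable version of the builder, under two assumptions not shown in the article: a &lt;code&gt;build&lt;/code&gt; method hands over the finished pizza, and each step returns &lt;code&gt;this&lt;/code&gt;, which enables call chaining:&lt;/p&gt;

```typescript
interface Pizza {
  name: string;
  ingredients: string[];
}

class PizzaBuilder {
  private _pizza: Pizza = { name: "", ingredients: [] };

  addIngredient(ingredient: string) {
    this._pizza.ingredients.push(ingredient);
    return this; // allows chaining
  }

  changeName(name: string) {
    this._pizza.name = name;
    return this; // allows chaining
  }

  // Hypothetical final step: hand over the assembled pizza.
  build(): Pizza {
    return this._pizza;
  }
}

const myPizza = new PizzaBuilder()
  .changeName("Margarita")
  .addIngredient("tomatoes")
  .addIngredient("cheese")
  .build();

console.log(myPizza); // { name: "Margarita", ingredients: ["tomatoes", "cheese"] }
```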



&lt;h2&gt;
  
  
  Structural Patterns
&lt;/h2&gt;

&lt;p&gt;Describe the relationships between multiple object entities. Their goal is to build flexible and efficient systems with a clear hierarchy. Let's discuss some key structural patterns…&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adapter&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A widely used pattern that transforms one set of data into another. For example, you receive certain data from the backend, but the format doesn't suit your needs. You require different fields, field names, field quantities, and so on. The Adapter pattern performs the necessary transformation.&lt;/p&gt;

&lt;p&gt;This pattern adheres to the principles of single responsibility and open-closed. The logic for the format conversion is separate from the business logic. Adapters can be added or removed independently without affecting other components. As for drawbacks, the use of an Adapter is not always justified. Sometimes it's easier to make changes to the business logic rather than incorporating the pattern.&lt;/p&gt;

&lt;p&gt;In a pizzeria, this pattern can be useful in various situations. Let's consider the case of changing a pizza's name. The proposed menu includes fields such as Name, Ingredients, and others. However, custom orders may not have a name field. In the code, this may not be an issue, but when such a pizza is ordered, the corresponding field in the receipt will be empty. This problem can be resolved by modifying the logic to add the Name field.&lt;/p&gt;

&lt;p&gt;Returning to our pizzeria example, in the code snippet provided, we have two pizzas: "Margherita" and a custom pizza, one with a name field and the other without it. The Margherita pizza is created using the regular approach with new &lt;code&gt;MargheritaPizza()&lt;/code&gt;. For the "nameless" pizza, an adapter is used to set its name. This way, both margarita and &lt;code&gt;adoptedPizza&lt;/code&gt; have the same interface. Now, after processing the order, the receipt will display the names correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class MargaritaPizza implements PizzaI {...} // Hos “nome”
class CustomPizza implements CustomPizzal {...} // Hos no “nome”

class PizzaAdapter {
adoptedPizza;
constructor(customPizza: CustomPizzal) {

this.adoptedPizza = customPizza;
this.adoptedPizza.name = ‘Custom’;

const margarita = new MargaritaPizza();
const customPizza = new CustomPizza();
const adoptedPizza = new PizzaAdapter(customPizza).adoptedPizza;

const order = new Order();
order.create([adoptedPizza, margarital);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
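
&lt;p&gt;A minimal runnable sketch of the same idea. The interfaces here are assumptions: menu pizzas expose a &lt;code&gt;name&lt;/code&gt;, custom pizzas do not, and the adapter fills the gap so both can be processed uniformly:&lt;/p&gt;

```typescript
interface PizzaI {
  name: string;
}

// Menu pizza: already has a "name" field.
class MargaritaPizza implements PizzaI {
  name = "Margarita";
}

// Custom order: no "name" field of its own.
class CustomPizza {
  ingredients = ["tomatoes", "olives"];
}

// Adapter: gives the custom pizza the same interface as menu pizzas.
class PizzaAdapter {
  adoptedPizza: PizzaI;
  constructor(customPizza: CustomPizza) {
    this.adoptedPizza = Object.assign(customPizza, { name: "Custom" });
  }
}

const margarita = new MargaritaPizza();
const adoptedPizza = new PizzaAdapter(new CustomPizza()).adoptedPizza;

// Both now expose `name`, so a receipt can list them uniformly.
const receipt = [adoptedPizza, margarita].map((p) => p.name);
console.log(receipt); // ["Custom", "Margarita"]
```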



&lt;p&gt;&lt;strong&gt;Decorator&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pattern allows adding functionality to an existing class. It eliminates the need to define a lot of logic within the class itself. Instead, the logic can be extracted to a separate location, described there, and then attached as a decorator. Similar patterns can be easily added or removed if there are too many of them. However, this leads to a drawback. If a decorator is a function that accumulates values and adds its own behavior, removing one decorator becomes challenging when there are many such layered functions. The layers become interdependent. Moreover, decorators can look messy, which requires breaking down all the functions. Overall, this pattern adheres to the single responsibility principle by dividing the logic into classes. Depending on the situation, it can be convenient or introduce difficulties.&lt;/p&gt;

&lt;p&gt;Let's consider an example of applying discounts to certain pizzas. You could add a Discount field or logic to all pizza classes in the code, but that would be tedious. It's easier to use decorators. We have a class called &lt;code&gt;SimplePizza&lt;/code&gt;, which represents a standard pizza with a getCost method that returns the cost. We also have a decorator called &lt;code&gt;PizzaWithDiscount&lt;/code&gt;, which takes a pizza and a discount value. It overrides the &lt;code&gt;getCost&lt;/code&gt; method of the pizza and sets a new cost considering the discount:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interface Pizza {...}
class SimplePizza implements Pizza {...}

class PizzaWithDiscount implements Pizza {
protected pizza; discount;
constructor(pizza: Pizza, discount: number) {...}

getCost() {
return this.pizza.getCost() - this.discount;

}

let myPizza = new SimplePizza();

myPizza.getCost(); // simple cost

myPizza = new PizzaWithDiscount(myPizza, discount: 20);
myPizza.getCost(); // simple cost - 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
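
&lt;p&gt;A runnable version with the elided bodies filled in, assuming an illustrative base cost of 100:&lt;/p&gt;

```typescript
interface Pizza {
  getCost(): number;
}

class SimplePizza implements Pizza {
  getCost() {
    return 100; // illustrative base cost
  }
}

// Decorator: wraps any Pizza and overrides getCost.
class PizzaWithDiscount implements Pizza {
  constructor(protected pizza: Pizza, protected discount: number) {}

  getCost() {
    // The discount is an absolute amount, not a percentage.
    return this.pizza.getCost() - this.discount;
  }
}

let myPizza: Pizza = new SimplePizza();
console.log(myPizza.getCost()); // 100

myPizza = new PizzaWithDiscount(myPizza, 20);
console.log(myPizza.getCost()); // 80
```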



&lt;p&gt;Also, here we can see the &lt;code&gt;getCost&lt;/code&gt; method returning &lt;code&gt;pizza.getCost()&lt;/code&gt; minus the discount. The discount is a specific amount, not a percentage. Ultimately, we create a pizza using &lt;code&gt;SimplePizza&lt;/code&gt; and check its default cost, for example, 100. Then we wrap the pizza in the &lt;code&gt;PizzaWithDiscount&lt;/code&gt; decorator and set a discount of 20. The next time we check the cost, the decorator applies the discount, overrides the method, and returns the desired result, which is 80.&lt;/p&gt;

&lt;h2&gt;
  
  
  Behavioral Patterns
&lt;/h2&gt;

&lt;p&gt;These patterns describe the interaction and communication between objects, responsible for distributing responsibilities among entities and parts. They are somewhat similar to algorithms. Although algorithms can also be seen as a kind of pattern, they are focused on computation rather than design. From various behavioral patterns, I will highlight two of them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chain of Responsibility&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This pattern consists of a chain of handlers. Instead of processing everything in one function, we split it into separate functions, handlers, and classes. As a result, a request goes through multiple stages. It resembles calling a technical support service. Firstly, you reach the first handler "Press this button if you want to talk to an operator." Then you get redirected to the next handler, which is the operator. You explain your technical problem to the operator, and they suggest performing simple operations. If nothing helps, you get redirected to another handler, a technical expert. This way, you go through the entire chain.&lt;/p&gt;

&lt;p&gt;In this pattern, the single responsibility principle is upheld: the logic is divided into small parts, such as logic for performing operations and logic for calling and managing requests. However, the process may end up with no resolution. For example, if someone calls technical support with an unreasonable request, the operator may terminate the conversation. In that case, the person won't receive a response because the request was initially invalid.&lt;/p&gt;

&lt;p&gt;The Chain of Responsibility can have different implementations. In the case of a pizzeria, there is a common scenario. The customer orders a pizza, and the cashier asks clarifying questions like whether the customer wants a salad, dessert, or something to drink. There is a clear chain from one question to another.&lt;/p&gt;

&lt;p&gt;To implement this pattern in code, we create two handlers for drinks and salads. There is also an &lt;code&gt;AbstractHandler&lt;/code&gt; class that contains the logic with the &lt;code&gt;setNext&lt;/code&gt; method. It tells the current handler which handler comes next. The &lt;code&gt;askQuestions&lt;/code&gt; function acts as the cashier, asking questions and passing them to the handler. Next, we create a handler for drinks with &lt;code&gt;new DrinksHandler()&lt;/code&gt; and a similar one for salads. We call &lt;code&gt;setNext(salads)&lt;/code&gt; on the drinks handler, indicating that after asking about drinks, the cashier should ask about salads:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class DrinksHandler extends AbstractHandler {...}
class SaladsHandler extends AbstractHandler {...}

function askQuestions(handler: Handler) {
const foodToAskList = ['Salads', 'Deserts’', 'Drinks'];
for (const food of foodToAskList) {
console.log( Cashier: Do you want some ${food}?");
console.log(handler.handle(food) || 'No');

const drinks = new DrinksHandler();
const salads = new SaladsHandler();
drinks.setNext(salads);

console.log('Chain: Drinks &amp;gt; Salads\n');
askQuestions (drinks);
console.log('Subchain: Salads\n');
askQuestions(salads);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
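
&lt;p&gt;The article does not show &lt;code&gt;AbstractHandler&lt;/code&gt; itself, so here is one possible implementation: each handler either answers the question or forwards it to the next handler set via &lt;code&gt;setNext&lt;/code&gt;. The concrete answers are made up for illustration:&lt;/p&gt;

```typescript
interface Handler {
  setNext(handler: Handler): Handler;
  handle(request: string): string | null;
}

abstract class AbstractHandler implements Handler {
  private nextHandler: Handler | null = null;

  setNext(handler: Handler): Handler {
    this.nextHandler = handler;
    return handler; // returning it lets chains be built fluently
  }

  handle(request: string): string | null {
    // Default behavior: pass the request down the chain, or give up.
    if (this.nextHandler) {
      return this.nextHandler.handle(request);
    }
    return null;
  }
}

class DrinksHandler extends AbstractHandler {
  handle(request: string): string | null {
    if (request === "Drinks") { return "Yes, a cola, please"; }
    return super.handle(request);
  }
}

class SaladsHandler extends AbstractHandler {
  handle(request: string): string | null {
    if (request === "Salads") { return "Yes, a Caesar, please"; }
    return super.handle(request);
  }
}

const drinksHandler = new DrinksHandler();
const saladsHandler = new SaladsHandler();
drinksHandler.setNext(saladsHandler);

const answers = ["Salads", "Desserts", "Drinks"].map(
  (food) => drinksHandler.handle(food) || "No"
);
console.log(answers); // ["Yes, a Caesar, please", "No", "Yes, a cola, please"]
```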



&lt;p&gt;At the bottom is the implementation, and the questions are asked in the correct sequence. With the full chain, the responses for drinks and salads will be "Yes," while desserts, which have no handler, get "No." In the subchain that starts from salads, there is no drinks handler, so only the salad question receives a "Yes."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This pattern is quite complex and similar to algorithms, where we create separate entities and distribute them among classes. There is a higher-level handler that accepts one of the algorithms, and it becomes the &lt;code&gt;Strategy&lt;/code&gt; that we use or switch to another.&lt;/p&gt;

&lt;p&gt;Think about routing in Google Maps. Imagine you want to commute from home to work using public transportation. You open the application, enter the start and destination points, and click on the bus icon. The system builds a route according to the specified &lt;code&gt;Strategy&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Suddenly, you decide to walk instead and click on the pedestrian icon. The system changes the &lt;code&gt;Strategy&lt;/code&gt; and recalculates the route based on the new criteria.&lt;/p&gt;

&lt;p&gt;The main advantage of this pattern is quick system reorientation. You don't need to apply many "if" statements with logic described in one place. This approach also helps with using composition instead of inheritance. Without it, you usually end up with a single generic template algorithm and inherit from it, leading to logic rewriting. In contrast, with composition, all algorithms are independent. Therefore, you can apply the open/closed principle, add something new, and it won't affect the existing code.&lt;/p&gt;

&lt;p&gt;Note that this pattern can become cumbersome. I advise against using it if the strategy changes infrequently. It's also important to understand the presence of multiple strategies and the difference between them. To achieve this, carefully design the service interface. In Google Maps, for example, users see icons for buses, pedestrians, or cars and can immediately see which routes are available to them.&lt;/p&gt;

&lt;p&gt;Returning to the pizzeria example, this pattern can help build the logic for order fulfillment (delivery or pickup). These &lt;code&gt;Strategies&lt;/code&gt; would contain two algorithms and two sets of logic. For delivery, the route would look like this: the courier receives the order, picks up the pizza, delivers it to the customer, and accepts payment. In the case of pickup, the customer places an order, arrives at the establishment at a specific time, pays, receives a receipt, and takes the pizza. If a user's plans change, they can switch between strategies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class DeliveryStrategy implements ReleaseStrategy {...}
class PickupStrategy implements ReleaseStrategy {...}

class ReleaseOrderSystem {
orderStrategy: ReleaseStrategy;
setStrategy(orderStrategy: ReleaseStrategy) {...}
release() {...}

const releaseOrderSystem = new ReleaseOrderSystem();
'releaseOrderSystem.setStrategy(new DeliveryStrategy());
releaseOrderSystem.release();
releaseOrderSystem.setStrategy (new PickupStrategy());
releaseOrderSystem.release()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
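
&lt;p&gt;A runnable sketch with the elided bodies filled in; the strings describing each fulfillment flow are illustrative:&lt;/p&gt;

```typescript
interface ReleaseStrategy {
  release(): string;
}

class DeliveryStrategy implements ReleaseStrategy {
  release() {
    return "Courier picks up the pizza, delivers it, and accepts payment";
  }
}

class PickupStrategy implements ReleaseStrategy {
  release() {
    return "Customer arrives, pays, gets a receipt, and takes the pizza";
  }
}

class ReleaseOrderSystem {
  private orderStrategy: ReleaseStrategy = new PickupStrategy(); // default

  setStrategy(orderStrategy: ReleaseStrategy) {
    this.orderStrategy = orderStrategy;
  }

  release() {
    return this.orderStrategy.release();
  }
}

const system = new ReleaseOrderSystem();
system.setStrategy(new DeliveryStrategy());
console.log(system.release()); // delivery flow
system.setStrategy(new PickupStrategy());
console.log(system.release()); // pickup flow
```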



&lt;p&gt;There are two strategies with logic that describe the process of getting pizza. There is also a higher-level &lt;code&gt;ReleaseOrderSystem&lt;/code&gt; that contains the logic, a &lt;code&gt;setStrategy&lt;/code&gt; method for changing the strategy, and a &lt;code&gt;release&lt;/code&gt; method for executing it. The bottom part shows the usage. First, a &lt;code&gt;ReleaseOrderSystem&lt;/code&gt; object is created. Then we set the delivery strategy and call &lt;code&gt;release&lt;/code&gt;, and the pizza is delivered. As an alternative, we set the pickup strategy and call &lt;code&gt;release&lt;/code&gt; again, and the pickup is completed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Anti-patterns
&lt;/h2&gt;

&lt;p&gt;These solutions introduce numerous issues with the code. To avoid errors and ensure maintainability, it is important to be aware of common anti-patterns.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Magic numbers:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;These are hardcoded variables or values whose origins are not obvious to external observers. For example, in a project involving mathematical calculations, there might be a number multiplied by 2. The original coder understands the source of the number clearly, but another developer may not. This complicates the code review process, making it harder to make changes and work on the product. To address this, it's recommended to use variables or constants instead of magic numbers. Functions can also be named more descriptively to enhance code clarity.&lt;/p&gt;
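
&lt;p&gt;A small before-and-after sketch; the VAT rate and function names are made up for illustration:&lt;/p&gt;

```typescript
// Before: a magic number. Why 1.2? Only the original author knows.
function priceWithTaxBad(price: number) {
  return price * 1.2;
}

// After: a named constant makes the value's origin obvious to reviewers.
const VAT_RATE = 0.2; // hypothetical 20% VAT

function priceWithTax(price: number) {
  return price * (1 + VAT_RATE);
}

console.log(priceWithTax(100)); // roughly 120
```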

&lt;p&gt;&lt;em&gt;Hard code:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is a common problem among developers where input data is hardcoded and cannot be easily modified without code editing. If developers and reviewers overlook this factor, serious issues can arise after deployment. It's important to ensure that values are relative rather than absolute to facilitate flexibility and maintainability.&lt;/p&gt;
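
&lt;p&gt;A sketch of the same idea: instead of hardcoding values inside the business logic, inject them as configuration. The URL and timeout below are hypothetical:&lt;/p&gt;

```typescript
interface ApiConfig {
  apiUrl: string;
  timeoutMs: number;
}

// Hardcoded version: changing the endpoint requires editing the code.
function describeOrdersRequestBad() {
  return "GET https://pizzeria.example.com/api/orders (timeout 5000ms)";
}

// Configurable version: values are relative to the injected config.
function describeOrdersRequest(config: ApiConfig) {
  return "GET " + config.apiUrl + " (timeout " + config.timeoutMs + "ms)";
}

const config: ApiConfig = {
  apiUrl: "https://pizzeria.example.com/api/orders", // e.g., read from an env var or config file
  timeoutMs: 5000,
};
console.log(describeOrdersRequest(config));
```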

&lt;p&gt;&lt;em&gt;Boat anchor:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This pattern describes entities, functions, or classes that remain in the project "just in case." You might create a powerful, universal feature and use it. However, as the business logic evolves, the feature becomes unnecessary. The logical solution is to remove it, but sometimes developers keep it, hoping that it might be useful in the future and save them development time. Unfortunately, that "future" may never come, and the anchor remains, cluttering the codebase.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Lava flow:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Similar to the previous anti-pattern, in this case, entities remain in the code due to a lack of time for refactoring. Initially, it may not seem like a big deal since the unused fragments don't bother anyone. However, over time, these remnants become like flowing lava, causing chaos. It's worth taking breaks from developing new features and cleaning up the code from outdated elements.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Reinventing the wheel:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This problem is common among beginners who lack experience. They spend a lot of time trying to create something from scratch while there is already a ready-made and tested solution available. It's important to seek advice from colleagues and learn from their experience rather than reinventing the wheel.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Reinventing the square wheel:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is an even worse version of the previous anti-pattern. Here, not only do you reinvent the wheel, but you also make it worse than existing alternatives. The solution ends up having bugs, inefficiencies and fails to solve the intended problems effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Should patterns be used or not?
&lt;/h2&gt;

&lt;p&gt;There is no definitive answer to whether or not to use patterns. It is important to assess the advantages and disadvantages of different patterns in specific situations. On the one hand, patterns are convenient and reliable. Furthermore, you can learn about the nuances of their implementation and adaptation to your tasks from the relevant community. They are also easily understandable for the team. If you suggest using a specific pattern in a certain part of the code, other developers will immediately see the big picture of how it will work.&lt;/p&gt;

&lt;p&gt;On the other hand, patterns do not always fit certain tasks. You need to analyze your capabilities, the capabilities of these patterns, and the efforts required to implement them. It's important to consider the long-term results. There is a saying: when you have a hammer in your hand, everything looks like a nail. After studying and mastering a pattern, there is often a tendency to apply it in almost every project. However, this can lead to additional problems. Therefore, always consider which of your decisions will genuinely simplify the code-writing process.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>webdev</category>
      <category>frontend</category>
      <category>learning</category>
    </item>
    <item>
      <title>How to Automate Targeted Advertising in a Non-Standard Way - Advice from a Data Engineer</title>
      <dc:creator>NIX United</dc:creator>
      <pubDate>Tue, 07 Mar 2023 11:52:46 +0000</pubDate>
      <link>https://dev.to/nix_united/how-to-automate-targeted-advertising-in-a-non-standard-way-advice-from-a-data-engineer-114p</link>
      <guid>https://dev.to/nix_united/how-to-automate-targeted-advertising-in-a-non-standard-way-advice-from-a-data-engineer-114p</guid>
      <description>&lt;p&gt;&lt;strong&gt;Ivan Harahaychuk, Scala Data Engineer at &lt;a href="https://nix-united.com/?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=martech"&gt;NIX&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It would seem that targeted advertising services are already sufficiently automated. But our team of data engineers decided to look at some familiar technologies from a different angle. In the end, we found new, effective solutions for the client. In this article, I will share the most interesting findings and describe what should be considered for anyone who wants to repeat something like this.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's recall the principles of targeted advertising
&lt;/h2&gt;

&lt;p&gt;Once upon a time, in order to place banners on the Internet, it was necessary to negotiate directly with the owners of advertising resources. They determined the cost of the service, collected information about their audience, reported the number of clicks, etc. Over time, all these steps were automated thanks to the following services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Supply-Side Platform.&lt;/strong&gt; These platforms manage advertising spaces on third-party sites and applications. Their main goal is to sell advertising space to users of such a service at a favorable price. SSP also provides comprehensive information about visitors. In fact, it is the supply side of the market.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Demand-Side Platform.&lt;/strong&gt; Designed to place ads on sites offered by SSP. They form requests, that is, demand. DSP helps advertisers place ads on quality sites at a minimal cost.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Bidding.&lt;/strong&gt; This is a mechanism for advertising auctions in real time. SSP and DSP participate in it. The principle of operation of RTB is depicted in this diagram:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr37i8ggfggazqu1pg4ts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr37i8ggfggazqu1pg4ts.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The user enters the mobile application, where there are banners. Information about this login is sent to the SSP that manages the ad slots in this app. Next, a request is created to the service that conducts the auction, the Ad Exchange. It sends a bid request to all DSPs to which the SSP is subscribed (in the example shown, there are three of them). All platforms return their bids to the auction service, which compares the rates. Whoever bids the most gets the right to place a banner in the application. Everything happens automatically, in a fraction of a second.&lt;br&gt;
In the context of our topic, it is worth mentioning the following terms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;User Acquisition&lt;/strong&gt; is the process of finding new customers, drawing attention to the site or application with the help of advertising. This is the basis of advertising as such.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Retargeting&lt;/strong&gt; is directing advertising to an audience that is already familiar with the product being presented. Here, the focus is on using a certain site or application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
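
&lt;p&gt;The auction step described above can be sketched roughly like this; the DSP names and prices are illustrative, and a real exchange works with full bid requests (e.g., OpenRTB) rather than bare prices:&lt;/p&gt;

```typescript
interface Bid {
  dsp: string;
  price: number; // offered price per impression
}

// Ad Exchange: collect bids from subscribed DSPs and pick the highest.
function runAuction(bids: Bid[]): Bid | null {
  if (bids.length === 0) { return null; }
  return bids.reduce((best, bid) => (bid.price > best.price ? bid : best));
}

const winner = runAuction([
  { dsp: "DSP-1", price: 1.2 },
  { dsp: "DSP-2", price: 2.5 },
  { dsp: "DSP-3", price: 0.9 },
]);
console.log(winner); // { dsp: "DSP-2", price: 2.5 }
```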

&lt;h2&gt;
  
  
  Let's move to the project. What was the goal?
&lt;/h2&gt;

&lt;p&gt;Our team had to improve the system of targeted advertising, which would modify and optimize User Acquisition and retargeting. The scale of activity of this system is truly fascinating. On average, the service processes almost a million requests per second! In addition, traffic comes from many regions: from the USA and Brazil to Europe and Japan.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppj8leyn13m7eifepqua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppj8leyn13m7eifepqua.png" alt="Image description" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The service is built on many modules, so the technology stack is very diverse.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To handle bid requests, we used the bidder, the main request-handling module, written in Scala. For its API, we rely on the Akka library stack, which the other Scala modules also require.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apache Kafka was chosen as a message broker for the transmission of bid requests. It can not only transmit bid requests to other modules, but also withstand a significant load in real time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To process large data arrays, we use Spark jobs for the various kinds of processing such volumes of information require.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We use Angular for the frontend. The service that manages the frontend API is written in Java using the Spring framework.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bid forecasting requires a powerful tool for predicting the optimal level of price per ad impression. Python with PMML models helped us with this.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For data storage, MySQL was chosen for data that should be stored for more than a week, Aerospike and Redis for frequently changing data and cache, and Apache Druid for analytical data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Elasticsearch was used to work with logs. The platform generates a huge number of logs, and Elasticsearch copes best with such volumes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Web services. For deployment and other related tasks, we have connected many AWS services: EC2, ECS, EMR, S3, S3 Glacier, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Difference between Redis and Aerospike
&lt;/h2&gt;

&lt;p&gt;You probably noticed the use of Redis and Aerospike at the same time. Aren’t those two NoSQL databases with similar functionality? Why not keep one of them, then? However, we need both options (we use the open-source versions). Here it is worth paying attention to their differences, which are critical for our project.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Using flash memory.&lt;/strong&gt; The service generates a lot of data that is difficult to store exclusively in RAM. Redis in the free version works only with RAM. The Aerospike doesn't have this problem, so we use an SSD with it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Support for triggers.&lt;/strong&gt; Aerospike does not have such functionality, but Redis has it. This is very important for our project. We use the publish/subscribe mechanism for some data, and its change should trigger a specific method.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Horizontal scaling.&lt;/strong&gt; Unlike Aerospike, Redis does not scale well horizontally. Aerospike is therefore more suitable for handling heavy loads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data consistency.&lt;/strong&gt; Redis does not guarantee data consistency, which is critical for our project. Aerospike has full support for this.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS integration.&lt;/strong&gt; Redis is successfully integrated with Amazon Web Services through ElastiCache. Aerospike does not have this. It is actually deployed only on EC2 instances.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How the Reporting UI works
&lt;/h2&gt;

&lt;p&gt;In the project, all modules are interesting in their own way. But I would single out the two most unusual parts. The first is Reporting UI. This module sends reports, but the implemented mechanism is different from the usual methods. Usually, reports with important business information are sent to email or various BA tools. In our case, reports can also be sent in Slack. All project communication takes place in this messenger.&lt;/p&gt;

&lt;p&gt;We have added other features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the ability to receive a report on channel profit in Slack in the form of a detailed chart;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;subscription to the required reports;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;integration with chat teams so that bots can generate a profit report for a specific SSP.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This method of delivering reports was appreciated by both business analysts and the customer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfvc96k37qkxk4f3ol86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfvc96k37qkxk4f3ol86.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The technical implementation of the module is not difficult. First, we query Amazon Athena to retrieve the required information from the bid logs. We bring these data into the formats required for reports. Next, it remains to choose the method of distribution: by email, in a Slack channel or a chatbot (if there is a corresponding command).&lt;/p&gt;

&lt;p&gt;The illustration below shows an example of such a report. Here is a graph with profit data and some numbers. The two lines represent today's and yesterday's data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3sy1p18ks6679fcrtb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3sy1p18ks6679fcrtb7.png" alt="Image description" width="642" height="632"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At a certain point, the number of service clients increased dramatically. In order to increase the flexibility of configuration and scaling of the system, there was a need to switch to a more modular architecture. But after the update, there were errors in the modules, which led to the loss of money. These issues are usually easy to spot because we have a lot of analytics and metrics. So it was necessary to stop the bidder and fix the bug. Then we decided to build a high-level exception handler, our Stopper. Its purpose is to automate the identification of problems and stop the handling of requests.&lt;/p&gt;

&lt;p&gt;Stopper implementation is quite simple:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F583mwan069n1dkp5tyih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F583mwan069n1dkp5tyih.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a rule, we look at the number of events and the lag of some topics on Kafka to track problems. Our module either looks at the lag or reads the number of events passing through the topic. Since we understand benchmarks, we can set minimum and maximum levels for topics and delay timeouts. In case of excess or shortage of events, data about these problems is sent to a MySQL table. It is there that the Stopper checks the information and decides whether to stop the handling of requests or not.&lt;/p&gt;
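
&lt;p&gt;The decision step can be sketched as a simple threshold check; the numbers and field names below are illustrative, since the real Stopper reads Kafka metrics and a MySQL table:&lt;/p&gt;

```typescript
interface TopicStats {
  topic: string;
  eventsPerMinute: number;
  lagMs: number;
}

interface Thresholds {
  minEvents: number;
  maxEvents: number;
  maxLagMs: number;
}

// A shortage or excess of events, or too much lag, signals a problem.
function shouldStopBidding(stats: TopicStats, limits: Thresholds): boolean {
  const tooFew = limits.minEvents > stats.eventsPerMinute;
  const tooMany = stats.eventsPerMinute > limits.maxEvents;
  const tooLaggy = stats.lagMs > limits.maxLagMs;
  return tooFew || tooMany || tooLaggy;
}

const limits: Thresholds = { minEvents: 1000, maxEvents: 2000000, maxLagMs: 60000 };

console.log(shouldStopBidding({ topic: "bids", eventsPerMinute: 500000, lagMs: 2000 }, limits)); // false
console.log(shouldStopBidding({ topic: "bids", eventsPerMinute: 10, lagMs: 2000 }, limits)); // true
```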

&lt;p&gt;You might also notice Redis in the diagram. This is because we are currently testing stopping requests based on this database as well. If the number of keys becomes critically large or drops sharply, the system must do everything the same as for Kafka.&lt;/p&gt;

&lt;h2&gt;
  
  
  What should be considered before starting work?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;There is no need to limit yourself to one solution&lt;/strong&gt;&lt;br&gt;
As you can see, we did not limit ourselves to one NoSQL database. This allowed us to combine the best of Redis and Aerospike, increase the scalability of the system, and ultimately save money, which is also appreciated by the business.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use familiar approaches and tools in unusual ways&lt;/strong&gt;&lt;br&gt;
We tried sending important BA information to messengers. This is not traditional in most projects, but that's what makes it interesting. More importantly, this solution greatly increased the mobility of our business analysts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automate everything you can&lt;/strong&gt;&lt;br&gt;
Even if it seems impossible, look for implementation options; everything is achievable. We saw this when we automated the handling of some exceptions, which freed up some of our specialists' time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>tutorial</category>
      <category>programming</category>
      <category>datascience</category>
      <category>martech</category>
    </item>
    <item>
      <title>A Three-pronged Approach to Bringing ML Models Into Production</title>
      <dc:creator>NIX United</dc:creator>
      <pubDate>Wed, 06 Jul 2022 07:45:17 +0000</pubDate>
      <link>https://dev.to/nix_united/a-three-pronged-approach-to-bringing-ml-models-into-production-3ihb</link>
      <guid>https://dev.to/nix_united/a-three-pronged-approach-to-bringing-ml-models-into-production-3ihb</guid>
      <description>&lt;p&gt;I am Vitaly Tsymbaliuk, a Data Science Specialist at &lt;a href="https://nix-united.com/?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=ML_models"&gt;NIX United&lt;/a&gt;. Throughout this article, I will explain several ways to deploy machine learning models in production, based on our team's experience. It should be noted that the main criteria for choosing these approaches are convenience, speed of operation, and completeness of functionality. In addition, I will describe the bottlenecks we encountered and the solutions we eventually applied.&lt;/p&gt;

&lt;p&gt;Engineers in &lt;a href="https://nix-united.com/services/data-science/?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=ML_models"&gt;data science and MLOps&lt;/a&gt; will find this article valuable. With this material, you will be able to set up simple, fast, continuous delivery within ML.&lt;/p&gt;

&lt;p&gt;In data science, sending ML models to production often remains in the background, as it is the last stage. Before it come data collection, selection of algorithms to solve the problem, testing of various hypotheses, and perhaps experiments. The first time we see results and the problem is more or less solved, we understandably want to cheer "Hurray! Triumph! Victory!" However, we must still make the model work, and not within some Jupyter Notebook but in a real application with real workloads and real users. Furthermore, the production phase implies two other requirements. The first is the option to replace the ML model with a new one without stopping the application (hot swap). The second is the ability to configure access rights to the model and run several versions of it simultaneously.&lt;/p&gt;

&lt;p&gt;In our team's projects, we have tried many approaches for models created and trained in various ML frameworks. I will focus on the variants that we most often use in practice.&lt;/p&gt;

&lt;p&gt;Before moving on to serving tools, we tried developing our own web applications that loaded the trained models. However, we ran into several issues with this approach. We had to cope with the web applications' internal multithreading, which clashed with the ML frameworks' own, and with the initial loading of the models: because loading is time-consuming, the apps were not ready to use right away. There were also problems with many users working simultaneously, with access control, and with restarting the application after training a new version of a model. Thanks to specialized libraries, these issues are now a thing of the past.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tensorflow Serving
&lt;/h2&gt;

&lt;p&gt;This is most likely the best approach to interact with Tensorflow models. We've also used this framework to work with PyTorch models that were converted to Tensorflow using the ONNX intermediate format.&lt;/p&gt;

&lt;p&gt;The main advantages of Tensorflow Serving:&lt;/p&gt;

&lt;p&gt;– &lt;strong&gt;Support for multiple versions.&lt;/strong&gt; It's simple to set up operations so that, for example, many versions of the same model can run at the same time. You can do A/B testing or keep the QA/Dev versions running this way.&lt;/p&gt;
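&lt;p&gt;As an illustration, running two versions side by side is driven by Serving's model configuration file. The sketch below assumes a hypothetical model name and path:&lt;/p&gt;

```
model_config_list {
  config {
    name: "phone_classifier"                # hypothetical model name
    base_path: "/models/phone_classifier"
    model_platform: "tensorflow"
    model_version_policy {
      specific { versions: 2 versions: 3 }  # e.g. stable v2 and QA v3 together
    }
  }
}
```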

&lt;p&gt;– &lt;strong&gt;The ability to swap out models without shutting down the service.&lt;/strong&gt; For us, this is a very useful function. We can place a new version in the model folder without halting the service, and Tensorflow will wait until the copying of the model is complete before loading the new version, deploying it, and retiring the previous one. Even for users actively interacting with the model, all of this goes unnoticed.&lt;/p&gt;

&lt;p&gt;– &lt;strong&gt;Auto-generated REST and gRPC APIs for working with models.&lt;/strong&gt; This is perhaps the library's most beneficial feature. There's no need to create any services: access to all models is granted automatically. There is also an endpoint for retrieving model metadata, which we frequently use when we implement third-party models and need to know the input data types. We use gRPC when we need to speed things up, as this protocol is significantly faster than REST.&lt;/p&gt;
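&lt;p&gt;To give a feel for the auto-generated REST API: every loaded model is exposed under a fixed URL scheme, &lt;code&gt;/v1/models/&amp;lt;name&amp;gt;[/versions/&amp;lt;n&amp;gt;]:predict&lt;/code&gt;. The sketch below only builds the request URL and JSON body (the host and model name are assumptions), since that is essentially all the client code there is to write:&lt;/p&gt;

```python
import json

def predict_request(host, model, instances, version=None):
    """Build the URL and JSON body for a TensorFlow Serving REST predict call.

    Serving exposes this endpoint automatically for every loaded model;
    no hand-written service is needed on the serving side.
    """
    path = f"/v1/models/{model}"
    if version is not None:
        # pin a specific version instead of the default (latest)
        path += f"/versions/{version}"
    url = f"http://{host}:8501{path}:predict"   # 8501 is Serving's default REST port
    body = json.dumps({"instances": instances})
    return url, body

url, body = predict_request("serving-host", "phone_classifier", [[1.0, 2.0]], version=2)
```

&lt;p&gt;The resulting pair can be sent with any HTTP client; the gRPC path works the same way, only over the faster protocol.&lt;/p&gt;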

&lt;p&gt;– &lt;strong&gt;Working with Kubernetes and Docker.&lt;/strong&gt; At present, a Docker container is our primary way of working with Serving. Serving runs in a separate container, into which we copy the configuration file containing our model descriptions. After that, we add a Docker volume containing the models themselves. The same volume is used in other containers where we train new models as needed (we have used it in Jupyter and in a separate application). This setup has now been thoroughly tested and is in use on a number of our projects.&lt;/p&gt;

&lt;p&gt;– &lt;strong&gt;Scalability.&lt;/strong&gt; This feature is still under study, and we are going to use it in the future. In theory, Tensorflow Serving can be run in Kubernetes (e.g. on Google Cloud) with the serving instances placed behind a load balancer, so that the load on the models is shared across multiple instances.&lt;/p&gt;

&lt;p&gt;Disadvantages of Tensorflow Serving:&lt;/p&gt;

&lt;p&gt;– &lt;strong&gt;It’s tough to deploy a model that wasn't built with Tensorflow (sklearn, LightGBM, XGBoost).&lt;/strong&gt; Although such extensions are supported, you have to write your own C++ code for them.&lt;/p&gt;

&lt;p&gt;– &lt;strong&gt;You should be concerned about security.&lt;/strong&gt; For example, close network access to Tensorflow Serving and leave access open only for your own services, which already implement authentication. In our Docker deployment, we normally close all ports for the container running the service, so the models are accessible only from other containers on the same subnet. Docker Compose serves this container bundle well.&lt;/p&gt;
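&lt;p&gt;The "closed ports" arrangement can be sketched as a Docker Compose fragment (service and network names are illustrative). The Serving container publishes no ports, so only containers on the same network, such as the authenticating API service, can reach the models:&lt;/p&gt;

```yaml
services:
  serving:
    image: tensorflow/serving
    volumes:
      - models:/models        # shared with the containers that train new models
    networks: [backend]       # no "ports:" section, so unreachable from outside
  api:
    build: ./api              # our service that implements authentication
    ports: ["8080:8080"]      # the only published entry point
    networks: [backend]
networks:
  backend:
volumes:
  models:
```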

&lt;p&gt;When comparing Tensorflow to PyTorch, the latter until recently had no way to serve a model with anything similar to Tensorflow Serving; even the official documentation demonstrated how to use a Flask service. With Tensorflow you don't need to do this, as such a service is built automatically. As for the drawbacks, they mattered to us when we began learning Serving, but they are no longer relevant in the architecture we employ.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In order to add some business context to the technical research that has been outlined, I will briefly share a few relevant cases. &lt;br&gt;
We partnered with an innovative startup in the in vitro fertilization field to help them implement a number of prediction models, reach the required level of optimization, and build a continuous deployment process. Since TensorFlow was utilized initially, we decided to build on top of the existing environment and incorporated the Tensorflow Serving method. The released product reliably predicts embryo quality without human intervention, though it keeps us busy with ongoing model enhancements.&lt;br&gt;
Another example, where we went with Predictive Model Markup Language, was an AI-based advertising system aimed at maximizing the outcome of ad campaigns through deep learning of customer profiles and personalized offerings. Since the implementation required massive data processing, we brought in our data engineers to build Scala-based data pipelines. Therefore, PMML's ability to produce Scala-ready ML models was the decisive benefit that led us to select it over the alternatives.&lt;br&gt;
 — Eugene Rudenko, AI Solutions Consultant at NIX United.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Triton Inference Server
&lt;/h2&gt;

&lt;p&gt;Another popular model deployment framework, this one from Nvidia. It allows you to deploy GPU- and CPU-optimized models both locally and in the cloud, supports the REST and gRPC protocols, and can even be embedded as a C library directly into applications on end devices.&lt;/p&gt;

&lt;p&gt;The main benefits of the Triton Inference Server:&lt;/p&gt;

&lt;p&gt;– &lt;strong&gt;Ability to deploy models trained with various deep learning frameworks.&lt;/strong&gt; These are TensorRT, TensorFlow GraphDef, TensorFlow SavedModel, ONNX, PyTorch TorchScript, and OpenVINO. Both TensorFlow 1.x and TensorFlow 2.x versions are supported. Triton also supports model formats such as TensorFlow-TensorRT and ONNX-TensorRT.&lt;/p&gt;

&lt;p&gt;– &lt;strong&gt;Parallel operation and hot swapping of deployed models.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;– &lt;strong&gt;It isn't just for deep learning models.&lt;/strong&gt; Triton provides an API that allows you to use any Python or C++ algorithm. At the same time, all of the benefits of the deep learning models used in Triton are preserved.&lt;/p&gt;

&lt;p&gt;– &lt;strong&gt;Model Pipelines.&lt;/strong&gt; When several models are deployed and some are awaiting data from other models, the models can be integrated into sequences. Sending a request to a group of models like this will cause them to run in order, with data traveling from one model to the next.&lt;/p&gt;
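&lt;p&gt;In Triton itself such a pipeline is declared in an ensemble configuration rather than in client code, but the data flow is easy to picture with a plain-Python sketch in which each "model" is just a function and the output of one stage feeds the next:&lt;/p&gt;

```python
def preprocess(batch):
    # stand-in for a preprocessing model: scale raw pixel values
    return [x / 255.0 for x in batch]

def detector(batch):
    # stand-in for a deployed detection model
    return [round(x, 3) for x in batch]

def run_pipeline(batch, stages):
    """Send data through the stages in order, as a Triton ensemble would."""
    for stage in stages:
        batch = stage(batch)
    return batch

result = run_pipeline([51, 102], [preprocess, detector])
```

&lt;p&gt;A single request to the ensemble triggers the whole chain, exactly as a request to a group of models does in Triton.&lt;/p&gt;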

&lt;p&gt;– &lt;strong&gt;Ability to integrate Triton as a component (C-library) in the application.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;– &lt;strong&gt;Deployment.&lt;/strong&gt; The project features a number of docker images that are updated and expanded on a regular basis. It's a good approach for establishing a scalable production environment when used in conjunction with Kubernetes.&lt;/p&gt;

&lt;p&gt;– &lt;strong&gt;A number of metrics allow you to monitor the status of models and the server.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We can say from experience that the combination of Triton Server + TensorRT engine works well, since this format lets models be as performant as possible. However, at least two points must be considered here. First, the TensorRT engine should be compiled on a device with the same GPU/CPU as the Triton deployment environment. Second, if you have a custom model, you may have to implement the missing operations manually.&lt;/p&gt;

&lt;p&gt;In terms of the latter, this is quite common when employing non-standard SOTA models. If you want to use popular models, you can find a variety of TensorRT implementations on the web. For example, on a project where we needed to train an object-detection algorithm in PyTorch and deploy it on Triton, we followed the PyTorch -&amp;gt; TensorRT -&amp;gt; Triton path. &lt;a href="https://github.com/wang-xinyu/tensorrtx"&gt;The TensorRT implementation of the model was taken from here.&lt;/a&gt; This repository may also interest you more broadly, as it contains many current implementations maintained by the developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  PMML (Predictive Model Markup Language)
&lt;/h2&gt;

&lt;p&gt;To be clear, PMML is not a serving library but a format for saving models that supports scikit-learn, Tensorflow, PyTorch, XGBoost, LightGBM, and many other ML models. In our practice, we used this format to export a trained LightGBM model and convert the result into a jar file with the JPMML transpiler. As a result, we received a fully functional model that could be loaded into Java/Scala code and used immediately.&lt;/p&gt;

&lt;p&gt;In our case, the main goal of applying this approach was to get a very fast response from the model—and indeed, in contrast to the same model in Python, the response time decreased by about 20 times. The second advantage of the approach is the reduction of the size of the model itself—in transpiled form its size became three times smaller. However, there are disadvantages—since this is a fully-programmatic way of working with models, all the possibilities of retraining and substituting models must be provided independently.&lt;/p&gt;

&lt;p&gt;In conclusion, I would note that the three options listed above are not silver bullets, and other approaches were left out. We are now taking a closer look at TorchServe and trying Azure ML-based solutions on some projects. The methods named here are what have worked well on most of our projects: they are fairly easy to set up and implement, don't take much time, and can be handled by an ML engineer to get the solution ready. Of course, requirements vary from project to project, and each time you have to decide which ML deployment method is appropriate in a particular case. In really difficult instances, you'll certainly need to work with MLOps engineers and, in some cases, design an entire pipeline combining multiple methods and services.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>ai</category>
    </item>
    <item>
      <title>Can Micronaut replace Spring Boot? Let's take a look at an example.</title>
      <dc:creator>NIX United</dc:creator>
      <pubDate>Thu, 17 Feb 2022 16:25:02 +0000</pubDate>
      <link>https://dev.to/nix_united/can-micronaut-replace-spring-boot-lets-take-a-look-at-an-example-3nna</link>
      <guid>https://dev.to/nix_united/can-micronaut-replace-spring-boot-lets-take-a-look-at-an-example-3nna</guid>
      <description>&lt;p&gt;Hi, my name is Ivan Kozikov, I am a full stack Java developer at &lt;a href="https://nix-united.com/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=Can%20Micronaut%20replace%20Spring%20Boot"&gt;NIX United&lt;/a&gt;. I have Oracle and Kubernetes certifications, and I like to explore new technologies and learn new topics in the area of Java.&lt;/p&gt;

&lt;p&gt;Every year, JRebel conducts a survey among Java developers on which frameworks they use. In &lt;a href="https://www.jrebel.com/blog/2020-java-technology-report"&gt;2020&lt;/a&gt;, Spring Boot won with 83%. However, in &lt;a href="https://www.jrebel.com/blog/2021-java-technology-report"&gt;2021&lt;/a&gt;, its share dropped to 62%. One of the frameworks that more than doubled its market presence was Micronaut. This rapid growth in popularity raises a logical question: what makes it interesting? I decided to find out what problems Micronaut overcomes and whether it can become an alternative to Spring Boot.&lt;/p&gt;

&lt;p&gt;In this article, I will walk through the history of software architecture, which will help to understand why such frameworks emerged and what problems they solve. I will highlight the main features of Micronaut and compare two applications with identical technologies: one on this framework and the other on Spring Boot.&lt;/p&gt;

&lt;h3&gt;
  
  
  From Monoliths to Microservices and Beyond…
&lt;/h3&gt;

&lt;p&gt;Modern software development began with a monolithic architecture. In it, the application is served through a single deployable file. If we are talking about Java, this is one JAR file, which contains all the logic and business processes of the application. You then deploy that JAR file wherever you need it.&lt;/p&gt;

&lt;p&gt;This architecture has its advantages. First of all, it's very easy to start developing a product. You create one project and fill it with business logic without thinking about communication between different modules. You also need very few resources at the start and it's easier to perform integration testing for the whole application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8sftnfmtm8vily4o13t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8sftnfmtm8vily4o13t.png" alt="Image description" width="770" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, this architecture also has disadvantages. Monolithic applications almost always grow into the so-called "big ball of mud": the components become so intertwined that maintenance gets difficult, and the larger the product, the more resources and effort it takes to change anything in the project.&lt;/p&gt;

&lt;p&gt;This is why microservice architecture came to replace it. It divides the application into small services and creates separate deployment files according to business processes. But don't let the word "micro" mislead you — it refers to the business capabilities of the service, not its size.&lt;/p&gt;

&lt;p&gt;Usually, microservices are focused on single processes and their support. This provides several advantages. First, because they are separate independent applications, you can tailor the necessary technology to the specific business process. Second, it is much easier to assemble and deal with the project.&lt;/p&gt;

&lt;p&gt;However, there are also disadvantages. You first need to think about the relationship between services and their channels. Also, microservices require more resources to maintain their infrastructure than in the case of a monolith. And when you move to the cloud, this issue is even more critical, because you have to pay for the consumption of cloud infrastructure resources from your applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb0arl3bzvkiy8wfthxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb0arl3bzvkiy8wfthxl.png" alt="Image description" width="770" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the Difference Between Frameworks and Microframeworks?&lt;/strong&gt;&lt;br&gt;
To speed up software development, frameworks began to be created. Historically, the model for many Java developers was Spring Boot. However, over time, its popularity declined, and this can be explained. Over the years, Spring Boot has gained quite a lot of "weight," which prevents it from working quickly and using fewer resources, as required by modern software development in the cloud environment. That is why microframeworks began to replace it.&lt;/p&gt;

&lt;p&gt;Microframeworks are a fairly new kind of framework that aims to maximize the speed of web service development. Usually, most of the functionality is cut out, as opposed to full stack solutions like Spring Boot: very often they lack authentication and authorization, abstractions for database access, web templates for mapping to UI components, and so on. Micronaut started out the same way but has outgrown that stage. Today it has everything that makes it a full stack framework.&lt;/p&gt;
&lt;h3&gt;
  
  
  Main Advantages of Micronaut
&lt;/h3&gt;

&lt;p&gt;The authors of this framework were inspired by Spring Boot but emphasized the minimal use of reflection and proxy classes, which speeds up its work. Micronaut is multilingual and supports Java, Groovy, and Kotlin.&lt;/p&gt;

&lt;p&gt;Among the main advantages of Micronaut, I highlight the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Abstractions for accessing all popular databases.&lt;/strong&gt; Micronaut has out-of-the-box solutions for working with databases. They also provide an API to create your own classes and methods to access databases. In addition, they support both variations: normal blocking access and reactive access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Aspect-oriented API.&lt;/strong&gt; In Spring Boot, you can develop software quickly thanks to annotations. But these instructions are built on reflection and the creation of proxy classes during program execution. Micronaut provides a set of ready-to-use instructions. You can use its tools to write your own annotations that use reflection only at compile time, not at runtime. This speeds up application startup and improves performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Natively built-in work with cloud environments.&lt;/strong&gt; We will talk about this in detail further and I will reveal the important points separately.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Built-in set of testing tools.&lt;/strong&gt; These allow you to quickly bring up the clients and servers you need for integration testing. You can also use the familiar JUnit and Mockito libraries.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  What Does Ahead-of-time Compilation Give Us?
&lt;/h4&gt;

&lt;p&gt;I already pointed out that Micronaut does not use reflection and proxy classes — this is possible through ahead-of-time compilation. Before executing an application at the time of package creation, Micronaut tries to comprehensively resolve all dependency injections and compile classes so that it does not have to while the application itself is running.&lt;/p&gt;

&lt;p&gt;Today there are two main approaches to compilation: just-in-time (JIT) and ahead-of-time (AOT). JIT compilation has several main advantages. The first is the high speed of building an artifact, the JAR file: nothing extra needs to be compiled, since that happens at runtime. It is also easier to load classes at runtime; with AOT compilation this has to be done manually.&lt;/p&gt;

&lt;p&gt;In AOT compilation, however, startup time is shorter, because everything the application needs to run will be compiled before it is even started. With this approach, the artifact size will be smaller because there are no proxy classes to work through which compilations are then run. On the plus side, fewer resources are required with this compilation.&lt;/p&gt;

&lt;p&gt;It is important to emphasize that, out of the box, Micronaut has built-in support for GraalVM. This is a topic for a separate article, so I will not go deep into it here. Let me say one thing: GraalVM is a virtual machine for different programming languages. It allows the creation of executable image files, which can be run within containers. There the start and run speeds of the application are at maximum.&lt;/p&gt;

&lt;p&gt;However, when I tried this in Micronaut, even following the framework creator's comments, I had to explicitly designate the application's key classes when creating the native image so that they would be precompiled. So this issue should be researched carefully against the advertised promises.&lt;/p&gt;
&lt;h3&gt;
  
  
  How Micronaut Works with Cloud Technology
&lt;/h3&gt;

&lt;p&gt;Separately, native support for cloud technologies should be disclosed. I will highlight four main points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Micronaut natively supports environment-specific configuration.&lt;/strong&gt; When we work with cloud environments, especially with multiple vendors, we need to create components specifically for the infrastructure in which the application will run. For this, Micronaut lets us create conditional components that depend on certain conditions. It provides a set of configurations for different environments and does its best to detect the environment it runs on. This greatly simplifies the developer's work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Micronaut has built-in tools to discover the services needed to run the application.&lt;/strong&gt; Even if it does not know a service’s real address, it will still try to find it. There are several options: you can use the built-in mechanism or add-on modules (e.g. Consul, Eureka, or Zookeeper).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Micronaut has the ability to make a client-side load balancer.&lt;/strong&gt; It is possible to regulate the load of the application replicas on the client-side, which makes life easier for the developer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Micronaut supports serverless architecture.&lt;/strong&gt; I have repeatedly encountered developers saying, "I will never write lambda-functions in Java." In Micronaut we have two possibilities to write lambda-functions. The first is to use the API, which is directly given by the infrastructure. The second is to define controllers, as in a normal REST API, and to then use them within that infrastructure. Micronaut supports AWS, Azure, and Google Cloud Platform.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some may argue that all this is also available in Spring Boot. But connecting cloud support there is only possible thanks to additional libraries or foreign modules, while in Micronaut, everything is built in natively.&lt;/p&gt;
&lt;h3&gt;
  
  
  Let's Compare Micronaut and Spring Boot Applications
&lt;/h3&gt;

&lt;p&gt;Let's get to the fun part! I have two applications — one written in Spring Boot, the other in Micronaut. This is a so-called user service, which has a set of CRUD operations to work with users. We have a PostgreSQL database connected through a reactive driver, a Kafka message broker, and WebSockets. We also have an HTTP client for communicating with third-party services to get more information about our users.&lt;/p&gt;

&lt;p&gt;Why such an application? Often in presentations about Micronaut, metrics are passed in the form of Hello World applications, where no libraries are connected and there is nothing in the real world. I want to show how it works in an example similar to practical use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kd5jbk1jw974pg03fuo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kd5jbk1jw974pg03fuo.png" alt="Image description" width="770" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I want to point out how easy it is to switch from Spring Boot to Micronaut. Our project is pretty standard: we have an HTTP client for the third-party service, a REST controller that handles requests, services, a repository, and so on. If we open the controller, everything is easy to understand after Spring Boot: the annotations are very similar, and most of them, like PathVariable, map one-to-one to Spring Boot, so it shouldn't be hard to learn it all.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Controller("api/v1/users")
public class UserController {
  @Inject
  private UserService userService;

  @Post
  public Mono&amp;lt;MutableHttpResponse&amp;lt;UserDto&amp;gt;&amp;gt; insertUser(@Body Mono&amp;lt;UserDto&amp;gt; userDtoMono) {
      return userService.createUser(userDtoMono)
          .map(HttpResponse::ok)
          .doOnError(error -&amp;gt; HttpResponse.badRequest(error.getMessage()));
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same goes for services. Where Spring Boot has a Service annotation, here we have a Singleton annotation that defines the bean's scope. There's also a similar mechanism for injecting dependencies: as in Spring Boot, they can be injected via constructors, properties, or method parameters. The full controller, with all the CRUD endpoints wired to the service, looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Controller("api/v1/users")
public class UserController {
  @Inject
  private UserService userService;

  @Post
  public Mono&amp;lt;MutableHttpResponse&amp;lt;UserDto&amp;gt;&amp;gt; insertUser(@Body Mono&amp;lt;UserDto&amp;gt; userDtoMono) {
      return userService.createUser(userDtoMono)
          .map(HttpResponse::ok)
          .doOnError(error -&amp;gt; HttpResponse.badRequest(error.getMessage()));
  }

  @Get
  public Flux&amp;lt;UserDto&amp;gt; getUsers() {
    return userService.getAllUsers();
  }

  @Get("{userId}")
  public Mono&amp;lt;MutableHttpResponse&amp;lt;UserDto&amp;gt;&amp;gt; findById(@PathVariable long userId) {
    return userService.findById(userId)
        .map(HttpResponse::ok)
        .defaultIfEmpty(HttpResponse.notFound());
  }

  @Put
  public Mono&amp;lt;MutableHttpResponse&amp;lt;UserDto&amp;gt;&amp;gt; updateUser(@Body Mono&amp;lt;UserDto&amp;gt; userDto) {
    return userService.updateUser(userDto)
        .map(HttpResponse::ok)
        .switchIfEmpty(Mono.just(HttpResponse.notFound()));
  }

  @Delete("{userId}")
  public Mono&amp;lt;MutableHttpResponse&amp;lt;Long&amp;gt;&amp;gt; deleteUser(@PathVariable Long userId) {
    return userService.deleteUser(userId)
        .map(HttpResponse::ok)
        .onErrorReturn(HttpResponse.notFound());
  }

  @Get("{name}/hello")
  public Mono&amp;lt;String&amp;gt; sayHello(@PathVariable String name) {
    return userService.sayHello(name);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Dependency injection also has a familiar look after Spring Boot. The only difference is that I use a reactive approach in both applications.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Inject
private UserRepository userRepository;

@Inject
private UserProxyClient userProxyClient;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The repository, too, will look familiar after Spring Boot. And I personally really liked the HTTP client for communicating with other services: you can write it declaratively, just by defining an interface and specifying the HTTP methods, the query values to pass, the URL parts, and the request body. It's all quick, plus you can make your own client. In Spring Boot, again, this is only possible with third-party libraries that rely on reflection and proxy classes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@R2dbcRepository(dialect = Dialect.POSTGRES)
public interface UserRepository extends ReactiveStreamsCrudRepository&amp;lt;User, Long&amp;gt; {
  Mono&amp;lt;User&amp;gt; findByEmail(String email);

  @Override
  @Executable
  Mono&amp;lt;User&amp;gt; save(@Valid @NotNull User entity);
}
@Client("${placeholder.baseUrl}/${placeholder.usersFragment}")
public interface UserProxyClient {

  @Get
  Flux&amp;lt;ExternalUserDto&amp;gt; getUserDetailsByEmail(@NotNull @QueryValue("email") String email);

  @Get("/{userId}")
  Mono&amp;lt;ExternalUserDto&amp;gt; getUserDetailsById(@PathVariable String userId);

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's go directly to the terminal. I have two windows open: on the left, on the yellow background, is Spring Boot; on the right, on the gray background, is Micronaut. I built both packages. In Spring Boot it took almost five seconds, while Micronaut took nearly twice as long because of AOT compilation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6xj1ip6rda66r52vqd2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6xj1ip6rda66r52vqd2.png" alt="Image description" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, I compared the size of the artifact. The JAR file for Spring Boot is 40 MB, and for Micronaut 38 MB. Not much less, but still less.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm6akd4tfana46ketlol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm6akd4tfana46ketlol.png" alt="Image description" width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, I ran an application startup speed test. In Spring Boot, the Netty server started on port 8081 in 4.74 seconds, while in Micronaut it took 1.5 seconds. In my opinion, that's quite a significant advantage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqhq7rarnolhgnvauln5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqhq7rarnolhgnvauln5.png" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is a very interesting test. I have a Node.js script that takes the path to the JAR file as an argument. It runs the application, and every half-second it tries to fetch data from the URL I gave it (that is, our users). The script terminates when it gets the first response. In Spring Boot it finished in 6.1 seconds, and in Micronaut in 2.9 seconds, again twice as fast. The metrics show that Spring Boot started in 4.5 seconds and the response came 1.5 seconds later; for Micronaut, these figures are about 1.5 and 1.3 seconds, respectively. In other words, the gain comes precisely from the faster application start; in practice, Spring Boot could respond just as fast if it didn't do additional compilation at startup.&lt;/p&gt;
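&lt;p&gt;The script in the demo is Node.js, but the same probe can be sketched in Java. In this hypothetical version, the JDK's built-in HttpServer stands in for the application, with an artificial 300 ms startup delay and an assumed /users endpoint:&lt;/p&gt;

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Poll a URL on a fixed interval and report how long the first
// successful response took, mirroring the readiness probe described above.
class FirstResponseProbe {
    static long waitForFirstResponse(String url, long intervalMillis) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        long start = System.nanoTime();
        while (true) {
            try {
                int code = client.send(request, HttpResponse.BodyHandlers.discarding()).statusCode();
                if (code == 200) {
                    return (System.nanoTime() - start) / 1_000_000; // elapsed ms
                }
            } catch (java.io.IOException serverNotUpYet) {
                // connection failed: the app is not serving yet, keep polling
            }
            Thread.sleep(intervalMillis);
        }
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/users", exchange -> {
            exchange.sendResponseHeaders(200, -1);
            exchange.close();
        });
        int port = server.getAddress().getPort();
        // Simulate an application that takes about 300 ms to become ready.
        new Thread(() -> {
            try { Thread.sleep(300); } catch (InterruptedException ignored) {}
            server.start();
        }).start();
        long elapsed = waitForFirstResponse("http://localhost:" + port + "/users", 50);
        System.out.println("first response after about " + elapsed + " ms");
        server.stop(0);
    }
}
```

&lt;p&gt;Against a real JAR, you would replace the embedded server with a ProcessBuilder launching the application and point the probe at its URL.&lt;/p&gt;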

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllluxq42fmkznq208m5q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllluxq42fmkznq208m5q.png" alt="Image description" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next test: let's start the applications (startup takes 4.4 seconds and 1.3 seconds, in favor of Micronaut) and see how much memory both frameworks use. I use jcmd: I pass it the process identifier and request heap_info. The metrics show that in total the Spring Boot application requested 149 MB to run and actually used 63 MB. We repeat the same for Micronaut with the same command, changing only the process ID. The result: the application requested 55 MB and used 26 MB. That is, the difference in resources is 2.5 to 3 times.&lt;/p&gt;
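&lt;p&gt;As a side note, the committed-versus-used heap figures that jcmd reports can also be read by a process about itself through the standard MemoryMXBean, which is handy for logging memory use without external tools:&lt;/p&gt;

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// A process can report its own heap figures via the platform MemoryMXBean,
// a programmatic counterpart to reading jcmd's heap_info from outside.
class HeapReport {
    // Returns { committed, used } heap sizes in megabytes.
    static long[] heapCommittedAndUsedMb() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        long megabyte = 1024L * 1024L;
        return new long[]{ heap.getCommitted() / megabyte, heap.getUsed() / megabyte };
    }

    public static void main(String[] args) {
        long[] mb = heapCommittedAndUsedMb();
        System.out.println("heap committed: " + mb[0] + " MB, used: " + mb[1] + " MB");
    }
}
```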

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F960kyvnhhg91r66c7lra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F960kyvnhhg91r66c7lra.png" alt="Image description" width="800" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will end with another metric to show that Micronaut is not a silver bullet and still has room to grow. With ApacheBench, I simulated 500 requests to the Spring Boot server with a concurrency of 24; that is, we're simulating 24 users simultaneously making requests to the application. With a reactive database, Spring Boot shows a pretty good result: it can handle about 500 requests per second. After all, JIT compilation works well once the system is under load. Let's repeat the same procedure for Micronaut a few times. The result is about 106 requests per second. I checked the figures on different systems and machines, and they were about the same, give or take.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn534n5pp02f3ytbld2r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn534n5pp02f3ytbld2r.png" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Conclusion is Simple
&lt;/h3&gt;

&lt;p&gt;Micronaut is not an ideal framework that can immediately replace Spring Boot. There are still things that are more convenient or more functional in the older framework. However, in some areas the more popular product is inferior to its less popular but quite advanced competitor. That said, Spring Boot also has room to improve; for example, the same AOT compilation has been optionally available in Java since version 9 in 2017.&lt;/p&gt;

&lt;p&gt;I’d like to add one more thought: developers should not be afraid to try new technologies. They can provide us with great opportunities and allow us to go beyond the standard frameworks we usually work with.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>microservices</category>
      <category>micronaut</category>
    </item>
    <item>
      <title>Using Cucumber and Spock for API test Automation — What Benefits Can You Expect?</title>
      <dc:creator>NIX United</dc:creator>
      <pubDate>Mon, 13 Dec 2021 16:37:28 +0000</pubDate>
      <link>https://dev.to/nix_united/using-cucumber-and-spock-for-api-test-automation-what-benefits-can-you-expect-dp3</link>
      <guid>https://dev.to/nix_united/using-cucumber-and-spock-for-api-test-automation-what-benefits-can-you-expect-dp3</guid>
      <description>&lt;h4&gt;
  
  
  Hi, I’m Vladimir Pasiuga, and I work at &lt;a href="https://nix-united.com/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api_article"&gt;NIX United&lt;/a&gt; as a quality assurance engineer.
&lt;/h4&gt;

&lt;p&gt;I've been working in the IT field for the past 7 years. I worked as a manual tester for 2.5 years on a healthcare project that comprised UI and API components, and currently, I’m working on an automated testing project, where the application for the medical field consists only of an API.&lt;/p&gt;

&lt;p&gt;I'll go over API testing in detail in this article, so this content will be helpful for QA beginners. You'll learn what an API is, what tools our team uses to test APIs manually, and what technologies we use for automated testing. I'll also talk about how I've used the Cucumber and Spock frameworks to automate API testing.&lt;/p&gt;

&lt;h4&gt;
  
  
  Let's Quickly Go Over the Tech
&lt;/h4&gt;

&lt;p&gt;Before we get into the meat of &lt;a href="https://nix-united.com/services/software-qa-and-testing-services/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api_article"&gt;API testing&lt;/a&gt;, let's brush up on some fundamental ideas. An API allows software components to exchange information with one another. To put it another way, the API acts as a link between internal and external software processes. If you imagine software as a black box, the API is a set of knobs that the user can twist, push, and pull as he pleases.&lt;/p&gt;

&lt;p&gt;Today, the &lt;a href="https://restfulapi.net/"&gt;REST (RESTful) API&lt;/a&gt; and the &lt;a href="https://www.soapui.org/learn/api/soap-vs-rest-api/"&gt;SOAP API&lt;/a&gt; are the two most used techniques to create a programming interface for a web service. When comparing an HTTP request to paper media, we can say that the REST API sends requests via basic notes most of the time, and a letter in an envelope once in a while (perhaps writing part of the message on the envelope itself as well). The SOAP API, on the other hand, sends all instructions in the form of a detailed letter in a standard format, with simply an envelope (a single HTTP request) as a delivery method.&lt;/p&gt;

&lt;p&gt;REST APIs are used when clients and servers work purely in a web environment, object state information isn't important, and multi-call transactions aren't required. &lt;a href="https://microservices.io/"&gt;Microservices&lt;/a&gt;, on the other hand, are configured with SOAP APIs when a rigorous contract between the server and the client is required, along with the ability to perform extremely demanding multi-call transactions with high security and no bandwidth issues.&lt;/p&gt;

&lt;h4&gt;
  
  
  API Testing Tools
&lt;/h4&gt;

&lt;p&gt;For a QA specialist, the lack of UI elements can be perplexing — there are no buttons, fields, or a clear format for addressing the services. Interacting with the API is made easier with special tools. SoapUI and Postman are the most popular.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SoapUI&lt;/strong&gt; — an open-source tool for testing Soap and Rest APIs. In September 2005, SoapUI was first released on &lt;a href="https://sourceforge.net/"&gt;SourceForge&lt;/a&gt;. It's open-source software with a European Union public license, and it’s been downloaded over 2,000,000 times since its initial release. The user interface is created using &lt;a href="https://www.techopedia.com/definition/26102/java-swing"&gt;Swing&lt;/a&gt; and is totally based on the Java platform (i.e., SoapUI is cross-platform). Web service validation, startup, development, modeling and layout, functional testing, and load and compliance testing are all included in its capabilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://smartbear.com/"&gt;SmartBear&lt;/a&gt;, a software development company, has also built a commercial version of SoapUI Pro (now named &lt;a href="https://www.soapui.org/downloads/download-readyapi-trial-slm/?v=2"&gt;ReadyAPI&lt;/a&gt;), which focuses on performance-related features. SoapUI can perform HTTP(S) and JDBC calls, as well as test SOAP and REST web services, JMS, and AMF. Automated scripts are written in the Groovy programming language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Postman&lt;/strong&gt; — a Swiss army knife, according to its developers, that allows you to form and run queries and document and monitor services all in one spot. From within Postman, testers can develop tests and perform automated testing.&lt;/p&gt;

&lt;p&gt;&lt;a href="(https://www.postman.com/)"&gt;Postman&lt;/a&gt;’s primary function is &lt;em&gt;generating collections&lt;/em&gt; using API queries. Collections make it easy to store queries for an application you're testing or building, and a newbie to the project can rapidly learn how to use the program. Additionally, the development team may easily design the API using Postman. Postman's automated scripts are written in JavaScript.&lt;/p&gt;

&lt;h4&gt;
  
  
  How Cucumber and Spock Became our Go-to Guys
&lt;/h4&gt;

&lt;p&gt;SoapUI and Postman each have their strengths, but tests written in them are difficult to maintain, and storing those tests in version control systems (such as Git) is problematic.&lt;/p&gt;

&lt;p&gt;Despite their widespread use in automated testing, SoapUI and Postman can only run tests locally and cannot be used in integration systems like &lt;a href="https://www.jenkins.io/"&gt;Jenkins&lt;/a&gt;. Our team chose Cucumber and Spock to handle this challenge, as they make it possible to run tests remotely from Jenkins. Furthermore, these frameworks enable automated smoke tests that run during the installation of an application, something also not possible with Postman or SoapUI.&lt;/p&gt;

&lt;h4&gt;
  
  
  Features of Cucumber and Spock frameworks
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://spockframework.org/"&gt;Spock&lt;/a&gt; and &lt;a href="https://cucumber.io/"&gt;Cucumber&lt;/a&gt; exemplify the philosophy of &lt;em&gt;behavior-driven development&lt;/em&gt; (BDD). The principle behind BDD is that you must first define the desired result of the added feature in a subject-oriented language before writing any tests. The developers are then given the final documentation.&lt;/p&gt;

&lt;p&gt;A behavioral specification has the following structure and is delivered in a semi-formal format:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Title — a description of the business objective given in the subjunctive form.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Narrative — answers for the following questions in summary form:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Who is the stakeholder in the story?&lt;br&gt;
What is included in the story?&lt;br&gt;
What is the value of the story for the business?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scenarios — One or more cases may be included in the specification, each revealing one of the user behavior situations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A scenario usually follows the same pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;One or more initial conditions (&lt;em&gt;Given&lt;/em&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The event that triggers the start of the scenario (&lt;em&gt;When&lt;/em&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The expected result or results (&lt;em&gt;Then&lt;/em&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;BDD does not provide any formal rules, but it does require using a limited standard set of terms that encompass all aspects of the behavior specification. &lt;a href="https://dannorth.net/"&gt;Dan North&lt;/a&gt;, the founder of BDD, developed a template for specifications in 2007, which quickly gained traction and became known as the Gherkin language.&lt;/p&gt;

&lt;p&gt;Cucumber is one of the most widely used BDD tools nowadays. Its authors aimed to bring together automated acceptance testing, functional requirements, and software documentation into a unified format that could be understood by both technical and non-technical project participants.&lt;/p&gt;

&lt;p&gt;The test scenario description is built around the Given, When, and Then stages. Each stage corresponds to an annotation that associates a method with a line in the scenario's text description using a regular expression. Scenarios are made up of test steps that each define a specific piece of functionality or a feature.&lt;/p&gt;
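&lt;p&gt;To make that mapping concrete, here is a toy runner in plain Java (not Cucumber's real API): an annotation holds a regular expression, the runner matches a scenario line against it, and the captured groups become the method's arguments:&lt;/p&gt;

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical annotation carrying the step's regular expression.
@Retention(RetentionPolicy.RUNTIME) @interface StepPattern { String value(); }

class MiniGherkinRunner {
    public static class Steps {
        static String lastCity;

        @StepPattern("Sent request to openweathermap for \"(.+)\"")
        public void sendRequest(String city) { lastCity = city; }
    }

    // Find the step method whose pattern matches the line and invoke it
    // with the captured groups; false means the step is undefined.
    static boolean run(Object glue, String line) throws Exception {
        for (Method method : glue.getClass().getMethods()) {
            StepPattern step = method.getAnnotation(StepPattern.class);
            if (step == null) continue;
            Matcher matcher = Pattern.compile(step.value()).matcher(line);
            if (matcher.matches()) {
                Object[] args = new Object[matcher.groupCount()];
                for (int i = 0; i != args.length; i++) args[i] = matcher.group(i + 1);
                method.invoke(glue, args);
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        boolean matched = run(new Steps(), "Sent request to openweathermap for \"Kharkiv, UA\"");
        System.out.println(matched + " / " + Steps.lastCity); // prints: true / Kharkiv, UA
    }
}
```

&lt;p&gt;Cucumber itself does essentially this with its @Given/@When/@Then annotations and Cucumber expressions, plus type conversion and lifecycle handling on top.&lt;/p&gt;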

&lt;p&gt;To automate the scripts given in Cucumber, you can use Ruby, Java, and Python. The test is stored in a separate file with the extension *.feature and is written in Gherkin notation. One or more scripts — which can be written by BAs or manual QA specialists — may be included in this file. The test automation expert then generates a separate class that has a programming language implementation of the steps.&lt;/p&gt;

&lt;p&gt;I became acquainted with the Cucumber framework while building scripts for the behavior of the API application I was testing. To be more precise, it was not Cucumber itself but the Gherkin language, and we were attempting to describe application behavior scenarios using BDD rules. This was a fascinating experience from the perspective of a manual tester. There were several manual testers on the team writing Gherkin scripts, and the key challenge was getting everyone to agree on a standard structure for describing each step and building a set of steps that could be reused across tests without duplication.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Feature: Test OpenWeather API 
             As a customer
             In order to check weather 
             I want to get my city name in response. 
             Scenario Outline: Check if city name is returned correctly 
             When Sent request to openweathermap for “&amp;lt;cityReq&amp;gt;”
             Then Check that 200 response code is returned 
             And Server returns correct city name “&amp;lt;cityResp&amp;gt;”

Examples 

| cityReq           | cityResp |

| “Kharkiv, UA”  | “Kharkiv” |

| “London, GB”  | “London” |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script for testing an OpenWeather API application in Gherkin notation is shown above. For this example, I created a simple script that sends a request to an application server with specific parameters and then checks the answer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Stepdefs {
      @when ( “Sent request to openweathermap for {cityReq} “ )
      public void sent_request_to_ openweathermap ( String cityReq) { 
             HTTP Builder http =  null; 
             try  { 
             http = new  HTTP Builder (testUrl);
             String [ ] actualCity = cityReq.split ( regex: “, “ ) ;
       ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An example Java Stepdefs class is shown above. Each step from the feature file is mapped to its implementation in the Stepdefs class using an annotation (@Given, @When, @Then, etc.) together with the text from the feature file.&lt;/p&gt;

&lt;p&gt;Cucumber is merely an activator for BDD — you must follow BDD principles to get the most out of it.&lt;/p&gt;

&lt;p&gt;Spock is a testing framework; some would even call it "a language built on top of &lt;a href="https://groovy-lang.org/"&gt;Groovy&lt;/a&gt;." On another project, where I worked as an automation engineer, I used Spock. As I previously stated, in Cucumber the scenarios are kept separate from the implementation of each step. This produces understandable scripts, although it's time-consuming, so the approach may be impractical when you come to write the implementation.&lt;/p&gt;

&lt;p&gt;In Spock, the steps are described and implemented in a single Groovy class. Because it's built on JUnit, the framework is compatible with all popular IDEs (in particular IntelliJ IDEA), multiple build tools (Ant, Gradle, Maven), and continuous integration services. A test class is a collection of scenario methods whose names are quoted strings, similar to Cucumber scenario names. Because these classes are derived from JUnit, they can be run like ordinary Groovy unit tests from the IDE. At the same time, we get regular JUnit reports, which is really useful when designing and debugging automated tests.&lt;/p&gt;

&lt;p&gt;Each test step is broken down into its own code block in Spock, which begins with a label and ends with the start of the next code block or the end of the test. The &lt;em&gt;Given&lt;/em&gt; block is in charge of establishing the test's initial conditions. The system stimulus is represented by the &lt;em&gt;When&lt;/em&gt; block, and the system response by the &lt;em&gt;Then&lt;/em&gt; block; these two blocks are always used in tandem. A single &lt;em&gt;Expect&lt;/em&gt; block can be used when the When-Then construct reduces to one expression.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class WeatherTestSpec extends Specification {
      @Shared def testUrl, testResponse 
       def setupSpec () {
       testUrl = “http:// api. openweathermap.org” 
       testRequest = [ ‘APPID’ : “aaa” ] 
       testResponse = ‘  ‘
}
def  ‘ Check if city name and coordinates is returned correctly’ () { 

when: "Sent request to openweathermap"
def  http =  new HTTPBuilder(testUrl)
testRequest.put ( ‘q’, cityReq)
testResponse = http.get( path :   ‘/data/2.5/weather’ , query : testRequest )

then: “Check that 200 response code is returned”
testResponse. cod == 200

and:  “Server returns correct city name”
testResponse. name == cityResp

where:  
cityReq &amp;lt;&amp;lt; [  “Kharkiv, UA” ,   “London, GB”  ] 
cityResp &amp;lt;&amp;lt;  [  “Kharkiv” , “London ”  ] 
}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A Groovy class with a Spock test is shown in the example above. The step description appears after the ":" sign and is an arbitrary string. It's not, however, a required component: Spock lets you write a test specification without describing the steps at all. This approach is not widely accepted, though, and it can make the test logic harder to grasp in the future.&lt;/p&gt;

&lt;h4&gt;
  
  
  Which One’s Better?
&lt;/h4&gt;

&lt;p&gt;Cucumber and Spock both maintain a strong relationship between the human-language specification and the test code; this follows directly from both frameworks being built around the BDD paradigm. Cucumber, however, takes it more seriously: large modifications to the human-language step description will break the test code if no step-definition expression matches the changed text, and the run fails with a missing step implementation. In Spock, the text after the ":" character is an arbitrary string, and nothing checks that a step's description stays consistent with its implementation.&lt;/p&gt;

&lt;p&gt;Cucumber properly distinguishes between the human-readable specification and the test code. This is really useful for non-technical experts who write or read specs, and a strong collaboration between the Product Owner, BA, QA, architects, and developers is at the heart of BDD. In the case of Cucumber, all project participants will agree on and understand the specification before development begins.&lt;/p&gt;

&lt;p&gt;Spock, on the other hand, provides a quick, succinct, single-file answer. Individual test scripts can have easy-to-understand names due to Groovy's flexibility to use any string as a method name. Spock allows developers to read and understand the specification as well as the code that implements it from a single location. Let's also not forget about the extra benefits that come with Spock (e.g., advanced data table features).&lt;/p&gt;

&lt;p&gt;Cucumber is also only useful for integration testing. Spock, on the other hand, can also be used to run unit tests.&lt;/p&gt;

&lt;p&gt;It wouldn’t be helpful to categorically state which of these approaches to API testing is superior. At NIX United, we use both based on our tasks and objectives. When there are no automators on the team (or only a few), SoapUI and Postman are ideal for the early phases of automation. It's more rational to transition to Cucumber or Spock as the team grows. Each of these frameworks has its own set of benefits that make QA specialists' jobs easier and the testing process more efficient.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>api</category>
      <category>tooling</category>
    </item>
  </channel>
</rss>
