<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: George Hadjisavva</title>
    <description>The latest articles on DEV Community by George Hadjisavva (@scuz12).</description>
    <link>https://dev.to/scuz12</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F230330%2F16655a8b-a9e6-4d2d-80c5-5b1534d03250.png</url>
      <title>DEV Community: George Hadjisavva</title>
      <link>https://dev.to/scuz12</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/scuz12"/>
    <language>en</language>
    <item>
      <title>RabbitMQ for Developers: A Comprehensive Introduction</title>
      <dc:creator>George Hadjisavva</dc:creator>
      <pubDate>Thu, 24 Aug 2023 17:36:24 +0000</pubDate>
      <link>https://dev.to/scuz12/rabbitmq-for-developers-a-comprehensive-introduction-369p</link>
      <guid>https://dev.to/scuz12/rabbitmq-for-developers-a-comprehensive-introduction-369p</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Imagine a bustling train station in the heart of a metropolitan city. Trains arrive and depart, carrying with them thousands of passengers to their respective destinations. This station, with its intricate rail networks and precise scheduling, ensures that every passenger gets to where they need to be, efficiently and on time. In the world of software development, RabbitMQ is akin to this train station, orchestrating the flow of messages between various applications.&lt;/p&gt;

&lt;p&gt;RabbitMQ, a leading message broker software, is pivotal in ensuring applications in distributed systems communicate effectively. But what makes it stand out in the crowded world of message brokers? Let's dive in.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is RabbitMQ?
&lt;/h2&gt;

&lt;p&gt;RabbitMQ is an open-source message broker software (also known as message-oriented middleware) that implements the Advanced Message Queuing Protocol (AMQP). At its core, it allows different parts of a system to send, receive, and hold messages. Think of it as the train tracks and signals that guide the flow of data, just as rails and signals guide trains.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_rdac-FE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/440nlno52wzt0e542o9z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_rdac-FE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/440nlno52wzt0e542o9z.png" alt="Flowchart" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use RabbitMQ?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt; RabbitMQ can sustain very high message throughput, making it well suited to high-throughput systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexibility:&lt;/strong&gt; It supports multiple messaging protocols like AMQP, MQTT, STOMP, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reliability:&lt;/strong&gt; Messages can be persisted to disk, ensuring they aren't lost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Distributed Deployment:&lt;/strong&gt; Supports clustering and high availability configurations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core Concepts
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Producer:&lt;/strong&gt; Application that sends the messages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consumer:&lt;/strong&gt; Application that receives the messages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Queue:&lt;/strong&gt; Buffer that stores messages sent by the producer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exchange:&lt;/strong&gt; Directs messages to queues using rules called bindings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Binding:&lt;/strong&gt; A link between a queue and an exchange.&lt;/li&gt;
&lt;/ol&gt;
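&lt;p&gt;The five concepts above can be sketched as a toy in-memory broker. This is plain Python with no RabbitMQ involved (a real application would use a client library such as pika), just to make the roles concrete:&lt;/p&gt;

```python
# Toy sketch of RabbitMQ's core concepts -- not a real broker.
class Exchange:
    def __init__(self):
        self.bindings = []  # (routing_key, queue) pairs

    def bind(self, queue, routing_key):
        # Binding: a link between a queue and an exchange.
        self.bindings.append((routing_key, queue))

    def publish(self, routing_key, message):
        # Route the message to every queue bound with a matching key.
        for key, queue in self.bindings:
            if key == routing_key:
                queue.append(message)

orders = []                      # Queue: buffer that stores messages
exchange = Exchange()
exchange.bind(orders, "order.created")

# Producer: sends a message to the exchange, never to a queue directly.
exchange.publish("order.created", {"order_id": 42})

# Consumer: receives messages from the queue.
print(orders.pop(0))             # {'order_id': 42}
```

Note that the producer only ever talks to the exchange; which queues receive the message is decided entirely by the bindings.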

&lt;h2&gt;
  
  
  Types of Exchanges
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Direct Exchange:&lt;/strong&gt; Sends messages to specific queues based on a routing key.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Topic Exchange:&lt;/strong&gt; Directs messages based on wildcard matches between the routing pattern and the routing key.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Headers Exchange:&lt;/strong&gt; Uses message header attributes for routing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fanout Exchange:&lt;/strong&gt; Routes messages to all the queues bound to it.&lt;/li&gt;
&lt;/ol&gt;
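&lt;p&gt;The routing behaviour of direct, topic, and fanout exchanges can be illustrated with a small matching function. This is a simplified sketch of AMQP topic matching, where * matches exactly one dot-separated word and # matches zero or more (headers exchanges are omitted, since they match on attributes rather than keys):&lt;/p&gt;

```python
# Simplified sketch of AMQP routing-key matching for three exchange types.
def topic_match(pattern, key):
    """'*' matches exactly one dot-separated word, '#' matches zero or more."""
    return _match(pattern.split("."), key.split("."))

def _match(pat, words):
    if not pat:
        return not words
    if pat[0] == "#":
        # '#' either consumes nothing, or consumes one word and stays active.
        return _match(pat[1:], words) or (bool(words) and _match(pat, words[1:]))
    if not words:
        return False
    return pat[0] in ("*", words[0]) and _match(pat[1:], words[1:])

def routes(exchange_type, binding_key, routing_key):
    if exchange_type == "fanout":
        return True                        # every bound queue gets the message
    if exchange_type == "direct":
        return binding_key == routing_key  # exact match only
    if exchange_type == "topic":
        return topic_match(binding_key, routing_key)
    return False

print(routes("topic", "logs.#", "logs.app.error"))  # True
print(routes("direct", "logs", "logs.app"))         # False
```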

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Task Queues:&lt;/strong&gt; Distribute tasks among multiple workers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Publish/Subscribe Pattern:&lt;/strong&gt; Send a message to multiple consumers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Message Routing:&lt;/strong&gt; Direct messages based on rules.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Topics:&lt;/strong&gt; Dynamically filter messages.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;RabbitMQ is a powerful tool that every developer should consider adding to their toolkit. Its flexibility, reliability, and scalability make it a prime choice for modern applications that require robust communication mechanisms. As with all tools, the best way to understand it is to try it out and experiment with its features.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/sCuz123"&gt;Twitter&lt;/a&gt;&lt;br&gt;
&lt;a href="https://devgnosis.substack.com/"&gt;Newsletter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>development</category>
      <category>backend</category>
      <category>programming</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Unveiling Docker: A Primer on the Revolutionary Containerization Technology</title>
      <dc:creator>George Hadjisavva</dc:creator>
      <pubDate>Sat, 20 May 2023 18:46:19 +0000</pubDate>
      <link>https://dev.to/scuz12/unveiling-docker-a-primer-on-the-revolutionary-containerization-technology-4121</link>
      <guid>https://dev.to/scuz12/unveiling-docker-a-primer-on-the-revolutionary-containerization-technology-4121</guid>
<description>&lt;p&gt;DevGnosis: &lt;a href="https://devgnosis.substack.com/"&gt;https://devgnosis.substack.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker is a robust platform that leverages lightweight containers, streamlining container management tasks for developers. With Docker, you can easily create and deploy applications, which are analogous to images in traditional virtual machine environments but tailored for containerization. By handling container provisioning and mitigating networking complexities, Docker simplifies the development process. Moreover, Docker offers built-in registry functionality, enabling convenient storage and versioning of Docker applications.&lt;/p&gt;

&lt;p&gt;The Docker application abstraction proves beneficial as it shields us from the underlying technology utilized to implement the service, similar to VM images. We employ Docker applications generated through our service builds and store them within the Docker registry, allowing us to seamlessly proceed with our development process.&lt;/p&gt;

&lt;p&gt;Docker can also alleviate some of the downsides of running lots of services locally for dev and test purposes. Rather than using Vagrant to host multiple independent VMs, each one containing its own service, we can host a single VM in Vagrant that runs a Docker instance. We then use Vagrant to set up and tear down the Docker platform itself, and use Docker for fast provisioning of individual services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xw2XlQkJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rtoo8bkz0i1x0iuyw5ni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xw2XlQkJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rtoo8bkz0i1x0iuyw5ni.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>microservices</category>
      <category>devops</category>
    </item>
    <item>
      <title>How Redis Enhances Microservice Ecosystems: Exploring its Benefits</title>
      <dc:creator>George Hadjisavva</dc:creator>
      <pubDate>Sat, 22 Apr 2023 09:52:28 +0000</pubDate>
      <link>https://dev.to/scuz12/how-redis-enhances-microservice-ecosystems-exploring-its-benefits-51i8</link>
      <guid>https://dev.to/scuz12/how-redis-enhances-microservice-ecosystems-exploring-its-benefits-51i8</guid>
      <description>&lt;p&gt;In the world of microservices, managing data can be a complex task. With multiple services communicating with each other, it's essential to have a system that can handle data storage and retrieval efficiently. Redis is a popular open-source, in-memory data structure store that is used by many companies to solve this problem. Its speed, flexibility, and ease of use make it an ideal choice for microservice ecosystems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bMkKyXop--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4bwj7nd2c2bxgs4zausq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bMkKyXop--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4bwj7nd2c2bxgs4zausq.png" alt="Redis diagram" width="800" height="669"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Redis is a key-value-based caching system that can be compared to memcached, but with added features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It has a schemaless structure that allows for flexibility in defining data tables or schemas.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Redis supports multiple data models and types, making it a versatile database option.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Compared to other database systems, Redis offers advanced features such as sharding, which lets it scale horizontally to absorb high volumes of concurrent writes, and it also supports transactions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Redis can be used alongside other databases to reduce load and improve performance, or as a primary database based on individual needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is particularly useful in scenarios that require quick data ingestion, data integrity, high efficiency, and replication.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Redis Use-Cases
&lt;/h2&gt;

&lt;p&gt;Redis has a wide range of use cases, making it a versatile tool for various applications. One of the most popular use cases for Redis is caching. By caching frequently accessed data in Redis, microservices can quickly retrieve the data from memory, rather than having to query a database every time. Redis is also used as a message broker, where it can manage queues of messages between services, ensuring reliable and efficient communication. Another popular use case is real-time analytics, where Redis can store and analyze large volumes of data in memory, making it an ideal choice for microservices that require real-time data processing. &lt;/p&gt;

&lt;h3&gt;
  
  
  Redis as a Cache
&lt;/h3&gt;

&lt;p&gt;Redis is a powerful tool that can be used in a variety of microservice use cases. One common use case is caching, where Redis can significantly improve the performance of microservices by storing frequently accessed data in memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Redis in an ecommerce application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In an e-commerce microservice architecture, product pages need to be loaded quickly and efficiently for a smooth user experience. However, querying a database for product information every time a page is requested can be time-consuming and resource-intensive. This is where Redis comes in handy as a caching solution. When a user requests a product page, the microservice can first check if the product information is already available in Redis cache. If it is, the microservice can quickly retrieve the product data from Redis, without having to query the database. This can significantly reduce the response time and improve the performance of the microservice.&lt;/p&gt;

&lt;p&gt;Moreover, Redis's in-memory storage model allows for extremely fast read and write operations, making it an ideal choice for caching. Additionally, Redis offers advanced features like expiration time and eviction policies, which allow you to set a timeout for cached data or remove it based on certain criteria. This ensures that the cached data stays fresh and relevant, without taking up unnecessary memory space.&lt;/p&gt;
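&lt;p&gt;The cache-aside flow described above, including an expiration time, can be sketched as follows. The helper names are hypothetical, and the Redis client is stubbed with an in-memory fake (exposing the redis-py style get/set methods) so the sketch runs without a server:&lt;/p&gt;

```python
import json

def get_product(cache, load_from_db, product_id, ttl_seconds=60):
    """Cache-aside: try the cache first, fall back to the database, then cache."""
    key = "product:{}".format(product_id)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: no database query
    product = load_from_db(product_id)     # cache miss: query the database
    cache.set(key, json.dumps(product), ex=ttl_seconds)  # expire stale data
    return product

# In-memory stand-in for redis.Redis (ignores expiry for brevity).
class FakeRedis:
    def __init__(self):
        self.data = {}
    def get(self, key):
        return self.data.get(key)
    def set(self, key, value, ex=None):
        self.data[key] = value

calls = []
def load_from_db(product_id):
    calls.append(product_id)               # count real database hits
    return {"id": product_id, "name": "widget"}

cache = FakeRedis()
get_product(cache, load_from_db, 1)        # miss: hits the database
get_product(cache, load_from_db, 1)        # hit: served from the cache
print(len(calls))                          # 1
```

With a real redis-py client the same get/set calls apply, and the ex parameter makes Redis evict the entry automatically after the timeout.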

&lt;p&gt;Overall, using Redis to cache product information in an e-commerce microservice can lead to faster response times, improved user experience, and reduced load on the underlying database.&lt;/p&gt;

&lt;h3&gt;
  
  
  Redis as a Message Broker
&lt;/h3&gt;

&lt;p&gt;Message brokering is an important aspect of microservice architectures, as it allows services to communicate with each other efficiently and reliably. Redis can be used as a message broker, where it can manage queues of messages between services. For example, in a ride-hailing app, Redis can be used to manage ride requests and dispatch them to drivers. When a user requests a ride, the ride request can be stored in a Redis queue, which can be monitored by a driver dispatch service. The dispatch service can then pick up the request from the queue and assign a driver to the ride. This way, Redis helps ensure that ride requests are handled efficiently and reliably.&lt;/p&gt;
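&lt;p&gt;A minimal sketch of that ride-request queue uses the standard Redis list pattern: LPUSH to enqueue, RPOP to dequeue, which together behave first-in, first-out. The key name is hypothetical, and the client is stubbed in memory so the sketch runs without a server:&lt;/p&gt;

```python
import json

QUEUE_KEY = "ride_requests"   # hypothetical key name

def request_ride(client, user_id, pickup):
    # Producer side: push the request onto the head of the list.
    client.lpush(QUEUE_KEY, json.dumps({"user": user_id, "pickup": pickup}))

def dispatch_next(client):
    # Dispatcher side: pop from the tail, so requests are served in order.
    raw = client.rpop(QUEUE_KEY)
    return json.loads(raw) if raw else None

# In-memory stand-in for the two redis-py list commands used above.
class FakeRedis:
    def __init__(self):
        self.lists = {}
    def lpush(self, key, value):
        self.lists.setdefault(key, []).insert(0, value)
    def rpop(self, key):
        items = self.lists.get(key)
        return items.pop() if items else None

client = FakeRedis()
request_ride(client, "u1", "Airport")
request_ride(client, "u2", "Station")
print(dispatch_next(client)["user"])   # u1  (first in, first out)
```

In production the dispatcher would use the blocking BRPOP instead of RPOP, so workers wait on the queue rather than polling.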

&lt;h3&gt;
  
  
  Redis for Real-Time Analytics
&lt;/h3&gt;

&lt;p&gt;Real-time analytics is another important use case for Redis in microservices. Redis's in-memory storage model allows it to store and analyze large volumes of data in real-time. For example, a social media microservice might use Redis to analyze user behavior in real-time, such as tracking likes, comments, and shares. This data can be stored in Redis and analyzed using Redis's built-in data structures and functions. This can help the microservice make real-time decisions based on user behavior, such as recommending similar posts or content.&lt;/p&gt;
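&lt;p&gt;Tracking likes, comments, and shares in real time typically comes down to Redis counters (INCR, or HINCRBY for per-post hashes). A sketch with the hash commands stubbed in memory, so it runs without a server:&lt;/p&gt;

```python
# Real-time engagement counters, the way they might be kept in Redis hashes.
class FakeRedis:
    """In-memory stand-in for the redis-py hash commands used below."""
    def __init__(self):
        self.hashes = {}
    def hincrby(self, key, field, amount=1):
        h = self.hashes.setdefault(key, {})
        h[field] = h.get(field, 0) + amount
        return h[field]
    def hgetall(self, key):
        return dict(self.hashes.get(key, {}))

def track(client, post_id, action):
    # One O(1) in-memory increment per event: cheap enough to run on every click.
    return client.hincrby("post:{}".format(post_id), action, 1)

client = FakeRedis()
for event in ["likes", "likes", "shares"]:
    track(client, 7, event)
print(client.hgetall("post:7"))   # {'likes': 2, 'shares': 1}
```

Because the counters live in memory, a recommendation service can read them on every request without touching the primary database.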

&lt;p&gt;In conclusion, Redis is a highly versatile and powerful tool that can greatly benefit microservice ecosystems. Its speed, scalability, and in-memory storage model make it an ideal choice for caching data, managing message queues, and processing real-time analytics. Redis's ability to handle multiple data types and models, and its lack of strict schema requirements, allow for flexible and efficient data management within microservices. Additionally, Redis's advanced features, such as sharding and eviction policies, make it a highly customizable and effective tool for various use cases. Whether used as a primary database or in conjunction with other databases, Redis is a valuable addition to any microservice ecosystem looking to improve performance, efficiency, and user experience.&lt;/p&gt;

&lt;p&gt;Subscribe to the DevGnosis Newsletter to learn more: &lt;a href="https://devgnosis.substack.com/"&gt;https://devgnosis.substack.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>programming</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Motivations for Decomposing Monolithic Applications into Microservices</title>
      <dc:creator>George Hadjisavva</dc:creator>
      <pubDate>Tue, 18 Apr 2023 18:47:25 +0000</pubDate>
      <link>https://dev.to/scuz12/motivations-for-decomposing-monolithic-applications-into-microservices-4n1h</link>
      <guid>https://dev.to/scuz12/motivations-for-decomposing-monolithic-applications-into-microservices-4n1h</guid>
      <description>&lt;p&gt;If you have a monolithic service or application that you want to break down into smaller components, taking an incremental approach is highly recommended. This will allow you to learn about microservices gradually and minimize the risk of making mistakes (which are bound to happen). Instead of trying to tackle the entire monolith at once, think of it as a block of marble that you can chip away at little by little. Trying to blow it up all at once is not a good idea, as it often leads to negative consequences. By taking an incremental approach, you can achieve your goal of breaking down the monolith while avoiding unnecessary risks and complications.&lt;/p&gt;

&lt;p&gt;If we plan to break down the monolith gradually, the question arises as to where we should begin. Although we have identified the seams in our application, it is crucial to determine which one should be extracted first. Instead of dividing things arbitrarily, it is wise to consider which part of the codebase will yield the most significant benefits when separated. Therefore, we need to consider some factors that can guide us in selecting the right chisel to start with.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pace of Change
&lt;/h3&gt;

&lt;p&gt;If we anticipate upcoming changes in our inventory management approach, it could be advantageous to initiate the decomposition process by separating the warehouse seam into a microservice. This approach can enhance our ability to modify the service promptly and effectively since it operates autonomously.&lt;/p&gt;

&lt;h3&gt;
  
  
  Team Structure
&lt;/h3&gt;

&lt;p&gt;Microservices can bring numerous benefits to a team's structure, promoting autonomy and flexibility. By breaking down the monolithic application into smaller services, developers can take ownership of the services they are responsible for, and can work in smaller teams that focus on specific areas of functionality. This allows teams to be more autonomous and make faster decisions, reducing the need for coordination with other teams.&lt;/p&gt;

&lt;p&gt;In addition, microservices can facilitate easier integration of new team members into the project, as they can quickly become familiar with specific services and contribute to their development without having to understand the entire monolith. The modular structure of microservices also makes it easier to reuse code between services and build new services on top of existing ones, providing a more agile and flexible development process.&lt;/p&gt;

&lt;p&gt;Microservices can also lead to improved scalability, as individual services can be scaled up or down independently based on their specific usage patterns. This can improve performance and reduce costs, as resources can be allocated more efficiently to meet demand.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;p&gt;Microservices can also provide advantages when it comes to security. Breaking down the monolith into smaller, independent services can reduce the potential impact of security breaches. For instance, if a security breach occurs in one service, the impact will be limited to that service only, and not affect the entire application. This is because each service has its own codebase, data store, and API, which reduces the attack surface area.&lt;/p&gt;

&lt;p&gt;Moreover, microservices enable teams to apply different security measures to each service based on its specific requirements. Teams can implement security protocols that are tailored to the specific needs of each service, rather than having a one-size-fits-all approach that may not be suitable for all components of the monolithic application. This can enhance the overall security of the application, as teams can focus on specific vulnerabilities and protect against them more effectively.&lt;/p&gt;

&lt;p&gt;In addition, microservices can provide better control over data access and usage. Each service can have its own data store, and teams can implement access controls that limit who can access, modify, and delete the data within each service. This can prevent unauthorized access and ensure that sensitive data is protected.&lt;/p&gt;

&lt;h3&gt;
  
  
  Technology
&lt;/h3&gt;

&lt;p&gt;Technology stacks play a vital role in the development and deployment of microservices. Microservices architecture allows teams to choose different technologies for each service, enabling them to select the best-suited tool for the specific service's needs. This results in a diverse technology stack for a microservices-based application.&lt;/p&gt;

&lt;p&gt;One of the primary technology stacks used in microservices is containerization. Tools like Docker and Kubernetes are used to package, deploy, and manage individual microservices. Containers provide a lightweight and isolated runtime environment for each service, making it easier to deploy and scale services independently.&lt;/p&gt;

&lt;p&gt;API gateways are also commonly used in microservices architecture. An API gateway sits between clients and microservices, providing a single entry point for all service requests. API gateways can perform various tasks, such as routing requests to the appropriate service, handling authentication and authorization, and rate-limiting requests to prevent service overload.&lt;/p&gt;
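&lt;p&gt;The routing part of an API gateway can be reduced to a prefix table. This is a toy sketch with made-up service names; real gateways layer authentication, rate limiting, and retries on top of the same idea:&lt;/p&gt;

```python
# Toy API-gateway routing table: longest matching path prefix wins.
ROUTES = {
    "/orders":    "orders-service",
    "/customers": "customers-service",
    "/payments":  "payments-service",
}

def route(path):
    best = None
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            # Prefer the most specific (longest) matching prefix.
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, service)
    return best[1] if best else None

print(route("/orders/42"))      # orders-service
print(route("/unknown"))        # None
```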

&lt;p&gt;Another technology that is often used in microservices is event-driven architecture. This approach involves services communicating with each other via events and messages. Event-driven architecture allows for loosely coupled services and enables asynchronous processing, improving the scalability and resilience of the application.&lt;/p&gt;

&lt;p&gt;In addition, cloud platforms are often used in microservices-based applications. Cloud platforms like Amazon Web Services, Microsoft Azure, and Google Cloud provide a range of services and tools that can be used to deploy, manage, and scale microservices. Cloud platforms also offer features like auto-scaling, load balancing, and serverless computing, which can help optimize the performance and cost of microservices-based applications.&lt;/p&gt;

&lt;p&gt;Finally, technologies like service discovery, monitoring, and tracing are also important for microservices architecture. Service discovery enables services to locate and communicate with each other, while monitoring and tracing tools allow developers to monitor the performance of individual services and identify issues quickly.&lt;/p&gt;

&lt;p&gt;Subscribe to the DevGnosis Newsletter to learn more: &lt;a href="https://devgnosis.substack.com/"&gt;https://devgnosis.substack.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>programming</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Exploring the RESTful Architecture: How the Web Inspired a New Way of Building APIs</title>
      <dc:creator>George Hadjisavva</dc:creator>
      <pubDate>Thu, 06 Apr 2023 17:40:10 +0000</pubDate>
      <link>https://dev.to/scuz12/exploring-the-restful-architecture-how-the-web-inspired-a-new-way-of-building-apis-17oe</link>
      <guid>https://dev.to/scuz12/exploring-the-restful-architecture-how-the-web-inspired-a-new-way-of-building-apis-17oe</guid>
      <description>&lt;p&gt;REST, which stands for Representational State Transfer, is an architecture that takes inspiration from the World Wide Web. Although REST comprises numerous principles and limitations, we will concentrate on the ones that are most useful in resolving integration difficulties in the context of microservices and when seeking an alternative interface style to RPC for our services.&lt;/p&gt;

&lt;p&gt;The most critical aspect of REST is the notion of resources, which can be thought of as entities that the service is aware of, such as a Customer. The server generates various representations of this Customer upon request, and how a resource appears externally is entirely separated from how it is stored internally. For instance, a client may request a JSON representation of a Customer, even if it is stored in an entirely distinct format. Once a client obtains a representation of this Customer, it may issue requests to modify it, and the server may or may not fulfill them.&lt;/p&gt;

&lt;h3&gt;
  
  
  REST and HTTP
&lt;/h3&gt;

&lt;p&gt;The REST style integrates very well with the useful capabilities defined in HTTP. For instance, HTTP verbs like GET, POST, and PUT already have defined meanings in the HTTP specification regarding how they interact with resources. According to the REST architecture, methods should behave consistently on all resources, and luckily, HTTP provides a range of methods we can use. Specifically, GET retrieves a resource idempotently, and POST creates a new resource. By utilizing these HTTP methods, we can eliminate the need for numerous createCustomer or editCustomer methods. Instead, we can simply send a POST request with a customer representation to request the server to create a new resource, and use a GET request to obtain a representation of an existing resource. In these scenarios, we have one endpoint in the form of a Customer resource, and the available operations are integrated into the HTTP protocol.&lt;/p&gt;
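&lt;p&gt;The uniform interface described above can be sketched as a tiny in-process resource, with no HTTP stack, just the verb semantics: POST creates a resource and returns its location, while GET retrieves it or reports 404. The class and field names are illustrative:&lt;/p&gt;

```python
# Toy Customer resource illustrating HTTP verb semantics, no server needed.
class CustomerResource:
    def __init__(self):
        self.store = {}
        self.next_id = 1

    def post(self, representation):
        # POST creates a new resource and returns 201 plus its location.
        customer_id = self.next_id
        self.next_id += 1
        self.store[customer_id] = representation
        return 201, "/customers/{}".format(customer_id)

    def get(self, customer_id):
        # GET is safe and idempotent: repeat it as often as you like.
        if customer_id in self.store:
            return 200, self.store[customer_id]
        return 404, None

resource = CustomerResource()
status, location = resource.post({"name": "Ada"})
print(status, location)            # 201 /customers/1
print(resource.get(1))             # (200, {'name': 'Ada'})
```

The point is that there is one endpoint per resource and a fixed set of verbs, instead of an open-ended list of createCustomer and editCustomer style methods.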

&lt;p&gt;HTTP offers an extensive range of supportive tools and technologies, such as HTTP caching proxies like Varnish and load balancers like mod_proxy. Additionally, many monitoring tools come equipped with pre-built support for HTTP. These building blocks enable us to handle substantial volumes of HTTP traffic and efficiently route them in a transparent manner. With HTTP, we can also leverage all available security controls to secure our communications, from basic authentication to client certificates. However, it is crucial to use HTTP effectively to reap these benefits. If used poorly, it can be just as insecure and difficult to scale as any other technology. But when used correctly, HTTP provides ample assistance to facilitate our work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--weTxc-6P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c6m3gmh8f9za4uafdb55.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--weTxc-6P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c6m3gmh8f9za4uafdb55.png" alt="Rest diagram" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Hypermedia as the Engine of Application State (HATEOAS)
&lt;/h3&gt;

&lt;p&gt;Another principle introduced in REST that can help us avoid the coupling between client and server is the concept of hypermedia as the engine of application state. This is fairly dense wording and a fairly interesting concept, so let’s break it down a bit.&lt;/p&gt;

&lt;p&gt;The concept of hypermedia involves including links to various pieces of content in different formats, such as text, images, and sounds, within a particular content piece. This notion is likely familiar to you, as it mirrors how a typical web page functions: by following links, which are a form of hypermedia controls, to view related content. HATEOAS takes this idea further by suggesting that clients should interact with the server (which may trigger state transitions) through links to other resources. Instead of knowing precisely where customers reside on the server and what URI to target, the client should search for and navigate links to locate the necessary information.&lt;/p&gt;

&lt;p&gt;Think of the Amazon.com shopping site. The location of the shopping cart has changed over time. The graphic has changed. The link has changed. But as humans we are smart enough to still see a shopping cart, know what it is, and interact with it. We have an understanding of what a shopping cart means, even if the exact form and underlying control used to represent it have changed. We know that if we want to view the cart, this is the control we want to interact with. This is why web pages can change incrementally over time. As long as these implicit contracts between the customer and the website are still met, changes don’t need to be breaking changes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;link rel="/products" href="products/item1" /&amp;gt;

&amp;lt;link rel="/instantpurchase" href="instantPurchase/1234" /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this document, we have two hypermedia controls. The client reading such a document needs to know that a control with a relation of products is where it needs to navigate to get information about the product items, and that instantpurchase is part of the protocol used to purchase the album. The client has to understand the semantics of the API in much the same way as a human being needs to understand that on a shopping website the cart is where the items to be purchased will be.&lt;/p&gt;

&lt;p&gt;As a client, I do not require knowledge of the specific URI scheme to purchase the album. All I need to do is access the resource, locate the buy control, and proceed to it. The location of the buy control may change, the URI may undergo modification, or the site may redirect me to an entirely different service, but as a client, I need not worry about these details. This setup provides a significant level of decoupling between the client and server.&lt;/p&gt;
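&lt;p&gt;In code, a HATEOAS client navigates by link relation rather than by hard-coded URI, roughly like this (the document structure is illustrative, with link relations matching the two controls discussed above):&lt;/p&gt;

```python
# A HATEOAS client looks links up by relation, never by hard-coded URI.
album = {
    "name": "example-album",
    "links": [
        {"rel": "/products",        "href": "products/item1"},
        {"rel": "/instantpurchase", "href": "instantPurchase/1234"},
    ],
}

def link_for(document, rel):
    # Scan the document's hypermedia controls for the requested relation.
    for link in document.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    return None

# The server may move the purchase control; the client only knows the rel.
print(link_for(album, "/instantpurchase"))   # instantPurchase/1234
```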

&lt;h3&gt;
  
  
  Data Formats: JSON or XML
&lt;/h3&gt;

&lt;p&gt;Employing standard textual formats offers clients extensive flexibility when consuming resources, and REST over HTTP enables us to use a diverse range of formats. Although the examples I have illustrated until now employed XML, currently JSON has become a more prevalent content type for services that operate via HTTP.&lt;/p&gt;

&lt;p&gt;The fact that JSON is a much simpler format means that consumption is also easier. Some proponents also cite its relative compactness when compared to XML as another winning factor, although this isn’t often a real-world issue.&lt;/p&gt;

&lt;p&gt;Despite its popularity, JSON does come with a few drawbacks. While XML defines the link control as a hypermedia control, there is no equivalent standard in JSON. Therefore, custom in-house styles are often utilized to incorporate this concept. To address this issue, the Hypertext Application Language (HAL) has been developed to establish a common set of standards for hyperlinking in JSON (and XML, though it is arguably less necessary for XML). Adhering to the HAL standard allows for the use of tools like the web-based HAL browser to explore hypermedia controls, simplifying the process of creating a client.&lt;/p&gt;
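&lt;p&gt;In HAL, hypermedia controls live under a reserved _links property keyed by relation, so every client can find them the same way. A minimal sketch with an illustrative document:&lt;/p&gt;

```python
import json

# A small HAL-style document: links live under the reserved "_links" key.
hal = json.loads("""
{
  "_links": {
    "self":     { "href": "/orders/123" },
    "customer": { "href": "/customers/7" }
  },
  "total": 30.0
}
""")

def hal_href(document, rel):
    # HAL keys links by relation, so lookup is a plain dictionary access.
    link = document.get("_links", {}).get(rel)
    return link["href"] if link else None

print(hal_href(hal, "customer"))   # /customers/7
print(hal_href(hal, "missing"))    # None
```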

&lt;h3&gt;
  
  
  Downsides to REST Over HTTP
&lt;/h3&gt;

&lt;p&gt;One potential downside of using REST over HTTP is performance, which may be a concern for services with low-latency requirements. While REST supports alternative formats like JSON or binary, HTTP still introduces overhead for each request. Additionally, REST payloads may not be as lean as those of binary protocols like Thrift.&lt;/p&gt;

&lt;p&gt;Although HTTP is suitable for handling large amounts of traffic, it may not be the best option for low-latency communications when compared to other protocols built on Transmission Control Protocol (TCP) or other networking technologies. One such example is WebSockets, which, despite its name, is not necessarily tied to the Web. Once the initial HTTP handshake is completed, WebSockets operate solely as a TCP connection between the client and server, making it a much more efficient way to stream data to a browser. However, it's important to note that using WebSockets doesn't involve much of HTTP or REST.&lt;/p&gt;

&lt;p&gt;For server-to-server communications, if extremely low latency or small message size is important, HTTP communications in general may not be a good idea. You may need to pick different underlying protocols, like User Datagram Protocol (UDP), to achieve the performance you want, and many RPC frameworks will quite happily run on top of networking protocols other than TCP.&lt;/p&gt;
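&lt;p&gt;As a rough illustration of why UDP is attractive here, the following sketch (ports and payload arbitrary) sends a single datagram over the loopback interface: no handshake, no connection state, and no delivery guarantees.&lt;/p&gt;

```python
import socket

# Minimal UDP round trip on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # let the OS pick a free port
server.settimeout(5)
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"latency probe", addr)  # fire and forget: no ACK, no retry

data, peer = server.recvfrom(1024)
print(data)                            # b'latency probe'

client.close()
server.close()
```

&lt;p&gt;The flip side is that the application must now cope with lost, duplicated, or reordered datagrams itself, which is exactly the trade-off that RPC frameworks running over non-TCP transports have to manage.&lt;/p&gt;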

&lt;p&gt;Subscribe to the newsletter for more:&lt;br&gt;
&lt;a href="https://architechinsider.substack.com/"&gt;https://architechinsider.substack.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>distributedsystems</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>The Role of Apache Thrift in Twitter's Search Ranking System</title>
      <dc:creator>George Hadjisavva</dc:creator>
      <pubDate>Sun, 02 Apr 2023 17:24:41 +0000</pubDate>
      <link>https://dev.to/scuz12/role-of-apache-thrift-in-twitters-search-ranking-explain-how-apache-thrift-plays-a-role-in-twitters-search-ranking-system-52ib</link>
      <guid>https://dev.to/scuz12/role-of-apache-thrift-in-twitters-search-ranking-explain-how-apache-thrift-plays-a-role-in-twitters-search-ranking-system-52ib</guid>
      <description>&lt;p&gt;Recently, Twitter made waves in the open-source community by publicly sharing its search ranking system, a critical component of the platform's user experience that helps users find the most relevant and timely content based on their search queries. However, the process of ranking tweets in real-time presents a number of technical challenges, such as efficiently processing large volumes of data and communicating between multiple services involved in the ranking process.&lt;/p&gt;

&lt;p&gt;Apache Thrift, an open-source framework for communication between services, played a key role in enabling efficient communication between the various services involved in Twitter's search ranking process. By defining data structures and interfaces in a language-agnostic way, Apache Thrift allowed services written in different programming languages to communicate with each other seamlessly and efficiently, making it an ideal choice for a real-time system like Twitter's search ranking.&lt;/p&gt;

&lt;p&gt;In this article, we'll explore the role of Apache Thrift, the specific ways it is used in this system, and how it helps solve some of these technical challenges.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is Apache Thrift?
&lt;/h3&gt;

&lt;p&gt;Apache Thrift is an open-source framework for implementing remote procedure call (RPC) services. It was developed by Facebook in 2007, and later became an Apache Software Foundation project in 2008. The goal of Apache Thrift is to provide a simple and efficient way for services written in different programming languages to communicate with each other over a network.&lt;/p&gt;

&lt;p&gt;Thrift works by defining a set of data structures and service interfaces using a domain-specific language called Thrift IDL (Interface Definition Language). This IDL is then used to generate code in multiple programming languages, which can be used to implement the server and client components of the RPC service. Thrift supports a wide range of programming languages, including Java, Python, C++, Ruby, and many others.&lt;/p&gt;
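&lt;p&gt;As a rough sketch (all names here are invented for illustration, not taken from any real service), a Thrift IDL file might look like this:&lt;/p&gt;

```thrift
// search.thrift -- a hypothetical example; all names are illustrative.
namespace java com.example.search
namespace py example.search

struct Query {
  1: required string text,
  2: optional i32 maxResults = 10
}

struct Result {
  1: string documentId,
  2: double score
}

service SearchService {
  // Returns the best-scoring document for a query.
  Result bestHit(1: Query q)
}
```

&lt;p&gt;Running the Thrift compiler over this file (for example &lt;code&gt;thrift --gen py search.thrift&lt;/code&gt;) generates client and server stubs in the chosen language, so a Java server and a Python client can share the same contract.&lt;/p&gt;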

&lt;h3&gt;
  
  
  Benefits of Apache Thrift
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5dqvuhfT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujq318w4srcxatuc2b1v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5dqvuhfT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujq318w4srcxatuc2b1v.png" alt="Thrift schema" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1. Language Independence: With Thrift, developers can define their services using an interface definition language (IDL), which is independent of any specific programming language. The Thrift compiler then generates code in the target language, allowing developers to write code once and deploy it on multiple platforms.&lt;/p&gt;

&lt;p&gt;2. Efficient Data Transfer: Thrift uses a compact binary protocol to transfer data between client and server applications, which reduces network overhead and improves performance. The binary protocol is also designed to be extensible, allowing developers to add custom data types and serialization formats.&lt;/p&gt;

&lt;p&gt;3. Cross-Platform Compatibility: Thrift supports a wide range of programming languages, including Java, Python, Ruby, PHP, C++, and many more. This makes it an ideal choice for building applications that need to communicate across different platforms and operating systems.&lt;/p&gt;

&lt;p&gt;4. Service Evolution: As applications evolve over time, it is often necessary to add new features and functionality. Thrift makes it easy to add new services without breaking existing clients or servers. This is achieved through versioning, which allows developers to define multiple versions of the same service and choose which version to use at runtime.&lt;/p&gt;

&lt;p&gt;5. Scalability: Thrift is designed to be scalable, with support for multiple transport protocols, load balancing, and connection pooling.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Twitter is utilising Thrift
&lt;/h3&gt;

&lt;p&gt;Twitter uses Thrift to define several data structures for ranking tweets, including ThriftLinearFeatureRankingParams, ThriftAgeDecayRankingParams, ThriftHostQualityParams, and ThriftCardRankingParams. These data structures allow Twitter to store and process tweet ranking information efficiently, resulting in faster and more accurate tweet rankings.&lt;/p&gt;

&lt;p&gt;Additionally, Twitter uses Thrift to define a variety of ranking parameters, including score parameters for various tweet features like retweet count, reply count, reputation, and text score. By defining these parameters in a concise and structured format, Twitter can easily modify and experiment with their ranking algorithms without rewriting large amounts of code.&lt;/p&gt;

&lt;p&gt;Thrift also allows Twitter to dynamically load custom ranking algorithms and collectors for experimentation purposes. This feature provides Twitter with the flexibility to test new ranking strategies quickly, enabling them to iterate and improve their ranking algorithms continuously. &lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Here is the link to the Thrift file for the ranking definitions: &lt;a href="https://github.com/twitter/the-algorithm/blob/main/src/thrift/com/twitter/search/common/ranking/ranking.thrift"&gt;https://github.com/twitter/the-algorithm/blob/main/src/thrift/com/twitter/search/common/ranking/ranking.thrift&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Subscribe to the newsletter for more:&lt;br&gt;
&lt;a href="https://architechinsider.substack.com/"&gt;https://architechinsider.substack.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>beginners</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Optimizing Communication in Microservices: Comparing Synchronous and Asynchronous Methods</title>
      <dc:creator>George Hadjisavva</dc:creator>
      <pubDate>Fri, 31 Mar 2023 07:54:40 +0000</pubDate>
      <link>https://dev.to/scuz12/optimizing-communication-in-microservices-comparing-synchronous-and-asynchronous-methods-1jnh</link>
      <guid>https://dev.to/scuz12/optimizing-communication-in-microservices-comparing-synchronous-and-asynchronous-methods-1jnh</guid>
      <description>&lt;p&gt;When it comes to service collaboration, deciding whether to opt for synchronous or asynchronous communication is one of the most vital choices to make. This decision significantly impacts implementation details.&lt;/p&gt;

&lt;p&gt;Synchronous communication necessitates making a call to a remote server that temporarily suspends execution until the operation finishes. Conversely, asynchronous communication enables the caller to return without waiting for the operation to complete and might not place priority on whether or not the operation concludes.&lt;/p&gt;

&lt;p&gt;Synchronous communication is often more straightforward to comprehend. We can easily determine if operations have concluded successfully or not. In contrast, asynchronous communication can be highly advantageous for long-running tasks, where it's impractical to keep a connection open between the client and server for an extended period. It's also beneficial for situations where low latency is crucial, and blocking a call while awaiting a response can significantly slow down the system. In the context of mobile networks and devices, firing off requests and presuming that they have succeeded (unless notified otherwise) can ensure that the user interface remains responsive, even if the network is sluggish. However, it's worth noting that handling asynchronous communication may require more complex technology, which we will delve into shortly.&lt;/p&gt;

&lt;p&gt;There are two primary modes to consider: synchronous and asynchronous. Each mode can facilitate distinct idiomatic styles of collaboration, either request/response or event-based.&lt;/p&gt;

&lt;h3&gt;
  
  
  Request/Response Communication (Orchestration)
&lt;/h3&gt;

&lt;p&gt;In the request/response model, a client initiates a request and waits for a response. While this approach is well-suited for synchronous communication, it can also work in the context of asynchronous communication. For instance, a client might launch an operation and register a callback, asking the server to notify it when the operation is complete.&lt;/p&gt;
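&lt;p&gt;A minimal sketch of this callback style, using a background thread as a stand-in for the remote server (names and payloads are invented):&lt;/p&gt;

```python
import threading

# Request/response over an asynchronous channel: the caller registers a
# callback instead of blocking, and the "server" (here just a worker
# thread) invokes it when the long-running operation completes.
done = threading.Event()
outcome = {}

def long_running_operation(payload, callback):
    def work():
        result = payload.upper()      # stand-in for real processing
        callback(result)
    threading.Thread(target=work).start()

def on_complete(result):
    outcome["value"] = result
    done.set()

long_running_operation("order accepted", on_complete)
done.wait(timeout=5)                  # the caller is free to do other work meanwhile
print(outcome["value"])               # ORDER ACCEPTED
```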

&lt;h3&gt;
  
  
  Event-Based Communication (Choreography)
&lt;/h3&gt;

&lt;p&gt;In an event-based collaboration, we take a different approach. Instead of a client initiating requests for specific tasks to be performed, it informs other parties that a particular event has occurred and expects them to act accordingly. Unlike request/response collaboration, we never instruct anyone else what to do. Event-based systems are inherently asynchronous, and the business logic is more evenly distributed among the various collaborators instead of being centralised in a core system. Furthermore, event-based collaboration enables high decoupling. The client emitting an event has no knowledge of who or what will respond to it, meaning that new subscribers can be added to these events without the client's involvement.&lt;/p&gt;

&lt;h3&gt;
  
  
  Real life example
&lt;/h3&gt;

&lt;p&gt;Let’s take an example from real-life banking software and look at what happens when we create a customer:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A new record is created in the loyalty points bank for the customer.&lt;/li&gt;
&lt;li&gt;Our postal system sends out a welcome pack.&lt;/li&gt;
&lt;li&gt;We send a welcome email to the customer.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In terms of implementing the flow, there are two architectural styles to consider. The first is orchestration, which relies on a central system to guide and control the process, similar to a conductor leading an orchestra. The second style is choreography, where each component of the system is informed of its role and allowed to determine the details of its own execution, similar to dancers in a ballet who find their way and respond to others around them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O_oN8i7d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qdyqof4djwwng2gwd02w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O_oN8i7d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qdyqof4djwwng2gwd02w.png" alt="Figure 1.1" width="880" height="495"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Let's consider how an orchestration approach could be applied to this flow. In this scenario, the most straightforward solution would be to designate the customer service as the central control point. When a new customer is created, the customer service would communicate with the loyalty points bank, email service, and postal service using a series of request/response calls, as shown in Figure 1.2.&lt;/p&gt;
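&lt;p&gt;A toy sketch of that orchestrated flow, with plain functions standing in for the remote request/response calls (all names invented):&lt;/p&gt;

```python
# Orchestration: the customer service is the central brain, calling each
# downstream service in turn and tracking the outcome.
def create_loyalty_account(customer): return f"points account for {customer}"
def send_welcome_pack(customer):      return f"welcome pack posted to {customer}"
def send_welcome_email(customer):     return f"welcome email sent to {customer}"

def create_customer(customer):
    status = {}
    status["loyalty"] = create_loyalty_account(customer)
    status["post"]    = send_welcome_pack(customer)
    status["email"]   = send_welcome_email(customer)
    return status   # the orchestrator knows exactly how far the flow got

print(create_customer("Alice"))
```

&lt;p&gt;Note how the orchestrator both drives the sequence and accumulates the status: that is what makes it easy to answer "how far did this customer get?", but also what concentrates the logic in one place.&lt;/p&gt;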

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eT5O8Tk6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/75u7xnjxe7bzi75i204i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eT5O8Tk6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/75u7xnjxe7bzi75i204i.png" alt="Figure 1.2" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The customer service itself can then track where a customer is in this process. It can check to see if the customer’s account has been set up, or the email sent, or the post delivered.&lt;/p&gt;

&lt;p&gt;The downside of the orchestration (request/response) approach is that the customer service can become a centralised governing authority: the focal point of a web, and the place where logic concentrates. I have observed this approach leading to a small number of intelligent "god" services dictating to weak CRUD-based services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choreographed Approach
&lt;/h3&gt;

&lt;p&gt;In an alternative approach, choreography, the customer service could emit an asynchronous event saying "Customer created" instead. The email service, postal service, and loyalty points bank would then simply subscribe to these events and respond accordingly, as shown in Figure 1.3. This method is much more loosely coupled. If another service needed to respond to the creation of a customer, it would simply subscribe to the events and perform its job as required. However, the downside is that the explicit view of the business process depicted in Figure 1.2 is now only implicitly represented in our system.&lt;/p&gt;
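&lt;p&gt;The same flow sketched as choreography, using a tiny in-memory event bus in place of a real broker (the event name and handlers are invented):&lt;/p&gt;

```python
from collections import defaultdict

# Choreography: the customer service only announces that something
# happened; subscribers decide for themselves how to react.
subscribers = defaultdict(list)

def subscribe(event, handler):
    subscribers[event].append(handler)

def publish(event, payload):
    for handler in subscribers[event]:
        handler(payload)

actions = []
subscribe("customer.created", lambda c: actions.append(f"loyalty account for {c}"))
subscribe("customer.created", lambda c: actions.append(f"welcome pack for {c}"))
subscribe("customer.created", lambda c: actions.append(f"welcome email for {c}"))

publish("customer.created", "Alice")   # the emitter has no idea who is listening
print(actions)
```

&lt;p&gt;Adding a fourth reaction to customer creation is just one more &lt;code&gt;subscribe&lt;/code&gt; call; the emitter never changes.&lt;/p&gt;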

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y-V1qXlG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0owomx16cqrt14dstbit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y-V1qXlG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0owomx16cqrt14dstbit.png" alt="Figure 1.3" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That approach implies extra effort to monitor and verify that the necessary tasks have actually been completed. For instance, would you be able to identify that the loyalty points bank encountered an error and failed to set up the correct account? One strategy I find useful in addressing this is to build a monitoring system that explicitly matches the business process view in Figure 1.2, while still tracking each service as an independent entity, enabling you to map any unusual exceptions onto the more explicit process flow.&lt;/p&gt;

&lt;p&gt;In my experience, systems that lean towards an event-based (choreographed) approach are typically more flexible, loosely coupled, and adaptable to change, although additional effort is required to monitor and track processes across system boundaries. Conversely, heavily orchestrated implementations tend to be fragile and carry a higher cost of change. For this reason, I highly recommend striving for a choreographed system, where each service is intelligent enough to understand its role in the overall process.&lt;/p&gt;

&lt;p&gt;On the other hand, synchronous communication is straightforward and provides immediate feedback on whether a call succeeded. If we prefer the request/response model but are dealing with longer-running processes, we can initiate asynchronous requests and wait for callbacks. Alternatively, asynchronous event-based collaboration enables a choreographed approach, resulting in more decoupled services, which is desirable because it lets our services be released independently of one another.&lt;/p&gt;

&lt;p&gt;Subscribe to the newsletter for more:&lt;br&gt;
&lt;a href="https://architechinsider.substack.com/"&gt;https://architechinsider.substack.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>programming</category>
      <category>career</category>
    </item>
    <item>
      <title>Shared Databases in Microservices: A Blessing or a Curse?</title>
      <dc:creator>George Hadjisavva</dc:creator>
      <pubDate>Mon, 27 Mar 2023 17:19:23 +0000</pubDate>
      <link>https://dev.to/scuz12/shared-databases-in-microservices-a-blessing-or-a-curse-1j56</link>
      <guid>https://dev.to/scuz12/shared-databases-in-microservices-a-blessing-or-a-curse-1j56</guid>
      <description>&lt;p&gt;The rise of microservices has fundamentally changed the way we approach application development. But with this new architecture comes new challenges, particularly when it comes to database management. One of the key decisions you'll face is whether to use a shared database or separate databases for each microservice. In this article, we'll dive into the shared database dilemma.&lt;br&gt;
Shared Database is by far the most common form of integration that I or any of my colleagues see in the industry is database (DB) integration. In this world, if other services want information from a service, they reach into the database. And if they want to change it, they reach into the database! This is really simple when you first think about it, and is probably the fastest form of integration to start with—which probably explains its popularity.&lt;br&gt;
In the figure below, it can be observed that our registration user interface (UI) creates customers by executing SQL operations directly on the database. The diagram also displays our finance application, which accesses and modifies customer data by running SQL queries on the database. Additionally, the marketing department can update customer information by querying the database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4hGVsxeh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ljjqg2va2znsiwbutlc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4hGVsxeh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ljjqg2va2znsiwbutlc.png" alt="Share db Schema" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, we are allowing external parties to view and bind to internal implementation details. The data structures stored in the database are open to all; they are shared in their entirety with any party that has access to the database. If I choose to modify my schema to better represent my data or to make my system easier to maintain, I may well break my consumers (finance, registration, marketing). In effect, the database is acting as a large, shared application programming interface (API), and a fragile one at that. If I need to change how finance administers customers, and that requires a change to the database, I must be exceedingly careful not to break the parts of the schema used by other services. Such a scenario usually demands a significant amount of regression testing.&lt;/p&gt;
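&lt;p&gt;A small, self-contained demonstration of this breakage using SQLite (table and column names invented): two consumers bind to the same schema, and a rename made for one silently breaks the other at runtime.&lt;/p&gt;

```python
import sqlite3

# Two "services" integrate through the same table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, fname TEXT)")
db.execute("INSERT INTO customers (fname) VALUES ('Alice')")

def finance_report(conn):
    # The finance app binds directly to the column name.
    return conn.execute("SELECT fname FROM customers").fetchall()

print(finance_report(db))              # [('Alice',)]

# The registration team "improves" its schema...
db.execute("ALTER TABLE customers RENAME COLUMN fname TO first_name")

try:
    finance_report(db)                 # ...and finance breaks at runtime
except sqlite3.OperationalError as e:
    print("finance broke:", e)
```

&lt;p&gt;(&lt;code&gt;ALTER TABLE ... RENAME COLUMN&lt;/code&gt; requires SQLite 3.25 or newer, which recent Python builds bundle.)&lt;/p&gt;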

&lt;p&gt;Another drawback is that consumers are tied to a specific technology choice. Right now it makes sense to store customers in a relational database, so my consumers use an appropriate (potentially DB-specific) driver to talk to it. Suppose we determine in the future that a non-relational database would be a superior option for storing the data. Would the customer service be able to make that determination? The consumers are closely coupled to the implementation of the customer service, which contradicts our goal of concealing implementation details from consumers so that the service has the autonomy to change its internals over time. This effectively eliminates the loose coupling we discussed in previous topics.&lt;/p&gt;

&lt;p&gt;Finally, let's take a moment to consider behavior. There will be certain rules associated with modifying a customer. Where do those rules live? If consumers manipulate the database directly, then each of them must own a copy of the rules, so the logic for changing a customer can end up spread across multiple consumers. For example, if the finance UI, registration UI, and marketing UI all need to modify customer data, any bug fix or behavior change has to be implemented in three different places and then deployed. We are no longer achieving a high degree of cohesion.&lt;/p&gt;

&lt;p&gt;Recalling our previous discussion on the fundamental tenets of effective microservices, we emphasized the importance of strong cohesion and loose coupling. Database integration compromises both. It facilitates data sharing between services but does nothing for sharing behavior, and it exposes our internal representation to consumers, making it hard to avoid breaking changes and ultimately instilling a fear of any modification. For these reasons, it is advisable to avoid the shared database approach at almost any cost.&lt;/p&gt;

&lt;p&gt;Subscribe to the newsletter for more:&lt;br&gt;
&lt;a href="https://architechinsider.substack.com/"&gt;https://architechinsider.substack.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>distributedsystems</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Exploring the Power of ElasticSearch</title>
      <dc:creator>George Hadjisavva</dc:creator>
      <pubDate>Thu, 23 Mar 2023 15:15:14 +0000</pubDate>
      <link>https://dev.to/scuz12/exploring-the-power-of-elasticsearch-4n5a</link>
      <guid>https://dev.to/scuz12/exploring-the-power-of-elasticsearch-4n5a</guid>
      <description>&lt;p&gt;Elastic Search is a powerful search and analytics engine that is designed to handle large volumes of data and deliver lightning-fast results. It's based on Apache Lucene, a full-text search engine library that is widely used in the industry. Elastic Search adds a distributed layer on top of Lucene, making it scalable, fault-tolerant, and highly available.&lt;/p&gt;

&lt;p&gt;One of the key features of Elastic Search is its ability to index and search both structured and unstructured data. Structured data refers to data that is organized into a specific format, such as a database table or a CSV file. Unstructured data, on the other hand, refers to data that is not organized into a specific format, such as text documents, social media posts, or sensor data. Elastic Search can handle both types of data, making it ideal for a wide range of use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Capabilities of Elasticsearch
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Full Text Search Engine
&lt;/h3&gt;

&lt;p&gt;When it comes to searching through large volumes of data, traditional SQL database management systems are often not equipped to handle the task. That's where Elastic Search comes in as a powerful full-text search engine built on top of Lucene. With Elastic Search, you can perform a wide range of searches, from structured and unstructured data to geo and metric searches, allowing for greater flexibility and precision. Its full-text search capabilities make it an ideal tool for searching through large and complex datasets.&lt;/p&gt;

&lt;h3&gt;
  
  
  Analytical Engine
&lt;/h3&gt;

&lt;p&gt;While Elastic Search is known for its powerful full-text search capabilities, its analytical use case is even more popular. Elastic Search is frequently used for log analytics and slicing and dicing numerical data, such as application and infrastructure performance metrics. Although Apache Solr was the first to provide faceting, Elastic Search has taken faceting to the next level by allowing users to aggregate data in real-time using its aggregation queries. These queries are instrumental in powering data visualizations across various tools, including Kibana, Grafana, and others. Elastic Search's analytical engine is a key factor in making it a popular choice for businesses that require powerful and flexible data analytics capabilities.&lt;/p&gt;
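&lt;p&gt;As a sketch of what such an aggregation query can look like, here is the kind of JSON body one might send to Elastic Search's &lt;code&gt;_search&lt;/code&gt; endpoint. The field names (&lt;code&gt;service&lt;/code&gt;, &lt;code&gt;took_ms&lt;/code&gt;) and the time window are hypothetical.&lt;/p&gt;

```python
import json

# Request body for an Elasticsearch _search call: per-service 95th
# percentile latency over the last hour. Field names are invented.
query = {
    "size": 0,                          # aggregations only, no hits
    "query": {"range": {"@timestamp": {"gte": "now-1h"}}},
    "aggs": {
        "per_service": {
            "terms": {"field": "service"},
            "aggs": {"p95_latency": {
                "percentiles": {"field": "took_ms", "percents": [95]}
            }}
        }
    }
}
print(json.dumps(query, indent=2))
```

&lt;p&gt;A single request like this returns, per service, the 95th-percentile latency over the last hour, which is exactly the kind of slice-and-dice query that backs a Kibana or Grafana panel.&lt;/p&gt;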

&lt;h3&gt;
  
  
  Distributed Architecture Designed for Scaling
&lt;/h3&gt;

&lt;p&gt;Elastic Search was designed to be scalable from the ground up, thanks to its distributed architecture. With Elastic Search, it's possible to scale your system to hundreds of servers and handle petabytes of data.&lt;/p&gt;

&lt;p&gt;While distributed systems can be complex, Elastic Search simplifies the process by making many of the scaling decisions automatically and providing a robust management API. Compared to other systems, scaling Elastic Search is much easier, although managing large Elastic Search clusters can be challenging and often requires specialized expertise. Elastic Search also includes automatic data replication to prevent data loss in the event of node failures, further enhancing its reliability and resilience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k1K_rugf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9zyiejug73ds5ly21xjq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k1K_rugf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9zyiejug73ds5ly21xjq.png" alt="Diagram architecture" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Differences Between SQL and Elasticsearch
&lt;/h2&gt;

&lt;p&gt;While SQL databases have been the go-to solution for managing and querying structured data for decades, Elastic Search offers a more flexible approach to data management that can better handle unstructured and semi-structured data. Here are some key differences between SQL databases and Elastic Search:&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Modeling
&lt;/h3&gt;

&lt;p&gt;SQL databases require a strict schema for data modeling, meaning that the structure of the data needs to be defined in advance. This can make it challenging to handle data that doesn't fit neatly into a structured schema. In contrast, Elastic Search doesn't require a strict schema, allowing for more flexible and dynamic data modeling.&lt;/p&gt;

&lt;h3&gt;
  
  
  Full-Text Search
&lt;/h3&gt;

&lt;p&gt;While SQL databases can support full-text search, it's often not their primary function, and the performance can be suboptimal for large volumes of data. Elastic Search, on the other hand, was built with full-text search in mind and can efficiently handle large volumes of unstructured data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability
&lt;/h3&gt;

&lt;p&gt;SQL databases can be scaled horizontally by adding more nodes to a cluster, but this process can be complex and time-consuming. In contrast, Elastic Search is designed to be distributed from the ground up, making it easier to scale horizontally and accommodate petabytes of data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance
&lt;/h3&gt;

&lt;p&gt;Traditional SQL databases require complex joins and aggregations to perform complex queries, which can lead to performance issues when analyzing large data sets in real-time.&lt;/p&gt;

&lt;p&gt;In addition, Elastic Search's real-time search capabilities allow for the efficient analysis of large and complex data sets, while its flexible data model allows for the storage and retrieval of unstructured and semi-structured data. Elastic Search also includes powerful aggregation capabilities, which enable users to perform complex calculations on large data sets in real time.&lt;/p&gt;

&lt;p&gt;Subscribe to the newsletter for Weekly Learnings &amp;amp; News:&lt;br&gt;
&lt;a href="https://architechinsider.substack.com/"&gt;https://architechinsider.substack.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Why Microservices Matter: Revolutionizing Modern Software Development</title>
      <dc:creator>George Hadjisavva</dc:creator>
      <pubDate>Mon, 20 Mar 2023 17:26:54 +0000</pubDate>
      <link>https://dev.to/scuz12/why-microservices-matter-revolutionizing-modern-software-development-1e1f</link>
      <guid>https://dev.to/scuz12/why-microservices-matter-revolutionizing-modern-software-development-1e1f</guid>
      <description>&lt;p&gt;In software development, microservices refer to a novel approach to building applications that involves creating small, independent services that collaborate with each other. To better understand this concept, it is important to delve into the specific attributes that distinguish microservices from other software development approaches.&lt;/p&gt;

&lt;h2&gt;
  
  
  Microservices Introduction
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Single Responsibility: Small and Focused on Doing One Thing Well
&lt;/h3&gt;

&lt;p&gt;As we add new features to our codebase, it inevitably grows larger, making it challenging to identify where changes need to be made. Despite efforts to maintain clear, modular monolithic codebases, the arbitrary in-process boundaries can break down over time. This can result in code related to similar functions being dispersed throughout the codebase, leading to difficulties in bug fixing or implementation.&lt;/p&gt;

&lt;p&gt;In a monolithic system, we counteract these challenges by striving for cohesive code, often by creating abstractions or modules. Cohesion, which refers to the grouping of related code, is a crucial concept in microservices. Robert C. Martin's Single Responsibility Principle emphasizes the importance of cohesion, stating that we should "gather together those things that change for the same reason and separate those things that change for different reasons."&lt;/p&gt;

&lt;p&gt;Microservices employ a similar approach by using independent services that are designed to align with specific business boundaries, making it clear where the code resides for a particular functionality. By limiting each service to a distinct boundary, we prevent the temptation for it to become overly large and complicated, which can lead to a host of issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  Autonomous
&lt;/h3&gt;

&lt;p&gt;Our microservices are designed as separate entities and can be deployed as isolated services on a platform or in their own operating system process. We avoid combining multiple services on the same machine to maintain simplicity and to ensure that communication between services happens through network calls. Services must be able to change independently without affecting their consumers, so we need to think carefully about what each service should expose and what it should hide in order to preserve autonomy. The Application Programming Interface (API) we expose should also be technology-agnostic, so that it doesn't constrain the technology choices of the services behind it. Decoupling is vital for our microservices: to realize the benefits of this approach, any change to a service must be deployable without changing anything else.&lt;/p&gt;
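&lt;p&gt;To make the expose-versus-hide trade-off concrete, here is a minimal sketch in plain JavaScript (the service name and operations are hypothetical, invented for this example): the factory exposes a narrow contract while its internal state and helpers stay hidden, so the internals can change without affecting consumers.&lt;/p&gt;

```javascript
function createOrderService() {
    var orders = [];                 // hidden internal state
    function nextId() {              // hidden helper, invisible to consumers
        return orders.length + 1;
    }
    return {                         // the exposed, narrow contract
        placeOrder: function (item) {
            var order = { id: nextId(), item: item };
            orders.push(order);
            return order.id;
        },
        getOrder: function (id) {
            var found = null;
            orders.forEach(function (o) {
                if (o.id === id) {
                    // return a copy, not the internal object
                    found = { id: o.id, item: o.item };
                }
            });
            return found;
        }
    };
}

var service = createOrderService();
service.placeOrder("book"); // returns 1
```

&lt;p&gt;Consumers can only call &lt;code&gt;placeOrder&lt;/code&gt; and &lt;code&gt;getOrder&lt;/code&gt;; how orders are stored or how ids are generated can change freely behind that boundary.&lt;/p&gt;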

&lt;h2&gt;
  
  
  Key benefits
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Technology Heterogeneity
&lt;/h3&gt;

&lt;p&gt;In a system with multiple collaborating services, we have the flexibility to use different technologies for each service. This approach allows us to select the most suitable tool for each task, rather than being limited to a standardized, one-size-fits-all approach that can compromise performance.&lt;/p&gt;

&lt;p&gt;If we need to enhance the performance of a specific component in our system, we may opt to use a different technology stack that can better meet the required performance levels. We might also decide to store data differently for different parts of our system. For instance, for a social network, we could store user interactions (friendships) in a graph-oriented database to capture the intricate network of connections. However, we might choose to store user posts in a relational data store, resulting in a heterogeneous architecture, as depicted in the diagram.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fbbxtbg8g5bvlkq41hw7.png)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Microservices also enable us to quickly adopt new technologies and understand how they can benefit us. In traditional monolithic applications, one of the biggest challenges in experimenting with new technology is the associated risks. Even small changes can have significant impacts on the entire system. However, with microservices, we can introduce new programming languages, databases, or frameworks without affecting the entire system, thus reducing the risks involved in technology adoption.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resilience
&lt;/h3&gt;

&lt;p&gt;In a microservices architecture, service boundaries act as bulkheads that help to isolate problems in case of a component failure. This means that if one service fails, it doesn't cause a cascade effect, and the rest of the system can continue working. On the other hand, in a monolithic service, the failure of one component can bring down the entire system. Although a monolithic system can run on multiple machines to reduce the risk of failure, a microservices architecture can handle service failures and degrade functionality accordingly.&lt;/p&gt;
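&lt;p&gt;As an illustrative sketch of graceful degradation (not a full circuit breaker; &lt;code&gt;withFallback&lt;/code&gt; and the service names are made up for this example), a caller can wrap a call to another service with a fallback, so a failing dependency degrades functionality instead of cascading:&lt;/p&gt;

```javascript
// wrap a call so a failure serves a fallback instead of propagating
function withFallback(primary, fallback) {
    return function () {
        try {
            return primary.apply(null, arguments);
        } catch (err) {
            // the primary service failed; degrade gracefully
            return fallback.apply(null, arguments);
        }
    };
}

// a "recommendations service" that is currently down (hypothetical)
function fetchRecommendations() {
    throw new Error("recommendations service is down");
}

// fall back to a static, precomputed list
var getRecommendations = withFallback(fetchRecommendations, function () {
    return ["popular-post-1", "popular-post-2"];
});

getRecommendations(); // returns the fallback list instead of crashing
```

&lt;p&gt;The rest of the page keeps working; only the recommendations feature is degraded.&lt;/p&gt;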

&lt;h3&gt;
  
  
  Scaling
&lt;/h3&gt;

&lt;p&gt;In a monolithic service, scaling is an all-or-nothing affair, meaning that we have to scale the entire system, even if only a small part of it is experiencing performance issues. This constraint can lead to unnecessary costs and complexity.&lt;/p&gt;

&lt;p&gt;With smaller services, we can scale just the services that need it, allowing us to run other parts of the system on smaller, less powerful hardware.&lt;/p&gt;

&lt;p&gt;As illustrated in the figure below, we have chosen to scale the posts service and the chat service in our social media platform due to the higher traffic volume on those services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqqy2hk3imrl1sejlyl4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqqy2hk3imrl1sejlyl4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Ease of Deployment
&lt;/h3&gt;

&lt;p&gt;Imagine changing a single line in a large monolithic application: as you might guess, releasing that change requires redeploying the whole application, which can have a large impact. Regrettably, this means our modifications accumulate over time, and we keep piling up changes until the version of our application that is released to production carries an overwhelming number of them. The greater the difference between releases, the more likely we are to make mistakes, increasing the risk involved.&lt;/p&gt;

&lt;p&gt;By using microservices, we can modify and deploy a single service without affecting the rest of the system, enabling us to deploy our code more rapidly. If there is an issue, it can be isolated to a single service, allowing us to quickly roll back. This approach also enables us to provide new functionality to customers more quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Organizational Alignment
&lt;/h3&gt;

&lt;p&gt;Large teams and large codebases usually attract organisational problems, and these problems can be exacerbated when the team is distributed. We also know that smaller teams working on smaller codebases tend to be more productive. Microservices allow us to better align our architecture to our organization, helping us minimize the number of people working on any one codebase to hit the sweet spot of team size and productivity. We can also shift ownership of services between teams to try to keep the people working on one service colocated.&lt;/p&gt;




&lt;p&gt;Subscribe to the newsletter for more:&lt;br&gt;
&lt;a href="https://architechinsider.substack.com/" rel="noopener noreferrer"&gt;https://architechinsider.substack.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>distributedsystems</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Javascript Namespace pattern </title>
      <dc:creator>George Hadjisavva</dc:creator>
      <pubDate>Mon, 29 Nov 2021 15:57:12 +0000</pubDate>
      <link>https://dev.to/scuz12/javascript-namespace-pattern-b9p</link>
      <guid>https://dev.to/scuz12/javascript-namespace-pattern-b9p</guid>
      <description>&lt;h2&gt;
  
  
  Namespace Pattern
&lt;/h2&gt;

&lt;p&gt;Namespaces can dramatically reduce the number of globals required while preventing naming collisions and excessive name prefixing.&lt;br&gt;
It's important to know that JavaScript doesn't have namespaces built into the language syntax, but you can achieve this feature quite easily. Instead of adding functions, objects, and variables to the global scope, you can create one global object and add all the functionality to it.&lt;/p&gt;
&lt;h3&gt;
  
  
  Refactor anti-pattern to Namespace example
&lt;/h3&gt;

&lt;p&gt;Consider this example&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//anti-pattern example
function Read() {}
function Speak() {}
var topic_to_learn = "Javascript";
//objects
var book1 = {}
book1.data = {title:"Learn javascript",author:"John doe"}
var book2 = {};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, all the functions, variables, and objects are declared in and pollute the global scope of your application. You can refactor this type of code by creating a single global object for your application, called for example &lt;em&gt;Student&lt;/em&gt;, and changing all the functions and variables to become properties of your global object&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//Declare the global object
var STUDENT = {}
//constructors
STUDENT.Read = function(){};
STUDENT.SPEAK = function(){};

//a varibale
STUDENT.topic_to_learn = "javascript"

//object container 
STUDENT.books = {}

//nested objects 
STUDENT.books.book1 = {};
STUDENT.books.book1.data = {title:"Learn javascript",author:"John doe"}
//add second book
STUDENT.books.book2 = {};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern is a good way to namespace your code and avoid naming collisions, not only within your own code but also between your code and third-party code on the same page.&lt;/p&gt;
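&lt;p&gt;A common hardening of this pattern, sketched below with hypothetical names, is to guard the global object against being overwritten (in case another script on the page already created it) and to add a small helper that builds nested namespace levels only when they are missing:&lt;/p&gt;

```javascript
// keep any existing STUDENT object intact instead of clobbering it
var STUDENT = STUDENT || {};

// create nested levels from a dotted path, skipping ones that exist
STUDENT.namespace = function (path) {
    var current = STUDENT;
    path.split(".").forEach(function (part) {
        if (!current[part]) {
            current[part] = {};   // create this level only when missing
        }
        current = current[part];  // descend into it
    });
    return current;               // the innermost object
};

// usage: safely create STUDENT.books.book1 in one call
STUDENT.namespace("books.book1").data = { title: "Learn javascript" };
```

&lt;p&gt;Calling &lt;code&gt;STUDENT.namespace("books.book1")&lt;/code&gt; again later is harmless: existing levels are reused, never reset.&lt;/p&gt;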

&lt;h3&gt;
  
  
  Drawbacks of Namespace
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;More to type: prefixing every variable and function adds to the total amount of code that needs to be downloaded&lt;/li&gt;
&lt;li&gt;Only one global instance: any part of the code can modify it, and the rest of the functionality sees the updated state&lt;/li&gt;
&lt;li&gt;Long nested names mean slower property resolution lookups&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>codenewbie</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Exploring several types of javascript functions</title>
      <dc:creator>George Hadjisavva</dc:creator>
      <pubDate>Thu, 11 Nov 2021 20:16:57 +0000</pubDate>
      <link>https://dev.to/scuz12/exploring-several-types-of-javascript-functions-13dg</link>
      <guid>https://dev.to/scuz12/exploring-several-types-of-javascript-functions-13dg</guid>
      <description>&lt;h2&gt;
  
  
  Returning Functions
&lt;/h2&gt;

&lt;p&gt;In JavaScript, functions are objects, so they can be used as return values. A function doesn't need to return some sort of data or array as the result of its execution; it can also return a more specialized function, or generate another function on demand depending on its inputs.&lt;/p&gt;

&lt;p&gt;Here's a simple example: the function does some work and then returns another function, which can also be executed&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var login = function () {
    console.log("Hello");
    return function () {
        console.log("World");
    }
}

//Using login function
var hello = login(); //log hello
hello(); // log world 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's see another example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var setup = function () {
    var count = 0 ;
    return function() {
        return (count +=1);
    };
};

//usage
var next = setup();
next(); //returns 1
next(); //returns 2
next(); //returns 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, setup wraps the returned function, creating a closure. You can use this closure to store private data that is accessible only to the returned function.&lt;/p&gt;
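&lt;p&gt;Because each call to &lt;code&gt;setup&lt;/code&gt; creates a fresh closure, every returned function gets its own private &lt;code&gt;count&lt;/code&gt;. Repeating the example so this snippet runs on its own:&lt;/p&gt;

```javascript
// each call to setup() creates a fresh closure with its own count
var setup = function () {
    var count = 0;
    return function () {
        return (count += 1);
    };
};

var nextA = setup();
var nextB = setup();

nextA(); // returns 1
nextA(); // returns 2
nextB(); // returns 1 -- nextB has its own private count
```

&lt;p&gt;The two counters never interfere with each other, because each closure captured a separate &lt;code&gt;count&lt;/code&gt; variable.&lt;/p&gt;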

&lt;h2&gt;
  
  
  Self-Defining Functions (Lazy function)
&lt;/h2&gt;

&lt;p&gt;Functions can be defined dynamically and assigned to variables. If you create a new function and assign it to the same variable that already holds another function, the new function overrides the old one. In this case the function overwrites and redefines itself with a new implementation.&lt;br&gt;
To simplify this, let's see a simple example&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var helpMe = function () {
    alert("help me")
    helpMe = function() {
        alert("Please , Help me")      
    };
};

//Using the self-defining function
helpMe(); // help me
helpMe(); // Please, Help me

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The self-defining function pattern is very useful when your function has some initial preparatory work that it needs to do only once.&lt;br&gt;
Using this pattern can improve the performance and efficiency of your application.&lt;/p&gt;
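&lt;p&gt;One caveat worth knowing about this pattern: if the function is assigned to another variable before it redefines itself, that second reference keeps pointing at the original body, so the "one-time" work can run again. A small sketch (the names are illustrative):&lt;/p&gt;

```javascript
var init = function () {
    // redefine init so the setup work only happens once
    init = function () {
        return "already initialized";
    };
    return "performing one-time setup";
};

var keepRef = init;   // a second reference to the original function

init();      // "performing one-time setup" -- and init is redefined
init();      // "already initialized"
keepRef();   // caveat: the original body runs (and redefines init) again
```

&lt;p&gt;So avoid keeping extra references (or assigning the function as a method of another object) if you rely on the redefinition.&lt;/p&gt;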
&lt;h2&gt;
  
  
  Immediate Functions(Self-invoking or Self-executing)
&lt;/h2&gt;

&lt;p&gt;The immediate function pattern is a syntax that enables you to execute a function as soon as it is defined. Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(function () {
    alert("Help");
}())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern is just a function expression (either named or anonymous) which is executed immediately after its creation. The term &lt;em&gt;immediate function&lt;/em&gt; is not defined in the ECMAScript standard.&lt;/p&gt;

&lt;h5&gt;
  
  
  Steps for defining immediate function
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;You define a function using a function expression&lt;/li&gt;
&lt;li&gt;You add a set of parentheses at the end, which causes the function to be executed immediately&lt;/li&gt;
&lt;li&gt;You wrap the whole function block in parentheses (only needed if you don't assign the function to a variable)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Consider the scenario where your code has to perform some setup tasks when the page initially loads, e.g. creating objects. This needs to be done only once, so creating a reusable named function is unnecessary. That's why you need an immediate function: to wrap all the code in its local scope and not leak any variables into the global scope&lt;/p&gt;
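&lt;p&gt;For example, a one-time setup task can run inside an immediate function and hand back only its result, leaving nothing behind in the global scope (the names below are illustrative):&lt;/p&gt;

```javascript
var pageSetup = (function () {
    // local working variables; they never reach the global scope
    var sections = ["header", "content", "footer"];
    var registry = {};

    sections.forEach(function (name) {
        registry[name] = true;   // one-time registration work
    });

    // only the finished result escapes the function
    return { sectionCount: sections.length, registry: registry };
}());

pageSetup.sectionCount; // 3
```

&lt;p&gt;After this runs, &lt;code&gt;sections&lt;/code&gt; and &lt;code&gt;registry&lt;/code&gt; don't exist as globals; only &lt;code&gt;pageSetup&lt;/code&gt; does.&lt;/p&gt;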

&lt;h5&gt;
  
  
  Passing parameters to immediate function
&lt;/h5&gt;

&lt;p&gt;You have the ability to pass arguments to immediate functions&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//Prints : 
//Hello Joe , today is Nov 9 2022 23:26:34 GMT-0800

(function (name,day){
    console.log("Hello" + name + " Today is " + day )
},("Joe",new Date()));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Usually the global object (&lt;strong&gt;this&lt;/strong&gt;) is passed as an argument to the immediate function so that it is accessible inside the function without having to use &lt;strong&gt;window&lt;/strong&gt;&lt;/p&gt;
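&lt;p&gt;For example (using a plain object in place of &lt;strong&gt;window&lt;/strong&gt; so the sketch stays environment-neutral; in a browser you would write &lt;code&gt;}(this));&lt;/code&gt; at the top level instead):&lt;/p&gt;

```javascript
// a stand-in for the global object, for illustration only
var sandbox = { appName: "demo" };

(function (global) {
    // inside, "global" is whatever object was passed in, so the body
    // never has to reference the window identifier directly
    global.version = "1.0";
}(sandbox));

sandbox.version; // "1.0"
```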

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Avoid passing too many parameters to an immediate function, because it could make the function unreadable and difficult to understand.&lt;/p&gt;

&lt;h5&gt;
  
  
  Returned Values from immediate Functions
&lt;/h5&gt;

&lt;p&gt;An immediate function can return a value, and this return value can be assigned to a variable&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var result = (function() {
    return 5+5;
}());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can achieve the same result by omitting the parentheses that wrap the function, because they are not required when you assign the return value to a variable&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var result = function() {
    return 5+5;
}();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Immediate functions can also be used when you define objects. A good case for using an immediate function while instantiating an object is when you need to define a property that will never change during the life cycle of the object, but computing its value requires a bit of one-time work: the returned value becomes the value of the property.&lt;/p&gt;
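&lt;p&gt;A small sketch of that idea, with hypothetical values:&lt;/p&gt;

```javascript
var connection = {
    // computed once, at object-definition time, then never changes;
    // host and port are made-up values for illustration
    url: (function () {
        var host = "example.com";
        var port = 8080;
        return "https://" + host + ":" + port;
    }())
};

connection.url; // "https://example.com:8080"
```

&lt;p&gt;The helper variables used to build the value stay private to the immediate function; only the finished &lt;code&gt;url&lt;/code&gt; string lives on the object.&lt;/p&gt;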

&lt;p&gt;&lt;strong&gt;Benefits of immediate functions&lt;/strong&gt;&lt;br&gt;
This pattern helps you wrap a unit of work without leaving any global variables behind. All the variables you define are local to the self-invoking function, so you don't have to worry about polluting the global space.&lt;br&gt;
The pattern also enables you to wrap individual features into self-contained modules.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>codenewbie</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
