<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: gsbc</title>
    <description>The latest articles on DEV Community by gsbc (@gsbc).</description>
    <link>https://dev.to/gsbc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F751556%2Fe5ebf7f6-384a-4232-9f42-97c3e5a833bb.jpeg</url>
      <title>DEV Community: gsbc</title>
      <link>https://dev.to/gsbc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gsbc"/>
    <language>en</language>
    <item>
      <title>What exactly is Kafka? Not the novelist.</title>
      <dc:creator>gsbc</dc:creator>
      <pubDate>Tue, 28 Nov 2023 05:07:18 +0000</pubDate>
      <link>https://dev.to/gsbc/what-exactly-is-kafka-not-the-novelist-58f1</link>
      <guid>https://dev.to/gsbc/what-exactly-is-kafka-not-the-novelist-58f1</guid>
      <description>&lt;p&gt;Recently, I delved into Kafka Streams for my job and found it to be a very interesting topic. I am by no means an expert; on the contrary, I am just documenting my journey of apprenticeship.&lt;/p&gt;

&lt;p&gt;So, let's do this.&lt;/p&gt;

&lt;p&gt;Well, (Apache) Kafka was named after the famous novelist Franz Kafka simply because he is the favourite writer of the framework's creator, Jay Kreps. Just because. I find that to be very nice.&lt;/p&gt;

&lt;p&gt;Apache Kafka is a &lt;strong&gt;distributed event streaming platform&lt;/strong&gt; that is designed to be fast, scalable, and durable. It provides a &lt;strong&gt;publish-subscribe model&lt;/strong&gt;, allowing multiple &lt;strong&gt;producers&lt;/strong&gt; to send &lt;strong&gt;messages into topics&lt;/strong&gt;, and &lt;strong&gt;consumers&lt;/strong&gt; to &lt;strong&gt;read messages from topics&lt;/strong&gt;. Kafka is commonly used for building real-time data pipelines and streaming applications because of its ability to handle high throughput with low latency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Okay, so what are its main components?
&lt;/h2&gt;

&lt;p&gt;Let's delve a little deeper into Kafka's main components, starting with:&lt;/p&gt;

&lt;h3&gt;
  
  
  Kafka Topics
&lt;/h3&gt;

&lt;p&gt;A Kafka topic is a particular stream of data within your Kafka cluster, and a cluster can have many topics. A topic is kind of like a database table, but without all the constraints: you can send whatever you want to a Kafka topic, since there is no data validation involved. A topic is identifiable by its &lt;strong&gt;name&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The sequence of messages in a topic is called a data stream. Kafka is used to stream data through topics.&lt;/p&gt;

&lt;p&gt;Topics cannot be queried, unlike a database table. Instead, Kafka uses Producers to send data and Consumers to read the data.&lt;/p&gt;

&lt;p&gt;Topics can be divided into &lt;strong&gt;Partitions&lt;/strong&gt;, and each message within a partition is ordered and gets an incremental id, called the &lt;strong&gt;offset&lt;/strong&gt;. An offset is meaningful only within its own partition, i.e. offset values are independent across partitions. Offsets are not re-used even when a previous message has been deleted.&lt;/p&gt;

&lt;p&gt;Kafka Topics are also &lt;strong&gt;immutable&lt;/strong&gt;. Once data has been written to a partition, it cannot be changed; you can only keep appending to the partition.&lt;/p&gt;

&lt;p&gt;Data is assigned randomly to a partition unless a message key (more on that later on) is provided. Data is kept for a limited time (the default retention is one week).&lt;/p&gt;

&lt;h3&gt;
  
  
  Kafka Broker
&lt;/h3&gt;

&lt;p&gt;In the Kafka ecosystem, a broker acts as the workhorse by storing data and handling client requests. Multiple brokers work in unison in a Kafka cluster to provide scalability, fault tolerance, and high throughput. The presence of multiple brokers and their ability to replicate data across each other ensures that the Kafka system remains robust and available even in the face of broker failures. It's pretty awesome.&lt;/p&gt;

&lt;p&gt;A Kafka cluster is composed of multiple brokers (or servers, for that matter). Each broker is identified by its ID (an integer). Each broker contains only certain topic partitions, but after connecting to any broker (called a &lt;strong&gt;bootstrap broker&lt;/strong&gt;), you will be connected to the entire cluster. Don't worry, Kafka clients are smart enough to handle this.&lt;/p&gt;

&lt;p&gt;A good number of brokers to start with would be three or so, but there are clusters with hundreds of brokers.&lt;/p&gt;

&lt;p&gt;Visual example (behold my drawing skillz):&lt;br&gt;
Topic-A has 3 partitions and Topic-B has 2 partitions. The data is distributed but broker 3 doesn't have any Topic-B data because it has already been placed within the other 2 brokers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs63reioo7ao6iz84xqob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs63reioo7ao6iz84xqob.png" alt="Diagram with three rectangular blocks, each representing a broker, showing Topic-A partitions spread across all three brokers and Topic-B partitions placed on only two of them"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Kafka Broker Discovery
&lt;/h4&gt;

&lt;p&gt;Every Kafka broker is also called a &lt;strong&gt;bootstrap server&lt;/strong&gt;. Meaning: you only need to connect to one broker, and the (smart) Kafka client will then be able to connect to the entire cluster. Each broker knows about all brokers, topics and partitions (the metadata!).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp06b084gacml3xvhiijk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp06b084gacml3xvhiijk.png" alt="Diagram of a Kafka client discovering the full cluster through a bootstrap broker"&gt;&lt;/a&gt;&lt;br&gt;
An arrow points from the Kafka client to "broker 1 (bootstrap)" with the label "1. connection + metadata request," indicating the initial step where the client connects to a bootstrap broker and requests metadata.&lt;br&gt;
A second arrow points back from "broker 1 (bootstrap)" to the Kafka client, labeled "2. list of all brokers," suggesting that the bootstrap broker responds with a list of all brokers in the cluster.&lt;br&gt;
A third arrow leads from the Kafka client towards the bottom of the image and then points to the right, indicating "3. kafka client is able to connect to the needed brokers," meaning that after receiving the list, the client can connect to any of the brokers in the cluster as required.&lt;/p&gt;
&lt;h4&gt;
  
  
  Topic Replication Factor
&lt;/h4&gt;

&lt;p&gt;Topics should have a replication factor &amp;gt; 1 (usually between 2 and 3). If a broker is down, another broker can then serve the data.&lt;/p&gt;
&lt;h4&gt;
  
  
  Partition Leader
&lt;/h4&gt;

&lt;p&gt;At any time, only &lt;strong&gt;ONE&lt;/strong&gt; broker can be a leader for a given partition. Producers can only send data to the broker that is the partition leader, while the other brokers will replicate the data. Consumers will read, by default, from the leader broker of a partition (since Kafka 2.4, it is possible to configure consumers to read from the closest replica, improving latency). Long story short: each partition has one leader and multiple in-sync replicas (ISR).&lt;/p&gt;

&lt;p&gt;Visual example: Topic-A with 2 partitions and a replication factor of 2, with its partition leaders:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6c9mu13tg1ytuslffkv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6c9mu13tg1ytuslffkv.png" alt="Flowchart of the replication process within a Kafka cluster, showing three brokers and the distribution and replication of partitions for a topic"&gt;&lt;/a&gt;&lt;br&gt;
Broker 1: Has a blue rectangle labeled "partition 0, topic a (leader)" with a yellow star, indicating that this broker is the leader for partition 0 of topic a.&lt;br&gt;
Broker 2: Contains two blue rectangles. The top rectangle is labeled "partition 1, topic a (leader)" with a yellow star, indicating that this broker is the leader for partition 1 of topic a. The bottom rectangle is labeled "partition 0, topic a (ISR)," suggesting that this broker has a replica of partition 0 and is in the set of In-Sync Replicas (ISR).&lt;br&gt;
Broker 3: Features a single blue rectangle labeled "partition 1, topic a (ISR)," indicating that this broker is part of the ISR for partition 1 of topic a.&lt;/p&gt;
&lt;h4&gt;
  
  
  Topic Durability
&lt;/h4&gt;

&lt;p&gt;For a topic replication of factor 3, topic data durability can withstand 2 broker losses.&lt;br&gt;
As a general rule, for a replication factor of &lt;code&gt;N&lt;/code&gt;, you can permanently lose up to &lt;code&gt;N-1&lt;/code&gt; brokers and still recover the data.&lt;/p&gt;
&lt;h3&gt;
  
  
  Kafka Producers
&lt;/h3&gt;

&lt;p&gt;Producers could be described as applications that create and send messages to topics (which are made of partitions). Producers know which partition to write to and which Kafka broker has it. Producers will automatically recover if a Kafka broker fails.&lt;/p&gt;

&lt;p&gt;We mentioned message keys above, so let's conceptualize it: Kafka Producers send an optional key (a string, number, binary...) within the messages. &lt;/p&gt;

&lt;p&gt;Let's take, for instance, a producer writing to a topic with two partitions: if the message key is null, messages are distributed round-robin style (first partition 0, then partition 1, and so on - as a form of load balancing; newer Kafka versions default to a sticky partitioner, which batches to one partition at a time). However, if the key isn't null, all messages with that key are going to be sent to the same partition. Cool, huh?&lt;/p&gt;
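&lt;p&gt;To make the key-to-partition mapping concrete, here is a tiny plain-Java sketch of the idea (a simplification: Kafka's real default partitioner hashes the serialized key with murmur2, but the property is the same - equal keys always land in the same partition):&lt;/p&gt;

```java
import java.util.HashMap;
import java.util.Map;

public class PartitionSketch {
    // Simplified stand-in for Kafka's default partitioner.
    // floorMod keeps the result non-negative even for negative hash codes.
    static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int partitions = 2;
        Map<String, Integer> firstSeen = new HashMap<>();
        for (String key : new String[]{"user-1", "user-2", "user-1", "user-2"}) {
            int p = partitionFor(key, partitions);
            // The same key always maps to the same partition:
            Integer earlier = firstSeen.putIfAbsent(key, p);
            if (earlier != null && earlier != p) {
                throw new IllegalStateException("key changed partitions!");
            }
            System.out.println(key + " -> partition " + p);
        }
    }
}
```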
&lt;h4&gt;
  
  
  Producer Acknowledgements (acks)
&lt;/h4&gt;

&lt;p&gt;Producers can choose to receive acknowledgement of data writes: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;acks = 0&lt;/code&gt;: Producer won't wait for acknowledgements, which could end up in data loss.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;acks = 1&lt;/code&gt;: Producer will wait for the leader acknowledgement only, which could generate limited data loss.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;acks = all&lt;/code&gt;: Producer will wait for both leader and ISRs to acknowledge. No data loss (when combined with a sensible &lt;code&gt;min.insync.replicas&lt;/code&gt; setting).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A basic Producer example written in Java could be like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class SimpleProducer {
    public static void main(String[] args) {

// Creating Kafka producer properties:
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "localhost:9092");
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("acks", "all"); 

// Initializing the Kafka producer:
        KafkaProducer&amp;lt;String, String&amp;gt; producer = new KafkaProducer&amp;lt;&amp;gt;(properties);

// Creating and sending the message with a key:
        ProducerRecord&amp;lt;String, String&amp;gt; record = new ProducerRecord&amp;lt;&amp;gt;("test-topic", "key", "value");
        producer.send(record);

// Closing the producer:
        producer.close();
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ok but what?&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bootstrap.servers&lt;/code&gt;: This is the address to your Kafka cluster (a group of Kafka brokers). In this case, it's pointing to a Kafka broker (a server that stores data and serves client requests from producers and consumers) running locally on port 9092.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;key.serializer&lt;/code&gt; and &lt;code&gt;value.serializer&lt;/code&gt;: These specify how the producer should serialize (or convert) the keys and values to bytes before sending them to the Kafka broker. In this case, we're using Kafka's built-in &lt;code&gt;StringSerializer&lt;/code&gt;, which means we're sending strings for both keys and values.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;acks&lt;/code&gt;: This setting means the producer will receive an acknowledgment after all in-sync replicas have received the data.&lt;/p&gt;

&lt;p&gt;First of all, we're initializing the Kafka producer with the given properties. The type parameters &lt;code&gt;&amp;lt;String, String&amp;gt;&lt;/code&gt; signify that both the key and the value are of type String.&lt;/p&gt;

&lt;p&gt;Then, we create a ProducerRecord which contains the topic we want to send the message to (&lt;code&gt;test-topic&lt;/code&gt;) and the key-value pair we want to send (&lt;code&gt;key&lt;/code&gt; and &lt;code&gt;value&lt;/code&gt;). Afterwards, we send the record using the &lt;code&gt;send&lt;/code&gt; method of the producer.&lt;/p&gt;

&lt;p&gt;After sending the message, it's a good practice to close the producer to free up resources.&lt;/p&gt;

&lt;p&gt;In summary, this code initializes a Kafka producer, sends a single message to the &lt;code&gt;test-topic&lt;/code&gt;, and then closes the producer. This is a simple illustration; in real-world scenarios, additional error handling, callback mechanisms for acknowledgments, and other configurations are essential.&lt;/p&gt;
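&lt;p&gt;As a sketch of what such a callback could look like (assuming the same local broker and &lt;code&gt;test-topic&lt;/code&gt; as above; illustrative, not production-ready):&lt;/p&gt;

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class CallbackProducer {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "localhost:9092");
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("acks", "all");

        // try-with-resources closes (and flushes) the producer for us:
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(properties)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("test-topic", "key", "value");
            // The callback fires once the broker acknowledges (or rejects) the write:
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    // A real application would retry or route the message elsewhere.
                    System.err.println("Send failed: " + exception.getMessage());
                } else {
                    System.out.printf("Stored at %s, partition %d, offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        }
    }
}
```

&lt;p&gt;Note that this requires the &lt;code&gt;kafka-clients&lt;/code&gt; library on the classpath and a broker actually running on &lt;code&gt;localhost:9092&lt;/code&gt;.&lt;/p&gt;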

&lt;p&gt;Now, let's talk about them Kafka Consumers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kafka Consumers
&lt;/h3&gt;

&lt;p&gt;Consumers are applications that read messages from topics. Here's a basic example of a Kafka consumer in Java:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");

        Consumer&amp;lt;String, String&amp;gt; consumer = new KafkaConsumer&amp;lt;&amp;gt;(properties);
        consumer.subscribe(Collections.singletonList("test-topic"));

        while (true) {
            ConsumerRecords&amp;lt;String, String&amp;gt; records = consumer.poll(Duration.ofMillis(100));
            records.forEach(record -&amp;gt; {
                System.out.printf("Topic: %s, Partition: %s, Offset: %s, Key: %s, Value: %s\n", 
                                  record.topic(), record.partition(), record.offset(), record.key(), record.value());
            });
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yeah but?&lt;/p&gt;

&lt;p&gt;The code above is a simple example of a Kafka consumer using the Kafka client library. It consumes messages from the &lt;code&gt;test-topic&lt;/code&gt; topic and prints out details about each message. Let's walk through the code step by step:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;BOOTSTRAP_SERVERS_CONFIG&lt;/code&gt;: Specifies the Kafka broker's address. In this example, the broker is running locally on port 9092.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;GROUP_ID_CONFIG&lt;/code&gt;: Specifies the consumer group ID. Consumers can join a group to collaboratively consume topics. Kafka ensures that each message is delivered to one consumer within each consumer group.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;KEY_DESERIALIZER_CLASS_CONFIG&lt;/code&gt; &amp;amp; &lt;code&gt;VALUE_DESERIALIZER_CLASS_CONFIG&lt;/code&gt;: Define how to deserialize (convert from bytes back to objects) the keys and values from the Kafka records. In this example, we are using Kafka's built-in StringDeserializer, indicating that our keys and values are strings.&lt;/p&gt;

&lt;p&gt;Then we're creating an instance of the Kafka consumer with the properties defined above. The type parameters &lt;code&gt;&amp;lt;String, String&amp;gt;&lt;/code&gt; indicate that both the key and the value are of type String.&lt;/p&gt;

&lt;p&gt;The line &lt;code&gt;consumer.subscribe(Collections.singletonList("test-topic"));&lt;/code&gt; specifies that the consumer is interested in messages from the &lt;code&gt;test-topic&lt;/code&gt; topic. The subscribe method expects a list of topics, so we wrap our single topic in a singletonList (a real-world application would often subscribe to several topics).&lt;/p&gt;

&lt;p&gt;Then, the consumer continuously polls for new messages from the topic:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;consumer.poll(Duration.ofMillis(100))&lt;/code&gt;: This method retrieves any available messages from the topic. The duration (100 milliseconds in this case) specifies the maximum amount of time the poll method will block if no records are available.&lt;/p&gt;

&lt;p&gt;For each message (or record) retrieved, we print out its details such as the topic name, partition number, offset, key, and value.&lt;/p&gt;
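&lt;p&gt;Back to the &lt;code&gt;GROUP_ID_CONFIG&lt;/code&gt; idea: the "each partition goes to exactly one consumer within the group" rule can be sketched in plain Java. This is a simplification of what the group coordinator's assignors do (the real strategies are pluggable - range, round-robin, sticky...):&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupAssignmentSketch {
    // Simplified stand-in for consumer-group partition assignment:
    // every partition ends up with exactly one consumer of the group.
    static Map<String, List<Integer>> assign(List<String> consumers, int numPartitions) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        for (String consumer : consumers) {
            assignment.put(consumer, new ArrayList<>());
        }
        for (int partition = 0; partition < numPartitions; partition++) {
            String owner = consumers.get(partition % consumers.size());
            assignment.get(owner).add(partition);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Two consumers in the same group sharing a 3-partition topic:
        System.out.println(assign(List.of("consumer-1", "consumer-2"), 3));
        // prints {consumer-1=[0, 2], consumer-2=[1]}
    }
}
```

&lt;p&gt;When a consumer joins or leaves the group, Kafka recomputes this assignment (a rebalance).&lt;/p&gt;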

&lt;h2&gt;
  
  
  Zookeeper
&lt;/h2&gt;

&lt;p&gt;When installing Kafka, you usually install Zookeeper with it. But what is Zookeeper?&lt;/p&gt;

&lt;p&gt;Well, Zookeeper manages brokers, keeping a list of them. It also helps elect leaders for partitions and sends notifications to Kafka in case of changes (e.g.: new topics, deleted topics, a broker dies, a broker comes up...). &lt;br&gt;
Kafka 2.x cannot work without Zookeeper. &lt;br&gt;
Kafka 3.x can work without Zookeeper (it uses Kafka Raft - KRaft - instead). &lt;br&gt;
Kafka 4.x will not have Zookeeper at all.&lt;br&gt;
Zookeeper, by design, works with an odd number of servers (1, 3, 5...), and Zookeeper also has leaders (which handle writes) and followers (which handle reads). Zookeeper does not store any consumer data.&lt;/p&gt;
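&lt;p&gt;The reason for the odd number of servers is majority-quorum arithmetic: the ensemble stays available only while more than half of its servers are up. A quick sketch:&lt;/p&gt;

```java
public class QuorumMath {
    // A Zookeeper ensemble of n servers needs a strict majority to operate.
    static int majority(int n) {
        return n / 2 + 1;
    }

    static int toleratedFailures(int n) {
        return n - majority(n);
    }

    public static void main(String[] args) {
        for (int n : new int[]{1, 3, 4, 5}) {
            System.out.printf("%d server(s): majority = %d, tolerates %d failure(s)%n",
                    n, majority(n), toleratedFailures(n));
        }
        // 3 and 4 servers both tolerate only 1 failure, so the extra
        // (even-numbered) server buys nothing - hence 1, 3, 5...
    }
}
```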

&lt;p&gt;We will not cover the details of Zookeeper in this post, but should you use it? &lt;/p&gt;

&lt;p&gt;In Kafka Brokers&lt;br&gt;
Yes. Until version 4.x is out, you should use Zookeeper in production.&lt;/p&gt;

&lt;p&gt;In Kafka Clients&lt;br&gt;
As of version 0.10, Kafka has deprecated the use of Zookeeper for consumer offset storage; consumers should instead store offsets directly in Kafka. From version 2.2 onwards, the kafka-topics.sh CLI command has been updated to interact with Kafka brokers rather than Zookeeper for topic management tasks such as creation and deletion. Consequently, any APIs and commands that previously depended on Zookeeper have been transitioned to utilize Kafka. This ensures a seamless experience for clients when clusters eventually operate without Zookeeper. For enhanced security, Zookeeper ports should be restricted to accept connections solely from Kafka brokers and not from Kafka clients. &lt;/p&gt;

&lt;p&gt;TLDR: DON'T USE ZOOKEEPER IN KAFKA CLIENTS.&lt;/p&gt;




&lt;p&gt;Hope you enjoyed it! Any constructive feedback is more than welcome.&lt;/p&gt;

&lt;p&gt;Next up: setting a closer-to-what-you-see-in-your-day-to-day-work-as-a-developer-in-information-technology Kafka application!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What exactly are Inversion of Control and Dependency Injection? How do they correlate with each other?</title>
      <dc:creator>gsbc</dc:creator>
      <pubDate>Sun, 30 Apr 2023 00:50:27 +0000</pubDate>
      <link>https://dev.to/gsbc/what-exactly-are-inversion-of-control-and-dependency-injection-how-do-they-correlate-with-each-other-2c3</link>
      <guid>https://dev.to/gsbc/what-exactly-are-inversion-of-control-and-dependency-injection-how-do-they-correlate-with-each-other-2c3</guid>
      <description>&lt;p&gt;I have always been confused about these two concepts, but I think it's time to finally tackle them.&lt;/p&gt;

&lt;p&gt;Inversion of Control (IoC) and Dependency Injection (DI) are software design patterns. Those concepts help improve modularity, testability and maintainability of code by promoting loose coupling between components. Those principles are often used in the context of object-oriented programming (OOP).&lt;/p&gt;

&lt;p&gt;Let's take a closer look:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inversion of Control is a design principle that involves inverting the flow of control in a system. High-level components, in traditional software design, directly call low-level components and manage their dependencies. With IoC, high-level components do not explicitly call or create low-level components: instead, high-level components declare their dependencies and rely on an external mechanism to provide them. This change in control flow allows greater flexibility and modularity.&lt;/li&gt;
&lt;li&gt;Dependency Injection is an implementation of the design principle above. It is a way of providing the dependencies of one component (or object) to another without having the dependent object create or manage the dependency itself. In DI, an external source, such as a framework, factory, or container, creates and injects the dependencies into the dependent object, usually through constructor arguments, properties or methods. This technique reduces tight coupling between the system components, thus making them easier to maintain, test and extend.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Of course, let's exemplify these concepts by making use of the Spring Framework, which provides Dependency Injection.&lt;/p&gt;

&lt;p&gt;Consider a simple scenario where a &lt;code&gt;MessageService&lt;/code&gt; interface has two implementations, &lt;code&gt;EmailService&lt;/code&gt; and &lt;code&gt;SMSService&lt;/code&gt;. There is also a &lt;code&gt;NotificationService&lt;/code&gt; that depends on the &lt;code&gt;MessageService&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface MessageService {
    void sendMessage(String message, String recipient);
}

public class EmailService implements MessageService {
    public void sendMessage(String message, String recipient) {
        // Send an email
    }
}

public class SMSService implements MessageService {
    public void sendMessage(String message, String recipient) {
        // Send an SMS
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Constructor-based Dependency Injection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class NotificationService {
    private MessageService messageService;

    public NotificationService(MessageService messageService) {
        this.messageService = messageService;
    }

    public void notify(String message, String recipient) {
        messageService.sendMessage(message, recipient);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are a few ways to achieve and implement Inversion of Control using Dependency Injection in Spring: defining the beans in an XML config file or using Java-based configuration with annotations, which is the method I'm going to use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Configuration
public class AppConfig {

    @Bean
    public MessageService emailService() {
        return new EmailService();
    }

    @Bean
    public NotificationService notificationService() {
        return new NotificationService(emailService());
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setter-based Dependency Injection for the &lt;code&gt;NotificationService&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class NotificationService {
    private MessageService messageService;

    public void setMessageService(MessageService messageService) {
        this.messageService = messageService;
    }

    public void notify(String message, String recipient) {
        messageService.sendMessage(message, recipient);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Spring configuration using Java-based configuration with annotations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Configuration
public class AppConfig {

    @Bean
    public MessageService emailService() {
        return new EmailService();
    }

    @Bean
    public NotificationService notificationService() {
        NotificationService service = new NotificationService();
        service.setMessageService(emailService());
        return service;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both constructor-based and setter-based dependency injection have their pros and cons. The choice depends on whether you need immutable or mutable objects, among other characteristics, which are explained as follows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Constructor-based Dependency Injection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pros&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Immutable objects: By injecting dependencies through the constructor, you can make your objects immutable, which can lead to safer and more reliable code, &lt;strong&gt;especially in multi-threaded&lt;/strong&gt; environments.&lt;/li&gt;
&lt;li&gt;Explicit dependencies: Constructor-based injection makes it clear which dependencies are required for an object to function correctly, since they must be provided when the object is created.&lt;/li&gt;
&lt;li&gt;Fail-fast behavior: If a required dependency is not provided, the object instantiation will fail, making it easy to spot and fix the issue early in the application lifecycle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verbosity: Constructor-based injection can become verbose when there are many dependencies, leading to long constructor parameter lists. This can make the code harder to read and maintain.&lt;/li&gt;
&lt;li&gt;Inflexibility: With constructor-based injection, dependencies are set at the time of object creation and cannot be changed later. This can be limiting if you need to modify dependencies during the runtime of your application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Setter-based Dependency Injection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pros&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flexibility: Setter-based injection allows you to change the dependencies of an object during its lifetime, which can be useful in certain scenarios &lt;strong&gt;where dependencies need&lt;/strong&gt; to be modified at runtime.&lt;/li&gt;
&lt;li&gt;Less verbose: Setter-based injection can lead to less verbose code, especially when there are many optional dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mutable objects: Setter-based injection can result in mutable objects, which may lead to less predictable behavior and potential issues in multi-threaded environments.&lt;/li&gt;
&lt;li&gt;Hidden dependencies: With setter-based injection, dependencies may not be as explicitly required as with constructor-based injection, making it harder to understand the dependencies of a class at a glance.&lt;/li&gt;
&lt;li&gt;Late failure: If a required dependency is not provided, the error might not be caught until the method relying on the dependency is called, making it harder to spot and fix issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In general, constructor-based dependency injection is recommended when the dependencies are required for the object to function correctly and when immutability is desired. Setter-based dependency injection can be used when you need more flexibility to change dependencies at runtime or when dealing with optional dependencies.&lt;/p&gt;

&lt;p&gt;It's worth noting that you can also combine both approaches in your application, using constructor-based injection for required dependencies and setter-based injection for optional or modifiable dependencies.&lt;/p&gt;
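&lt;p&gt;A minimal sketch of that combination (the &lt;code&gt;fallbackService&lt;/code&gt; here is a hypothetical optional dependency, invented for illustration):&lt;/p&gt;

```java
interface MessageService {
    void sendMessage(String message, String recipient);
}

public class HybridNotificationService {
    // Required dependency: injected through the constructor, immutable.
    private final MessageService messageService;
    // Hypothetical optional dependency: injected through a setter, may stay null.
    private MessageService fallbackService;

    public HybridNotificationService(MessageService messageService) {
        this.messageService = messageService;
    }

    public void setFallbackService(MessageService fallbackService) {
        this.fallbackService = fallbackService;
    }

    public void notify(String message, String recipient) {
        try {
            messageService.sendMessage(message, recipient);
        } catch (RuntimeException e) {
            if (fallbackService != null) {
                fallbackService.sendMessage(message, recipient);
            }
        }
    }
}
```

&lt;p&gt;In Spring, the constructor argument would come from a required bean, while the setter would only be called when the optional bean exists.&lt;/p&gt;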

&lt;p&gt;Hope you folks enjoyed it! Any feedback is appreciated.&lt;/p&gt;

&lt;p&gt;Next up: streams and how these tools correlate with parallelism and concurrency.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>java</category>
      <category>learning</category>
    </item>
    <item>
      <title>What exactly is Object-oriented Programming?</title>
      <dc:creator>gsbc</dc:creator>
      <pubDate>Fri, 28 Apr 2023 00:05:54 +0000</pubDate>
      <link>https://dev.to/gsbc/what-exactly-is-object-oriented-programming-5a7a</link>
      <guid>https://dev.to/gsbc/what-exactly-is-object-oriented-programming-5a7a</guid>
      <description>&lt;p&gt;I'm going to be concise here, of course.&lt;/p&gt;

&lt;p&gt;First things first, Object-oriented Programming (or OOP, for short), is a programming paradigm that relies on dynamic dispatch, a mechanism that enables polymorphism and code reusability. &lt;/p&gt;

&lt;p&gt;OOP organizes code around the concept of "objects", which are instances of "classes". These classes encapsulate data (attributes) and behaviour (methods) related to a specific entity or concept. Yes, it is abstract. No, it does not have anything to do with Dogs that are Animals and can Swim.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dynamic dispatch&lt;/strong&gt;, a.k.a. &lt;strong&gt;late binding&lt;/strong&gt; or &lt;strong&gt;runtime method dispatch&lt;/strong&gt;, is a mechanism in OOP languages that resolves, &lt;strong&gt;at runtime&lt;/strong&gt;, which method implementation to call. The choice is based on the &lt;strong&gt;object's actual type&lt;/strong&gt;, not based on its declared type. This wizardry allows objects of different classes to be considered objects of a common superclass.&lt;/p&gt;

&lt;p&gt;The correlation between dynamic dispatch and OOP can be summarized as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Encapsulation: dynamic dispatch allows objects to have their own implementation of a method, hidden from the outside world;&lt;/li&gt;
&lt;li&gt;Inheritance: dynamic dispatch supports inheritance, enabling derived classes to override or extend methods from their parent classes;&lt;/li&gt;
&lt;li&gt;Polymorphism: dynamic dispatch is an enabler of polymorphism, as it allows a single method name to be associated with multiple implementations, depending on the object's type at runtime, promoting code reusability and modular, clean design;&lt;/li&gt;
&lt;li&gt;Abstraction: dynamic dispatch enables abstraction by allowing programmers to define a common interface (abstract class or interface) that multiple classes can implement, while the actual details of the implementation are resolved at runtime.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At runtime, the appropriate method implementation is selected based on the object's actual type. Here's an example using interfaces (a contract that implementing classes must adhere to). This promotes code reusability, modularity and separation of concerns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Define the Shape interface
interface Shape {
    double getArea();
}

// Define a Circle class that implements Shape
class Circle implements Shape {
    private double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    @Override
    public double getArea() {
        return Math.PI * radius * radius;
    }
}

// Define a Rectangle class that implements Shape
class Rectangle implements Shape {
    private double width;
    private double height;

    Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    @Override
    public double getArea() {
        return width * height;
    }
}

public class Main {
    public static void main(String[] args) {
        Shape[] shapes = new Shape[3];
        shapes[0] = new Circle(5);
        shapes[1] = new Rectangle(4, 6);
        shapes[2] = new Circle(3);

        for (Shape shape : shapes) {
            System.out.println("Area: " + shape.getArea());
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the example above, a &lt;code&gt;Shape&lt;/code&gt; interface is defined with a single method: &lt;code&gt;getArea()&lt;/code&gt;. The &lt;code&gt;Circle&lt;/code&gt; and &lt;code&gt;Rectangle&lt;/code&gt; classes implement the &lt;code&gt;Shape&lt;/code&gt; interface and provide their own implementations of the interface's contract method &lt;code&gt;getArea()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I'm not giving an example of dynamic dispatch using class inheritance because inheritance promotes tight coupling, which is overall not desirable.&lt;/p&gt;

&lt;p&gt;Hope you people enjoyed it, any feedback is appreciated.&lt;/p&gt;

&lt;p&gt;Next up: inversion of control (IoC) and dependency injection.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>oop</category>
    </item>
  </channel>
</rss>
