<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nanditha Vuppunuthula</title>
    <description>The latest articles on DEV Community by Nanditha Vuppunuthula (@nanditha_vuppunuthula_d09).</description>
    <link>https://dev.to/nanditha_vuppunuthula_d09</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2877763%2F614d8d8d-5893-4efa-82d5-4abac9970635.png</url>
      <title>DEV Community: Nanditha Vuppunuthula</title>
      <link>https://dev.to/nanditha_vuppunuthula_d09</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nanditha_vuppunuthula_d09"/>
    <language>en</language>
    <item>
      <title>Kafka Architecture at Uber: Powering Real-Time Mobility at Scale</title>
      <dc:creator>Nanditha Vuppunuthula</dc:creator>
      <pubDate>Sat, 14 Jun 2025 18:29:17 +0000</pubDate>
      <link>https://dev.to/nanditha_vuppunuthula_d09/kakfa-architechture-for-uber-like-o3n</link>
      <guid>https://dev.to/nanditha_vuppunuthula_d09/kakfa-architechture-for-uber-like-o3n</guid>
      <description>&lt;p&gt;Uber’s meteoric growth and global reach depend on the ability to process, analyze, and react to massive streams of data in real time. At the heart of this capability is Apache Kafka, which Uber has transformed into a highly customized, resilient, and scalable backbone for its data infrastructure. Here’s a deep dive into how Kafka powers Uber’s core systems, from ride requests to dynamic pricing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Kafka?
&lt;/h2&gt;

&lt;p&gt;Uber’s business hinges on real-time data: rider and driver locations, trip events, payments, and more. Kafka was chosen for its ability to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handle trillions of messages and petabytes of data daily&lt;/li&gt;
&lt;li&gt;Provide high throughput and low latency&lt;/li&gt;
&lt;li&gt;Guarantee durability and fault tolerance&lt;/li&gt;
&lt;li&gt;Support both batch and real-time processing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Architectural Innovations
&lt;/h2&gt;

&lt;h2&gt;
  
  
  1. Federated Kafka Clusters
&lt;/h2&gt;

&lt;p&gt;Scalability &amp;amp; Reliability: Instead of one monolithic Kafka cluster, Uber operates many federated clusters, each with around 150 nodes. This makes scaling easier and reduces operational risk.&lt;/p&gt;

&lt;p&gt;Cross-Cluster Replication: Uber developed uReplicator, a tool to synchronize data across clusters and data centers, ensuring global data availability and disaster recovery.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Tiered Storage
&lt;/h2&gt;

&lt;p&gt;Local &amp;amp; Remote Storage: Kafka brokers store recent data on fast local disks (SSDs) for quick access, while older data is offloaded to remote, cost-effective storage. This two-tier approach decouples storage from compute, reducing hardware costs and enabling longer data retention without performance trade-offs.&lt;/p&gt;
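&lt;p&gt;To make the idea concrete, here’s a minimal sketch in plain Java (illustrative names only, not Uber’s actual code) of a log whose hot tail stays in a fast local tier while older offsets are offloaded to a cheaper remote tier:&lt;/p&gt;

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

// Toy two-tier log: recent records stay "local", older ones move "remote".
public class TieredLog {
    private final int localCapacity;
    private long nextOffset = 0;
    private final Map<Long, String> local = new HashMap<>();
    private final Map<Long, String> remote = new HashMap<>();
    private final ArrayDeque<Long> localOrder = new ArrayDeque<>();

    public TieredLog(int localCapacity) {
        this.localCapacity = localCapacity;
    }

    public long append(String record) {
        long offset = nextOffset++;
        local.put(offset, record);
        localOrder.addLast(offset);
        if (localOrder.size() > localCapacity) {        // local tier full:
            long oldest = localOrder.removeFirst();
            remote.put(oldest, local.remove(oldest));   // offload oldest record
        }
        return offset;
    }

    public String read(long offset) {
        String hot = local.get(offset);                 // fast path: local disk
        return hot != null ? hot : remote.get(offset);  // slow path: remote tier
    }

    public boolean isLocal(long offset) {
        return local.containsKey(offset);
    }
}
```

&lt;p&gt;The point of the decoupling: retention is bounded by cheap remote capacity, while the fast local tier only needs to hold the hot tail.&lt;/p&gt;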

&lt;h2&gt;
  
  
  3. Consumer Proxy Layer
&lt;/h2&gt;

&lt;p&gt;Simplified Client Management: With hundreds of microservices in different languages, Uber built a proxy layer that standardizes Kafka consumption, handles retries, and manages errors (like poison pill messages) via dead-letter queues (DLQ). This keeps the system robust and easy to maintain.&lt;/p&gt;
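&lt;p&gt;The retry-then-dead-letter behavior such a proxy provides can be sketched in a few lines of plain Java (illustrative only; Uber’s real proxy is a separate service with far richer semantics):&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy retry-with-DLQ wrapper: try a handler a bounded number of times,
// then park the message in a dead-letter list instead of blocking the stream.
public class DlqProxy {
    private final int maxAttempts;
    private final List<String> deadLetters = new ArrayList<>();

    public DlqProxy(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    public boolean deliver(String message, Consumer<String> handler) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                handler.accept(message);
                return true;                 // processed successfully
            } catch (RuntimeException e) {
                // swallow and retry; a real proxy would also back off here
            }
        }
        deadLetters.add(message);            // poison pill: dead-letter it
        return false;
    }

    public List<String> deadLetters() {
        return deadLetters;
    }
}
```

&lt;p&gt;Because a poison pill lands in the DLQ instead of being retried forever, one bad message can’t stall the rest of the partition.&lt;/p&gt;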

&lt;h2&gt;
  
  
  4. Security &amp;amp; Authorization
&lt;/h2&gt;

&lt;p&gt;End-to-End Encryption: Mutual TLS (mTLS) secures all producer-broker and consumer-broker connections. Uber’s internal PKI (uPKI) system manages certificates for both brokers and clients.&lt;/p&gt;

&lt;p&gt;Fine-Grained Access Control: Requests are authorized via Uber’s IAM framework, ensuring only permitted services can produce or consume from specific topics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kafka in Action: Dynamic Pricing
&lt;/h2&gt;

&lt;p&gt;Uber’s surge pricing is a textbook example of Kafka’s power:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Ingestion: Millions of GPS and event messages per second flow from rider and driver apps into Kafka.&lt;/li&gt;
&lt;li&gt;Stream Processing: Tools like Apache Flink consume these streams, analyzing supply and demand in real time.&lt;/li&gt;
&lt;li&gt;Decision Making: Pricing models update fares every few seconds, with results published back to Kafka for downstream systems and user notifications.&lt;/li&gt;
&lt;/ul&gt;
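&lt;p&gt;The decision-making step can be pictured as a tiny pure function over the aggregated counts. This is a deliberately naive sketch with made-up names; Uber’s real pricing models are ML-driven and far more sophisticated:&lt;/p&gt;

```java
// Naive surge sketch: turn demand/supply counts aggregated from the stream
// into a fare multiplier clamped to the range [1.0, 3.0].
public class SurgePricer {
    public static double multiplier(long demand, long supply) {
        if (supply <= 0) {
            return 3.0;                       // no available drivers: cap out
        }
        double ratio = (double) demand / supply;
        return Math.max(1.0, Math.min(3.0, ratio));
    }
}
```

&lt;p&gt;A stream processor would evaluate something like this per geographic zone every few seconds and publish the result back to a Kafka topic.&lt;/p&gt;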

&lt;h2&gt;
  
  
  Benefits Realized
&lt;/h2&gt;

&lt;p&gt;Real-Time Responsiveness: Kafka’s low latency enables Uber to match riders and drivers and adjust prices instantly.&lt;/p&gt;

&lt;p&gt;Reliability: Features like partitioning, replication, and DLQ ensure data is never lost and the system remains operational even during failures.&lt;/p&gt;

&lt;p&gt;Operational Efficiency: Tiered storage and federated clusters keep costs manageable while supporting massive scale.&lt;/p&gt;

&lt;p&gt;Security: End-to-end encryption and strict authorization protect sensitive data and maintain user trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Uber’s Kafka architecture is a masterclass in building a real-time, resilient, and scalable data backbone. Through innovations like federated clusters, tiered storage, consumer proxies, and custom replication, Uber has pushed Kafka to its limits—enabling everything from seamless ride matching to dynamic pricing and global business continuity. For any organization looking to build real-time, data-driven applications at scale, Uber’s Kafka journey offers invaluable lessons in both technology and strategy.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Apache Kafka and Spring Boot: A Simple Example</title>
      <dc:creator>Nanditha Vuppunuthula</dc:creator>
      <pubDate>Fri, 13 Jun 2025 18:29:47 +0000</pubDate>
      <link>https://dev.to/nanditha_vuppunuthula_d09/apache-kafka-and-spring-boot-a-simple-example-5e05</link>
      <guid>https://dev.to/nanditha_vuppunuthula_d09/apache-kafka-and-spring-boot-a-simple-example-5e05</guid>
      <description>&lt;p&gt;Part 1: Install and Run Kafka on Windows&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Java 8+&lt;/li&gt;
&lt;li&gt;Kafka 3.x (includes Zookeeper)&lt;/li&gt;
&lt;li&gt;A terminal like Command Prompt, Git Bash, or PowerShell&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Download Kafka
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to: &lt;a href="https://kafka.apache.org/downloads" rel="noopener noreferrer"&gt;https://kafka.apache.org/downloads&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Choose a binary (e.g., Kafka 3.6.0 with Scala 2.13)&lt;/li&gt;
&lt;li&gt;Extract it to C:\kafka&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 2: Start Zookeeper
&lt;/h2&gt;

&lt;p&gt;Kafka uses Zookeeper for managing brokers. In a terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd C:\kafka&lt;br&gt;
bin\windows\zookeeper-server-start.bat config\zookeeper.properties&lt;/code&gt;&lt;br&gt;
Keep this terminal open.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Start Kafka Server
&lt;/h2&gt;

&lt;p&gt;Open a new terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd C:\kafka&lt;br&gt;
bin\windows\kafka-server-start.bat config\server.properties&lt;/code&gt;&lt;br&gt;
Kafka is now running on localhost:9092.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 2: Kafka Commands
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Create a Topic
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;bin\windows\kafka-topics.bat --create --topic test-topic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Check the Topic
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;bin\windows\kafka-topics.bat --list --bootstrap-server localhost:9092&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 3: Spring Boot Kafka Project
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Create a Spring Boot Project
&lt;/h3&gt;

&lt;p&gt;You can use Spring Initializr with the following settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dependencies: Spring Web, Spring for Apache Kafka&lt;/li&gt;
&lt;li&gt;Name: kafka-demo&lt;/li&gt;
&lt;li&gt;Package: com.example.kafkademo&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;src/
└── main/
    ├── java/com/example/kafkademo/
    │   ├── KafkaDemoApplication.java
    │   ├── config/KafkaConfig.java
    │   ├── controller/MessageController.java
    │   ├── service/KafkaProducerService.java
    │   └── listener/KafkaConsumerListener.java
    └── resources/
        └── application.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  🛠 application.yml
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: my-group
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  KafkaProducerService.java
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class KafkaProducerService {
    @Autowired
    private KafkaTemplate&amp;lt;String, String&amp;gt; kafkaTemplate;

    // Publish the message to the test-topic we created earlier
    public void sendMessage(String message) {
        kafkaTemplate.send("test-topic", message);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  KafkaConsumerListener.java
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumerListener {
    // Invoked for every record that arrives on test-topic
    @KafkaListener(topics = "test-topic", groupId = "my-group")
    public void listen(String message) {
        System.out.println("Received Message: " + message);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  MessageController.java
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/messages")
public class MessageController {

    @Autowired
    private KafkaProducerService producerService;

    // POST the request body as-is to Kafka
    @PostMapping
    public ResponseEntity&amp;lt;String&amp;gt; sendMessage(@RequestBody String message) {
        producerService.sendMessage(message);
        return ResponseEntity.ok("Message sent to Kafka");
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Main Application
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class KafkaDemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(KafkaDemoApplication.class, args);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Part 4: Run and Test
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Run the App
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;./mvnw spring-boot:run&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Test with Postman or Curl
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;curl -X POST http://localhost:8080/api/messages -H "Content-Type: text/plain" -d "Hello Kafka!"&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Output
&lt;/h2&gt;

&lt;p&gt;Your consumer will show:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Received Message: Hello Kafka!&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 5: Clean Up
&lt;/h2&gt;

&lt;p&gt;Stop the Spring Boot app with Ctrl+C.&lt;/p&gt;

&lt;p&gt;Stop Kafka and Zookeeper with Ctrl+C in each of their terminals.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You’ve just created a complete Kafka setup with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A running Kafka instance on Windows&lt;/li&gt;
&lt;li&gt;A Spring Boot REST API to send messages&lt;/li&gt;
&lt;li&gt;A Kafka listener to consume them&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kafka</category>
      <category>springboot</category>
    </item>
    <item>
      <title>Kafka MQ</title>
      <dc:creator>Nanditha Vuppunuthula</dc:creator>
      <pubDate>Thu, 12 Jun 2025 18:28:31 +0000</pubDate>
      <link>https://dev.to/nanditha_vuppunuthula_d09/kafka-mq-304d</link>
      <guid>https://dev.to/nanditha_vuppunuthula_d09/kafka-mq-304d</guid>
      <description>&lt;h2&gt;
  
  
  What is Kafka?
&lt;/h2&gt;

&lt;p&gt;Kafka is a publish/subscribe (pub/sub) messaging system that provides data streaming capabilities while also taking advantage of distributed computing.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a pub/sub messaging system?
&lt;/h2&gt;

&lt;p&gt;A pub/sub messaging system contains two components that relay some form of data or information between each other. One component publishes data while the other component subscribes to the publisher to receive the published data.&lt;/p&gt;

&lt;p&gt;Kafka follows this pattern with its own set of components and features.&lt;/p&gt;
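&lt;p&gt;Stripped of everything Kafka adds on top (persistence, partitioning, consumer groups), the bare pub/sub pattern looks like this in plain Java (illustrative names, not Kafka’s API):&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Bare-bones pub/sub: a topic fans each published message out to every
// subscriber; publishers and subscribers never reference each other.
public class MiniTopic {
    private final List<Consumer<String>> subscribers = new ArrayList<>();

    public void subscribe(Consumer<String> subscriber) {
        subscribers.add(subscriber);
    }

    public void publish(String message) {
        for (Consumer<String> subscriber : subscribers) {
            subscriber.accept(message);       // deliver to each subscriber
        }
    }
}
```

&lt;p&gt;Note the decoupling: the publisher only knows about the topic, never about who is listening.&lt;/p&gt;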

&lt;h2&gt;
  
  
  Producers
&lt;/h2&gt;

&lt;p&gt;The first component in a pub/sub messaging system is the publisher which is referred to as a Producer in Kafka. The producer is a data source that publishes or produces a message into Kafka. One of the great features of Kafka is that it is data type independent. This means that Kafka does not care about what type of data is being produced, whether it’s the GPS signal of a car, application metrics from front-end servers, or even images!&lt;/p&gt;

&lt;h2&gt;
  
  
  Consumers
&lt;/h2&gt;

&lt;p&gt;The second component in a pub/sub messaging system is the subscriber, which is referred to as a Consumer in Kafka. The consumer can subscribe or listen to a data stream and consume messages from that stream while having no relationship or knowledge about the producers.&lt;/p&gt;

&lt;p&gt;Consumers can subscribe to multiple streams of data regardless of the type of data being consumed. In other words, you can have a single application that takes in data from as many different sources as you’d like. Kafka makes it easy to access the data you need while leaving the processing steps entirely in your control.&lt;/p&gt;

&lt;h2&gt;
  
  
  High-level Architecture
&lt;/h2&gt;

&lt;p&gt;Now that you know where messages come from (producers) and how messages can be retrieved (consumers), let’s discuss what happens in between.&lt;/p&gt;

&lt;p&gt;Let’s imagine a simple Kafka flow with three producers and two consumers. Each producer must specify a destination for its messages, and each consumer must specify where to consume from. This middle ground between producer and consumer, where the Kafka message is stored, is called a Topic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Topics, Partitions, and Offsets
&lt;/h2&gt;

&lt;p&gt;Topics can be thought of as a table in a database, where producers can write to and consumers can read from. Each topic contains Partitions, which are essentially commit logs that append Kafka messages as they arrive. To identify messages, partitions use an auto-incrementing integer called an Offset, which is unique within a partition.&lt;/p&gt;

&lt;p&gt;Offsets give consumers the flexibility to read messages when and from where they want; this is done by committing the offset. A commit from a consumer is like checking items off a list: once a message has been consumed, the commit tells Kafka to mark that offset as processed for that consumer.&lt;/p&gt;

&lt;p&gt;As a consumer, you have the ability to read a partition from a specified offset or from the last committed message. How can this be useful? Consider an application that only receives data every few hours. In this case, keeping the application continuously running and waiting for messages can be very expensive. By reading from the last committed offset, you could instead have the application wake up periodically, consume all new messages, and commit the offset of the latest one. This can significantly reduce costs and resource usage.&lt;/p&gt;
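&lt;p&gt;The append/commit mechanics described above fit in a small sketch (plain Java, illustrative only, not Kafka’s client API): messages get auto-incrementing offsets, and a consumer resumes from the last committed position:&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;

// Toy partition: an append-only log with auto-incrementing offsets and a
// committed position, so a consumer can resume exactly where it left off.
public class MiniPartition {
    private final List<String> log = new ArrayList<>();
    private int committed = 0;          // offset of the next unread message

    public int append(String message) {
        log.add(message);
        return log.size() - 1;          // this message's offset
    }

    // Read everything since the last commit, then commit the new position.
    public List<String> pollAndCommit() {
        List<String> batch = new ArrayList<>(log.subList(committed, log.size()));
        committed = log.size();
        return batch;
    }

    public int committedOffset() {
        return committed;
    }
}
```

&lt;p&gt;Each call to pollAndCommit picks up exactly where the previous one stopped, which is what lets an occasionally-running consumer catch up without re-reading old messages.&lt;/p&gt;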

&lt;h2&gt;
  
  
  Brokers and Clusters
&lt;/h2&gt;

&lt;p&gt;Now you know about producers, consumers, and how messages flow within Kafka, but one of the most important components remains: the Kafka Broker. The broker is what ties the whole system together; it is the Kafka server responsible for handling all communication involving producers, consumers, and even other brokers. Producers rely on the broker to correctly accept and store each incoming message in its appropriate topic. Consumers rely on the broker to handle their fetch and commit requests while consuming from topics.&lt;/p&gt;

&lt;p&gt;A group of brokers is called a Kafka Cluster. One of the biggest perks of using Kafka is its use of distributed computing. A distributed system shares its workload among many computers called nodes. These nodes work together and communicate to complete the work rather than having it all assigned to a single node. When multiple Kafka brokers and clusters deal with large amounts of data, distributed computing saves resources and increases overall performance, making Kafka a desirable choice for big data applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Apache Kafka
&lt;/h2&gt;

&lt;p&gt;There are four key benefits of using Kafka:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reliability: Kafka distributes, replicates, and partitions data. Additionally, Kafka is fault-tolerant.&lt;/li&gt;
&lt;li&gt;Scalability: Kafka’s design allows you to handle enormous volumes of data, and it can scale up without any downtime.&lt;/li&gt;
&lt;li&gt;Durability: Messages are persisted to storage as quickly as possible, so Kafka is durable.&lt;/li&gt;
&lt;li&gt;Performance: Kafka maintains the same level of performance even under extreme loads of data (many terabytes of message data) and can handle up to two million writes per second.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, you can see that Kafka can store large amounts of data with zero downtime and minimal risk of data loss.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disadvantages of Apache Kafka
&lt;/h2&gt;

&lt;p&gt;After discussing the advantages, let’s take a look at the disadvantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Limited flexibility: Kafka doesn’t support rich queries. For example, it’s not possible to filter for specific asset data in messages. (Functions like this are the responsibility of the consumer application reading the messages.) With Kafka, you can simply retrieve messages from a particular offset, ordered as Kafka received them from the producer.&lt;/li&gt;
&lt;li&gt;Not designed for holding historical data: Kafka is great for streaming data, but the design doesn’t encourage storing historical data inside Kafka for long. Additionally, data is duplicated across replicas, which means storage can quickly become expensive for large amounts of data. You should use Kafka as transient storage where data gets consumed as quickly as possible.&lt;/li&gt;
&lt;li&gt;No wildcard topic support: topics are generally selected by explicit name. If you want to consume, for example, both log-2019-01 and log-2019-02, you can’t always use a wildcard like log-2019-*. (Note that the Java consumer can subscribe to topics matching a regular expression via subscribe(Pattern), which covers many wildcard-style use cases.)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The above disadvantages are design limitations intended to improve Kafka’s performance. For some use cases that expect more flexibility, these limitations can constrain an application consuming from Kafka.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Kafka is a great tool for handling and processing data, especially in big data applications. It’s a reliable platform that provides low latency and high throughput through its data-streaming capabilities, along with a rich set of features and services to make your application better.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Mastering Java Streams API: A Modern Way to Process Data</title>
      <dc:creator>Nanditha Vuppunuthula</dc:creator>
      <pubDate>Mon, 02 Jun 2025 18:29:42 +0000</pubDate>
      <link>https://dev.to/nanditha_vuppunuthula_d09/mastering-java-streams-api-a-modern-way-to-process-data-6ld</link>
      <guid>https://dev.to/nanditha_vuppunuthula_d09/mastering-java-streams-api-a-modern-way-to-process-data-6ld</guid>
      <description>&lt;p&gt;If you're tired of writing clunky for-loops to manipulate collections in Java, then the Streams API is your best friend. Introduced in Java 8, the Stream API brings functional programming to Java, making your code more expressive, readable, and concise.&lt;/p&gt;

&lt;p&gt;In this blog post, we’ll break down what streams are, how they work, and explore real-world examples to get you comfortable with this powerful tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Stream?
&lt;/h2&gt;

&lt;p&gt;A Stream in Java represents a sequence of elements supporting sequential and parallel aggregate operations. Think of it as a pipeline of data where you can apply a series of transformations and computations in a fluent, functional style.&lt;/p&gt;

&lt;p&gt;⚠️ Streams don’t store data. They simply convey elements from a source (like a List) through a pipeline of operations.&lt;/p&gt;
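&lt;p&gt;A related gotcha worth seeing once: intermediate operations are lazy, so nothing in the pipeline runs until a terminal operation asks for elements. A quick demonstration (the class and counter names are mine, for illustration):&lt;/p&gt;

```java
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

// Shows stream laziness: map() does not execute until a terminal operation
// (here forEach) pulls elements through the pipeline.
public class LazyDemo {
    public static int[] observe() {
        AtomicInteger mapCalls = new AtomicInteger();
        Stream<String> pipeline = Arrays.asList("a", "b", "c").stream()
            .map(s -> {
                mapCalls.incrementAndGet();   // count each map invocation
                return s.toUpperCase();
            });
        int before = mapCalls.get();          // pipeline built, nothing ran yet
        pipeline.forEach(s -> {});            // terminal op drives execution
        return new int[] { before, mapCalls.get() };
    }
}
```

&lt;p&gt;Building the pipeline costs nothing; all three map calls happen only when forEach runs.&lt;/p&gt;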

&lt;h2&gt;
  
  
  Stream Operations: The Basics
&lt;/h2&gt;

&lt;p&gt;Stream operations are typically chained and categorized into two types:&lt;/p&gt;

&lt;p&gt;Intermediate Operations – Return another stream (e.g., map, filter, sorted)&lt;/p&gt;

&lt;p&gt;Terminal Operations – Produce a result or side-effect (e.g., collect, forEach, reduce)&lt;/p&gt;

&lt;p&gt;Here’s the general syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;streamSource
    .intermediateOperation()
    .intermediateOperation()
    .terminalOperation();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Practical Examples
&lt;/h2&gt;

&lt;h2&gt;
  
  
  1. Filtering a List
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;List&amp;lt;String&amp;gt; names = Arrays.asList("Alice", "Bob", "Charlie", "David");
List&amp;lt;String&amp;gt; filtered = names.stream()
                             .filter(name -&amp;gt; name.startsWith("A"))
                             .collect(Collectors.toList());

System.out.println(filtered); // Output: [Alice]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Mapping Elements (Transforming)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;List&amp;lt;String&amp;gt; lowerCase = Arrays.asList("apple", "banana");
List&amp;lt;String&amp;gt; upperCase = lowerCase.stream()
                                  .map(String::toUpperCase)
                                  .collect(Collectors.toList());

System.out.println(upperCase); // Output: [APPLE, BANANA]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Sorting Elements
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;List&amp;lt;Integer&amp;gt; nums = Arrays.asList(5, 3, 8, 1);
List&amp;lt;Integer&amp;gt; sorted = nums.stream()
                           .sorted()
                           .collect(Collectors.toList());

System.out.println(sorted); // Output: [1, 3, 5, 8]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Reducing a List to a Single Value
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;List&amp;lt;Integer&amp;gt; nums = Arrays.asList(1, 2, 3, 4);
int sum = nums.stream()
              .reduce(0, Integer::sum);

System.out.println(sum); // Output: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Finding the First Element
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Optional&amp;lt;String&amp;gt; first = Stream.of("cat", "dog", "elephant")
                               .findFirst();

first.ifPresent(System.out::println); // Output: cat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. Counting Elements
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;long count = Stream.of("a", "b", "a", "c")
                   .filter(s -&amp;gt; s.equals("a"))
                   .count();

System.out.println(count); // Output: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  7. Working with Custom Objects
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Product {
    String name;
    double price;
    Product(String name, double price) {
        this.name = name; this.price = price;
    }
}

List&amp;lt;Product&amp;gt; products = Arrays.asList(
    new Product("Laptop", 999.99),
    new Product("Phone", 499.99),
    new Product("Tablet", 299.99)
);

List&amp;lt;String&amp;gt; expensiveProducts = products.stream()
    .filter(p -&amp;gt; p.price &amp;gt; 300)
    .map(p -&amp;gt; p.name)
    .collect(Collectors.toList());

System.out.println(expensiveProducts); // Output: [Laptop, Phone]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Bonus: Parallel Streams
&lt;/h2&gt;

&lt;p&gt;Want to speed up operations on large datasets? Use parallelStream (but measure first: for small collections or I/O-bound work, the overhead of splitting and merging can outweigh the gains):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;List&amp;lt;Integer&amp;gt; bigList = IntStream.rangeClosed(1, 1_000_000)
                                 .boxed()
                                 .collect(Collectors.toList());

long sum = bigList.parallelStream()
                  .mapToLong(Integer::longValue)
                  .sum();

System.out.println(sum); // Output: 500000500000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
  </channel>
</rss>
