<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: arih1299</title>
    <description>The latest articles on DEV Community by arih1299 (@arih1299).</description>
    <link>https://dev.to/arih1299</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F226808%2F1e26cb3f-6dd0-47a6-8893-8a1dae79ac28.png</url>
      <title>DEV Community: arih1299</title>
      <link>https://dev.to/arih1299</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arih1299"/>
    <language>en</language>
    <item>
      <title>A Simpler Architecture for Handling High Connection Counts and Throughput</title>
      <dc:creator>arih1299</dc:creator>
      <pubDate>Fri, 22 Oct 2021 13:00:47 +0000</pubDate>
      <link>https://dev.to/solacedevs/a-simpler-architecture-for-handling-high-connection-counts-and-throughput-3mln</link>
      <guid>https://dev.to/solacedevs/a-simpler-architecture-for-handling-high-connection-counts-and-throughput-3mln</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iVOaV5xf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2017/07/DARK_Solace-Says-Enabling-Event-Driven-Microservices.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iVOaV5xf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2017/07/DARK_Solace-Says-Enabling-Event-Driven-Microservices.png" alt=""&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;I stumbled upon a &lt;a href="https://blogs.qiscus.com/blog/2019/04/18/handle-10-million-concurrent-user-connections-5-millions-messages-per-minute-throughput/"&gt;blog post by Qiscus.com&lt;/a&gt; describing the architecture that lets them handle 10 million concurrent connections and a throughput of 5 million messages per minute. This is the high-level Qiscus architecture shared in the blog post, which jibes with what I learned from a talk their CTO Evan Purnama gave in Jakarta a couple of years ago.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://solace.com/wp-content/uploads/2021/10/alernative-architecture-blog-post_pic-01.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1wawehHz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/10/alernative-architecture-blog-post_pic-01.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pretty cool, right?&lt;/p&gt;

&lt;p&gt;In that talk, Evan spoke about the architecture and design decisions, and the post covers some of that information as well. Here are a few of the key points I noted from this architecture.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MQTT was chosen as the fronting layer because it’s lightweight, requires minimal client resources, and supports very specific topics as well as wildcard patterns.&lt;/li&gt;
&lt;li&gt;They opted to decouple the layer that handles the huge incoming traffic and the layer that services the many different consumers of those events for performance and independent scalability reasons.&lt;/li&gt;
&lt;li&gt;Message persistence is required so the system can tolerate failed consumers, letting reconnected consumers pick up from the last message they received, and absorb bursts of traffic, letting downstream apps consume incoming data at whatever pace works for them.&lt;/li&gt;
&lt;li&gt;Kafka was chosen for the consumption layer because of its scalability and replay capability.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Alternative Architecture
&lt;/h2&gt;

&lt;p&gt;Looking at the requirements and the assumptions behind the design decisions, I wondered if I could come up with a different approach. After a little thinking and a lot of procrastination, here’s what I came up with. Naturally, I’m using &lt;a href="https://solace.com/products/platform/"&gt;Solace PubSub+ Platform&lt;/a&gt; as the event platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://solace.com/wp-content/uploads/2021/10/alernative-architecture-blog-post_pic-02.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kWhzp_3X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/10/alernative-architecture-blog-post_pic-02.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2: Alternative architecture with a Solace-enabled event mesh.&lt;/p&gt;

&lt;p&gt;With this alternative architecture, let’s start with the MQTT requirement. &lt;a href="https://solace.com/products/event-broker/"&gt;Solace PubSub+ Event Broker&lt;/a&gt; supports MQTT 3.1 and 5, along with REST and WebSocket, all without any add-ons or proxies. Events coming in via MQTT can be consumed via any of the other protocols/APIs used by the consumers on the right-hand side, such as JMS, AMQP, WebSocket, MQTT, or even a REST webhook mechanism.&lt;/p&gt;
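&lt;p&gt;As a small concrete sketch of the REST side of this multi-protocol story, here is how an event could be published to the broker over plain HTTP using only the Python standard library. The hostname, topic, and payload are illustrative assumptions (9000 is the software broker’s default plaintext REST messaging port), and the snippet only builds the request; actually sending it requires a reachable broker.&lt;/p&gt;

```python
# Sketch: publishing an event to a Solace PubSub+ broker via its REST
# messaging interface. Host, topic, and payload below are assumptions.
from urllib import request

def build_publish_request(host: str, topic: str, payload: bytes) -> request.Request:
    # Solace REST messaging publishes with an HTTP POST to /TOPIC/ followed
    # by the topic string; slashes in the path become topic levels.
    url = f"http://{host}:9000/TOPIC/{topic}"
    return request.Request(url, data=payload, method="POST")

req = build_publish_request("broker.example.com", "chat/room1/user42", b'{"text": "hi"}')
# request.urlopen(req) would deliver it once a broker is reachable.
```

&lt;p&gt;A consumer could then receive that same event over AMQP, JMS, or MQTT without any bridging code in between.&lt;/p&gt;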

&lt;p&gt;Message persistence is taken care of by the guaranteed messaging capability of Solace PubSub+ Event Broker, and &lt;a href="https://www.youtube.com/watch?v=rnbuwnQWt-M"&gt;yes it can do replay&lt;/a&gt; as well. No biggies there. In fact, shock absorption and lossless guaranteed delivery are among the things Solace is known for by our customers globally.&lt;/p&gt;

&lt;p&gt;The key distinction between Qiscus’ architecture and the one I’ve presented here is that there is logically just one fabric for the messaging platform, something called an “&lt;em&gt;&lt;a href="https://solace.com/what-is-an-event-mesh/"&gt;event mesh&lt;/a&gt;&lt;/em&gt;” — like a service mesh, but for events instead of services. With an event mesh, there is no need for applications to move and/or translate the events from the connectivity layer to the processing layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tiered Architecture Working as One
&lt;/h2&gt;

&lt;p&gt;This architecture is still decoupled for performance and scalability, with a layered design separating the connectivity side from the processing/consumption side. The connectivity layer consists of multiple PubSub+ brokers deployed to meet the needed scale as well as location requirements. After all, these brokers can run on most if not all public and private clouds, containers, or virtualization platforms.&lt;/p&gt;

&lt;p&gt;The connectivity layer then streams the events to the processing layers in a smart, efficient, and guaranteed manner, so we don’t lose messages along the way. The processing layer then scales as needed to stream those events to the back-end applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Crux of EDA: Topic Routing
&lt;/h2&gt;

&lt;p&gt;One major thing to note here: these back-end applications have all the flexibility of topic subscriptions with wildcards, just like you see with the MQTT protocol. If you don’t really get what this means, it could be because you’ve been led to believe that &lt;em&gt;topic&lt;/em&gt; creation is required before anything else, that it is expensive, and that you shouldn’t have the luxury or agility of fine-grained topic addressing. In that case, you’re missing out on one of the great things about EDA, and you should really go watch a &lt;a href="https://youtu.be/PP1nNlgERQI"&gt;video on this &lt;em&gt;topic&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0nNgIQl_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/10/alernative-architecture-blog-post_pic-03.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0nNgIQl_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/10/alernative-architecture-blog-post_pic-03.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3: A sneak peek of how topic routing with wildcards works&lt;/p&gt;
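&lt;p&gt;To make the wildcard idea concrete, here is a minimal sketch of MQTT-style subscription matching, where &lt;code&gt;+&lt;/code&gt; matches exactly one topic level and &lt;code&gt;#&lt;/code&gt; matches any remaining levels. The function and topics are illustrative only; real brokers implement much more, and Solace additionally has its own &lt;code&gt;*&lt;/code&gt; and &lt;code&gt;&amp;gt;&lt;/code&gt; wildcard syntax for SMF subscriptions.&lt;/p&gt;

```python
# Minimal sketch of MQTT-style topic matching: '+' matches exactly one
# level, '#' matches any number of trailing levels. Illustrative only.
def matches(subscription: str, topic: str) -> bool:
    sub_levels = subscription.split("/")
    topic_levels = topic.split("/")
    for i, level in enumerate(sub_levels):
        if level == "#":             # multi-level wildcard: match the rest
            return True
        if i >= len(topic_levels):   # topic ran out of levels first
            return False
        if level not in ("+", topic_levels[i]):
            return False
    return len(sub_levels) == len(topic_levels)
```

&lt;p&gt;This is what lets a back-end app subscribe once to something like &lt;code&gt;chat/room1/+&lt;/code&gt; and receive events for every user in that room without any topics being pre-created.&lt;/p&gt;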

&lt;h2&gt;
  
  
  The Event Mesh
&lt;/h2&gt;

&lt;p&gt;An event mesh, as you should know by now, isn’t a single event broker, nor a cluster of brokers acting as one, but a network of several nodes, pairs, or clusters of event brokers working together, much like IP routers work together to form a TCP/IP network.&lt;/p&gt;

&lt;p&gt;In this alternative architecture, there are no “subscribers” moving events from the connectivity layer to the processing layer. That’s because these PubSub+ brokers are linked to form an event mesh. A different application of an event mesh is distributing events simply by leveraging topic routing. This is really useful for multi-site or even hybrid-cloud topologies, instead of relying on cumbersome and expensive cross-site replication.&lt;/p&gt;

&lt;h2&gt;
  
  
  Event Mesh and Internet of Things
&lt;/h2&gt;

&lt;p&gt;The Internet of Things (IoT) is closely associated with the MQTT protocol, but MQTT is used for more than just IoT, as Qiscus’ platform and many other mobile applications and real-time dashboards show.&lt;/p&gt;

&lt;p&gt;An event mesh has a few more tricks up its sleeve for IoT solutions. One is the tiering or layering of brokers to form a tree topology that can support larger connection counts and greater geographic distribution. Another is bidirectional communication, i.e. not just streaming events from devices to back-end servers, but also sending events out to devices or mobile apps as part of command-and-control interactions.&lt;/p&gt;

&lt;p&gt;Our CTO Shawn McAllister produced a fun demo of those capabilities, and I hope you’ll watch what I playfully like to call “&lt;a href="https://video.solace.com/watch/njCd4X1WpGaR1RUVB2cHUW?"&gt;The Architects’ Guide to Honking the Horn&lt;/a&gt;”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zy6qSHEt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://play.vidyard.com/njCd4X1WpGaR1RUVB2cHUW.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zy6qSHEt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://play.vidyard.com/njCd4X1WpGaR1RUVB2cHUW.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance
&lt;/h2&gt;

&lt;p&gt;While I didn’t do a performance test for this architecture, there is a public report on &lt;a href="https://solace.com/products/performance/"&gt;Solace PubSub+ performance numbers&lt;/a&gt; for several combinations of deployment types, payload sizes, delivery modes, and client connection counts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;To recap, check out this summary table for the features and requirements discussed in this post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fEvmHsrz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/10/alternative-architecture-table.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fEvmHsrz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/10/alternative-architecture-table.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What I want to highlight with this alternative architecture is simplicity. Yes, &lt;em&gt;simplicity&lt;/em&gt;. Fewer moving parts is better. Leave event distribution and delivery to the infrastructure, and lean on its support for the many open standard protocols and APIs to avoid rewrites.&lt;/p&gt;

&lt;p&gt;I hope this architecture gives you some ideas for your own architecture and set of challenges. Get in touch to exchange ideas or just have a quick chat on my &lt;a href="https://www.linkedin.com/in/arihermawan/"&gt;LinkedIn&lt;/a&gt;, or post your questions in the &lt;a href="https://solace.community/"&gt;Solace Developer Community&lt;/a&gt;. And lastly, head over to &lt;a href="https://www.solace.dev/"&gt;Solace.Dev&lt;/a&gt; if you are a developer and want to get hands-on with an event mesh!&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://solace.com/blog/a-simpler-architecture-for-handling-high-connection-counts-and-throughput/"&gt;A Simpler Architecture for Handling High Connection Counts and Throughput&lt;/a&gt; appeared first on &lt;a href="https://solace.com"&gt;Solace&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>fordevelopers</category>
    </item>
    <item>
      <title>See How PubSub+ Event Broker Handles Slow Consumers</title>
      <dc:creator>arih1299</dc:creator>
      <pubDate>Wed, 21 Apr 2021 13:10:16 +0000</pubDate>
      <link>https://dev.to/solacedevs/see-how-pubsub-event-broker-handles-slow-consumers-4jja</link>
      <guid>https://dev.to/solacedevs/see-how-pubsub-event-broker-handles-slow-consumers-4jja</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mWxYwuM_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/solace-blog-featured-image_tortoise-gray.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mWxYwuM_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/solace-blog-featured-image_tortoise-gray.jpg" alt=""&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;If you use a message broker as a communication backbone for your enterprise applications and have had issues caused by a few slow-consuming applications, then this is for you. I wrote a quick &lt;a href="https://www.linkedin.com/posts/arihermawan_solace-activity-6775686432825708544-SkEe?lipi=urn%3Ali%3Apage%3Ad_flagship3_publishing_post_edit%3BRFIDp4sASIiNOFv1z258bQ%3D%3D"&gt;LinkedIn post&lt;/a&gt; about this when a friend asked me what’s so different about how Solace PubSub+ deals with slow consumers. I figured the best way to show this is with a quick demo with more than 50 GB of messages queued in the broker due to slow consumption. But I want to show more of that demo here, so I can easily refer back to it should someone ask me the same question again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://solace.com/what-is-an-event-broker/"&gt;Event brokers&lt;/a&gt; are considered infrastructure. We don’t want infrastructure to keep us up at night worrying that we’ll need to force a restart and purge tens of gigabytes of customers’ transactions because some downstream applications slowed down or the traffic just spiked due to some new marketing promotions.&lt;/p&gt;

&lt;p&gt;With this blog post I will explain and show how Solace &lt;a href="https://solace.com/products/event-broker/"&gt;PubSub+ Event Broker&lt;/a&gt; keeps working as expected when a consumer slows down or fails, so other applications connected to the broker are not affected. I will also show that a large number of pending messages does not impact the failover time of Solace PubSub+ Event Broker.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;For this demo, I’m using &lt;a href="https://solace.com/products/event-broker/cloud/"&gt;Solace PubSub+ Cloud&lt;/a&gt; simply because it’s the quickest way to go. It took me just a few clicks, and I only needed to type the broker name. Of course, you can set it up using the software on &lt;a href="https://solace.com/downloads/"&gt;many other platforms&lt;/a&gt; if you wish. The brokers are deployed on AWS and run on m5.xlarge EC2 instances, because the environment I’m using does not yet use the most recent, and highly recommended, Kubernetes-based deployment.&lt;/p&gt;

&lt;p&gt;For the applications, I’m using &lt;a href="https://docs.solace.com/SDKPerf/SDKPerf.htm"&gt;SDKPerf&lt;/a&gt; to simulate the event producers and a few differently paced event consumers, running on two t2.micro instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1xwisqdL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_01-1024x952.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1xwisqdL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_01-1024x952.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1: Create Solace Cloud service&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZpY60z_U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_02-1024x786.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZpY60z_U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_02-1024x786.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2: SDKPerf EC2 instances for publishers and subscribers&lt;/p&gt;

&lt;h2&gt;
  
  
  The Test
&lt;/h2&gt;

&lt;p&gt;Here I’m going to simulate a situation where a broker is serving multiple applications and some of these applications are slow and unable to keep up with their respective incoming traffic. You will be able to see that the broker and the other applications will not be impacted by the slow consumer.&lt;/p&gt;

&lt;p&gt;I created three queues, each having different topic subscriptions. If you are not familiar with this concept, just consider that we have different queues with different incoming traffic. You can also take a look at &lt;a href="https://tutorials.solace.dev/jcsmp/topic-to-queue-mapping/"&gt;a tutorial on this concept&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OC1NKClq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_03-1024x252.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OC1NKClq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_03-1024x252.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3: List of Queues&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b-fMaxMO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_04-1024x252.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b-fMaxMO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_04-1024x252.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 4: The first queue and its subscription&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--27rmikXd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_05-1024x252.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--27rmikXd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_05-1024x252.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 5: The second queue and its subscription for slow consumer test&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--257OU0KA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_06-1024x252.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--257OU0KA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_06-1024x252.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 6: The third queue and its subscriptions for a very slow consumer test&lt;/p&gt;

&lt;p&gt;I have ten producers sending up to 10,000 messages per second with a payload size of 1 KB to four different topics, which are then spooled, or written, into the three queues created earlier. This is because we want to test the spooling, or storing, of messages while the designated consumer applications can’t keep up with the pace of the incoming messages.&lt;/p&gt;

&lt;p&gt;Under normal conditions, none of the three queues hold many unprocessed messages at any time, because all the consumer applications are able to keep up with the incoming traffic. In this case, the first test queue steadily flows messages in and out to its consumer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fvbHYFop--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_07-1024x252.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fvbHYFop--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_07-1024x252.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 7: Queue 1 with a consumer fast enough to keep up with incoming traffic&lt;/p&gt;

&lt;p&gt;The second queue consumer is a rather slow consumer that is only able to process around 800 messages per second, which is around half of the incoming traffic rate. This means there are a lot of messages waiting to be consumed in this queue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3M5jwdkg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_08-1024x252.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3M5jwdkg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_08-1024x252.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 8: Queue 2 with a consumer capable of processing only half of the incoming traffic rate&lt;/p&gt;

&lt;p&gt;The third queue has a much slower consumer, which can only take one message every 10 seconds. Consider this a stand-in for a normally healthy application that suddenly has a problem and is almost unable to process any messages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GfK52f6U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_09-1024x252.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GfK52f6U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_09-1024x252.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 9: Queue 3 with a very slow consumer&lt;/p&gt;
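&lt;p&gt;For a rough feel of what the broker has to absorb in this scenario, here is a back-of-envelope sketch of backlog growth. The ~1,600 msg/s incoming rate for queue 2 is my own reading of “around half” above; the numbers are illustrative, not measured.&lt;/p&gt;

```python
# Back-of-envelope sketch: how fast a queue backlog grows when the
# consumer is slower than the incoming traffic. Rates are illustrative.
def backlog_bytes(in_rate: float, out_rate: float, msg_size: int, seconds: float) -> float:
    """Bytes accumulated in the queue after `seconds` of sustained traffic."""
    return max(0.0, in_rate - out_rate) * msg_size * seconds

# Queue 2: ~1,600 msg/s in, ~800 msg/s out, 1 KB payloads
per_hour = backlog_bytes(1600, 800, 1024, 3600)
print(f"{per_hour / 1e9:.1f} GB per hour")  # roughly 2.9 GB per hour
```

&lt;p&gt;At that pace, a multi-gigabyte spool builds up within hours of sustained traffic, which is exactly the kind of pressure the broker is expected to absorb without degrading the healthy consumers.&lt;/p&gt;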

&lt;p&gt;As you can see here, the fast queue consumer is not affected by the other slow consumers. You may see some network limitations depending on your environment setup, but the broker maintains a good service level for the healthy consumers while continuing to spool messages for the slow consumers to process later on.&lt;/p&gt;

&lt;p&gt;Note: This test used a single t2.micro AWS EC2 instance to run all three consumer applications, so they are likely to hit the network bandwidth capacity of the instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The key point shown here is that a healthy consumer is not affected by other slow consumers and the messages piling up in the other queues. Solace PubSub+ Event Broker maintains service for the other clients while isolating the slow consumers. Compare this to other brokers that slow down entirely, or even crash, under the building pressure and large spool size. If your broker slows down when messages start piling up due to slow consumers, give this free broker a try.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus Point: Safe, Predictable Failover
&lt;/h2&gt;

&lt;p&gt;When the pile gets way too big, a broker may struggle even to fail over to its standby/backup broker, because the standby needs to ‘load’ the pending messages before it can start taking new traffic. This usually forces administrators to reset the broker to zero pending messages to allow an instant start-up and resume operations. The nice thing about Solace PubSub+ Event Broker is that its failover time stays flat regardless of how big the pile of pending messages is. I’ll demonstrate that by stopping the primary broker’s EC2 instance and seeing what happens.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eI2fQGcf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_10-1024x190.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eI2fQGcf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_10-1024x190.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 10: The SDKPerf client lost connection after the primary broker shut down.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--93UDO-t4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_11-1024x190.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--93UDO-t4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_11-1024x190.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 11: The SDKPerf client regained its connection on the 9th attempt at a 1-second retry interval.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i08-D1mO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_12-1024x190.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i08-D1mO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2021/04/slow-consumer-post_12-1024x190.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 12: Immediately back to business with 50 GB pending messages&lt;/p&gt;

&lt;p&gt;Above is the log from the SDKPerf client, which was able to resume operations after its 9th reconnect attempt at a 1-second interval. It immediately has access to the same 50 GB of messages pending earlier. This is simply because the standby broker keeps its own copy of the data and is always ready to take over, without having to read the primary broker’s data when it’s time to assume activity.&lt;/p&gt;
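&lt;p&gt;On the client side, this behavior is just a fixed-interval reconnect loop. Here is a minimal sketch of the idea; the function name, policy, and parameters are illustrative, not SDKPerf’s actual implementation.&lt;/p&gt;

```python
import time

# Sketch of a fixed-interval reconnect loop, similar in spirit to the
# SDKPerf client behavior described above. Names/policy are illustrative.
def connect_with_retry(connect, attempts=20, interval=1.0, sleep=time.sleep):
    for attempt in range(1, attempts + 1):
        try:
            return attempt, connect()  # success: report which attempt worked
        except ConnectionError:
            sleep(interval)            # wait a fixed interval, then retry
    raise ConnectionError("broker unreachable after all attempts")
```

&lt;p&gt;With the standby broker taking over within seconds, a loop like this reconnects on an early attempt and the client simply carries on against the same queue state.&lt;/p&gt;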

&lt;p&gt;I have also recorded this demo as a video for an easier and more complete view of the action.&lt;/p&gt;

&lt;p&gt;If you like it, give it a try from &lt;a href="https://solace.dev/"&gt;solace.dev&lt;/a&gt; and head over to the &lt;a href="https://solace.community/"&gt;Solace Developer Community&lt;/a&gt; for any help needed or to share your quick wins!&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://solace.com/blog/pubsub-event-broker-slow-consumers/"&gt;See How PubSub+ Event Broker Handles Slow Consumers&lt;/a&gt; appeared first on &lt;a href="https://solace.com"&gt;Solace&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>forarchitects</category>
      <category>fordevelopers</category>
    </item>
    <item>
      <title>Event-Driven Logging with Elastic Stack</title>
      <dc:creator>arih1299</dc:creator>
      <pubDate>Wed, 02 Dec 2020 14:00:30 +0000</pubDate>
      <link>https://dev.to/solacedevs/event-driven-logging-with-elastic-stack-4jfb</link>
      <guid>https://dev.to/solacedevs/event-driven-logging-with-elastic-stack-4jfb</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ykpZsAV9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/11/solace-blog-featured-image_elastic-green.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ykpZsAV9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/11/solace-blog-featured-image_elastic-green.jpg" alt=""&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;As part of becoming event-driven, you need to event-enable your logging infrastructure to get the most benefit out of doing so. I’ve written blog posts about &lt;a href="https://dev.to/solacedevs/take-your-distributed-system-to-the-next-level-with-event-driven-logging-38e6"&gt;how Solace helps with logging&lt;/a&gt; and &lt;a href="https://dev.to/solacedevs/integrating-solace-pubsub-with-logstash-2jd9"&gt;how to integrate Logstash with Solace&lt;/a&gt;. These posts assume that you aren’t already using something like Elastic stack. But what if you are? That’s the question I would like to answer with this post by presenting an example where the applications are deployed in a container platform that provides built-in log aggregator and search stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Case with Container Platform
&lt;/h2&gt;

&lt;p&gt;Let’s look at a specific example of an architecture where application logs are streamed into a centralized log system. In this example, microservices are deployed as containers, whether in a plain Docker environment or a Kubernetes environment such as Red Hat OpenShift Container Platform (OCP), Google Kubernetes Engine (GKE), or many other distributions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes Cluster-level Logging
&lt;/h3&gt;

&lt;p&gt;For the Kubernetes platform, some distributions provide a native solution for cluster-level logging, but you can always build that capability on your own. You can use a node-level logging agent that runs on every node or a sidecar container for each application. Please refer to &lt;a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/"&gt;this documentation&lt;/a&gt; for more information on Kubernetes cluster-level logging.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--saLcFZYt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/11/event-driven-logging-elastic-stack_01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--saLcFZYt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/11/event-driven-logging-elastic-stack_01.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1: Simplified Illustration of Kubernetes Cluster-level Logging&lt;/p&gt;

&lt;p&gt;The illustration above uses &lt;em&gt;fluentd&lt;/em&gt; as the logging agent that runs on all worker nodes and forwards those logs to an Elastic stack. So they are streaming logs and have a centralized log server, but are they really event-driven?&lt;/p&gt;

&lt;h3&gt;
  
  
  The Missing Piece
&lt;/h3&gt;

&lt;p&gt;It’s nice for the IT administrators to be able to query against the complete set of logs from their entire Kubernetes cluster from a single web page. It is also nice that application developers don’t need to worry about developing their application to write and send logs in a uniform and centralized manner. But the log data is now accessible only from the log dashboard. It is easy for people to read, but it’s not accessible to any other systems in real time.&lt;/p&gt;

&lt;p&gt;The Elastic stack provides a nice tool for technical folks to dig into the logs to find errors or to investigate incidents. It is also very useful to track a specific transaction when there’s a customer complaint. But the customer service officers don’t usually have access to this system, and even if they do, it might not be that useful for them with all the gory details shown in the logs.&lt;/p&gt;

&lt;p&gt;What if those logs could be streamed in real time to the whole enterprise in a way that’s useful to each recipient? What if specific systems such as customer service could subscribe to specific types of events? Any new system developed in the future could easily subscribe to the stream of logs and get the relevant information in real time, including ad-hoc applications. Even better, it can be done with many languages and open standard protocols.&lt;/p&gt;

&lt;h2&gt;
  
  
  Event-Enabling Kubernetes Cluster-level Logging
&lt;/h2&gt;

&lt;p&gt;So now you have an Elastic stack as part of your cluster which has all of your cluster logs. Our idea of event-enabling the logging is to make these log events accessible by other systems in your enterprise, not just to the Elasticsearch log store. And what better way to achieve that than publishing the log events into Solace PubSub+ Event Broker, right?&lt;/p&gt;

&lt;p&gt;Publishing events to Solace PubSub+ Event Broker can be done with the many supported native and open standard messaging APIs and protocols, and also with REST. If we’re looking at something like a webhook, there is a &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/actions-webhook.html"&gt;Webhook action feature&lt;/a&gt; provided by Elasticsearch. But this is not the best way to go for two reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The ‘publishing’ will be done by Elasticsearch, but I think it’s better to let &lt;em&gt;fluentd&lt;/em&gt; do the publishing, as the log collector that has the events firsthand.&lt;/li&gt;
&lt;li&gt;Elasticsearch &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/actions.html"&gt;Actions&lt;/a&gt; are executed only if the specified conditions set on the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/how-watcher-works.html"&gt;Watchers&lt;/a&gt; are met. My understanding is that the Watcher feature is not available for free.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, what’s the (better) alternative? I’d say the events should be published directly from the log collector instead of from the log store. So I looked at &lt;em&gt;fluentd&lt;/em&gt;. I decided to use MQTT to publish events out to Solace PubSub+, and I’ll be using &lt;a href="https://github.com/toyokazu/fluent-plugin-mqtt-io/"&gt;this MQTT plugin&lt;/a&gt; for this blog. Do note that some distributions’ native cluster-level logging features might not support changes to their logging collector component.&lt;/p&gt;
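&lt;p&gt;To give a feel for what the plugin does under the hood, here is a minimal Python sketch of publishing one log event to the broker over MQTT with the paho-mqtt client. The host, topic, and JSON envelope are illustrative placeholders, not the plugin’s actual implementation.&lt;/p&gt;

```python
import json

BROKER_HOST = "localhost"  # placeholder: your PubSub+ MQTT endpoint
BROKER_PORT = 1883

def build_payload(container_id, level, message):
    """Wrap a raw log line in a small JSON envelope, roughly as fluentd would."""
    return json.dumps({"container_id": container_id, "level": level, "log": message})

def publish_log(topic, payload):
    """Publish a single log event; QoS 1 gives at-least-once delivery."""
    import paho.mqtt.client as mqtt  # imported lazily; requires the paho-mqtt package
    client = mqtt.Client()
    client.connect(BROKER_HOST, BROKER_PORT)
    client.publish(topic, payload, qos=1)
    client.disconnect()

if __name__ == "__main__":
    publish_log("acme/clusterx/appz/abc123",
                build_payload("abc123", "ERROR", "payment service timed out"))
```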

&lt;p&gt;I prefer the idea of having your own dedicated Elastic stack outside of the Kubernetes cluster. This is usually done by customers who have bought their own enterprise license of the Elastic stack, need much bigger capacity, or simply have an existing stack. We then only need to publish events to Solace PubSub+ from this single external &lt;em&gt;fluentd&lt;/em&gt;. Take a look at the following illustration of this approach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vCq-7pwG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/11/event-driven-logging-elastic-stack_02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vCq-7pwG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/11/event-driven-logging-elastic-stack_02.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2: Event Enabling Kubernetes Cluster-level Logging&lt;/p&gt;

&lt;h3&gt;
  
  
  Topic Routing at the Heart of Event-Driven Architecture
&lt;/h3&gt;

&lt;p&gt;We can’t talk about event-driven architecture without talking about &lt;em&gt;topics&lt;/em&gt;. Topic routing is at the heart of what Solace PubSub+ Event Broker does. It enables an efficient and elegant way of routing events between systems, and it is the key capability we need to build a dynamic network of brokers, what we call an &lt;a href="https://solace.com/what-is-an-event-mesh/"&gt;&lt;em&gt;event mesh&lt;/em&gt;&lt;/a&gt;. Please check out &lt;a href="https://docs.solace.com/PubSub-Basics/Understanding-Topics.htm"&gt;this documentation&lt;/a&gt; for more on &lt;em&gt;topics&lt;/em&gt; in Solace PubSub+.&lt;/p&gt;

&lt;p&gt;Now, we will publish logs to Solace PubSub+ Event Broker with a specific topic. Note that I say ‘with’, not ‘to’, when I talk about topics in Solace PubSub+. That’s because these topics are metadata, part of the messages we send to the broker; they’re not something we need to create beforehand. This is key to having a very flexible and performant event broker, and I invite you to take a look at my buddy Aaron’s video linked in the documentation referenced above.&lt;/p&gt;

&lt;p&gt;To publish to a specific topic, we can set it up in the log collector itself or later in the MQTT output plugin. Or better yet, you can customize both to get your best-fit solution. The idea here is to make sure we have a good design for the topic hierarchy to gain the most benefit from these events. Take a look at this blog post on &lt;a href="https://solace.com/blog/topic-hierarchy-best-practices/"&gt;Solace’s best practices for topic architecture&lt;/a&gt; or these &lt;a href="https://docs.solace.com/Best-Practices/Topic-Architecture-Best-Practices.htm"&gt;technical documents&lt;/a&gt; for a deeper dive.&lt;/p&gt;
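&lt;p&gt;A simple way to keep the hierarchy consistent is to build topics from log metadata with a small helper, ordered from the most general level to the most specific. The levels below (org/cluster/app/severity/container) are just one illustrative layout, not a prescribed standard.&lt;/p&gt;

```python
def log_topic(org, cluster, app, severity, container_id):
    """Build a routable topic from log metadata, most general level first."""
    parts = [org, cluster, app, severity.lower(), container_id]
    for part in parts:
        if part == "" or "/" in part:
            raise ValueError("topic levels must be non-empty and not contain '/'")
    return "/".join(parts)
```

&lt;p&gt;For example, &lt;code&gt;log_topic("acme", "clusterx", "appz", "ERROR", "abc123")&lt;/code&gt; yields &lt;code&gt;acme/clusterx/appz/error/abc123&lt;/code&gt;, so subscribers can filter on any level, including severity.&lt;/p&gt;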

&lt;p&gt;We’ve talked about Docker containers and the Kubernetes platform, so I’ll quickly point to how we can customize the &lt;em&gt;tag&lt;/em&gt; in the Docker log driver, documented &lt;a href="https://docs.docker.com/config/containers/logging/log_tags/"&gt;here&lt;/a&gt;. You can specify a static topic hierarchy as well as special template markup for a particular container. For Kubernetes, you’ll have to look at your distribution’s documentation, but I would assume it is very similar to Docker.&lt;/p&gt;

&lt;p&gt;I have also linked the &lt;a href="https://github.com/toyokazu/fluent-plugin-mqtt-io"&gt;MQTT output plugin&lt;/a&gt; I am using for this blog, and here’s the &lt;a href="https://github.com/arih1299/fluentd-mqtt"&gt;sample project&lt;/a&gt; that runs a &lt;em&gt;fluentd&lt;/em&gt; container with the MQTT output plugin configured to publish to a Solace PubSub+ Event Broker running as another Docker container. Pay attention to the &lt;a href="https://github.com/arih1299/fluentd-mqtt/blob/master/fluent.conf#L25-L26"&gt;&lt;em&gt;topic_rewrite_pattern&lt;/em&gt; and &lt;em&gt;topic_rewrite_replacement&lt;/em&gt;&lt;/a&gt; options if you want to publish to a different topic.&lt;/p&gt;

&lt;p&gt;The topic hierarchy enables the event subscribers to subscribe to specific, filtered events based on their needs. And it’s done simply with the topic routing, no logic or filtering effort on the subscribers’ end. For example, if you have multiple clusters and multiple applications, a subscriber can subscribe to only a specific cluster or only to specific applications regardless of where the cluster is.&lt;/p&gt;

&lt;p&gt;In the sample project, we are using a sample tag of &lt;code&gt;acme/clusterx/appz/&amp;lt;container id&amp;gt;&lt;/code&gt;. In this case, a subscriber can subscribe to &lt;code&gt;acme/clusterx/&amp;gt;&lt;/code&gt; to get all log events from cluster X, or to &lt;code&gt;acme/*/appz/&amp;gt;&lt;/code&gt; to get app Z log events from any cluster. You can also subscribe to a specific log level if that is part of your topic hierarchy. All without having to predefine any topic in the broker! And it only gets better when we use it with an &lt;a href="https://solace.com/what-is-an-event-mesh/"&gt;event mesh&lt;/a&gt;!&lt;/p&gt;
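&lt;p&gt;One detail to keep in mind: the &lt;code&gt;&amp;gt;&lt;/code&gt; and &lt;code&gt;*&lt;/code&gt; wildcards above are Solace SMF syntax, while an MQTT subscriber expresses the same whole-level filters with &lt;code&gt;#&lt;/code&gt; and &lt;code&gt;+&lt;/code&gt;. This sketch translates the simple whole-level cases (it does not cover Solace’s prefix form such as &lt;code&gt;app*&lt;/code&gt;, which has no MQTT equivalent), and the broker host is a placeholder.&lt;/p&gt;

```python
def solace_to_mqtt(subscription):
    """Translate whole-level Solace wildcards to MQTT wildcard syntax."""
    mqtt_levels = []
    for level in subscription.split("/"):
        if level == ">":
            mqtt_levels.append("#")  # matches all remaining levels; must be last
        elif level == "*":
            mqtt_levels.append("+")  # matches exactly one level
        else:
            mqtt_levels.append(level)
    return "/".join(mqtt_levels)

def subscribe_logs(subscription, on_message):
    """Subscribe to a filtered stream of log events over MQTT."""
    import paho.mqtt.client as mqtt  # imported lazily; requires the paho-mqtt package
    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)  # placeholder: your PubSub+ MQTT endpoint
    client.subscribe(solace_to_mqtt(subscription), qos=1)
    client.loop_forever()
```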

&lt;p&gt;An event mesh is a configurable and dynamic infrastructure layer for distributing events among decoupled applications, cloud services and devices. It enables event communications to be governed, flexible, reliable and fast. An event mesh is created and enabled through a network of interconnected event brokers. In other words, an event mesh is an architecture layer that allows events from one application to be dynamically routed and received by any other application no matter where these applications are deployed (no cloud, private cloud, public cloud). This layer is composed of a network of &lt;a href="https://solace.com/what-is-an-event-broker/"&gt;event brokers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That means topic routing happens dynamically across multiple sites and environments, on-premises or in the cloud, with many stacks and open-standard APIs and protocols!&lt;/p&gt;

&lt;h2&gt;
  
  
  The Event Mesh
&lt;/h2&gt;

&lt;p&gt;Once we event-enable our Kubernetes cluster logging by publishing events into Solace PubSub+, the log events are accessible in real time to the whole enterprise, as well as to external systems across cloud environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://solace.com/wp-content/uploads/2020/11/elastic-stack-logging-1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tgVGrL31--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/11/elastic-stack-logging-1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first system to benefit is the customer service application, which can now consume the log events in a guaranteed fashion, within its own capacity. The broker acts as a shock absorber during sudden increases in incoming log event traffic, and keeps the events safely stored if the application is down or disconnected for some time.&lt;/p&gt;

&lt;p&gt;Since the events are now available on the event mesh, we can expand and have many more systems subscribe to these events. They can use the Solace native API (SMF), JMS for legacy Java enterprise apps, webhooks, or even Kafka connectors. And they can subscribe from the same data center or from somewhere in the cloud.&lt;/p&gt;

&lt;p&gt;The idea of having an event mesh is that these systems can dynamically subscribe to events from anywhere, and the mesh will route the events across. No fixed point-to-point stitching, and no wasted bandwidth synchronizing events across multi-cloud environments when no one is subscribing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I’ve covered the advantages as well as the shortcomings of a centralized logging system such as Kubernetes cluster-level logging with the Elastic stack, and ways to event-enable it to realize the greater potential of your enterprise logs. The event mesh capability uniquely provided by Solace PubSub+ extends this even further. Hopefully this idea sparks your interest and curiosity to go event-enable your own systems and reap the promises of event-driven architecture! What are you waiting for? Head over to &lt;a href="https://www.solace.dev/"&gt;Solace Developers&lt;/a&gt; to start building your own event-driven logging with Solace PubSub+ and Elastic!&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://solace.com/blog/event-driven-logging-with-elastic-stack/"&gt;Event-Driven Logging with Elastic Stack&lt;/a&gt; appeared first on &lt;a href="https://solace.com"&gt;Solace&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>fordevelopers</category>
    </item>
    <item>
      <title>Take Your Distributed System to the Next Level with Event-Driven Logging</title>
      <dc:creator>arih1299</dc:creator>
      <pubDate>Fri, 10 Jul 2020 14:39:56 +0000</pubDate>
      <link>https://dev.to/solacedevs/take-your-distributed-system-to-the-next-level-with-event-driven-logging-38e6</link>
      <guid>https://dev.to/solacedevs/take-your-distributed-system-to-the-next-level-with-event-driven-logging-38e6</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FjQSqEz8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/07/solace-blog-featured-image-_logging-green.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FjQSqEz8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/07/solace-blog-featured-image-_logging-green.jpg" alt=""&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Logging is an important aspect of a distributed system, but applications handle it very differently — some only have file-based logging, others have very basic logging, and some have a proper structure with log rolling and rotation set up. Many have embarked on the journey of storing those logs in a database of some sort, to hold and query the mountains of data flooding in from the many systems in their enterprises. While this approach helps with searching the contents of the logs, it also presents some challenges: the first is writing those large volumes of logs into the database, and the second is the amount of time required to query the large volume of data in the database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PlnoDy-R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/07/logging-blog-post_pic-01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PlnoDy-R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/07/logging-blog-post_pic-01.png" alt="event-driven architecture logging"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1: Common Pattern of Logging to a Relational Database&lt;/p&gt;

&lt;p&gt;I’ll walk through some common architecture patterns around logging infrastructure, and how Solace offers an event-driven logging alternative that offers a few key advantages over other patterns. Then I’ll share the steps to configure Logstash as the log storage component that consumes a stream of log events via Solace &lt;a href="https://solace.com/products/event-broker/"&gt;PubSub+ Event Broker&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  First Challenge: Ingestion
&lt;/h2&gt;

&lt;p&gt;If you’re using a relational database, the first common problem you’ll encounter is that writing to a relational database is not a fast operation. This isn’t a big deal for small-scale operations, but for many enterprises it proves to be a significant operational issue.&lt;/p&gt;

&lt;p&gt;In the worst-case scenario, business processes are stuck for several seconds waiting for the logging operation to complete while the actual customer transaction finished in less than a second. This also bleeds into the infrastructure layer, where you have to scale your service infrastructure two-fold or more to support the thousands of transactions hogging capacity, mostly to ‘wait’ on blocking database write operations.&lt;/p&gt;

&lt;p&gt;Some people have learned not to make the database write operation part of the main business process flow, or not to use a relational database at all. There is more than one way to do this, especially with the vast array of technology stacks we have today.&lt;/p&gt;

&lt;p&gt;One pattern I would like to elaborate on is the use of a queueing mechanism as the intermediary pipeline. In this pattern, applications send the logs as messages to a message broker’s queue and immediately return to their transaction flow. This supposedly works better, since sending to a queue should be much faster than a database insert operation, and the messages waiting to be written to the database are safe and sound in the queue. Sounds clever, right?&lt;/p&gt;

&lt;p&gt;It does reduce the blocking time to ‘send’ the log entries and provides buffering if the next step of writing to the actual database becomes slow, but there are big challenges and missed benefits when you rely only on a queue instead of a &lt;a href="https://solace.com/blog/publish-subscribe-messaging-pattern/"&gt;publish-subscribe pattern&lt;/a&gt;. I’ll get back to this in a minute.&lt;/p&gt;
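&lt;p&gt;The decoupling this pattern buys can be sketched in a few lines: the business thread drops the log entry onto a queue and returns immediately, while a background worker drains the queue toward the slow sink. The broker publish is stubbed out with a list here so the sketch runs standalone; all the names are illustrative.&lt;/p&gt;

```python
import queue
import threading

log_queue = queue.Queue()
forwarded = []  # stand-in for the actual broker publish call

def business_transaction(order_id):
    """The hot path: enqueueing the log entry is non-blocking and fast."""
    log_queue.put(f"order {order_id} processed")
    return "OK"

def forwarder():
    """Background worker: drains the queue toward the slow log sink."""
    while True:
        entry = log_queue.get()
        if entry is None:        # shutdown sentinel
            break
        forwarded.append(entry)  # real code would publish to the broker here
        log_queue.task_done()

worker = threading.Thread(target=forwarder, daemon=True)
worker.start()
business_transaction(42)
log_queue.put(None)
worker.join()
```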

&lt;h2&gt;
  
  
  Second Challenge: The Query
&lt;/h2&gt;

&lt;p&gt;Once you have all your log entries recorded in a database, it’s all good, right? Not always. Depending on what kind of database you are using, writing to and querying the database can take far longer than you’d like. You might face queries that take ages to complete, or that simply give you data from an hour ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Case of the Intermediary Queue
&lt;/h2&gt;

&lt;p&gt;Let’s revisit the intermediary queue approach. What could possibly go wrong? Let’s take a closer look.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0TvD_u-f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/07/logging-blog-post_pic-02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0TvD_u-f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/07/logging-blog-post_pic-02.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2: Log Queue&lt;/p&gt;

&lt;p&gt;Imagine what would happen if the database is slow or simply stops working. The queue will build up a massive number of unprocessed messages, and the broker will eventually stop ‘acknowledging’ the incoming log messages, leaving the applications once again stuck waiting for the logging operations to complete. It’s the same impact as the direct database write, just delayed. Imagine if other critical business processes relying on the same message broker are also impacted!&lt;/p&gt;

&lt;p&gt;Now consider a not-so-distant future where some other systems need the same set of logs. Then you’re looking at either querying the log database (fully aware of the delayed data and slow response times) or bolting more send-to-queue code onto the apps to produce a duplicate stream. Doesn’t sound like much fun, does it?&lt;/p&gt;

&lt;h2&gt;
  
  
  Event-Driven Logging Architecture
&lt;/h2&gt;

&lt;p&gt;What’s an event-driven logging architecture? Well, each log entry is an event that other enterprise applications can subscribe to, receive, and react to. Source applications just need to worry about emitting each log as an event and that’s it! Those events can be routed to many applications via a range of APIs and protocols, now or later, without touching the producer of those events.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cu6FtBBc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/07/logging-blog-post_pic-03.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cu6FtBBc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/07/logging-blog-post_pic-03.png" alt="event-driven logging architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3: Event-Driven Logging&lt;/p&gt;

&lt;p&gt;This diagram illustrates a few key points. An event can be sent simultaneously to many recipients with the publish-subscribe pattern. In this diagram, we’re publishing to topics and consuming from queues. The queue acts as a subscriber and persists the events for a specific application to consume in a guaranteed fashion. Learn more about &lt;a href="https://docs.solace.com/PubSub-Basics/Core-Concepts.htm#topic-queue-mapping"&gt;topic-to-queue mapping&lt;/a&gt;, one of Solace’s core concepts.&lt;/p&gt;

&lt;p&gt;Each application can subscribe to whatever topics indicate the unique set of information they need. Note that this filtering is done by the broker, &lt;strong&gt;not&lt;/strong&gt; after the events have been transferred all the way to the applications. In fact, the benefit of topic routing with wildcards is so great that we produced a &lt;a href="https://docs.solace.com/PubSub-Basics/Understanding-Topics.htm"&gt;video about topic wildcards&lt;/a&gt;!&lt;/p&gt;
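&lt;p&gt;To illustrate what the broker-side filtering does, here is a toy matcher for the Solace-style wildcards, where &lt;code&gt;*&lt;/code&gt; matches exactly one level and &lt;code&gt;&amp;gt;&lt;/code&gt; matches one or more remaining levels. This is only a model of the routing semantics; the real matching happens inside PubSub+, never in your applications.&lt;/p&gt;

```python
def matches(subscription, topic):
    """Toy model of broker topic matching with '*' and '>' wildcards."""
    sub_levels = subscription.split("/")
    topic_levels = topic.split("/")
    for i, sub in enumerate(sub_levels):
        if i == len(topic_levels):
            return False  # subscription has more levels than the topic
        if sub == ">":
            return True   # '>' matches this level and everything after it
        if sub != "*" and sub != topic_levels[i]:
            return False
    return len(sub_levels) == len(topic_levels)
```

&lt;p&gt;With the log topics from the first article, &lt;code&gt;matches("acme/*/appz/&amp;gt;", "acme/cluster1/appz/c9")&lt;/code&gt; is true, while a subscription pinned to cluster X ignores events from cluster Y.&lt;/p&gt;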

&lt;h2&gt;
  
  
  Modern Search Stack and Big Data
&lt;/h2&gt;

&lt;p&gt;Say you’ve modernized your technology stack, including your log database. You now have a bunch of new shiny big data technologies lying around, and a quick new search engine to query your logs. Let me guess, Elasticsearch or Splunk? Apache Spark? Kafka as the ingestion to Hadoop-based platforms? Don’t worry, we’ve got your back!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_3f-iVyx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/07/logging-blog-post_pic-04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_3f-iVyx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/07/logging-blog-post_pic-04.png" alt="event-driven logging architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 4: Event-Driven Logging – Modernized!&lt;/p&gt;

&lt;p&gt;It’s the same event-driven logging architecture, but with fresh new applications subscribing to the log events!&lt;/p&gt;

&lt;p&gt;I wrote a separate blog on how to integrate Solace PubSub+ Event Broker with Logstash as part of an ELK stack. &lt;a href="https://dev.to/solacedevs/integrating-solace-pubsub-with-logstash-2jd9"&gt;Check it out&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Also take a look here for &lt;a href="https://docs.solace.com/Developer-Tools/Integration-Guides/Integration-Guides.htm"&gt;more stuff&lt;/a&gt; you can integrate with Solace.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;With the power of event-driven architecture in your hands, why not try out stream processing, or maybe a machine learning application that also subscribes to your log event stream, either directly via a topic subscription or in a guaranteed fashion via a queue, just like our Logstash sample? The &lt;em&gt;cloud&lt;/em&gt; is the limit!&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://solace.com/blog/event-driven-logging-architecture/"&gt;Take Your Distributed System to the Next Level with Event-Driven Logging&lt;/a&gt; appeared first on &lt;a href="https://solace.com"&gt;Solace&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>fordevelopers</category>
    </item>
    <item>
      <title>Integrating Solace PubSub+ with Logstash</title>
      <dc:creator>arih1299</dc:creator>
      <pubDate>Thu, 04 Jun 2020 14:37:51 +0000</pubDate>
      <link>https://dev.to/solacedevs/integrating-solace-pubsub-with-logstash-2jd9</link>
      <guid>https://dev.to/solacedevs/integrating-solace-pubsub-with-logstash-2jd9</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post-featured-image.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post-featured-image.jpg"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.elastic.co/logstash" rel="noopener noreferrer"&gt;Logstash&lt;/a&gt; is a free and open source server-side data processing pipeline made by Elastic that ingests data from a variety of sources, transforms it, and sends it to your favorite “stash.” Because of its tight integration with Elasticsearch, powerful log processing capabilities, and over 200 pre-built open-source plugins that can help you easily index your data, Logstash is a popular choice for loading data into Elasticsearch.&lt;/p&gt;

&lt;p&gt;Here I’ll share the steps needed to connect Solace PubSub+ with Logstash as a log storage component. For this demo, I’ll be using a Solace &lt;a href="https://solace.com/products/event-broker/" rel="noopener noreferrer"&gt;PubSub+ Event Broker&lt;/a&gt; and a Logstash server, with dummy log entries instead of real application logging for now. I will set up the Logstash server on Google Cloud Platform and use Solace &lt;a href="https://solace.com/products/event-broker/cloud/" rel="noopener noreferrer"&gt;PubSub+ Event Broker: Cloud&lt;/a&gt; to host my event broker.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solace PubSub+ Event Broker: Cloud
&lt;/h2&gt;

&lt;p&gt;It’s free to run a Solace PubSub+ Event Broker on Solace PubSub+ Event Broker: Cloud. Just go to &lt;a href="https://cloud.solace.com" rel="noopener noreferrer"&gt;https://cloud.solace.com&lt;/a&gt; and set up your own instance, then create a new event broker on AWS with a few clicks as shown here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-01-1024x591.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-01-1024x591.png" alt="Figure 1 Create your Solace PubSub+ Event Broker"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1 Create your Solace PubSub+ Event Broker&lt;/p&gt;

&lt;p&gt;Don’t blink! No need to go and make a cup of coffee. Your broker should be ready now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-02-1024x591.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-02-1024x591.png" alt="Figure 2 Your shiny new event broker"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2 Your shiny new event broker&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-03-1024x555.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-03-1024x555.png" alt="Figure 3 Connection information"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3 Connection information&lt;/p&gt;

&lt;p&gt;Leave it at that and set up the Logstash server next.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logstash Server (ELK Stack) on Google Cloud Platform
&lt;/h2&gt;

&lt;p&gt;For the Logstash server, I’m using a Bitnami ELK image readily available from Google Cloud Platform Marketplace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-04-1024x591.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-04-1024x591.png" alt="Figure 4 Set up ELK Server on GCP"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 4 Set up ELK Server on GCP&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-05-1024x517.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-05-1024x517.png" alt="Figure 5 Configure the new ELK deployment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 5 Configure the new ELK deployment&lt;/p&gt;

&lt;p&gt;Once the deployment is done, you can log in to this new instance via SSH or gcloud or pick your favorite from the options available!&lt;/p&gt;

&lt;h2&gt;
  
  
  Download Solace JMS API
&lt;/h2&gt;

&lt;p&gt;The first thing you need to do is download the Solace JMS API library to this server. Logstash will use this library to connect to Solace PubSub+ over the JMS API. You can put the library anywhere, but for this demo I will create the folder &lt;code&gt;/usr/share/jms&lt;/code&gt; and put all the JAR files there.&lt;/p&gt;

&lt;p&gt;You can &lt;a href="https://solace.com/download/" rel="noopener noreferrer"&gt;download the Solace JMS API Library&lt;/a&gt;. For this demo, I use &lt;code&gt;wget&lt;/code&gt; to download the library directly into the server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-06.png" alt="Figure 6 Download the Solace JMS API Library"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 6 Download the Solace JMS API Library&lt;/p&gt;

&lt;p&gt;Once you have the zip file of the Solace JMS API library, extract it and copy the JAR files under the &lt;code&gt;lib/&lt;/code&gt; directory into our &lt;code&gt;/usr/share/jms&lt;/code&gt; folder. You will use these libraries later when configuring the JMS input for Logstash.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-07.png" alt="Figure 7 Copy the JAR files"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 7 Copy the JAR files&lt;/p&gt;

&lt;h2&gt;
  
  
  Reset Kibana User Password
&lt;/h2&gt;

&lt;p&gt;You will also use the Kibana dashboard later to verify that your log entries are processed correctly by the Logstash JMS input. For that, change the default user password to your own and restart the Apache web server to apply the new setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-08.png" alt="Figure 8 Change ELK default password"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 8 Change ELK default password&lt;/p&gt;

&lt;p&gt;Once you’ve restarted the Apache web server, log in to Kibana by accessing &lt;code&gt;/app/kibana&lt;/code&gt; from your web browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-09.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-09.png" alt="Figure 9 Test the new Kibana password"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 9 Test the new Kibana password&lt;/p&gt;

&lt;h2&gt;
  
  
  Prepare the Logstash Queue
&lt;/h2&gt;

&lt;p&gt;Next you’ll configure Logstash to consume from a Queue so that delivery of log events to Logstash is guaranteed. To do that, go to the Solace PubSub+ Manager by clicking the &lt;strong&gt;Manage Service&lt;/strong&gt; link on the top right corner of your service console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-10-1024x295.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-10-1024x295.png" alt="Figure 10 Manage the event broker"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 10 Manage the event broker&lt;/p&gt;

&lt;p&gt;To create a new Queue, simply go to the &lt;strong&gt;Queue&lt;/strong&gt; menu on the left and click the green &lt;strong&gt;Create&lt;/strong&gt; button on the top right. Give it any name you want, but remember it, as you will use it to configure the Logstash JMS input later on. For now, name it LogstashQ.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-11-1024x336.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-11-1024x336.png" alt="Figure 11 Create a new Queue"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 11 Create a new Queue&lt;/p&gt;

&lt;p&gt;Keep all the default values, as I will not go into the details of Queue configuration in this demo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-12-1024x409.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-12-1024x409.png" alt="Figure 12 Use the default values for the new Queue"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 12 Use the default values for the new Queue&lt;/p&gt;
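&lt;p&gt;If you prefer to script this step instead of using the GUI, the queue can also be created through the SEMP v2 management API of the broker. A hedged sketch follows; the host, admin credentials, and Message VPN name are placeholders for your own service.&lt;/p&gt;

```shell
# Sketch: create the LogstashQ queue via the SEMP v2 config API.
# SEMP_HOST, the admin credentials, and MSG_VPN are placeholders.
SEMP_HOST="https://yourhost.messaging.solace.cloud:943"
MSG_VPN="msgvpn"
payload='{"queueName":"LogstashQ","ingressEnabled":true,"egressEnabled":true,"permission":"consume"}'
curl -s -m 5 -X POST -u admin:admin \
  "$SEMP_HOST/SEMP/v2/config/msgVpns/$MSG_VPN/queues" \
  -H "Content-Type: application/json" \
  -d "$payload"
```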

&lt;h2&gt;
  
  
  Topic to Queue Mapping
&lt;/h2&gt;

&lt;p&gt;This is a very important part of setting up the Queue for our log flow: you will configure the Queue to subscribe to the topics we want to attract and later have consumed by Logstash. For this step, go to the &lt;strong&gt;Subscriptions&lt;/strong&gt; tab menu and click the &lt;strong&gt;+ Subscription&lt;/strong&gt; button on the top right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-13-1024x295.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-13-1024x295.png" alt="Figure 13 Add subscription to the new Queue"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 13 Add subscription to the new Queue&lt;/p&gt;

&lt;p&gt;Now, you want to attract all events related to the logs in the Acme enterprise. For that, add a topic subscription with a wildcard after the &lt;code&gt;acme/logs/&lt;/code&gt; hierarchy. You can also add multiple different subscriptions to this Queue if you wish.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-14-1024x295.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-14-1024x295.png" alt="Figure 14 Add acme/logs/&amp;gt; subscription"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 14 Add acme/logs/&amp;gt; subscription&lt;/p&gt;
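&lt;p&gt;The same topic-to-queue mapping can also be scripted through the SEMP v2 API. This is a sketch, with the host, admin credentials, and Message VPN name as placeholders for your own service.&lt;/p&gt;

```shell
# Sketch: add the acme/logs/> topic subscription to LogstashQ via SEMP v2.
# SEMP_HOST, the admin credentials, and MSG_VPN are placeholders.
SEMP_HOST="https://yourhost.messaging.solace.cloud:943"
MSG_VPN="msgvpn"
sub_payload='{"subscriptionTopic":"acme/logs/>"}'
curl -s -m 5 -X POST -u admin:admin \
  "$SEMP_HOST/SEMP/v2/config/msgVpns/$MSG_VPN/queues/LogstashQ/subscriptions" \
  -H "Content-Type: application/json" \
  -d "$sub_payload"
```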

&lt;h2&gt;
  
  
  Test the Log Queue
&lt;/h2&gt;

&lt;p&gt;To make sure the mapping is correct, try to publish a fake log event via the &lt;strong&gt;Try Me!&lt;/strong&gt; tab on the service console. Connect both the Publisher and Subscriber components and add an &lt;code&gt;acme/logs/&amp;gt;&lt;/code&gt; subscription in the Subscriber section.&lt;/p&gt;

&lt;p&gt;Now test by sending a dummy log entry line as the Text payload of the test message, and verify that the message is successfully received by the Subscriber component.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-15-1024x830.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-15-1024x830.png" alt="Figure 15 Test Publishing a Sample Log Line"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 15 Test Publishing a Sample Log Line&lt;/p&gt;
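&lt;p&gt;If you prefer the command line over the Try Me! tab, you can publish the dummy log line over MQTT, which the broker also supports. A sketch using the Eclipse Mosquitto client; the host, port, and credentials are placeholders for your PubSub+ service.&lt;/p&gt;

```shell
# Sketch: publish a dummy log line over MQTT instead of the Try Me! tab.
# Host, port, and credentials are placeholders for your PubSub+ service.
TOPIC="acme/logs/app1/test"
mosquitto_pub -h yourhost.messaging.solace.cloud -p 1883 \
  -u solace-cloud-client -P yourpassword \
  -t "$TOPIC" \
  -m '127.0.0.1 - - [22/Jun/2020:10:00:00 +0000] "GET / HTTP/1.1" 200 512'
```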

&lt;p&gt;Now go to the Queues and open our LogstashQ. Verify that the same message is also safely queued there via the &lt;strong&gt;Messages Queued&lt;/strong&gt; tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-16.png" alt="Figure 16 Log event queued in the new Queue"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 16 Log event queued in the new Queue&lt;/p&gt;

&lt;h2&gt;
  
  
  JNDI Connection Factory
&lt;/h2&gt;

&lt;p&gt;We will be using a JNDI-based connection setup in the Logstash JMS input later on, so let’s verify that you have a JNDI Connection Factory to use. Solace PubSub+ Event Broker comes with an out-of-the-box connection factory object that you can use, or you can create a new one by going to the &lt;strong&gt;JMS JNDI&lt;/strong&gt; menu and the &lt;strong&gt;Connection Factories&lt;/strong&gt; tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-17.png" alt="Figure 17 Verify JNDI Connection Factory"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 17 Verify JNDI Connection Factory&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure Logstash
&lt;/h2&gt;

&lt;p&gt;This Logstash configuration follows the samples already created by the ELK stack image from Bitnami. However, your environment might vary.&lt;/p&gt;

&lt;p&gt;Go to &lt;code&gt;/opt/bitnami/logstash&lt;/code&gt; and add a new JMS input section to the existing configuration file under the pipeline folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ari-elk-vm:~# root@ari-elk-vm:~#root@ari-elk-vm:~#root@ari-elk-vm:~# cd /opt/bitnami/logstash/root@ari-elk-vm:/opt/bitnami/logstash# vi pipeline/logstash.confroot@ari-elk-vm:/opt/bitnami/logstash#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please refer to the &lt;a href="https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jms.html" rel="noopener noreferrer"&gt;Elastic documentation&lt;/a&gt; for the JMS input plugin for more details. I created the following configuration to match my newly created Solace PubSub+ Event Broker and the Logstash queue from earlier.&lt;/p&gt;

&lt;p&gt;Since we are going to consume from a queue, you need to set the &lt;code&gt;pub_sub&lt;/code&gt; parameter to false. Pay attention to the &lt;code&gt;jndi_context&lt;/code&gt; parameters and make sure they match the connection information shown in the PubSub+ Cloud service console earlier.&lt;/p&gt;

&lt;p&gt;If you’re not familiar with Solace, pay attention to the principal argument where you need to append ‘@’ followed by the Message VPN name after the username.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input{ beats { ssl =&amp;gt; false host =&amp;gt; "0.0.0.0" port =&amp;gt; 5044 } gelf { host =&amp;gt; "0.0.0.0" port =&amp;gt; 12201 } http { ssl =&amp;gt; false host =&amp;gt; "0.0.0.0" port =&amp;gt; 8080 } tcp { mode =&amp;gt; "server" host =&amp;gt; "0.0.0.0" port =&amp;gt; 5010 } udp { host =&amp;gt; "0.0.0.0" port =&amp;gt; 5000 } jms { include\_header =&amp;gt; false include\_properties =&amp;gt; false include\_body =&amp;gt; true use\_jms\_timestamp =&amp;gt; false destination =&amp;gt; 'LogstashQ' pub\_sub =&amp;gt; false jndi\_name =&amp;gt; '/jms/cf/default' jndi\_context =&amp;gt; { 'java.naming.factory.initial' =&amp;gt; 'com.solacesystems.jndi.SolJNDIInitialContextFactory' 'java.naming.security.principal' =&amp;gt; 'solace-cloud-client@msgvpn' 'java.naming.provider.url' =&amp;gt; 'tcps://yourhost.messaging.solace.cloud:20290' 'java.naming.security.credentials' =&amp;gt; 'yourpassword' } require\_jars=&amp;gt; ['/usr/share/jms/commons-lang-2.6.jar', '/usr/share/jms/sol-jms-10.8.0.jar', '/usr/share/jms/geronimo-jms\_1.1\_spec-1.1.1.jar'] }}output{ elasticsearch { hosts =&amp;gt; ["127.0.0.1:9200"] document\_id =&amp;gt; "%{logstash\_checksum}" index =&amp;gt; "logstash-%{+YYYY.MM.dd}" }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, the ELK image from Bitnami does not auto-reload the Logstash configuration, so restart the Logstash service using the built-in &lt;code&gt;ctlscript.sh&lt;/code&gt; script. Check the Logstash log afterwards to make sure it has successfully connected to our Solace PubSub+ Event Broker. The log should contain entries similar to those shown below.&lt;/p&gt;
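&lt;p&gt;A sketch of the restart and log check on the Bitnami image; the script and log file paths are assumptions, so adjust them to your installation.&lt;/p&gt;

```shell
# Sketch for the Bitnami ELK image -- the script and log file paths are
# assumptions, so adjust them to your installation.
/opt/bitnami/ctlscript.sh restart logstash
# Inspect the most recent Logstash log entries for a successful
# JNDI/JMS connection to the broker:
LOG_FILE="/opt/bitnami/logstash/logs/logstash.log"
tail -n 50 "$LOG_FILE"
```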

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-18-1024x141.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-18-1024x141.png" alt="Figure 18 Logstash log with JMS input"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 18 Logstash log with JMS input&lt;/p&gt;

&lt;p&gt;Also verify that the Logstash connection exists in our Solace PubSub+ Manager, via the &lt;strong&gt;Client Connections&lt;/strong&gt; menu and &lt;strong&gt;Solace Clients&lt;/strong&gt; tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-19-1024x263.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-19-1024x263.png" alt="Figure 19 Verify the Logstash connection exists"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 19 Verify the Logstash connection exists&lt;/p&gt;

&lt;p&gt;Since you already sent a fake log entry earlier, by now the Logstash JMS input should have consumed that message. Verify this by going into LogstashQ and checking the &lt;strong&gt;Consumers&lt;/strong&gt; tab: look at the “Messages Confirmed Delivery” statistic and make sure the number has increased.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-20-1024x320.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-20-1024x320.png" alt="Figure 20 Verify Messages Confirmed Delivery"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 20 Verify Messages Confirmed Delivery&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Kibana Index Pattern
&lt;/h2&gt;

&lt;p&gt;To be able to see the log entry in the Kibana dashboard, create a simple index pattern of &lt;code&gt;logstash-*&lt;/code&gt; and use the &lt;code&gt;@timestamp&lt;/code&gt; field as the time filter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-21-1024x405.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-21-1024x405.png" alt="Figure 21 Create a new index pattern in Kibana"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 21 Create a new index pattern in Kibana&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-22-1024x405.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-22-1024x405.png" alt="Figure 22 Use @timestamp as the time filter"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 22 Use @timestamp as the time filter&lt;/p&gt;
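&lt;p&gt;If you would rather script this step, the same index pattern can be created through Kibana’s saved objects API. A sketch follows; the Kibana URL and credentials are placeholders for your setup.&lt;/p&gt;

```shell
# Sketch: create the logstash-* index pattern via Kibana's saved objects
# API. The Kibana URL and credentials are placeholders for your setup.
KIBANA_URL="http://localhost:5601"
pattern='{"attributes":{"title":"logstash-*","timeFieldName":"@timestamp"}}'
curl -s -m 5 -X POST -u user:yourpassword \
  "$KIBANA_URL/api/saved_objects/index-pattern" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d "$pattern"
```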

&lt;p&gt;You can now go back to the home dashboard, where you should see an entry from the dummy log event. And that’s it! Now you should be able to stream log events by publishing to a PubSub+ topic such as &lt;code&gt;acme/logs/app1/tx123/debug&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-23-1024x457.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsolace.com%2Fwp-content%2Fuploads%2F2020%2F06%2Flogstash-blog-post_picture-23-1024x457.png" alt="Figure 23 Verify the log entry shows up in the Kibana dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 23 Verify the log entry shows up in the Kibana dashboard&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;So that’s how to integrate Solace PubSub+ Event Broker with Logstash. There are a lot of other stacks that Solace can integrate with as well. &lt;a href="https://docs.solace.com/Developer-Tools/Integration-Guides/Integration-Guides.htm" rel="noopener noreferrer"&gt;Check them out&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://solace.com/blog/integrating-solace-with-logstash/" rel="noopener noreferrer"&gt;Integrating Solace PubSub+ with Logstash&lt;/a&gt; appeared first on &lt;a href="https://solace.com" rel="noopener noreferrer"&gt;Solace&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>solace</category>
      <category>pubsub</category>
      <category>eventdriven</category>
      <category>microservices</category>
    </item>
    <item>
      <title>How to set up Solace PubSub+ Event Broker with OAuth for MQTT against Keycloak</title>
      <dc:creator>arih1299</dc:creator>
      <pubDate>Thu, 12 Dec 2019 17:12:29 +0000</pubDate>
      <link>https://dev.to/solacedevs/how-to-set-up-solace-pubsub-event-broker-with-oauth-for-mqtt-against-keycloak-fcl</link>
      <guid>https://dev.to/solacedevs/how-to-set-up-solace-pubsub-event-broker-with-oauth-for-mqtt-against-keycloak-fcl</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6rKhoYp3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/blog-featured-image_keycloak.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6rKhoYp3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/blog-featured-image_keycloak.jpg" alt=""&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://oauth.net/2/"&gt;OAuth 2.0&lt;/a&gt; and &lt;a href="https://openid.net/connect/"&gt;OpenID Connect&lt;/a&gt; (OIDC) are getting more and more popular as authentication and authorization protocols. OIDC also uses JSON Web Tokens (JWT) as a simple token standard. Another protocol that is gaining popularity is &lt;a href="http://mqtt.org/"&gt;MQTT&lt;/a&gt;. Since Solace PubSub+ Event Broker supports all these protocols, why don’t we see how they all work together nicely in a simple demo? We will use the &lt;a href="https://www.keycloak.org/"&gt;Keycloak&lt;/a&gt; server as the authorization server and a simple dotnet core application to build a full end-to-end demo.&lt;/p&gt;

&lt;h1&gt;
  
  
  Set up the servers
&lt;/h1&gt;

&lt;p&gt;For this blog, I’m running both Solace PubSub+ Event Broker and the Keycloak server as Docker containers on macOS. The configuration steps are the same regardless of where we run the servers. One thing to note is that we need connectivity from the Solace PubSub+ Event Broker to the Keycloak server.&lt;/p&gt;

&lt;p&gt;Run this command to set up Solace PubSub+ Event Broker software in your local Docker environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run -d --network solace-net -p 8080:8080 -p 1883:1883 --shm-size=2g --env username_admin_globalaccesslevel=admin --env username_admin_password=admin --name=mypubsub solace/solace-pubsub-standard:9.3.0.22
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Run this command to set up the Keycloak authorization server in your local Docker environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run -p 7777:8080 \ 
--network solace-net \ 
--name keycloak \ 
-e KEYCLOAK_USER=user \ 
-e KEYCLOAK_PASSWORD=password \ 
-e DB_VENDOR=H2 \ 
-d jboss/Keycloak:7.0.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If Port 8080 is already used on your local machine, change it to any other available port (the first port in the -p argument).&lt;/p&gt;

&lt;p&gt;Using the Docker &lt;code&gt;--network&lt;/code&gt; parameter enables other containers on the same network to reach this host by its container name. If you don’t have the network yet, create it with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker network create solace-net
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once the Keycloak server container is started, we can verify it from the Keycloak homepage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u6REOFLS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u6REOFLS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1 Keycloak Homepage – use the port we published in the docker run command&lt;/p&gt;

&lt;h1&gt;
  
  
  Keycloak as the Authorization Server
&lt;/h1&gt;

&lt;p&gt;An authorization server grants clients the tokens they can use to access protected resources. In this setup, we are using the Keycloak server as the authorization server.&lt;/p&gt;

&lt;p&gt;In this section, we will set up the user account in the authorization server. We will use this user to get the access and ID token from the authorization server.&lt;/p&gt;

&lt;p&gt;The first step is to log in using the username and password defined during the Docker container creation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UvUf7Ppk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UvUf7Ppk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 2 Use the username and password defined as environment variable&lt;/p&gt;

&lt;p&gt;By default, Keycloak is set up with a built-in realm called Master. For simplicity, we will use this realm for our user. If you want to create a new realm, you can do that as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Client
&lt;/h2&gt;

&lt;p&gt;The next step is to create a new client in the realm. We do this by clicking the &lt;strong&gt;Create&lt;/strong&gt; menu on the top right of the clients table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0dRbYXz1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0dRbYXz1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 3 Client Admin Page&lt;/p&gt;

&lt;p&gt;Enter a client ID and choose &lt;strong&gt;openid-connect&lt;/strong&gt; as the client protocol. We will use this client for our OpenID Connect test.&lt;/p&gt;

&lt;p&gt;We can leave the Root URL field empty for this demo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DBxreyUw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DBxreyUw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-4.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 4 Create a new client&lt;/p&gt;

&lt;p&gt;Next, enter the mandatory Redirect URLs for this client. Since we’re not going to use this for the Web, we can use a simple URL such as localhost/* for this demo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_dbShuJb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_dbShuJb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-5.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 5 Enter the Redirect URL&lt;/p&gt;

&lt;p&gt;Optionally, change the default Access Token Lifespan to a longer period if you want to use a single token for multiple tests spanning several minutes or more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MA0syrSV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MA0syrSV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-6.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 6 Change the default Access Token Lifespan&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure Client Scope for a Custom Audience
&lt;/h2&gt;

&lt;p&gt;Additionally, we will add a custom audience called “pubsub+” to this client for audience validation. By default, Keycloak adds the client ID as the audience attribute value, and it provides a few ways to add a custom audience. For this test, we create a client scope by the name of “pubsub+” and include a custom audience there. We then include this client scope in the client we created earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hJjXu4wT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hJjXu4wT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-7.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 7 Create a client scope to have a custom audience value&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Sm91RGLj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Sm91RGLj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-8.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 8 Add the client scope to the Solace client&lt;/p&gt;

&lt;h1&gt;
  
  
  Configure Solace PubSub+ Event Broker
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Create an OAuth Provider
&lt;/h2&gt;

&lt;p&gt;The first step is to configure an OAuth provider for OpenID Connect in the Solace PubSub+ Event Broker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dw6Rr-xZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dw6Rr-xZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-9.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 9 Create a new OAuth provider&lt;/p&gt;

&lt;p&gt;We will create the new OAuth provider based on the Keycloak authorization server. We will enable audience validation and authorization group as per our Keycloak client configuration, use the JWKS URL from the Keycloak, and use the &lt;code&gt;preferred_username&lt;/code&gt; field from the &lt;code&gt;id_token&lt;/code&gt; as the username claim source.&lt;/p&gt;

&lt;p&gt;We will look for the audience and authorization group claims in the &lt;code&gt;access_token&lt;/code&gt;, since Keycloak puts them there by default. This is not mandatory; simply configure it to match where your authorization server places the claims.&lt;/p&gt;
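&lt;p&gt;A quick way to see where your authorization server puts each claim is to decode the token payload locally. A sketch follows; the token here is a dummy one built purely for illustration, so substitute a real token obtained from Keycloak.&lt;/p&gt;

```shell
# Sketch: decode a JWT payload locally to see where your authorization
# server puts the aud, preferred_username, and group claims.
# ACCESS_TOKEN here is a dummy token built for illustration only;
# substitute a real token obtained from Keycloak.
ACCESS_TOKEN="header.$(printf '{"aud":"pubsub+","preferred_username":"user"}' | base64 | tr -d '\n').signature"
# The claims live in the second dot-separated, base64-encoded segment:
echo "$ACCESS_TOKEN" | cut -d. -f2 | base64 -d
# prints: {"aud":"pubsub+","preferred_username":"user"}
```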

&lt;p&gt;Refer to the screenshot below for the configuration values and don’t forget to enable this provider by toggling the &lt;strong&gt;Enabled&lt;/strong&gt; option on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RztPdHhS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RztPdHhS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-10.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 10 Set up a new OAuth provider&lt;/p&gt;

&lt;p&gt;We will not configure Token Introspection for this test.&lt;/p&gt;

&lt;p&gt;For the username, we will take the username claim from the &lt;code&gt;preferred_username&lt;/code&gt; attribute of the &lt;code&gt;id_token&lt;/code&gt; rather than from &lt;code&gt;sub&lt;/code&gt;. This attribute should carry the value “user”, the username we use in Keycloak. We will use the username “user” in our sample application.&lt;/p&gt;

&lt;p&gt;And as a bonus feature, not really part of OpenID Connect, we can enable API username validation so that the broker validates that the username provided in your application API call matches the username claim extracted from the token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8kNkNSvp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8kNkNSvp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-11.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 11 Set up a new OAuth provider (2)&lt;/p&gt;

&lt;h2&gt;
  
  
  Enable OAuth
&lt;/h2&gt;

&lt;p&gt;Next, we will set up the Solace PubSub+ Event Broker’s Client Authentication to enable OAuth Authentication. This is done by toggling the OAuth Authentication switch and then selecting one of the available OAuth providers as the default provider for OAuth authentication. This value is used when a client using OAuth authentication does not provide the OAuth provider information.&lt;/p&gt;

&lt;p&gt;Notice that we disable Basic Authentication and Client Certificate Authentication for this test to ensure that our broker will only perform OAuth authentication.&lt;/p&gt;

&lt;p&gt;To keep it simple, we will use the default Authorization Type of Internal Database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_jXZ-WIU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_jXZ-WIU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-12.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 12 Enable OAuth Authentication and set the Default Provider Name&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an Authorization Group
&lt;/h2&gt;

&lt;p&gt;The next step is to make sure we have configured authorization groups to be used by the broker to validate the authorization claim in the token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u4tnA-6F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u4tnA-6F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-13.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 13 Create an authorization group&lt;/p&gt;

&lt;p&gt;Let’s create a sample authorization group named “pubsub+” to be used later by the OAuth client.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZcNt_aIA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZcNt_aIA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-14.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 14 Enable and select profiles&lt;/p&gt;

&lt;p&gt;Make sure to enable this new authorization group, and feel free to play around with the ACL and Client Profiles. For now, we will stick with the default profiles for both.&lt;/p&gt;
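&lt;p&gt;For the scripting-inclined, creating and enabling the authorization group can likewise be done over SEMP v2. As before, treat the attribute names and credentials below as assumptions to check against your broker’s SEMP v2 reference.&lt;/p&gt;

```shell
# Create the "pubsub+" authorization group on the default Message VPN,
# enabled and using the default ACL and client profiles.
# Attribute names and admin:admin credentials are assumptions for this sketch.
SEMP_BASE='http://localhost:8080/SEMP/v2/config'   # adjust to your broker
VPN='default'
curl -s -X POST -u admin:admin "${SEMP_BASE}/msgVpns/${VPN}/authorizationGroups" \
  -H 'Content-Type: application/json' \
  -d '{
        "authorizationGroupName": "pubsub+",
        "enabled": true,
        "aclProfileName": "default",
        "clientProfileName": "default"
      }'
```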

&lt;h1&gt;
  
  
  Ready for Test
&lt;/h1&gt;

&lt;p&gt;Now that we have the Solace PubSub+ Event Broker and the Keycloak authorization server configured, we are ready to run some tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sample Project
&lt;/h2&gt;

&lt;p&gt;This is a sample .NET Core application to test the OAuth authentication and authorization features. It takes two arguments, &lt;code&gt;access_token&lt;/code&gt; and &lt;code&gt;id_token&lt;/code&gt;, then subscribes to a topic and publishes a message to it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Project Sdk="Microsoft.NET.Sdk"&amp;gt;
  &amp;lt;PropertyGroup&amp;gt;
    &amp;lt;OutputType&amp;gt;Exe&amp;lt;/OutputType&amp;gt;
    &amp;lt;TargetFramework&amp;gt;netcoreapp2.2&amp;lt;/TargetFramework&amp;gt;
    &amp;lt;RootNamespace&amp;gt;solace_dotnet_mqtt&amp;lt;/RootNamespace&amp;gt;
  &amp;lt;/PropertyGroup&amp;gt;
  &amp;lt;ItemGroup&amp;gt;
    &amp;lt;PackageReference Include="M2MqttClientDotnetCore" Version="1.0.1"/&amp;gt;
  &amp;lt;/ItemGroup&amp;gt;
&amp;lt;/Project&amp;gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;





&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System;
using System.Text;
using M2Mqtt;
using M2Mqtt.Messages;

namespace solace_dotnet_mqtt
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length &amp;lt; 2) {
                Console.WriteLine("Usage: dotnet run &amp;lt;access_token&amp;gt; &amp;lt;id_token&amp;gt;");
                Environment.Exit(-1);
            }

            // Connect to the broker's MQTT service running locally on Docker
            MqttClient client = new MqttClient("localhost");

            client.MqttMsgPublishReceived += client_MqttMsgPublishReceived;
            string clientId = Guid.NewGuid().ToString();

            // The broker expects the OAuth credentials as a compound MQTT
            // password: OPENID~provider~id_token~access_token
            string solace_oauth_provider = "keycloak-openid";
            string oidcpass = "OPENID~" + solace_oauth_provider + "~" + args[1] + "~" + args[0];
            client.Connect(clientId, "user", oidcpass);

            // Subscribe first, then publish to the same topic so the message
            // comes straight back to this client
            string strValue = "Hello World!";
            client.Subscribe(new string[] { "test/topic" }, new byte[] { MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE });
            client.Publish("test/topic", Encoding.UTF8.GetBytes(strValue), MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE, false);
        }

        static void client_MqttMsgPublishReceived(object sender, MqttMsgPublishEventArgs e)
        {
            Console.WriteLine("Message received: " + System.Text.Encoding.UTF8.GetString(e.Message));
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
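&lt;p&gt;The one non-obvious detail in the code above is the compound MQTT password. Assembled outside of C#, with placeholder values instead of real tokens, the string the broker receives looks like this:&lt;/p&gt;

```shell
# The broker parses the MQTT password as: OPENID~provider~id_token~access_token
provider='keycloak-openid'
id_token='ID_TOKEN_PLACEHOLDER'          # placeholder, not a real token
access_token='ACCESS_TOKEN_PLACEHOLDER'  # placeholder, not a real token
oidcpass="OPENID~${provider}~${id_token}~${access_token}"
echo "$oidcpass"
# prints: OPENID~keycloak-openid~ID_TOKEN_PLACEHOLDER~ACCESS_TOKEN_PLACEHOLDER
```

&lt;p&gt;Note the ordering: the &lt;code&gt;id_token&lt;/code&gt; comes before the &lt;code&gt;access_token&lt;/code&gt;, even though the program takes them as arguments in the opposite order.&lt;/p&gt;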



&lt;h2&gt;
  
  
  Prepare the Tokens
&lt;/h2&gt;

&lt;p&gt;To get the tokens, we can use a tool such as Postman to request new tokens from the Keycloak authorization server.&lt;/p&gt;
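&lt;p&gt;If you would rather stay on the command line, you can also request the tokens directly from Keycloak’s standard OpenID Connect token endpoint with curl, using the Resource Owner Password grant. The realm and client ID below are placeholders for whatever you configured on your Keycloak server; the username and password are the ones we set up for the Keycloak container. With the &lt;code&gt;openid&lt;/code&gt; scope, the JSON response contains both an &lt;code&gt;access_token&lt;/code&gt; and an &lt;code&gt;id_token&lt;/code&gt;.&lt;/p&gt;

```shell
# Request tokens from Keycloak's OpenID Connect token endpoint.
# REALM and client_id are placeholders -- substitute your own configuration.
KEYCLOAK='http://localhost:8080/auth'   # adjust to your Keycloak host/port
REALM='master'
curl -s -X POST "${KEYCLOAK}/realms/${REALM}/protocol/openid-connect/token" \
  -d 'grant_type=password' \
  -d 'client_id=my-client' \
  -d 'username=user' \
  -d 'password=password' \
  -d 'scope=openid'
```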

&lt;p&gt;We can simply create a new request, go to the &lt;strong&gt;Authorization&lt;/strong&gt; tab, select OAuth 2.0 as the type, and click the &lt;strong&gt;Get New Access Token&lt;/strong&gt; button on the right panel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u9tXLFMW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u9tXLFMW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-15.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 15 Use Postman to get the tokens using OAuth 2.0 Authorization&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xzsT9Pdd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xzsT9Pdd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-16.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 16 Get to the New Access Token menu&lt;/p&gt;

&lt;p&gt;Fill in the token request details as per the sample below. Make sure you enter the correct client ID.&lt;/p&gt;

&lt;p&gt;Since we have configured the client with the public access type, we don’t need to enter a client secret. For &lt;strong&gt;Scope&lt;/strong&gt;, we will use &lt;strong&gt;openid&lt;/strong&gt; so that the request is handled as OpenID Connect.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;State&lt;/strong&gt;, any value will do for this test.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uWgoVJi5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uWgoVJi5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-17.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 17 Use Auth URL from the Keycloak server&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8J_Pwwc4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8J_Pwwc4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-18.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 18 Use Access Token URL from Keycloak and change the Client Authentication setting&lt;/p&gt;

&lt;p&gt;You will be presented with the Keycloak login page, where you need to authenticate before you can get the tokens. Use the username “user” and password “password” that we set when running the Docker container for the Keycloak server.&lt;/p&gt;

&lt;p&gt;Once you get the tokens, you can copy both the &lt;code&gt;access_token&lt;/code&gt; and &lt;code&gt;id_token&lt;/code&gt; for use in the test later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uLXwkw5K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uLXwkw5K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-19.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 19 Copy the Access Token&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Il6zcOAh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Il6zcOAh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-20.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 20 Copy the id_token&lt;/p&gt;

&lt;h2&gt;
  
  
  Peek into the Tokens
&lt;/h2&gt;

&lt;p&gt;You can peek into the tokens to see the contents and attribute values. You can go to &lt;a href="https://jwt.io"&gt;https://jwt.io&lt;/a&gt; and simply paste the token into the &lt;strong&gt;Encoded&lt;/strong&gt; text box on the left.&lt;/p&gt;
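&lt;p&gt;If you would rather not paste a live token into a website, the payload can also be decoded locally with standard tools, since a JWT is just three base64url-encoded segments joined by dots. The token below is a hypothetical unsigned sample whose claims mirror the ones used in this test:&lt;/p&gt;

```shell
# Decode the payload (middle segment) of a JWT locally.
# This sample token is unsigned and made up for illustration.
TOKEN='eyJhbGciOiJub25lIn0.eyJhdWQiOiJwdWJzdWIrYXVkIiwic2NvcGUiOiJwdWJzdWIrIn0.'
payload=$(printf '%s' "$TOKEN" | cut -d '.' -f 2 | tr '_-' '/+')
# Restore the '=' padding that base64url encoding strips
case $(( ${#payload} % 4 )) in 2) payload="${payload}==" ;; 3) payload="${payload}=" ;; esac
printf '%s' "$payload" | base64 -d
echo
```

&lt;p&gt;This prints the claims of the sample token, including the &lt;code&gt;aud&lt;/code&gt; and &lt;code&gt;scope&lt;/code&gt; values we care about in this setup.&lt;/p&gt;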

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bSRtCj-b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bSRtCj-b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-21.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 21 Decode a JWT access token&lt;/p&gt;

&lt;p&gt;The highlighted &lt;strong&gt;aud&lt;/strong&gt; and &lt;strong&gt;scope&lt;/strong&gt; attributes are the ones we use in this test. As you can see, the &lt;strong&gt;aud&lt;/strong&gt; value of pubsub+aud is extracted from the token, as well as the scope of pubsub+.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xxaWRmWE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xxaWRmWE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/12/keycloak-post_image-22.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 22 Decode a JWT id_token&lt;/p&gt;

&lt;p&gt;As we can see, the &lt;code&gt;id_token&lt;/code&gt; will contain a &lt;code&gt;preferred_username&lt;/code&gt; attribute with the value “user”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Run the Test Program
&lt;/h2&gt;

&lt;p&gt;To test with the provided sample program, simply run &lt;code&gt;dotnet run&lt;/code&gt; with both tokens as arguments. For this sample, I used localhost as the Solace PubSub+ Event Broker address since we are running it locally on Docker. On a successful run, the program prints “Message received: Hello World!” to the console.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ari@Aris-MacBook-Pro solace-dotnet-mqtt % dotnet run [access_token] [id_token]
Message received: Hello World!
^C
ari@Aris-MacBook-Pro solace-dotnet-mqtt %
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I hope you find this blog post useful. For more information about the topic, please refer to the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://solace.com/products/event-broker/software/getting-started/"&gt;Getting Started with PubSub+ Standard Edition&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.solace.com/Configuring-and-Managing/Configuring-Client-Authentication.htm#OAuth"&gt;OAuth Authentication Configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.keycloak.org"&gt;Keycloak&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://solace.com/blog/how-to-set-up-solace-pubsub-event-broker-with-oauth-for-mqtt-against-keycloak/"&gt;How to set up Solace PubSub+ Event Broker with OAuth for MQTT against Keycloak&lt;/a&gt; appeared first on &lt;a href="https://solace.com"&gt;Solace&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>oauth</category>
      <category>mqtt</category>
      <category>solace</category>
    </item>
    <item>
      <title>Alternative to GO-JEK’s Kafka Ingestion Architecture</title>
      <dc:creator>arih1299</dc:creator>
      <pubDate>Fri, 27 Sep 2019 15:53:19 +0000</pubDate>
      <link>https://dev.to/solacedevs/alternative-to-go-jek-s-kafka-ingestion-architecture-3cn5</link>
      <guid>https://dev.to/solacedevs/alternative-to-go-jek-s-kafka-ingestion-architecture-3cn5</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Sok4mPMg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/09/go-jek-blog-featured-image.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Sok4mPMg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/09/go-jek-blog-featured-image.jpg" alt=""&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;GO-JEK is one of the big names in the Southeast Asian technology start-up scene. It started off with a ride-hailing service and continues to grow into many areas. With so many services and a huge user base, the engineering behind its services makes for an interesting case study. In this article, I will look at GO-JEK’s Kafka ingestion architecture and try to provide an alternative to it.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;GO-JEK’s Existing Architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A couple of GO-JEK engineers have written articles around GO-JEK’s engineering work. &lt;a href="https://blog.gojekengineering.com/kafka-4066a4ea8d0d"&gt;One of them&lt;/a&gt; explains how they do their event ingestion into a Kafka cluster. As the figure below shows, GO-JEK’s existing architecture starts with the Producer App and ends with the Mainstream Kafka cluster.&lt;br&gt;&lt;br&gt;
 &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iCX1FOG---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/09/go-jek-blog-post_image-1-1024x606.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iCX1FOG---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/09/go-jek-blog-post_image-1-1024x606.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;According to the blog post, GO-JEK has solved a few problems and made the following improvements in the architecture:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Producer application developers don’t need to learn a new API, namely the Kafka client API, since they can just use REST API calls.&lt;/li&gt;
&lt;li&gt;Messages are now acknowledged back to the producing applications.&lt;/li&gt;
&lt;li&gt;No buffering technique or storage is required for the producer applications when the Kafka cluster is not available.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why an Alternative Architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Looking at the existing architecture, I can’t help but think that there can be a simpler architecture with fewer components to solve the same set of problems. As an external party looking at only a blog post, I may have missed some details or made incorrect assumptions. But, the beauty of it is that this would be a fresh point of view and hopefully it will offer some value to the owner of this architecture.&lt;/p&gt;

&lt;p&gt;So why is an alternative architecture necessary? Because moving data shouldn’t require a couple of Kafka clusters, a set of Java applications, and a cache, with all the setup and overhead they bring. I would like to take a stab at providing a simpler architecture, one that hopefully has fewer moving parts, is built as infrastructure rather than as part of the business applications, and avoids additional complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Proposed Architecture Iterations&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;All we want to do is get the events from the producer applications over to the Kafka cluster. How hard could it be? Well, for starters, the producer applications now need to be able to “talk” to Kafka.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GepZMN8p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/09/go-jek-blog-post_image-2-1024x133.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GepZMN8p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/09/go-jek-blog-post_image-2-1024x133.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Proposed Architecture Iteration 1&lt;/p&gt;

&lt;p&gt;So, REST is what the applications use? That’s totally fine. Then we should have a thing between the applications and the Kafka cluster that speaks REST to the applications and talks to Kafka on the other end. And it must be a centralized, shared thing so new applications can get on board without adding anything new themselves. This new “REST to KAFKA Thing” must be able to give acknowledgements back to the producing applications as well as buffer the events when the Kafka cluster is not accessible.&lt;/p&gt;

&lt;p&gt;Back to the original blog post, there was &lt;a href="https://medium.com/@prakharmathur_345/hey-alexander-74b6700d72fe"&gt;a comment&lt;/a&gt; that suggested the writer use Kafka REST Proxy and Mirror Maker. One of the reasons the writer did not accept the suggestion is that it lacks the ability to buffer and retry the event publishing to Kafka. This is where the “Fronting Failover”, “Fronting Worker”, and Redis came into the picture. That looks like a lot of new moving parts added just to deal with the fact that the Kafka cluster can be unavailable at times.&lt;/p&gt;

&lt;p&gt;Are there any other ideas? Of course.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wLJsg2LO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/09/go-jek-blog-post_image-3-1024x356.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wLJsg2LO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/09/go-jek-blog-post_image-3-1024x356.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Proposed Architecture Iteration 2&lt;/p&gt;

&lt;p&gt;This time two components are added: a &lt;a href="https://solace.com/products/event-broker/"&gt;Solace PubSub+ Event Broker&lt;/a&gt; and a &lt;a href="https://docs.solace.com/Developer-Tools/Integration-Guides/Kafka-Connect.htm"&gt;Solace PubSub+ Connector for Kafka Source&lt;/a&gt;. The Solace PubSub+ Event Broker gives you the REST interface as well as guaranteed messaging with acknowledgement, so the producer applications don’t need to worry about losing their events. The Solace PubSub+ Connector for Kafka Source then acts as the source connector, using the Solace Messaging API on one side and the Kafka Connect API on the other to stream the events directly into the Kafka cluster.&lt;/p&gt;
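&lt;p&gt;To make the producer side concrete, here is a sketch of what publishing an event over the broker’s REST interface could look like, using the software broker’s default REST messaging port of 9000. The topic name and payload are hypothetical, invented for illustration:&lt;/p&gt;

```shell
# Publish one event to a Solace topic over plain REST.
# Topic name and payload are made up; adjust host/port to your broker.
BROKER='http://localhost:9000'
TOPIC='ride/booking/created'
curl -s -X POST "${BROKER}/${TOPIC}" \
  -H 'Content-Type: application/json' \
  -H 'Solace-Delivery-Mode: Persistent' \
  -d '{"bookingId":"b-123","status":"CREATED"}'
```

&lt;p&gt;Asking for persistent delivery is what gives the producer its acknowledgement: the broker responds once the event is safely spooled.&lt;/p&gt;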

&lt;p&gt;As an enterprise-grade event broker, PubSub+ provides high-availability and replication features without any dependency on other software or technology, which means fewer moving parts for the operations team to manage. The PubSub+ Event Broker will also queue up the events in case the connector is not able to pass them through into Kafka. This is where the guaranteed messaging feature comes into play.&lt;/p&gt;

&lt;p&gt;The question that may arise is how big a buffer the message broker can handle if the Kafka cluster is unavailable. No problem. PubSub+ event brokers &lt;a href="https://solace.com/blog/slow-consumer-handling-demo/"&gt;handle slow consumers&lt;/a&gt; in such a way that they do not impact the performance of the producers or fast consumers. For guaranteed messaging, publishers and fast consumers are identified and prioritized over slow or recovering consumers, even as message storage for the slow consumers continues to increase.&lt;/p&gt;

&lt;p&gt;The Solace PubSub+ Event Broker Appliance currently supports up to 6 TB of message spool for this buffering need, while the software allows around 800 GB of message spool. It’s basically a matter of sizing and deployment choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Solace PubSub+ fits the bill as an infrastructure component that does not really need a lot of attention. It is more like your network equipment: set it up once and leave it untouched most of the time. Granted, the Solace PubSub+ Connector for Kafka is still a piece of software, but it remains the better option for integrating with Kafka through the Kafka Connect API. If Kafka, or whatever your event store is, ever natively supports open standards such as JMS and MQTT, there will be no need for such additional connectors.&lt;/p&gt;

&lt;p&gt;So, use REST, checked; message acknowledgement, checked; buffering, checked!&lt;/p&gt;

&lt;p&gt;Visit the Solace website to get more information about the &lt;a href="https://solace.com/products/platform/"&gt;Solace PubSub+ Platform&lt;/a&gt; and the &lt;a href="https://solace.com/with-kafka/"&gt;Solace PubSub+ Connector for Kafka&lt;/a&gt;. Share your developer experience and ask questions at &lt;a href="https://solace.community"&gt;Solace Community&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://solace.com/blog/go-jeks-kafka-ingestion-architecture/"&gt;Alternative to GO-JEK’s Kafka Ingestion Architecture&lt;/a&gt; appeared first on &lt;a href="https://solace.com"&gt;Solace&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>solace</category>
      <category>kafka</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
