<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Josh Wood</title>
    <description>The latest articles on DEV Community by Josh Wood (@jmlw).</description>
    <link>https://dev.to/jmlw</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F71848%2F1fb3fbd3-22f1-4d48-ac4f-cc8c55171f1c.png</url>
      <title>DEV Community: Josh Wood</title>
      <link>https://dev.to/jmlw</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jmlw"/>
    <language>en</language>
    <item>
      <title>Load Balanced Websockets with Spring Cloud Gateway</title>
      <dc:creator>Josh Wood</dc:creator>
      <pubDate>Mon, 26 Aug 2019 15:45:00 +0000</pubDate>
      <link>https://dev.to/jmlw/load-balanced-websockets-with-spring-cloud-gateway-3ke5</link>
      <guid>https://dev.to/jmlw/load-balanced-websockets-with-spring-cloud-gateway-3ke5</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1451340124423-6311db67a5d9%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1451340124423-6311db67a5d9%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" alt="Load Balanced Websockets with Spring Cloud Gateway"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The ability to have real-time two-way communication between the client and the server is a key feature in most modern web apps.&lt;/p&gt;

&lt;p&gt;A simple approach to setting up WebSockets in Spring Boot is covered in &lt;a href="https://blog.joshmlwood.com/websockets-with-spring-boot/" rel="noopener noreferrer"&gt;Simple WebSockets with Spring Boot&lt;/a&gt;, which uses an in-memory message broker. This approach falls short, though, when you scale up and add additional servers: users connected to different servers would have no way of communicating or getting updates pushed to them for something that happened on another server. Let's explore how to properly scale up WebSockets in our sample app, so that clients can communicate with each other regardless of which server they are connected to, and can subscribe to updates that originate on a server other than their own.&lt;/p&gt;

&lt;h2&gt;Project Setup&lt;/h2&gt;

&lt;p&gt;We need a couple of applications to complete our setup. These include an &lt;em&gt;API Gateway&lt;/em&gt;, a &lt;em&gt;WebSocket Server&lt;/em&gt;, and a &lt;em&gt;Eureka&lt;/em&gt; discovery server so all of our services can find each other.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: This project depends on a Eureka server for service discovery. It's possible to modify it to use just DNS names but doing so would require mapping all instances of the WebSocket server to the same DNS name for our API Gateway to properly route requests.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We will need to create multiple applications, so first, create a directory to contain everything related to this post and call it &lt;code&gt;spring-cloud-gateway-websocket&lt;/code&gt;. Once that directory is created, &lt;code&gt;cd&lt;/code&gt; into it, and run the following commands to generate a sample project.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the API Gateway
curl https://start.spring.io/starter.zip \
    -d dependencies=actuator,cloud-eureka,cloud-gateway \
    -d name=gateway \
    -d artifactId=gateway \
    -d baseDir=gateway | tar -xzvf -

# Create the Eureka Server
curl https://start.spring.io/starter.zip \
    -d dependencies=actuator,cloud-eureka-server \
    -d name=eureka \
    -d artifactId=eureka \
    -d baseDir=eureka | tar -xzvf -

# Create the WebSocket Server
curl https://start.spring.io/starter.zip \
    -d dependencies=websocket,webflux,web,actuator,cloud-eureka \
    -d name=websocket-server \
    -d artifactId=websocket-server \
    -d baseDir=websocket-server | tar -xzvf -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands will automatically generate projects from &lt;a href="https://start.spring.io" rel="noopener noreferrer"&gt;Spring Initializr&lt;/a&gt;. We are adding &lt;em&gt;Actuator&lt;/em&gt; to all the projects to ensure we can easily test if they are running and healthy, and Eureka will utilize Actuator's health monitoring to check the state of each application instance. By default this monitoring just reports 'UP' as long as the application is running; it is good enough in most circumstances and can be extended to have more fine-grained control over the current application status if needed.&lt;/p&gt;

&lt;p&gt;For the 'WebSocket Server' project, we are also adding &lt;code&gt;spring-boot-starter-webflux&lt;/code&gt; as a dependency. We require this dependency because the WebSocket Message Broker Relay relies on Reactor Netty under the covers to perform reactive, asynchronous operations for communication with the message broker. Another option is to include &lt;code&gt;artifactId=spring-boot-starter-reactor-netty&lt;/code&gt; with &lt;code&gt;groupId=org.springframework.boot&lt;/code&gt; instead, as I have in the demo repository. It is a smaller dependency, but at this time &lt;code&gt;reactor-netty&lt;/code&gt; appears not to be available on Spring Initializr, so we can depend on it through &lt;code&gt;webflux&lt;/code&gt;.&lt;/p&gt;
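&lt;p&gt;For reference, if you choose the smaller dependency instead, the fragment to add to the generated &lt;code&gt;pom.xml&lt;/code&gt; by hand would look roughly like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;spring-boot-starter-reactor-netty&amp;lt;/artifactId&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;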

&lt;h2&gt;WebSocket Server&lt;/h2&gt;

&lt;p&gt;Now that the base projects are generated, we have to do some configuration in the &lt;em&gt;WebSocket Server&lt;/em&gt; application.&lt;/p&gt;

&lt;h3&gt;Application Configuration&lt;/h3&gt;

&lt;p&gt;The default generated project will contain an &lt;code&gt;application.properties&lt;/code&gt; in the resources directory. We can rename this to &lt;code&gt;application.yml&lt;/code&gt; as it will be slightly less verbose to work with than the properties style of configuration.&lt;/p&gt;

&lt;p&gt;For the app to function as intended in the demo, we need a few configuration keys supplied in our &lt;code&gt;application.yml&lt;/code&gt;, like the following snippet.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  application:
    name: websocket-server
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8761/eureka}
    healthcheck:
      enabled: true
  instance:
    prefer-ip-address: true
broker:
  relay:
    host: ${BROKER_HOST:localhost}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are using Spring's property placeholder syntax here, where we wrap a configuration value in &lt;code&gt;${}&lt;/code&gt; to pull it from an environment variable. This tells Spring to look for an environment variable or system property (or potentially another property source) with the name inside the curly braces and use it as the value of the key in the &lt;code&gt;application.yml&lt;/code&gt;. A default can be supplied for when no value is found in the environment, which is why we have a &lt;code&gt;:&lt;/code&gt; in the configuration value: anything to the right of the colon is used as the default when no environment variable matching the name on the left is found. This pattern gives us sensible defaults for running an application on localhost while still letting us supply production configuration values through environment variables.&lt;/p&gt;

&lt;p&gt;In the configuration above, we have an application name defined. This is important since the Eureka integration will use the Spring application name by default to identify instance groups of the application (i.e. the server application, instances 1 and 2, vs the gateway application), and it will be important in allowing our gateway to route requests later in this post. Additionally, we have a configuration value for the Eureka service URL's default zone. This value tells the discovery client where to connect when looking for the service discovery server. In this case, if we want to run the whole application locally, we can have a copy of our Eureka server running with its default configuration (port 8761), and start up our WebSocket Server, which will try to connect to Eureka at &lt;code&gt;http://localhost:8761/eureka&lt;/code&gt;. Having this and the broker relay host defined as environment variables will be helpful when we configure everything to run with docker-compose.&lt;/p&gt;

&lt;p&gt;Since we are using Eureka to perform service discovery in our application cluster, we need to enable it for our applications; otherwise, the auto-configuration provided by Spring Boot will not run.&lt;/p&gt;

&lt;p&gt;It is sufficient to annotate the main application class or any configuration class with &lt;code&gt;@EnableDiscoveryClient&lt;/code&gt;. Once you do so, Spring will automatically instantiate all configurations related to the Service Discovery Client that are required for this demo.&lt;/p&gt;
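&lt;p&gt;As a minimal sketch (assuming the generated main class is named &lt;code&gt;WebsocketServerApplication&lt;/code&gt;), the annotated class might look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@SpringBootApplication
@EnableDiscoveryClient
public class WebsocketServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(WebsocketServerApplication.class, args);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;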

&lt;h3&gt;Create the Spring WebSocket Configuration&lt;/h3&gt;

&lt;p&gt;We need to tell Spring how to forward messages and where our WebSocket endpoint should live. Create a configuration class to enable &lt;em&gt;broker-backed messaging&lt;/em&gt; and to configure WebSockets.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Value("${broker.relay.host}")
    private String brokerRelayHost;

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableStompBrokerRelay("/queue", "/topic")
            .setRelayHost(brokerRelayHost);
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/websocket")
            .setAllowedOrigins("*");
        registry.addEndpoint("/sockjs")
            .setAllowedOrigins("*")
            .withSockJS();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration class has a couple of important bits that are worth paying attention to. First, the class carries the &lt;code&gt;@Configuration&lt;/code&gt; stereotype, identifying it to Spring as a &lt;em&gt;Component&lt;/em&gt; of the Configuration type. Second, it has the &lt;code&gt;@EnableWebSocketMessageBroker&lt;/code&gt; annotation. This annotation configures Spring to "enable broker-backed messaging over WebSocket using a higher-level messaging sub-protocol," as noted in the annotation's javadoc. In simpler terms, this allows Spring to talk to a message broker via a protocol like STOMP (Simple Text Oriented Messaging Protocol) or AMQP (Advanced Message Queuing Protocol) rather than the raw WebSocket protocol, which enables more features of the message broker than the low-level protocol would.&lt;/p&gt;

&lt;p&gt;The configuration class also implements &lt;code&gt;WebSocketMessageBrokerConfigurer&lt;/code&gt;, an interface provided by Spring for us to modify the configuration provided by the &lt;code&gt;@EnableWebSocketMessageBroker&lt;/code&gt; annotation. In the example above, we've done two things: first, we've configured a message broker via the &lt;code&gt;MessageBrokerRegistry&lt;/code&gt;, and second, we've configured a STOMP endpoint via the &lt;code&gt;StompEndpointRegistry&lt;/code&gt;. For the &lt;code&gt;MessageBrokerRegistry&lt;/code&gt;, we've told Spring to relay all messages received over the WebSocket protocol to our &lt;code&gt;brokerRelayHost&lt;/code&gt; for any destinations (endpoints, in REST or MVC terms) prefixed with &lt;code&gt;/topic&lt;/code&gt; or &lt;code&gt;/queue&lt;/code&gt;. These two prefixes are chosen because they are supported by the STOMP message broker relay (in this case I will use RabbitMQ as the message broker), and all messages destined for those prefixes will be forwarded over the message broker to (potentially) be re-broadcast to all other instances of our server app. As the Javadoc for the &lt;code&gt;enableStompBrokerRelay&lt;/code&gt; method notes:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Enable a STOMP broker relay and configure the destination prefixes supported by the message broker. Check the STOMP documentation of the message broker for supported destinations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Also, depending on the message broker you choose, you might wish to configure a different path separator. Some message brokers require &lt;code&gt;.&lt;/code&gt; as a path separator to fully utilize their path-matching abilities. To use a period as a path separator, configure the registry with a new Ant matcher for the path like so: &lt;code&gt;registry.setPathMatcher(new AntPathMatcher("."))&lt;/code&gt;. A good example of this is &lt;a href="https://www.rabbitmq.com/tutorials/tutorial-five-java.html" rel="noopener noreferrer"&gt;RabbitMQ's Topic Exchange tutorial&lt;/a&gt;, which gives a good overview of using wildcards or path substitutions for a fan-out pattern.&lt;/p&gt;
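&lt;p&gt;As a sketch, switching to the dot separator would slot into the same &lt;code&gt;configureMessageBroker&lt;/code&gt; method (the destination names in the comment are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Override
public void configureMessageBroker(MessageBrokerRegistry registry) {
    // use '.' instead of '/' as the separator inside destination names,
    // e.g. /topic/price.stock rather than /topic/price/stock
    registry.setPathMatcher(new AntPathMatcher("."));
    registry.enableStompBrokerRelay("/queue", "/topic")
        .setRelayHost(brokerRelayHost);
    registry.setApplicationDestinationPrefixes("/app");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;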

&lt;p&gt;What about the case where we want to use the same RabbitMQ instance as the message broker for multiple applications? The config above can remain mostly the same, but Spring's integration provides a method to configure application-specific prefixes, &lt;code&gt;setApplicationDestinationPrefixes&lt;/code&gt;. This method configures the broker integration to filter messages destined for annotated methods. So, as an example, a message destined for &lt;code&gt;/app/my/message/endpoint&lt;/code&gt; would target a method annotated with &lt;code&gt;@MessageMapping("/my/message/endpoint")&lt;/code&gt;. Spring will automatically strip the prefix defined in the configuration, so the application itself does not need to know about any prefixes used for routing. On the other hand, any message destined for &lt;code&gt;/topic/some/notification/topic&lt;/code&gt; or &lt;code&gt;/queue/some/work/queue&lt;/code&gt; will be directed to the message broker.&lt;/p&gt;

&lt;p&gt;We still need to tell Spring where our WebSocket should live—rather, at which endpoint it should be available. To do so, we use the &lt;code&gt;StompEndpointRegistry&lt;/code&gt; to register any endpoints we want to expose and the configurations for those endpoints. For this demo, allowed origins are set to &lt;code&gt;*&lt;/code&gt; to allow any origin to connect (in case you don't host the client HTML page from the same server), and also to configure the SockJS fallback option. The SockJS fallback allows the application to utilize plain HTTP for WebSocket-like communication as an alternative when the WebSocket protocol is not available or cannot be established between client and server.&lt;/p&gt;

&lt;p&gt;This wraps up all the configuration required and mirrors the &lt;a href="https://blog.joshmlwood.com/websockets-with-spring-boot/" rel="noopener noreferrer"&gt;previous WebSocket post&lt;/a&gt; very closely aside from configuring a 'relay' rather than a 'Simple WebSocket Broker.'&lt;/p&gt;

&lt;h3&gt;Make the Message Payload&lt;/h3&gt;

&lt;p&gt;Like in the &lt;a href="https://blog.joshmlwood.com/websockets-with-spring-boot/" rel="noopener noreferrer"&gt;previous post&lt;/a&gt;, we need a class to represent the data passed back and forth between the client and the server. This class just needs to be a plain old Java object, with the caveat that it needs a default constructor for Jackson to properly deserialize JSON into the object.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Message {
    private String message;

    public Message() {
        // Required for Jackson
    }

    public Message(String message) {
        this.message = message;
    }

    public String getMessage() {
        return message;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Create the Message Controller&lt;/h3&gt;

&lt;p&gt;Next up, we need a controller to handle the "web" facing part of the application and to map incoming WebSocket messages onto a method. The message controller is more or less the same as a "rest" controller or MVC controller, but rather than defining HTTP verb mappings, we specify "MessageMappings."&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Controller
public class WebsocketController {
    private static final Logger LOGGER = LoggerFactory.getLogger(WebsocketController.class);

    @Value("${server.port}")
    private String port;

    @MessageMapping("/incoming")
    @SendTo("/topic/outgoing")
    public String incoming(Message message) {
        LOGGER.info("received message: {}", message);
        return String.format("Application on port %s responded to your message: \"%s\"", port, message.getMessage());
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The controller has two special mappings if you're familiar with HTTP-based mappings. First, it has a &lt;code&gt;@MessageMapping&lt;/code&gt;, which instructs Spring to accept any messages destined for &lt;code&gt;/incoming&lt;/code&gt; (with the prefix configured previously!) and to use the message body as the input to the annotated method. The other special annotation is &lt;code&gt;@SendTo&lt;/code&gt;. This mapping redirects the output of our method to a specific destination. In this case, the destination is &lt;code&gt;/topic/outgoing&lt;/code&gt;, which will be directed to the message broker since it has a prefix that was configured for the relay above. Without the &lt;code&gt;@SendTo&lt;/code&gt; annotation, the return value of our mapped method would automatically be directed back to the channel the message was received on, which isn't always desirable depending on the use case. This could be further customized with the &lt;code&gt;@SendToUser&lt;/code&gt; annotation, which extracts a user's username from the headers of the input message.&lt;/p&gt;
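&lt;p&gt;For illustration, a user-targeted variant of the handler might look like the following sketch (the &lt;code&gt;/private&lt;/code&gt; and &lt;code&gt;/queue/replies&lt;/code&gt; destinations are hypothetical names, not part of the demo):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@MessageMapping("/private")
@SendToUser("/queue/replies")
public String privateReply(Message message) {
    // delivered only to the session(s) of the user who sent the message
    return String.format("Private reply to your message: \"%s\"", message.getMessage());
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;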

&lt;p&gt;For demonstration, we want to add one more method to our controller. We want to make sure that we can generate a message on multiple instances of our WebSocket Server (hence the "load balanced" part). To ensure that we can receive messages regardless of the server instance that generated them, let's create a timed method in our controller that will automatically publish messages to the same topic that our incoming message handler publishes to.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Scheduled(fixedRate = 15000L)
    public void timed() {
        try {
            // simulate randomness in our timed responses to the client
            Thread.sleep(RANDOM.nextInt(10) * 1000);
            LOGGER.info("sending timed message");
            simpMessagingTemplate.convertAndSend(
                "/topic/outgoing",
                String.format("Application on port %s pushed a message!", port)
            );
        } catch (InterruptedException exception) {
            LOGGER.error(String.format("Thread sleep interrupted. Nested exception %s", exception.getMessage()));
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, add the private fields and a new constructor to support the &lt;code&gt;simpMessagingTemplate&lt;/code&gt; and the &lt;code&gt;RANDOM&lt;/code&gt; instance used in the timed method.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...

    private final SimpMessagingTemplate simpMessagingTemplate;

    ...

    @Autowired
    public WebsocketController(SimpMessagingTemplate simpMessagingTemplate) {
        this.simpMessagingTemplate = simpMessagingTemplate;
    }

    ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since this method is annotated with &lt;code&gt;@Scheduled&lt;/code&gt;, do not forget to annotate a configuration class or the main application class with &lt;code&gt;@EnableScheduling&lt;/code&gt;, otherwise the scheduled method will not fire.&lt;/p&gt;
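&lt;p&gt;For example, the main application class could carry the extra annotation (a sketch, assuming the generated class name &lt;code&gt;WebsocketServerApplication&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@SpringBootApplication
@EnableDiscoveryClient
@EnableScheduling // required for @Scheduled methods like timed() to fire
public class WebsocketServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(WebsocketServerApplication.class, args);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;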

&lt;p&gt;This finishes off the configuration needed for the WebSocket Server to ensure that we can send and receive WebSocket messages!&lt;/p&gt;

&lt;h3&gt;Dockerfile&lt;/h3&gt;

&lt;p&gt;We are utilizing a multi-stage build so we only require Docker installed on our machine to build and test the application.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM maven:3-jdk-8-alpine AS build
WORKDIR /opt/src
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src src
RUN mvn package -Dmaven.test.skip=true spring-boot:repackage

FROM openjdk:8-jre-alpine
COPY --from=build /opt/src/target/websocket-server-0.0.1-SNAPSHOT.jar /opt/app.jar
ENTRYPOINT ["java","-jar","/opt/app.jar"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The idea here is to cache as much as possible so that in the event of source code changes (&lt;code&gt;COPY src src&lt;/code&gt; happens after &lt;code&gt;mvn dependency:go-offline&lt;/code&gt;), Docker can cache all the dependencies from Maven so we don't need to download them multiple times.&lt;/p&gt;

&lt;p&gt;Then, with the source files in place, we run Maven to package the application and &lt;code&gt;spring-boot:repackage&lt;/code&gt; it into an executable fat jar to run later.&lt;/p&gt;
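&lt;p&gt;The &lt;code&gt;repackage&lt;/code&gt; goal is provided by the &lt;code&gt;spring-boot-maven-plugin&lt;/code&gt;, which Spring Initializr adds to the generated &lt;code&gt;pom.xml&lt;/code&gt;; the relevant fragment looks roughly like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;build&amp;gt;
    &amp;lt;plugins&amp;gt;
        &amp;lt;plugin&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-maven-plugin&amp;lt;/artifactId&amp;gt;
        &amp;lt;/plugin&amp;gt;
    &amp;lt;/plugins&amp;gt;
&amp;lt;/build&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;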

&lt;p&gt;The second &lt;code&gt;FROM&lt;/code&gt; begins the second stage of the Dockerfile: a new image containing only the JRE base, the jar file copied from the build stage, and an &lt;code&gt;ENTRYPOINT&lt;/code&gt; defining the command to run when the container starts.&lt;/p&gt;

&lt;h2&gt;Gateway&lt;/h2&gt;

&lt;p&gt;Next, the gateway needs to be configured to route requests to the appropriate server instances. Without this, the application will not have reachable endpoints when it is run in clustered mode.&lt;/p&gt;

&lt;h3&gt;Application Configuration&lt;/h3&gt;

&lt;p&gt;The default generated project will contain an &lt;code&gt;application.properties&lt;/code&gt; in the resources directory. We can rename this to &lt;code&gt;application.yml&lt;/code&gt; as it will be slightly less verbose to work with than the properties style of configuration.&lt;/p&gt;

&lt;p&gt;For the app to function as intended in the demo, we need a few configuration keys supplied in our &lt;code&gt;application.yml&lt;/code&gt;.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  application:
    name: gateway
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8761/eureka}
    healthcheck:
      enabled: true
  instance:
    prefer-ip-address: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we are configuring Eureka the same as in the WebSocket Server application above. The only differences are that we do not have to define a value for the "broker host" and we've given this application a different name: &lt;code&gt;gateway&lt;/code&gt;. The name is not important at this point since we are not relying on Eureka to route requests to the gateway application; we can simply reach it at &lt;code&gt;http://localhost:8080&lt;/code&gt;, the default interface and port Spring will listen on.&lt;/p&gt;

&lt;p&gt;We are relying on Eureka's discovery client to help the gateway know where to find the service(s) it will be routing requests to, so we need to annotate the main application class or any configuration class with &lt;code&gt;@EnableDiscoveryClient&lt;/code&gt;. The Discovery Client tracks the IP and the port of every application in our cluster, which is how the gateway knows that &lt;code&gt;http://websocket-server&lt;/code&gt; is actually &lt;code&gt;http://docker-ip-address:application-port&lt;/code&gt;. We need to configure the gateway to fetch this information from Eureka since the route configuration we are using depends on the mapping for client-side load balancing and forwarding traffic to the correct IP within the Docker network.&lt;/p&gt;

&lt;h3&gt;Configure Routes in the Gateway&lt;/h3&gt;

&lt;p&gt;Setting up the gateway to route traffic to our WebSocket Server instances is pretty simple. Spring Cloud Gateway provides an object to create the route mapping, &lt;code&gt;RouteLocatorBuilder&lt;/code&gt;, which we will use to customize all back-end routing in our application.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Configuration
public class RouteLocatorConfiguration {
    @Bean
    public RouteLocator myRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
            .route(predicateSpec -&amp;gt; predicateSpec
                .path("/**")
                .uri("lb://websocket-server")
            )
            .build();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;RouteLocatorBuilder&lt;/code&gt; allows us to define a lambda function that determines how incoming requests are matched to routes. In this demo, we use just the "path" predicate, which takes an Ant-style path pattern to decide whether an incoming request matches; anything prefixed with &lt;code&gt;/&lt;/code&gt; will match and is then routed to the URI provided with the &lt;code&gt;PredicateSpec&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the URI defined above, we don't give Gateway the IP address or the port of the application for routing. As noted earlier, this is so that Gateway can use the Service Discovery Client to infer which application we are trying to route to using the first part of the URI path (the hostname of the service). Each Service Discovery Client instance connects to the Eureka server to report which port and IP address it is running on. This is very helpful when we don't know what IP address our service will be assigned, such as when deploying to a container orchestration platform (like Amazon Elastic Container Service or Kubernetes) or when running our app locally as a cluster with Docker Compose.&lt;/p&gt;

&lt;p&gt;The URI also contains a non-standard scheme, &lt;code&gt;lb&lt;/code&gt;. The &lt;code&gt;lb&lt;/code&gt; prefix instructs the route locator to look up the real address(es) for a given service by its Service Discovery name (the &lt;code&gt;spring.application.name&lt;/code&gt; configured in the &lt;code&gt;application.yml&lt;/code&gt;). This prefix is also an instruction to perform client-side load balancing, so for all known IP addresses of the service &lt;code&gt;websocket-server&lt;/code&gt;, the Discovery Client returns them according to a load-balancing algorithm such as &lt;em&gt;round-robin&lt;/em&gt;.&lt;/p&gt;
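&lt;p&gt;If you prefer declarative configuration, the same route can also be sketched in the gateway's &lt;code&gt;application.yml&lt;/code&gt; instead of Java config (equivalent behavior, assuming an otherwise default setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  cloud:
    gateway:
      routes:
        - id: websocket-server
          uri: lb://websocket-server
          predicates:
            - Path=/**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;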

&lt;p&gt;At this point, our application is all configured and ready to run except for a client to connect to it and a message broker to handle the incoming WebSocket connections and message routing.&lt;/p&gt;

&lt;h3&gt;Dockerfile&lt;/h3&gt;

&lt;p&gt;Just like the dockerfile for the WebSocket Server, we are utilizing a multi-stage build so we only require Docker installed on our machine to build and test the application.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM maven:3-jdk-8-alpine AS build
WORKDIR /opt/src
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src src
RUN mvn package -Dmaven.test.skip=true spring-boot:repackage

FROM openjdk:8-jre-alpine
COPY --from=build /opt/src/target/gateway-0.0.1-SNAPSHOT.jar /opt/app.jar
ENTRYPOINT ["java","-jar","/opt/app.jar"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Eureka&lt;/h2&gt;

&lt;p&gt;The base project generated for Eureka is sufficient to get us up and running, but the plan for later is to run with docker-compose, so we need to create a dockerfile for this project as well.&lt;/p&gt;
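&lt;p&gt;&lt;em&gt;Note: depending on the starter version, the generated main class may still need to be annotated with &lt;code&gt;@EnableEurekaServer&lt;/code&gt; to activate the server endpoints. A sketch, assuming the generated class is named &lt;code&gt;EurekaApplication&lt;/code&gt;:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@SpringBootApplication
@EnableEurekaServer
public class EurekaApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaApplication.class, args);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;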

&lt;h3&gt;Dockerfile&lt;/h3&gt;

&lt;p&gt;The dockerfile for the Eureka server is exactly the same as the other dockerfiles, aside from the &lt;code&gt;COPY&lt;/code&gt; command which specifies a different jar to copy into the final image.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM maven:3-jdk-8-alpine AS build
WORKDIR /opt/src
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src src
RUN mvn package -Dmaven.test.skip=true spring-boot:repackage

FROM openjdk:8-jre-alpine
COPY --from=build /opt/src/target/eureka-0.0.1-SNAPSHOT.jar /opt/app.jar
ENTRYPOINT ["java","-jar","/opt/app.jar"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Client&lt;/h2&gt;

&lt;p&gt;We have all the code we need set up for our API to work, but we still need a UI to connect up to the app to test it. This could be anything from a mobile app or a desktop application to another server application or a simple HTML page. For demonstration, an HTML page is the easiest route since you likely already have a web browser (otherwise how are you browsing this post? [...seriously, I'd like to know...]). It's very easy to get a basic page up and running with a few utilities pulled from a CDN in script tags.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This section is nearly identical to the &lt;strong&gt;Create a Client&lt;/strong&gt; section in my &lt;a href="https://blog.joshmlwood.com/websockets-with-spring-boot/" rel="noopener noreferrer"&gt;Simple WebSockets with Spring Boot&lt;/a&gt; post.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;Create the WebSocket Client&lt;/h3&gt;

&lt;p&gt;To keep things simple, we can just create a basic static HTML page with jQuery to provide some user interaction.&lt;/p&gt;

&lt;p&gt;To start, create an HTML page. In the header, add jQuery and StompJS from CDN. You can also add SockJS if you'd like to experiment with backward compatibility, but it isn't a requirement.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js"&amp;gt;&amp;lt;/script&amp;gt;
    &amp;lt;script src="https://cdn.jsdelivr.net/npm/@stomp/stompjs@5.0.0/bundles/stomp.umd.js"&amp;gt;&amp;lt;/script&amp;gt;
    &amp;lt;script src="https://cdn.jsdelivr.net/npm/sockjs-client@1/dist/sockjs.min.js"&amp;gt;&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To style things a little more nicely, I've also added Bootstrap, but that is just to make it look a little prettier than basic HTML layout and is not required.&lt;/p&gt;

&lt;p&gt;Now that we have the required libraries included, we can write some HTML for controls to connect and disconnect, a form to send messages, and a table to hold the responses from our server.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;div class="container" id="main-content"&amp;gt;
  &amp;lt;div class="row"&amp;gt;
    &amp;lt;div class="col-md-6"&amp;gt;
      &amp;lt;form class="form-inline"&amp;gt;
        &amp;lt;div class="form-group"&amp;gt;
          &amp;lt;label for="connect"&amp;gt;WebSocket connection:&amp;lt;/label&amp;gt;
          &amp;lt;button class="btn btn-default" id="connect" type="submit"&amp;gt;Connect&amp;lt;/button&amp;gt;
          &amp;lt;button class="btn btn-default" disabled="disabled" id="disconnect" type="submit"&amp;gt;Disconnect
          &amp;lt;/button&amp;gt;
        &amp;lt;/div&amp;gt;
      &amp;lt;/form&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;div class="col-md-6"&amp;gt;
      &amp;lt;form class="form-inline"&amp;gt;
        &amp;lt;div class="form-group"&amp;gt;
          &amp;lt;label for="message"&amp;gt;Message:&amp;lt;/label&amp;gt;
          &amp;lt;input class="form-control" id="message" placeholder="Your message here..." type="text"&amp;gt;
        &amp;lt;/div&amp;gt;
        &amp;lt;button class="btn btn-default" id="send" type="submit"&amp;gt;Send&amp;lt;/button&amp;gt;
      &amp;lt;/form&amp;gt;
    &amp;lt;/div&amp;gt;
  &amp;lt;/div&amp;gt;
  &amp;lt;div class="row"&amp;gt;
    &amp;lt;div class="col-md-12"&amp;gt;
      &amp;lt;table class="table table-striped" id="responses"&amp;gt;
        &amp;lt;thead&amp;gt;
        &amp;lt;tr&amp;gt;
          &amp;lt;th&amp;gt;Messages&amp;lt;/th&amp;gt;
        &amp;lt;/tr&amp;gt;
        &amp;lt;/thead&amp;gt;
        &amp;lt;tbody id="messages"&amp;gt;
        &amp;lt;/tbody&amp;gt;
      &amp;lt;/table&amp;gt;
    &amp;lt;/div&amp;gt;
  &amp;lt;/div&amp;gt;
&amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have the skeleton of the HTML out of the way, we can write some functions to handle connecting, disconnecting, sending messages, and receiving messages.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var stompClient = null;

    function connect() {
      stompClient = new window.StompJs.Client({
        webSocketFactory: function () {
          return new WebSocket("ws://localhost:8080/websocket");
        }
      });
      stompClient.onConnect = function (frame) {
        frameHandler(frame)
      };
      stompClient.onWebsocketClose = function () {
        onSocketClose();
      };

      stompClient.activate();
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the connection function, we create a global variable to hold our STOMP client and use the StompJS library to create a new client instance. The client's configuration object supplies an anonymous function as the WebSocket factory so that we can use the browser's built-in &lt;code&gt;WebSocket&lt;/code&gt; object and connect to the correct URL. If we wanted to use &lt;code&gt;SockJS&lt;/code&gt; instead of the browser's built-in WebSocket implementation, we could simply replace the return value of that anonymous function with &lt;code&gt;new window.SockJS("http://localhost:8080/sockjs");&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: we use the &lt;code&gt;window&lt;/code&gt; keyword since we've registered the SockJS library as a global in the browser window. In a modern web app with Angular, React, Vue, etc., you would probably use an import local to the component using it, and it would then be accessible with just the &lt;code&gt;new&lt;/code&gt; keyword, like &lt;code&gt;new SockJS(...)&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
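&lt;p&gt;&lt;em&gt;If you want to try the SockJS route, here is a sketch of what an alternate connect function could look like. The &lt;code&gt;/sockjs&lt;/code&gt; endpoint path and the &lt;code&gt;connectSockJs&lt;/code&gt; name are assumptions for illustration; the path must match whatever endpoint your server registers.&lt;/em&gt;&lt;/p&gt;

```javascript
// Sketch only: assumes the server registers a SockJS endpoint at /sockjs
// and that the SockJS script tag has put window.SockJS in scope.
var stompClient = null; // shared with the plain-WebSocket connect() function

function connectSockJs() {
  stompClient = new window.StompJs.Client({
    webSocketFactory: function () {
      // SockJS negotiates its transport over HTTP, so the URL is http://, not ws://
      return new window.SockJS("http://localhost:8080/sockjs");
    }
  });
  stompClient.onConnect = function (frame) {
    frameHandler(frame);
  };
  stompClient.onWebSocketClose = function () {
    onSocketClose();
  };
  stompClient.activate();
}
```

&lt;p&gt;This mirrors the plain &lt;code&gt;connect()&lt;/code&gt; function; only the factory changes.&lt;/p&gt;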

&lt;p&gt;We have also assigned functions to the &lt;code&gt;onConnect&lt;/code&gt; and &lt;code&gt;onWebSocketClose&lt;/code&gt; hooks.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function onSocketClose() {
    if (stompClient !== null) {
        stompClient.deactivate();
    }
    setConnected(false);
    console.log("Socket was closed. Setting connected to false!")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;onSocketClose&lt;/code&gt; function ensures that when we lose or close the socket connection, the UI updates to enable or disable the appropriate controls. Here we also see the &lt;code&gt;setConnected&lt;/code&gt; function, which is responsible for the display changes when our socket connects or disconnects:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function setConnected(connected) {
    $("#connect").prop("disabled", connected);
    $("#connectSockJS").prop("disabled", connected);
    $("#disconnect").prop("disabled", !connected);
    if (connected) {
        $("#responses").show();
    } else {
        $("#responses").hide();
    }
    $("#messages").html("");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we need to write a method to handle the messages that are sent from the server.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function frameHandler(frame) {
    setConnected(true);
    console.log('Connected: ' + frame);
    stompClient.subscribe('/topic/outgoing', function (message) {
        showMessage(message.body);
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;frameHandler&lt;/code&gt; function takes in an object called &lt;code&gt;frame&lt;/code&gt;. Each frame may represent a different state of the WebSocket or a message pushed from the server; Mozilla has great documentation on WebSockets that is worth a glance. What matters to us is that when this handler receives a frame, the socket is connected, so we subscribe to a &lt;em&gt;topic&lt;/em&gt; from our server. This topic is where the server writes messages destined for the client. We also pass a callback that handles each message sent from the server. The message body here is just a string (since we're using STOMP as our protocol over WebSocket). The implementation below will prepend the newest message to the top of the messages table we created earlier.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function showMessage(message) {
      $("#responses").prepend("&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;" + message + "&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;");
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
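&lt;p&gt;As an aside, the STOMP frames that StompJS shuttles back and forth are plain text: a command line, header lines, a blank line, the body, and a trailing NUL character. The little parser below is purely illustrative (StompJS already does this for you), but it helps demystify what a MESSAGE frame from &lt;code&gt;/topic/outgoing&lt;/code&gt; looks like on the wire:&lt;/p&gt;

```javascript
// Illustrative only: StompJS parses frames like this for you.
// A STOMP frame is: COMMAND line, header:value lines, a blank line, body, NUL.
function parseStompFrame(raw) {
  var text = raw.replace(/\u0000$/, ""); // drop the trailing NUL terminator
  var headerEnd = text.indexOf("\n\n");
  var lines = text.slice(0, headerEnd).split("\n");
  var headers = {};
  lines.slice(1).forEach(function (line) {
    var sep = line.indexOf(":");
    headers[line.slice(0, sep)] = line.slice(sep + 1);
  });
  return { command: lines[0], headers: headers, body: text.slice(headerEnd + 2) };
}

var frame = parseStompFrame(
  "MESSAGE\ndestination:/topic/outgoing\ncontent-type:text/plain\n\nhello from the server\u0000"
);
// frame.command === "MESSAGE"; frame.headers.destination === "/topic/outgoing"
```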



&lt;p&gt;Now we also need the ability to send a message to the server.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function sendMessage() {
      stompClient.publish({
        destination: "/app/incoming",
        body: JSON.stringify({'message': $("#message").val()}) 
      });
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function instructs the &lt;code&gt;stompClient&lt;/code&gt; to publish a message to the destination &lt;code&gt;/app/incoming&lt;/code&gt; with a body containing our input from the HTML form. Once the STOMP client publishes to this destination, the server will receive the message and route it to our &lt;code&gt;@MessageMapping&lt;/code&gt; with the configured &lt;code&gt;/incoming&lt;/code&gt; destination.&lt;/p&gt;
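&lt;p&gt;For a rough picture of what &lt;code&gt;publish&lt;/code&gt; writes to the socket, here is an illustrative sketch of the resulting SEND frame (StompJS builds this for you, and the exact headers it emits may differ):&lt;/p&gt;

```javascript
// Illustrative only: roughly the SEND frame StompJS writes for our publish call.
function buildSendFrame(destination, body) {
  return "SEND\n" +
    "destination:" + destination + "\n" +
    "content-type:application/json\n" +
    "\n" +
    body + "\u0000"; // STOMP frames are NUL-terminated
}

var sendFrame = buildSendFrame("/app/incoming", JSON.stringify({ message: "hello" }));
// sendFrame begins with "SEND\ndestination:/app/incoming\n"
```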

&lt;p&gt;We should also have a manual disconnect method to close out the connection to the WebSocket, just for demonstration purposes. It simply deactivates the StompClient (and all subscriptions) if the client is not null. Since this is the same functionality as the &lt;code&gt;onSocketClose&lt;/code&gt; function, we can just proxy that call here.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function disconnect() {
    onSocketClose();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last bit we need to do to get a functional client is to set up jQuery listeners on our buttons and configure a document ready function.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$(function () {
    $("form").on('submit', function (e) {
        e.preventDefault();
    });
    $("#connect").click(function () {
        connect();
    });
    $("#connectSockJS").click(function () {
        connectSockJs();
    });
    $("#disconnect").click(function () {
        disconnect();
    });
    $("#send").click(function () {
        sendMessage();
    });
    $(document).ready(function () {
        disconnect();
    });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can place this file in the root of the project as &lt;code&gt;example.html&lt;/code&gt; and open it directly in a browser. Alternatively, we can place it in the application's static resources directory (&lt;code&gt;websocket-server/src/main/resources/static/example.html&lt;/code&gt;); it will then be served at &lt;a href="http://localhost:8080/example.html" rel="noopener noreferrer"&gt;http://localhost:8080/example.html&lt;/a&gt; when we start the application with docker-compose. Without docker-compose (more specifically, without the Gateway), things are a little more difficult: the project configures the WebSocket Server instance(s) to listen on random ports, so we'd need to check the logs to see which port a server is running on, and we may not be able to access it unless the Gateway and Eureka applications are also running.&lt;/p&gt;

&lt;p&gt;For demonstration, I've included a copy of the HTML file we created here in a static resources directory in the WebSocket Server application. Since the Gateway is configured to forward ALL requests to the WebSocket Server application, this lets us host the HTML from that app and access it via &lt;code&gt;http://localhost:8080/example.html&lt;/code&gt; once the application is up and running with the Gateway and Eureka.&lt;/p&gt;

&lt;h2&gt;
  
  
  Message Broker
&lt;/h2&gt;

&lt;p&gt;The final important piece of the application is a message broker. This is what is responsible for managing connected application instances and clients and determining where to route messages. There are many message brokers to choose from, but the two easiest to drop into our application are RabbitMQ and ActiveMQ, as both support the STOMP protocol and have nearly identical out-of-the-box configurations.&lt;/p&gt;

&lt;p&gt;For this post, I've chosen to use RabbitMQ since it's very popular and has good performance outside of highly specialized applications that require a very large number of message producers. To get this set up, we just need to create a directory in our project called &lt;code&gt;rabbitmq&lt;/code&gt;, and we can create a Dockerfile in that directory like the following:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM rabbitmq:3.7-management
RUN rabbitmq-plugins enable --offline rabbitmq_mqtt rabbitmq_federation_management rabbitmq_stomp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This extends the RabbitMQ management image and enables the MQTT, federation management, and STOMP plug-ins at image build time. Only the STOMP plug-in is required for this demo, but the others may be useful if you want to reuse this for other projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Run the Application
&lt;/h2&gt;

&lt;p&gt;Now everything is set up and ready to go. We could run RabbitMQ by building the Docker image and running it in a new container, then start up the Eureka server, Gateway, and WebSocket Server by hand. That gets tedious quickly, so instead we can create a &lt;code&gt;docker-compose.yml&lt;/code&gt; which will automatically build all the applications, create containers, network them together, and start everything up!&lt;/p&gt;

&lt;h3&gt;
  
  
  (Docker) Compose the App Cluster
&lt;/h3&gt;

&lt;p&gt;To use docker-compose, just create a &lt;code&gt;docker-compose.yml&lt;/code&gt; in the root of the project directory with the following contents:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'

services:
  rabbitmq:
    build: rabbitmq
    labels:
      kompose.service.type: nodeport
    ports:
      - '15672:15672'
    volumes:
      - 'rabbitmq_data:/bitnami'

  eureka:
    build: ./eureka
    ports:
      - '8761:8761'

  gateway:
    build: ./gateway
    ports:
      - '8080:8080'
    depends_on:
      - eureka
    environment:
      - EUREKA_URI=http://eureka:8761/eureka

  websocket-server-1:
    build: ./websocket-server
    depends_on:
      - eureka
      - rabbitmq
    environment:
      - EUREKA_URI=http://eureka:8761/eureka
      - BROKER_RELAY_HOST=rabbitmq

  websocket-server-2:
    build: ./websocket-server
    depends_on:
      - eureka
      - rabbitmq
    environment:
      - EUREKA_URI=http://eureka:8761/eureka
      - BROKER_RELAY_HOST=rabbitmq

volumes:
  rabbitmq_data:
    driver: local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We've set up all of our services here and configured docker-compose to build the Dockerfiles from each of the projects. When the containers start up, we've also supplied some environment variables to tell the applications where to find Eureka as well as the hostname for our Broker Relay Host.&lt;/p&gt;

&lt;p&gt;Now, run the application with &lt;code&gt;docker-compose build; docker-compose up&lt;/code&gt; from the root of the project directory and all the servers will build and start up.&lt;/p&gt;

&lt;p&gt;Verify that the application has started up by visiting &lt;a href="http://localhost:8080/actuator/health" rel="noopener noreferrer"&gt;http://localhost:8080/actuator/health&lt;/a&gt; to check if the Gateway is healthy. Also, verify that the Gateway and WebSocket Server instances have all connected to Eureka by visiting the Eureka dashboard at &lt;a href="http://localhost:8761" rel="noopener noreferrer"&gt;http://localhost:8761&lt;/a&gt;. If all is well, then you should be able to open &lt;a href="http://localhost:8080/example.html" rel="noopener noreferrer"&gt;http://localhost:8080/example.html&lt;/a&gt;, hit the &lt;em&gt;connect&lt;/em&gt; button, and start receiving timed messages from both instances of the WebSocket Server. Depending on which instance you connect to, you should also be able to send messages and receive a response from one of the servers. You can try opening the page in a new tab or different browser and hopefully be routed to the second instance when you connect to the WebSocket endpoint and verify that you can see messages sent from all other clients connected.&lt;/p&gt;

&lt;p&gt;Here are a couple of GIFs demonstrating what we should expect to see once the project is configured and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.joshmlwood.com%2Fcontent%2Fimages%2F2019%2F08%2Fws-demo-1.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.joshmlwood.com%2Fcontent%2Fimages%2F2019%2F08%2Fws-demo-1.gif" alt="Load Balanced Websockets with Spring Cloud Gateway"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.joshmlwood.com%2Fcontent%2Fimages%2F2019%2F08%2Fws-demo-2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.joshmlwood.com%2Fcontent%2Fimages%2F2019%2F08%2Fws-demo-2.gif" alt="Load Balanced Websockets with Spring Cloud Gateway"&gt;&lt;/a&gt;&lt;/p&gt;





&lt;p&gt;That covers all of the required setup and configuration to get a basic load-balanced WebSocket connection up and running. More advanced setups would include Spring Security to authenticate the initial WebSocket connection over HTTP when negotiating the protocol upgrade, as well as more advanced routing of messages along with tracking connected users and authorizing specific requests over the WebSocket protocol. That might be the topic of a future post...&lt;/p&gt;

&lt;h2&gt;
  
  
  Get the Code
&lt;/h2&gt;

&lt;p&gt;If you want to just get the demo application, see my &lt;a href="https://github.com/jmlw/demo-projects/tree/master/spring-cloud-gateway-websocket" rel="noopener noreferrer"&gt;repository on GitHub (Spring Cloud Gateway WebSocket)&lt;/a&gt; and look for the README.md for info on running.&lt;/p&gt;

</description>
      <category>springcloud</category>
      <category>springframework</category>
      <category>websocket</category>
    </item>
    <item>
      <title>External Application Config with Spring Cloud Kubernetes</title>
      <dc:creator>Josh Wood</dc:creator>
      <pubDate>Fri, 02 Aug 2019 16:30:00 +0000</pubDate>
      <link>https://dev.to/jmlw/external-application-config-with-spring-cloud-kubernetes-168n</link>
      <guid>https://dev.to/jmlw/external-application-config-with-spring-cloud-kubernetes-168n</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yAyyzDJy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1555611206-10075b5b7580%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yAyyzDJy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1555611206-10075b5b7580%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" alt="External Application Config with Spring Cloud Kubernetes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A common pattern when deploying applications to a development, staging, and production environment is to build a jar or docker image one time, then supply different configuration values in the deployment for each stage. This configuration could be a Spring profile in separate yaml documents, additional properties files, environment variables, or some other configuration mechanism.&lt;/p&gt;

&lt;p&gt;When deploying to Kubernetes, configuring a Spring application becomes a little more difficult. The option still exists to run our application with a profile and just "enable" that profile-specific application.yml. The downside is that all of our config is baked into the Docker image, so updating it requires a new deployment. We can also configure environment variables in the Kubernetes deployment yaml and have Kubernetes map the provided values into the container running our application. A nicer option that integrates directly into the Spring bootstrap process is Spring Cloud Kubernetes Config together with a ConfigMap stored in the cluster. This allows us to define an environment-specific &lt;code&gt;application.yml&lt;/code&gt; in a Kubernetes &lt;em&gt;ConfigMap&lt;/em&gt;, and Spring will automatically find and merge the data into the existing configuration properties. The added bonus of this approach is that changing the configuration only requires updating the ConfigMap and restarting the Spring context to read the new properties.&lt;/p&gt;
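&lt;p&gt;To make that merge behavior concrete, here is a small sketch (not Spring's actual implementation) of the precedence being applied: keys present in the ConfigMap property source win, while defaults the ConfigMap doesn't mention survive untouched.&lt;/p&gt;

```javascript
// Sketch of property-source precedence: ConfigMap values override the
// packaged application.yml defaults; untouched defaults are kept.
function mergeConfig(defaults, overrides) {
  var merged = {};
  Object.keys(defaults).forEach(function (key) {
    merged[key] = defaults[key];
  });
  Object.keys(overrides).forEach(function (key) {
    var bothObjects = typeof overrides[key] === "object" && overrides[key] !== null &&
      typeof defaults[key] === "object" && defaults[key] !== null;
    merged[key] = bothObjects ? mergeConfig(defaults[key], overrides[key]) : overrides[key];
  });
  return merged;
}

var merged = mergeConfig(
  { app: { config: "Default value", environmentVariable: "Default value" } }, // packaged application.yml
  { app: { config: "Configuration from Kubernetes!" } }                       // from the ConfigMap
);
// merged.app.config is overridden; merged.app.environmentVariable keeps its default
```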

&lt;blockquote&gt;
&lt;p&gt;Note: this post will require access to a Kubernetes cluster or Minikube running locally and will expect you to have some operational knowledge of Kubernetes. If you do not have access to Kubernetes, you can install Minikube by following the &lt;a href="https://kubernetes.io/docs/tasks/tools/install-minikube/"&gt;official instructions from Kubernetes.io&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Create and Configure the project
&lt;/h1&gt;

&lt;p&gt;We need a project to start with, so head to &lt;a href="https://start.spring.io/"&gt;Spring Initializr&lt;/a&gt; to generate a new project. Once there, select &lt;em&gt;Spring Web Starter&lt;/em&gt; as the only dependency, and ensure Java 1.8 and the latest Spring versions are selected. For our purpose we don't need anything more at project creation; there will be other dependencies to add later. We are including web support so that we have an API we can test rather than relying only on application logs for validation; it's more fun to see a project working by calling an API than by reading logs. If you do not wish to set up a project from scratch, you can clone &lt;a href="https://github.com/jmlw/demo-projects.git"&gt;the demo repo&lt;/a&gt; and navigate to the &lt;code&gt;spring-cloud-kubernetes-config-demo&lt;/code&gt; directory to see the completed project.&lt;/p&gt;

&lt;h1&gt;
  
  
  Add Spring Cloud Kubernetes Dependencies
&lt;/h1&gt;

&lt;p&gt;Now that we have a basic project with web support, we need to add one more dependency to allow Spring to read ConfigMaps and Secrets from Kubernetes: &lt;code&gt;spring-cloud-kubernetes-config&lt;/code&gt; with &lt;code&gt;groupId: org.springframework.cloud&lt;/code&gt;. If you're using Maven, your pom should look similar to this (parts omitted for brevity):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependencies&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-web&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-cloud-kubernetes-config&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        ...
    &amp;lt;/dependencies&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you're not familiar with Spring Cloud, we have one more thing to add to the pom to get this to compile correctly: the &lt;code&gt;dependencyManagement&lt;/code&gt; section, which imports Spring Cloud dependencies from a specific release train into our project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependencyManagement&amp;gt;
        &amp;lt;dependencies&amp;gt;
            &amp;lt;dependency&amp;gt;
                &amp;lt;groupId&amp;gt;org.springframework.cloud&amp;lt;/groupId&amp;gt;
                &amp;lt;artifactId&amp;gt;spring-cloud-dependencies&amp;lt;/artifactId&amp;gt;
                &amp;lt;version&amp;gt;Greenwich.SR2&amp;lt;/version&amp;gt;
                &amp;lt;type&amp;gt;pom&amp;lt;/type&amp;gt;
                &amp;lt;scope&amp;gt;import&amp;lt;/scope&amp;gt;
            &amp;lt;/dependency&amp;gt;
        &amp;lt;/dependencies&amp;gt;
    &amp;lt;/dependencyManagement&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So what does it do for us? The &lt;code&gt;spring-cloud-kubernetes-config&lt;/code&gt; dependency is part of the &lt;code&gt;spring-cloud-starter-kubernetes&lt;/code&gt; family. It hooks into the Spring bootstrap process to provide an additional property source from a Kubernetes ConfigMap and Secret that share the same name as the &lt;code&gt;spring.application.name&lt;/code&gt; configured in the &lt;code&gt;application.yml&lt;/code&gt;. Additionally, it doesn't require a bean or any extra configuration in the project; an autoconfiguration is responsible for instantiating the configuration beans, which makes it transparent to set up once the dependency has been added. More info on the &lt;a href="https://github.com/spring-cloud/spring-cloud-kubernetes"&gt;Spring Cloud Kubernetes project can be found on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Awesome! We can call it a day now. We've done it, and it's glorious!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DzCfkRwl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1436076863939-06870fe779c2%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DzCfkRwl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1436076863939-06870fe779c2%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" alt="External Application Config with Spring Cloud Kubernetes"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@wilstewart3?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit"&gt;Wil Stewart&lt;/a&gt; / &lt;a href="https://unsplash.com/?utm_source=ghost&amp;amp;utm_medium=referral&amp;amp;utm_campaign=api-credit"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, not quite. We still need to use the dependency and add some configuration to validate that it is working as expected. Then of course, we need to deploy to Kubernetes. So, next up...&lt;/p&gt;

&lt;h1&gt;
  
  
  Spring Configuration
&lt;/h1&gt;

&lt;p&gt;Let's add some stuff to our &lt;code&gt;application.yml&lt;/code&gt;. First, make sure that the application name is set to &lt;code&gt;spring-cloud-kubernetes-config-demo&lt;/code&gt; so that your project will match the demo and this tutorial. Now add a couple of application configuration keys to the yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app:
  config: Default value
  environmentVariable: ${ENVIRONMENT_CONFIG:Default value}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In this snippet, the &lt;code&gt;config&lt;/code&gt; key is set to "Default value" under all circumstances, while the second key, &lt;code&gt;environmentVariable&lt;/code&gt;, defaults to "Default value" unless &lt;code&gt;ENVIRONMENT_CONFIG&lt;/code&gt; is defined in the application's environment, in which case that value is used instead.&lt;/p&gt;
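&lt;p&gt;The &lt;code&gt;${ENVIRONMENT_CONFIG:Default value}&lt;/code&gt; syntax is Spring's property placeholder with a default. A tiny sketch of the resolution rule (Spring's real resolver lives in &lt;code&gt;PropertyPlaceholderHelper&lt;/code&gt; and handles much more) looks like this:&lt;/p&gt;

```javascript
// Sketch of ${NAME:default} resolution: use the environment value when
// present, otherwise fall back to the default after the colon.
function resolvePlaceholder(value, env) {
  return value.replace(/\$\{([^:}]+)(?::([^}]*))?\}/g, function (match, name, fallback) {
    if (env[name] !== undefined) {
      return env[name];
    }
    return fallback !== undefined ? fallback : match; // no default: leave as-is
  });
}

var usingDefault = resolvePlaceholder("${ENVIRONMENT_CONFIG:Default value}", {});
var usingEnv = resolvePlaceholder(
  "${ENVIRONMENT_CONFIG:Default value}",
  { ENVIRONMENT_CONFIG: "From the environment" }
);
// usingDefault === "Default value"; usingEnv === "From the environment"
```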

&lt;p&gt;We'll use these configuration keys to demonstrate how Spring will map data from our Kubernetes cluster and container environment into the application at deploy / startup time. Now we need to actually use these somewhere so we can see how to configure them through Kubernetes.&lt;/p&gt;

&lt;h1&gt;
  
  
  Using the application.yml config
&lt;/h1&gt;

&lt;p&gt;The simplest way to verify these values will be to create a controller that also logs the values out during construction. This can be as simple or complex as you desire, but for my purposes, the example below will suffice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@RestController
public class ConfigurableController {
    private static final Logger LOGGER = LoggerFactory.getLogger(ConfigurableController.class);

    private String externalConfig;
    private String environmentVariable;

    public ConfigurableController(
            @Value("${app.config}") String externalConfig,
            @Value("${app.environmentVariable}") String environmentVariable
    ) {
        this.externalConfig = externalConfig;
        this.environmentVariable = environmentVariable;
        LOGGER.info(String.format("app.config: %s\napp.environmentVariable: %s", externalConfig, environmentVariable));
    }

    @GetMapping("/")
    public Map&amp;lt;String, String&amp;gt; getConfig() {
        Map&amp;lt;String, String&amp;gt; config = new HashMap&amp;lt;&amp;gt;();
        config.put("app.config", externalConfig);
        config.put("app.environmentVariable", environmentVariable);
        return config;
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;At this point, the app should run and have a single endpoint available at &lt;code&gt;http://localhost:8080/&lt;/code&gt; which will return a map of response data containing the dynamic values mapped in at construction time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "app.config": "Default value",
    "app.environmentVariable": "Default value"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we can create some of the Kubernetes objects we'll need to deploy the application into a Kubernetes cluster.&lt;/p&gt;

&lt;h1&gt;
  
  
  Kubernetes ConfigMap(s)
&lt;/h1&gt;

&lt;p&gt;We will take advantage of the default behavior of the Spring Cloud Kubernetes Config dependency: when it detects that the application is running within Kubernetes, it looks for a ConfigMap with the same name as our application. So we need a yaml to represent our ConfigMap, and since Spring searches for a ConfigMap named after our Spring application name, make sure that the ConfigMap name here matches the name defined in your application.yml. This example will not work otherwise.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: spring-cloud-kubernetes-config-demo
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Technically this is all that's needed, but it doesn't provide any configuration, much less anything useful to our Spring application. We can add a top-level &lt;code&gt;data&lt;/code&gt; key to the yaml where we can place any configuration we'd like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# app-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: spring-cloud-kubernetes-config-demo
data:
  application.yaml: |-
    app:
      config: Configuration from Kubernetes!
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The data key in this example has a couple of important features. First, it has a single nested key, in this case "application.yaml", and that key uses the pipe-hyphen (&lt;code&gt;|-&lt;/code&gt;) block scalar indicator, meaning everything nested under it forms a single multi-line value. See the block chomping indicator section of &lt;a href="https://yaml-multiline.info/"&gt;yaml-multiline.info&lt;/a&gt; for additional information. The important part is that it allows us to define all of the custom values of our application.yml in a single key in this ConfigMap. The other major point, which is not obvious in this example, is that since &lt;code&gt;application.yaml&lt;/code&gt; is the only key in the data section, its name doesn't matter. We could in fact just have this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: spring-cloud-kubernetes-config-demo
data:
  some-configs-here: |-
    app:
      config: Configuration from Kubernetes!
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The name in this case does not matter. However, if we wanted to use our ConfigMap for more than just storing an environment-specific application yaml, such as an environment variable for a bash script used to start our application, or some other important configuration that's required before Spring starts up, then we &lt;strong&gt;must name the key application.yaml&lt;/strong&gt;. If we do not, spring-cloud-kubernetes-config will be unable to find the relevant data to map into Spring's composite property source, and we will not have the expected configuration values applied to our application at startup time.&lt;/p&gt;

&lt;p&gt;While we're at it, we can also create another ConfigMap that stores a value which we can later map into our application via an environment variable. We will use a Kubernetes Deployment to inject the value from this ConfigMap into our running container, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# environment-variable-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: environment-variable-config
data:
  ENVIRONMENT_CONFIG: Configuration from Docker environment in Kubernetes!
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Kubernetes Deployment
&lt;/h1&gt;

&lt;p&gt;Since we have configs defined, we can move on to creating the deployment. As noted at the start of this post, this is assuming you have some Kubernetes knowledge already. So, we're going to configure a very simple deployment that should allow access to the application without needing additional infrastructure such as Ingress, or anything more complicated than having network access to the IP of the node the application is deployed on. To achieve this, we'll create a deployment that exposes the port of our application, and a service that configures Kubernetes to expose a Node Port and map that port on our node back to the application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      serviceAccountName: demo-service-account
      containers:
        - name: spring-cloud-kubernetes-config-demo
          image: jmlw/spring-cloud-kubernetes-config-demo
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
          env:
            - name: ENVIRONMENT_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: environment-variable-config
                  key: ENVIRONMENT_CONFIG
---
kind: Service
apiVersion: v1
metadata:
  name: demo
spec:
  selector:
    app: demo
  ports:
    - protocol: TCP
      port: 8080
      nodePort: 30000
      name: http
  type: NodePort
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We've called our app here &lt;code&gt;demo&lt;/code&gt;, and we're expecting Kubernetes to find the docker image for the application locally. This means we have to build the source from the same docker context that Kubernetes is using. If you are using Minikube, you can easily attach your current terminal to the docker context from Minikube by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eval $(minikube docker-env)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that our terminal should be configured to run docker commands in the Minikube environment, we can build the application locally and then attempt to deploy it. I've configured maven in the sample project to include the Spotify Dockerfile plugin so we can easily build our docker image with familiar tooling.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;build&amp;gt;
        &amp;lt;plugins&amp;gt;
            &amp;lt;plugin&amp;gt;
                &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
                &amp;lt;artifactId&amp;gt;spring-boot-maven-plugin&amp;lt;/artifactId&amp;gt;
            &amp;lt;/plugin&amp;gt;
            &amp;lt;plugin&amp;gt;
                &amp;lt;groupId&amp;gt;com.spotify&amp;lt;/groupId&amp;gt;
                &amp;lt;artifactId&amp;gt;dockerfile-maven-plugin&amp;lt;/artifactId&amp;gt;
                &amp;lt;version&amp;gt;1.4.9&amp;lt;/version&amp;gt;
                &amp;lt;configuration&amp;gt;
                    &amp;lt;repository&amp;gt;${dockerhub.username}/${project.artifactId}&amp;lt;/repository&amp;gt;
                    &amp;lt;buildArgs&amp;gt;
                        &amp;lt;JAR_FILE&amp;gt;target/${project.build.finalName}.jar&amp;lt;/JAR_FILE&amp;gt;
                    &amp;lt;/buildArgs&amp;gt;
                &amp;lt;/configuration&amp;gt;
            &amp;lt;/plugin&amp;gt;
        &amp;lt;/plugins&amp;gt;
    &amp;lt;/build&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With this plugin, we can run &lt;code&gt;./mvnw clean compile package dockerfile:build&lt;/code&gt; and it will build our app into a docker image named and tagged &lt;code&gt;jmlw/spring-cloud-kubernetes-config-demo:latest&lt;/code&gt;.&lt;/p&gt;
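&lt;p&gt;The dockerfile-maven-plugin builds from a &lt;code&gt;Dockerfile&lt;/code&gt; at the project root. A minimal sketch compatible with the &lt;code&gt;JAR_FILE&lt;/code&gt; build argument above might look like this (the base image is an assumption; check the demo repository for the actual file):&lt;/p&gt;

```dockerfile
# Minimal Dockerfile sketch for the Spotify dockerfile-maven-plugin.
# JAR_FILE is supplied by the buildArgs configured in the pom.xml above.
FROM openjdk:8-jdk-alpine
ARG JAR_FILE
COPY ${JAR_FILE} /app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```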

&lt;p&gt;Before moving on, we need to define one more thing for our app so that our Spring Cloud Kubernetes Config dependency is able to do its job. By default, current versions of Kubernetes enable RBAC (role-based access control), so we'll need to grant our deployment explicit access to the Kubernetes APIs it uses to discover the ConfigMaps and Secrets it should be allowed to read.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: demo-role
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods", "configmaps"]
    verbs: ["get", "watch", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-role-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: demo-service-account
    namespace: default
roleRef:
  kind: Role
  name: demo-role
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-service-account
  namespace: default
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With this rbac.yaml, we grant the service account 'demo-service-account' read access to pods and ConfigMaps. If you need or want, you can also add "secrets" to the list of resources in the role definition.&lt;/p&gt;

&lt;p&gt;Once the docker image is built, we can use &lt;code&gt;kubectl&lt;/code&gt; to apply the yamls we've defined to our Kubernetes cluster and watch as the application starts up and configures itself. To actually deploy, you can create the yaml files listed above and then run &lt;code&gt;kubectl apply -f rbac.yaml -f environment-variable-config.yaml -f app-config.yaml -f deployment.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Otherwise, if you're following the source from the &lt;a href="https://github.com/jmlw/demo-projects/tree/master/spring-cloud-kubernetes-config-demo"&gt;demo-projects repository&lt;/a&gt;, then you can apply the same resulting yaml manifests by running &lt;code&gt;kubectl apply -f deployments/&lt;/code&gt;, which will deploy all yamls within the deployments directory.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: if you'd like to skip building the app locally or within your Kubernetes cluster, you can switch the &lt;code&gt;imagePullPolicy&lt;/code&gt; to 'Always', which will cause Kubernetes to pull 'latest' from Dockerhub.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Validate
&lt;/h1&gt;

&lt;p&gt;First, make sure the pod we deployed has started up and is healthy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods

# expected (the pod name suffix will differ):
# NAME READY STATUS RESTARTS AGE
# demo-deployment-xxxxxxxxxx-xxxxx 1/1 Running 1 2m
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If the app is not running, debugging why the application is failing to start is outside the scope of this post. However, the most likely causes are 1) the env variable defined in the deployment is missing because it references a named ConfigMap that doesn't exist, 2) the docker image is missing or is incompatible with your host, or 3) Java/Spring is failing to start, which is likely a configuration issue.&lt;/p&gt;

&lt;p&gt;Now that the app has started, check the logs from Kubernetes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs "$(kubectl get pods | grep demo-deployment | awk '{print $1}')"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You should see some log statements printed out from the construction of our controller similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2019-08-31 23:25:48.459 INFO 46773 --- [main] c.j.s.ConfigurableController : app.config: Configuration from Kubernetes!
app.environmentVariable: Configuration from Docker environment in Kubernetes!
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we can actually call the endpoint of our application. If you're running in Minikube, you can just run the following, which will call the root endpoint on the URL of our service named &lt;code&gt;demo&lt;/code&gt; within Minikube.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl "$(minikube service demo --url)"
# expected output (or similar):
# {
# "app.config":"Configuration from Kubernetes!",
# "app.environmentVariable":"Configuration from Docker environment in Kubernetes!"
# }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Additional Configuration Options
&lt;/h1&gt;

&lt;p&gt;On top of the basic option of mapping in a single ConfigMap's application.yaml key or the only key of a ConfigMap, you can configure Spring to search for additional ConfigMaps in namespaces outside of the current namespace. There are other configuration options available that you can find &lt;a href="https://github.com/spring-cloud/spring-cloud-kubernetes"&gt;on GitHub in the Spring Cloud Kubernetes repository&lt;/a&gt;. One interesting option that I have yet to try in a production environment is setting &lt;code&gt;spring.cloud.kubernetes.reload.enabled&lt;/code&gt; to true. This allows Spring to hot-reload configuration properties depending on the &lt;code&gt;spring.cloud.kubernetes.reload.strategy&lt;/code&gt;, which can be &lt;code&gt;refresh&lt;/code&gt;, &lt;code&gt;restart_context&lt;/code&gt;, or &lt;code&gt;shutdown&lt;/code&gt;.&lt;/p&gt;
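&lt;p&gt;As a sketch, enabling hot-reload in the ConfigMap-backed application.yaml might look like the following (property names come from the Spring Cloud Kubernetes documentation; verify them against the version you use):&lt;/p&gt;

```yaml
# Hypothetical application.yaml fragment enabling configuration hot-reload.
spring:
  cloud:
    kubernetes:
      reload:
        enabled: true
        # one of: refresh, restart_context, shutdown
        strategy: refresh
```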

&lt;p&gt;A lingering question you might have is: why not just map in environment variables like &lt;code&gt;ENVIRONMENT_CONFIG&lt;/code&gt; above? Honestly, it's just as easy in most cases unless your application.yml has special configuration for nearly every key. The biggest drawback of mapping these in via environment variables is that all of your configurations are defined three times: once in the application.yml, once in the deployment.yaml, and once in the configmap.yaml. That leaves three potential places for typos that could cause incorrect configuration or, worse, application crashes. Otherwise, relying on a little Spring magic, you can just use the ConfigMap and application.yaml key to provide the configuration, and the keys do not need to exist in the packaged application.yml either. In my mind, this is slightly higher cognitive overhead for the huge benefit of not duplicating, misspelling, or failing to update configuration values.&lt;/p&gt;

&lt;p&gt;As always, a full working demo for this can be found in my &lt;a href="https://github.com/jmlw/demo-projects/tree/master/spring-cloud-kubernetes-config-demo"&gt;demo-projects repository&lt;/a&gt;. Any questions or problems, feel free to open an issue and I'll review as quickly as possible.&lt;/p&gt;

&lt;p&gt;Happy coding and navigating the Kubernetes sea with a little Spring in your Boot!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>springcloud</category>
      <category>springframework</category>
    </item>
    <item>
      <title>Simple WebSockets with Spring Boot</title>
      <dc:creator>Josh Wood</dc:creator>
      <pubDate>Tue, 16 Apr 2019 19:57:00 +0000</pubDate>
      <link>https://dev.to/jmlw/simple-websockets-with-spring-boot-18h9</link>
      <guid>https://dev.to/jmlw/simple-websockets-with-spring-boot-18h9</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZhfPNMsK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1501696226977-1fbff6555a97%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZhfPNMsK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1501696226977-1fbff6555a97%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" alt="Simple WebSockets with Spring Boot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In traditional web applications, it's only possible to interact with the server via a request and wait for a response. With modern interactive applications, this approach is not ideal for any user interaction when we want to get updates from the server without having to continuously make requests to learn if anything interesting has happened on the server. The solution is to provide a bi-directional, persistent means of communication between the client and the server. This is where WebSocket comes in.&lt;/p&gt;

&lt;p&gt;WebSocket is a communication protocol over a TCP connection which provides full-duplex, or bi-directional communication. WebSocket is also persistent, so the client is able to open a connection, and retain that connection with the server for the duration of the client's session, unlike HTTP which is just a single request, response, then close the connection. These properties of WebSocket make it ideal for real-time communication between clients and servers. Let's take a look at how to set up a very basic WebSocket connection with Spring Framework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create and Configure the Project
&lt;/h2&gt;

&lt;p&gt;To start off, we need a new Spring-based project. You can use any tool you wish, but to follow along here you can use &lt;a href="https://start.spring.io/"&gt;Spring Initializr&lt;/a&gt;. Once on the page, choose your preferred build tool, Maven or Gradle (I'll be using Maven), keep the default selected version of Spring, fill in the group and artifact data (for me it will be &lt;code&gt;com.joshmlwood&lt;/code&gt; and &lt;code&gt;websocket-demo&lt;/code&gt;), and we'll use the (currently) default version of Java 8.&lt;/p&gt;

&lt;p&gt;Now, we need to add the &lt;code&gt;WebSocket&lt;/code&gt; starter to the project under dependencies, and we can click the &lt;em&gt;Generate&lt;/em&gt; button. We now have a zip file that is a basis for our Spring Framework 5.x, and Spring Boot 2.x based application. Your pom file should resemble the snippet below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;parent&amp;gt;
        &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
        &amp;lt;artifactId&amp;gt;spring-boot-starter-parent&amp;lt;/artifactId&amp;gt;
        &amp;lt;version&amp;gt;2.1.4.RELEASE&amp;lt;/version&amp;gt;
        &amp;lt;relativePath/&amp;gt; &amp;lt;!-- lookup parent from repository --&amp;gt;
    &amp;lt;/parent&amp;gt;
    &amp;lt;dependencies&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-websocket&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;

        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-test&amp;lt;/artifactId&amp;gt;
            &amp;lt;scope&amp;gt;test&amp;lt;/scope&amp;gt;
        &amp;lt;/dependency&amp;gt;
    &amp;lt;/dependencies&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Application Configuration
&lt;/h2&gt;

&lt;p&gt;If you used Spring Initializr to create your application, you should already have a &lt;code&gt;@SpringBootApplication&lt;/code&gt; annotated main class. This gives us a base Spring application to work with and build from.&lt;/p&gt;

&lt;p&gt;We now need to configure and enable a WebSocket broker in our application. To do so, we create a new configuration class that implements &lt;code&gt;WebSocketMessageBrokerConfigurer&lt;/code&gt; and is annotated with &lt;code&gt;@EnableWebSocketMessageBroker&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {
    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableSimpleBroker("/topic");
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/websocket");
        registry.addEndpoint("/sockjs")
                .withSockJS();
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We've overridden two methods from our &lt;code&gt;WebSocketMessageBrokerConfigurer&lt;/code&gt; to provide a basic configuration for our application. The &lt;code&gt;configureMessageBroker&lt;/code&gt; method sets up a simple (in-memory) message broker for our application. The &lt;code&gt;/topic&lt;/code&gt; designates that any destination prefixed with &lt;code&gt;/topic&lt;/code&gt; will be routed back to the client. We've also configured 'application destination prefixes' of just &lt;code&gt;/app&lt;/code&gt;. This configuration allows Spring to understand that any message sent to a WebSocket channel name prefixed with &lt;code&gt;/app&lt;/code&gt; should be routed to a &lt;code&gt;@MessageMapping&lt;/code&gt; in our application.&lt;/p&gt;

&lt;p&gt;It's important to keep in mind that the simple in-memory broker will not work with more than one application instance, and it does not support all of the features that a full message broker like RabbitMQ, ActiveMQ, etc. provides.&lt;/p&gt;

&lt;p&gt;Here we've also registered some &lt;em&gt;STOMP&lt;/em&gt; (Simple Text Oriented Messaging Protocol) endpoints. STOMP is simply a nice abstraction on top of WebSocket that allows us to send text (think JSON) as our message payload. Without STOMP, we would need to rely on some other higher-level message protocol, or use the raw WebSocket TCP transport layer, which would be much less user-friendly for our server and our client. The endpoint &lt;code&gt;/websocket&lt;/code&gt; will allow us to connect to &lt;code&gt;ws://localhost:8080/websocket&lt;/code&gt; with the default Spring port configuration. Interestingly, we also have this &lt;code&gt;/sockjs&lt;/code&gt; endpoint. This endpoint is special as it uses the SockJS fallback protocol, which allows a client that does not support WebSocket natively to mimic a WebSocket over an HTTP connection. So for the SockJS endpoint, our connection string would look like &lt;code&gt;http://localhost:8080/sockjs&lt;/code&gt;. This is just here as an exercise to show it's possible to configure a fallback if you need to support very old browsers or a client that doesn't support WebSocket natively, but we won't use it for the remainder of the post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Make the Payload Model
&lt;/h2&gt;

&lt;p&gt;We need a model to represent the state transfer between the client and the server. We can start off with something very simple: a POJO with a &lt;code&gt;from&lt;/code&gt; and a &lt;code&gt;message&lt;/code&gt; field will suffice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Message {
    private String from;
    private String message;

    public Message() {
        // required for Jackson
    }

    // constructor and getters
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will allow us to use Spring's default implementation of Jackson Object Mapper to convert our messages to and from JSON strings.&lt;/p&gt;
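&lt;p&gt;For instance, a &lt;code&gt;Message&lt;/code&gt; from a user named "josh" would travel over the wire as JSON along these lines:&lt;/p&gt;

```json
{
  "from": "josh",
  "message": "test"
}
```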

&lt;h2&gt;
  
  
  Create a Message Controller
&lt;/h2&gt;

&lt;p&gt;Much like in Spring Web MVC (MVC and Rest endpoints), we have the idea of a "controller" which hosts the topic endpoints to send and receive messages over our WebSocket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Controller
public class MessageController {

    @MessageMapping("/send")
    @SendTo("/topic/messages")
    public Message send(Message message) {
        LocalDateTime timestamp = LocalDateTime.now();
        return new Message(message.getFrom(), message.getMessage(), timestamp);
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This controller looks a lot like a rest controller, but instead of using a &lt;code&gt;@RequestMapping&lt;/code&gt;, we use a &lt;code&gt;@MessageMapping&lt;/code&gt; to add a hook for receiving messages on the &lt;code&gt;/app/send&lt;/code&gt; topic. An important difference, however, is that we use the &lt;code&gt;@SendTo&lt;/code&gt; annotation to instruct Spring to write the return value of our method to the &lt;code&gt;/topic/messages&lt;/code&gt; topic, which our client will be subscribed to. In this method, we are going to just forward the message content as received, but add a timestamp from the server to help differentiate messages that originated from a client and ones that originated from the server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Client
&lt;/h2&gt;

&lt;p&gt;Now that we have a server waiting to send and receive requests on our WebSocket, we need a client to actually connect to it. To keep things simple, we can just create a basic static HTML page with jQuery to provide some interaction.&lt;/p&gt;

&lt;p&gt;To start, create an HTML page, and in the header add, at a minimum, jQuery and StompJS from a CDN. You can also add SockJS if you'd like to experiment with backward compatibility, but it isn't a requirement.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js"&amp;gt;&amp;lt;/script&amp;gt;
    &amp;lt;script src="https://cdn.jsdelivr.net/npm/@stomp/stompjs@5.0.0/bundles/stomp.umd.js"&amp;gt;&amp;lt;/script&amp;gt;
    &amp;lt;script src="https://cdn.jsdelivr.net/npm/sockjs-client@1/dist/sockjs.min.js"&amp;gt;&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To style things a little more nicely, I've also added Bootstrap, but that is just to make it look a little prettier than a basic HTML layout and is not required.&lt;/p&gt;

&lt;p&gt;Now that we have the required libraries included, we can make some HTML for our controls to connect, disconnect, a form to send messages, and a table to hold the responses from our server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;div class="container" id="main-content"&amp;gt;
    &amp;lt;div class="row"&amp;gt;
        &amp;lt;div class="col-md-10"&amp;gt;
            &amp;lt;form class="form-inline"&amp;gt;
                &amp;lt;div class="form-group"&amp;gt;
                    &amp;lt;label for="connect"&amp;gt;WebSocket connection:&amp;lt;/label&amp;gt;
                    &amp;lt;button class="btn btn-default" id="connect" type="submit"&amp;gt;Connect&amp;lt;/button&amp;gt;
                    &amp;lt;button class="btn btn-default" id="connectSockJS" type="submit"&amp;gt;ConnectSockJS&amp;lt;/button&amp;gt;
                &amp;lt;/div&amp;gt;
            &amp;lt;/form&amp;gt;
        &amp;lt;/div&amp;gt;
        &amp;lt;div class="col-md-2"&amp;gt;
            &amp;lt;form class="form-inline"&amp;gt;
                &amp;lt;div class="form-group"&amp;gt;
                    &amp;lt;button class="btn btn-default" disabled="disabled" id="disconnect" type="submit"&amp;gt;
                        Disconnect
                    &amp;lt;/button&amp;gt;
                &amp;lt;/div&amp;gt;
            &amp;lt;/form&amp;gt;
        &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;div class="row"&amp;gt;
        &amp;lt;div class="col-md-12"&amp;gt;
            &amp;lt;form class="form-inline"&amp;gt;
                &amp;lt;div class="form-group"&amp;gt;
                    &amp;lt;label for="from"&amp;gt;Username:&amp;lt;/label&amp;gt;
                    &amp;lt;input class="form-control" id="from" placeholder="Username..." type="text"&amp;gt;
                    &amp;lt;label for="message"&amp;gt;Message:&amp;lt;/label&amp;gt;
                    &amp;lt;input class="form-control" id="message" placeholder="Your message here..." type="text"&amp;gt;
                &amp;lt;/div&amp;gt;
                &amp;lt;button class="btn btn-default" id="send" type="submit"&amp;gt;Send&amp;lt;/button&amp;gt;
            &amp;lt;/form&amp;gt;
        &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;div class="row"&amp;gt;
        &amp;lt;div class="col-md-12"&amp;gt;
            &amp;lt;table class="table table-striped" id="responses"&amp;gt;
                &amp;lt;thead&amp;gt;
                &amp;lt;tr&amp;gt;
                    &amp;lt;th&amp;gt;Messages&amp;lt;/th&amp;gt;
                &amp;lt;/tr&amp;gt;
                &amp;lt;/thead&amp;gt;
                &amp;lt;tbody id="messages"&amp;gt;
                &amp;lt;/tbody&amp;gt;
            &amp;lt;/table&amp;gt;
        &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that we have the skeleton of the HTML out of the way, we can write some functions to handle connecting, disconnecting, sending messages, and receiving messages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var stompClient = null;
function connect() {
    stompClient = new window.StompJs.Client({
        webSocketFactory: function () {
            return new WebSocket("ws://localhost:8080/websocket");
        }
    });
    stompClient.onConnect = function (frame) {
        frameHandler(frame)
    };
    stompClient.onWebsocketClose = function () {
        onSocketClose();
    };

    stompClient.activate();
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Starting with the connection function: we create a global variable to hold our Stomp Client, then use the StompJS library to create a new instance. We've given our client a configuration object with an anonymous function as a WebSocket factory. This ensures that we use the browser's built-in &lt;code&gt;WebSocket&lt;/code&gt; object and that we connect to the correct URL. It would be trivial to parameterize the URL in the connect function and store it as external configuration as well. If we would like to use &lt;code&gt;SockJS&lt;/code&gt; instead of the browser's built-in WebSocket implementation, we can just replace the return of that anonymous function with &lt;code&gt;new window.SockJS("http://localhost:8080/sockjs");&lt;/code&gt;. Keep in mind here, we use the &lt;code&gt;window&lt;/code&gt; keyword since we've registered the SockJS library as a global library in the browser window. In a modern web app with Angular or React, you would probably use an import local to the component using it, and it would then be accessible with just the new operator, like &lt;code&gt;new SockJS(...)&lt;/code&gt;.&lt;/p&gt;
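&lt;p&gt;As one sketch of parameterizing the URL, we could derive it from the page's own location instead of hard-coding the host (the &lt;code&gt;wsUrl&lt;/code&gt; helper below is hypothetical, not part of the demo project):&lt;/p&gt;

```javascript
// Hypothetical helper (not part of the demo project): build a ws:// or wss://
// URL from a location object, so the client works on localhost and behind TLS.
function wsUrl(path, loc) {
    // default to the page's own location when running in a browser
    loc = loc || window.location;
    var proto = loc.protocol === "https:" ? "wss:" : "ws:";
    return proto + "//" + loc.host + path;
}
```

&lt;p&gt;The factory in the connect function would then simply be &lt;code&gt;return new WebSocket(wsUrl("/websocket"));&lt;/code&gt;.&lt;/p&gt;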

&lt;p&gt;We have also assigned some functions to the &lt;code&gt;onConnect&lt;/code&gt; and the &lt;code&gt;onWebsocketClose&lt;/code&gt; hooks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function onSocketClose() {
    if (stompClient !== null) {
        stompClient.deactivate();
    }
    setConnected(false);
    console.log("Socket was closed. Setting connected to false!")
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;onSocketClose&lt;/code&gt; function is helpful to properly update our view so that when we lose or close the connection to the socket, the user has some context about what has happened. Here we can also see the &lt;code&gt;setConnected&lt;/code&gt; function, which is responsible for handling the display changes when our socket connects or disconnects:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function setConnected(connected) {
    $("#connect").prop("disabled", connected);
    $("#connectSockJS").prop("disabled", connected);
    $("#disconnect").prop("disabled", !connected);
    if (connected) {
        $("#responses").show();
    } else {
        $("#responses").hide();
    }
    $("#messages").html("");
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we need to write a method to handle the messages that are sent from the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function frameHandler(frame) {
    setConnected(true);
    console.log('Connected: ' + frame);
    stompClient.subscribe('/topic/messages', function (message) {
        showMessage(message.body);
    });
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;frameHandler&lt;/code&gt; function takes in an object called &lt;code&gt;frame&lt;/code&gt;. Each frame may represent a different state of the WebSocket or messages pushed from the server. Mozilla has great documentation on WebSockets that is worth a glance. What is important to us is that when we receive a frame, we will be connected and we will want to subscribe to a &lt;em&gt;topic&lt;/em&gt; from our server. This topic will be where the server will write messages destined for the client. We also have a function callback that is responsible for handling each message sent from the server. The message here is just a string message (since we're using STOMP as our protocol over WebSocket). The implementation below is just to prepend the newest message to the top of our messages table we created earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function showMessage(message) {
    var msg = JSON.parse(message);
    $("#messages").prepend("&amp;lt;tr&amp;gt;" +
        "&amp;lt;td class='timeStamp'&amp;gt;" + msg['timeStamp'] + "&amp;lt;/td&amp;gt;" +
        "&amp;lt;td class='from'&amp;gt;" + msg['from'] + "&amp;lt;/td&amp;gt;" +
        "&amp;lt;td&amp;gt;" + msg['message'] + "&amp;lt;/td&amp;gt;" +
        "&amp;lt;/tr&amp;gt;");
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we also need the ability to send a message to the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function sendMessage() {
    stompClient.publish({
        destination:"/app/send",
        body: JSON.stringify({
            'from': $("#from").val(),
            'message': $("#message").val()
        })
    });
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This function instructs the stompClient to publish a message on the topic &lt;code&gt;/app/send&lt;/code&gt; with a body containing our inputs from the HTML form. Once the stomp client publishes to this topic, the server will receive the message and route it to our &lt;code&gt;@MessageMapping&lt;/code&gt; with the configured &lt;code&gt;/send&lt;/code&gt; topic destination.&lt;/p&gt;

&lt;p&gt;We should also have a manual disconnect method to close out the connection to the WebSocket, just for demonstration purposes. It simply deactivates the StompClient (and all subscriptions) if the client is not null.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function disconnect() {
    if (stompClient !== null) {
        stompClient.deactivate();
    }
    setConnected(false);
    console.log("Disconnected");
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The last bit we need to do to get a functional client is to set up jQuery listeners on our buttons and configure a document ready function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$(function () {
    $("form").on('submit', function (e) {
        e.preventDefault();
    });
    $("#connect").click(function () {
        connect();
    });
    $("#connectSockJS").click(function () {
        connectSockJs();
    });
    $("#disconnect").click(function () {
        disconnect();
    });
    $("#send").click(function () {
        sendMessage();
    });
    $(document).ready(function () {
        disconnect();
    });
});
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If we place our HTML file, named demo.html, in the application's static resources directory (&lt;code&gt;src/main/resources/static/demo.html&lt;/code&gt;), we will be able to access &lt;a href="http://localhost:8080/demo.html"&gt;http://localhost:8080/demo.html&lt;/a&gt; when we start the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Screenshot
&lt;/h2&gt;

&lt;p&gt;Here's how my demo page looks with Bootstrap styling.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D6YCw_iv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.joshmlwood.com/content/images/2019/04/Screen-Shot-2019-04-16-at-2.14.00-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D6YCw_iv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.joshmlwood.com/content/images/2019/04/Screen-Shot-2019-04-16-at-2.14.00-PM.png" alt="Simple WebSockets with Spring Boot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that the 'from' field was 'josh' and the message was 'test', and that the server attached a timestamp to the message recording when it was received.&lt;/p&gt;

&lt;p&gt;Initially, configuring a WebSocket seems like a daunting task, but once the basics are out of the way it's relatively simple to implement, especially when making use of the building blocks provided by the Spring team.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get the code
&lt;/h3&gt;

&lt;p&gt;If you want to just get the demo application, see my &lt;a href="https://github.com/jmlw/demo-projects/tree/master/simple-websocket-demo"&gt;repository on GitHub (Simple Spring Websockets Demo)&lt;/a&gt;&lt;/p&gt;

</description>
      <category>springframework</category>
      <category>websocket</category>
    </item>
    <item>
      <title>Random Server Ports and Spring Cloud Service Discovery with Netflix Eureka</title>
      <dc:creator>Josh Wood</dc:creator>
      <pubDate>Fri, 01 Feb 2019 00:27:00 +0000</pubDate>
      <link>https://dev.to/jmlw/random-server-ports-and-spring-cloud-service-discovery-with-netflix-eureka-18fg</link>
      <guid>https://dev.to/jmlw/random-server-ports-and-spring-cloud-service-discovery-with-netflix-eureka-18fg</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1501823129913-a386fe76d87a%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1501823129913-a386fe76d87a%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" alt="Random Server Ports and Spring Cloud Service Discovery with Netflix Eureka"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recently, I've had an issue where I want to be able to run multiple Spring Boot services locally for testing and development. Unfortunately, they all run on the same port, so they fail to start!&lt;/p&gt;

&lt;p&gt;Fixing this is simple enough: in our Spring Boot application, we can configure a server port in our &lt;code&gt;application.yml&lt;/code&gt; like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server:
    port: 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can set the port to &lt;code&gt;0&lt;/code&gt; to have a port chosen randomly at startup. If we want a specific port instead, we can manually set it to &lt;code&gt;8081&lt;/code&gt; or &lt;code&gt;8082&lt;/code&gt;, etc. However, manually specifying ports is painful, especially if you have many services you'd like to run.&lt;/p&gt;

&lt;p&gt;Another downside to specifying ports manually shows up at deployment. When you deploy your application to a server (on-prem or in the cloud), you have to make sure there are no port collisions on the target server. This is tedious and time-consuming for the team responsible for deploying services, and annoying for the developer, who has to document the port(s) the application requires. To complicate matters further, imagine you are deploying in a Docker container under a container orchestration framework. You likely will not know ahead of time which node the application will actually be deployed on, if it's successfully deployed at all. In this situation your orchestrator could run into port conflicts, or fail to provision your application for lack of a node with your container's port available.&lt;/p&gt;

&lt;p&gt;Setting a random port comes with downsides as well. It is difficult to access your service locally, since it lands on a different port after every restart. This can be addressed with a gateway, like Spring Cloud Gateway, configured with auto-discovery via Spring Cloud Discovery and Netflix Eureka to direct traffic to your application through a common gateway port. But then we run into another problem: during initialization, your application will report port &lt;code&gt;0&lt;/code&gt; (random) to Eureka as the port it's running on, resulting in an unreachable service.&lt;/p&gt;

&lt;p&gt;This is not what we want, so we need a way to customize application initialization to choose a random port for us and let Eureka / Service Discovery register itself with that new random port. We can achieve this in Spring Boot 2 (Spring Framework 5+) by implementing the &lt;code&gt;WebServerFactoryCustomizer&lt;/code&gt; interface like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Configuration
public class WebServerFacotryCustomizerConfiguration implements WebServerFactoryCustomizer&amp;lt;ConfigurableServletWebServerFactory&amp;gt; {

    @Value("${port.number.min:8080}")
    private Integer minPort;

    @Value("${port.number.max:8090}")
    private Integer maxPort;

    @Override
    public void customize(ConfigurableServletWebServerFactory factory) {
        int port = SocketUtils.findAvailableTcpPort(minPort, maxPort);
        factory.setPort(port);
        System.getProperties().put("server.port", port);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we use Spring's SocketUtils to find an available port in our range, set that port on the servlet web server factory, and also set it as a system property. This ensures our application gets a port that doesn't collide with one already in use, and it also allows our service discovery to initialize itself properly!&lt;/p&gt;
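&lt;p&gt;For intuition, the same first-fit scan can be sketched outside of Spring. This hypothetical Python equivalent of a &lt;code&gt;findAvailableTcpPort&lt;/code&gt;-style helper (the function name is mine, not Spring's) simply tries to bind each port in the range and returns the first one that succeeds:&lt;/p&gt;

```python
import socket

def find_available_tcp_port(min_port=8080, max_port=8090):
    """Return the first port in [min_port, max_port] that accepts a bind."""
    for port in range(min_port, max_port + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                # Binding succeeds only if no other process holds the port.
                sock.bind(("127.0.0.1", port))
                return port
            except OSError:
                continue  # Port in use; try the next one.
    raise RuntimeError("no free port in range %d-%d" % (min_port, max_port))
```

&lt;p&gt;The socket is closed again immediately, so there is a small race between finding the port and the server binding it; Spring's helper has the same caveat.&lt;/p&gt;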

&lt;p&gt;On a side note, I have not tested this in a production environment or deployed it in any manner, so it's possible there will be some issues. This solution may also be difficult to use in a deployment process that requires knowing the application's port ahead of time. However, if your services all depend on Eureka for service discovery amongst themselves, Eureka will report the correct application port and IP address for your service to be reachable. As long as your services do not require additional port forwarding for your port(s) to be reachable outside the host, this should work just fine.&lt;/p&gt;

</description>
      <category>springcloud</category>
      <category>springframework</category>
    </item>
    <item>
      <title>Home File Server - SnapRAID</title>
      <dc:creator>Josh Wood</dc:creator>
      <pubDate>Mon, 07 Jan 2019 04:26:04 +0000</pubDate>
      <link>https://dev.to/jmlw/home-file-server-snapraid-1alo</link>
      <guid>https://dev.to/jmlw/home-file-server-snapraid-1alo</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5tAas42N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1484662020986-75935d2ebc66%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5tAas42N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1484662020986-75935d2ebc66%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" alt="Home File Server - SnapRAID"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first question is: what is SnapRAID? SnapRAID is a snapshot-parity, RAID-like system, or, from the blurb on &lt;a href="https://www.snapraid.it"&gt;SnapRAID's website&lt;/a&gt;, "A backup program for disk arrays. It stores parity information of your data and recovers from up to six disk failures".&lt;/p&gt;

&lt;p&gt;RAID and redundancy are very popular, but what does RAID solve? Why would you want to use it? RAID (Redundant Array of Independent Disks) is meant to keep an array online and operational when a disk in it fails: a copy of the lost data exists somewhere else in the array, and the data can be restored when you replace the bad drive. It's great for enterprise-level systems with many disks and proper backup solutions (tape, array clones, off-site, cloud, etc.). However, it isn't generally a great fit for a home file server, or any file server that doesn't have a lot of small, frequently changing files.&lt;/p&gt;

&lt;p&gt;For my use case this is overkill, and RAID has quite a few scary edges that make me shy away from it. These are generally related to hardware RAID, where you are reliant on the exact make and model of hardware to support your array; should any piece of that hardware fail, it's possible to lose all data in the array. Mitigating this is as simple as moving to software RAID, but that generally requires more compute resources than I'm willing to invest in a simple file server.&lt;/p&gt;

&lt;h2&gt;
  
  
  SnapRAID for Snapshot Parity
&lt;/h2&gt;

&lt;p&gt;To capture the best of both worlds, a JBOD (Just a Bunch Of Drives) storage array plus some sort of data parity, SnapRAID is the perfect solution. I can keep a bunch of hard drives in my home server and drop files anywhere I want on them, while that data gets some redundancy as protection against one of those drives failing. So why not use something like ZFS, which sounds like it solves roughly the same problem? In short, ZFS does not let me easily add new drives to my pool without much ado, whereas configuring SnapRAID to protect a new drive under its parity calculation is as simple as editing a single file and re-running the sync to generate the parity changes that result from adding the drive.&lt;/p&gt;

&lt;p&gt;There is one major downside to using SnapRAID instead of a real-time RAID parity calculation, though. For SnapRAID to function, you must run the &lt;code&gt;sync&lt;/code&gt; command, which reads data from all of your disks and computes the &lt;code&gt;parity&lt;/code&gt; of that data. The parity lets you regenerate a missing disk's contents by working out the bits needed to make the parity of the remaining disks match the stored parity. Since this sync runs at a point in time, it only gives you redundancy for your data as of that point. If you sync your array, then add a new file to a disk and that disk crashes, the new file is gone and not recoverable, since it was added after the parity calculation ran. The upside is that if you don't add files very often, you're unlikely to lose files between syncs! This sounds like the perfect option for a home file server.&lt;/p&gt;
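&lt;p&gt;The recovery idea is easiest to see with plain XOR parity. SnapRAID's real encoding is more elaborate (it can survive up to six failures), so treat this Python sketch as an illustration only: XOR-ing the surviving blocks with the parity block reproduces the lost block.&lt;/p&gt;

```python
def xor_blocks(*blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three "disks" worth of data and their parity block.
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d1, d2, d3)

# If disk 2 dies, its contents fall out of the surviving disks + parity.
recovered = xor_blocks(d1, d3, parity)
assert recovered == d2
```

&lt;p&gt;Notice the parity only "knows about" data that existed when it was computed, which is exactly why a file added after the last sync cannot be recovered.&lt;/p&gt;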

&lt;h2&gt;
  
  
  Installation and Configuration
&lt;/h2&gt;

&lt;p&gt;To get set up, first you need to download and compile SnapRAID, so let's get our Debian-based Linux OS ready by ensuring we're up to date and have &lt;code&gt;gcc&lt;/code&gt;, &lt;code&gt;git&lt;/code&gt;, and &lt;code&gt;make&lt;/code&gt; installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get upgrade
sudo apt-get install gcc git make -y
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now let's download, compile and install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/Downloads
wget https://github.com/amadvance/snapraid/releases/download/v11.3/snapraid-11.3.tar.gz
tar xzvf snapraid-11.3.tar.gz
cd snapraid-11.3/
./configure
make
make check
sudo make install
cd ..
sudo cp snapraid-11.3/snapraid.conf.example /etc/snapraid.conf
rm -rf snapraid-11.3*
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you don't have disks ready and you need to partition them, then also install &lt;code&gt;parted&lt;/code&gt; and &lt;code&gt;gdisk&lt;/code&gt;, then partition your disk(s):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install parted gdisk
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Partition &lt;code&gt;disk b&lt;/code&gt; (&lt;code&gt;/dev/sdb&lt;/code&gt;) and repeat for all disks that need to be partitioned (warning! This will destroy data on your disks):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo parted -a optimal /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 1 -1
(parted) align-check
alignment type(min/opt) [optimal]/minimal? optimal
Partition number? 1
1 aligned
(parted) quit
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I find it very helpful to configure parity disks and data disks to live in separate mount directories within my system; it also makes configuring tools like MergerFS much simpler, since you can reference a whole directory rather than individual mounted drives. Configure a place to mount your data and parity drives:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p /mnt/data/{disk01,disk02,disk03,disk04}
mkdir -p /mnt/parity/parity-01
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Configure your drives to be mounted via &lt;code&gt;/etc/fstab&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo blkid ### Make note of the UUID of each disk
sudo vi /etc/fstab
### Append the following as suited for your disks

# Data Disks
UUID=disk01 /mnt/data/disk01 ext4 defaults 0 2
UUID=disk02 /mnt/data/disk02 ext4 defaults 0 2
UUID=disk03 /mnt/data/disk03 ext4 defaults 0 2
UUID=disk04 /mnt/data/disk04 ext4 defaults 0 2

# Snapraid Disks
UUID=parity01 /mnt/parity/parity01 ext4 defaults 0 0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note: You can configure your fstab's last column to &lt;code&gt;0&lt;/code&gt; for the data disks to avoid boot time disk checks.&lt;/p&gt;

&lt;p&gt;Make a file system on each of the disks (assuming the disks are sdb1 through sde1):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdb1
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdc1
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdd1
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sde1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-m 2&lt;/code&gt; option reserves 2% of each data disk. If a data disk is the same size as your parity disk, this keeps you from filling it completely; otherwise the parity disk would not have enough room for the additional data the sync operation needs to store about your data parity. For the parity disk itself, however, we can use a 0% reservation, since we don't need to prevent it from filling completely:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdf1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that we have a filesystem on each of our drives, they're ready to be used. Mount them all with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mount -a
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now configure SnapRAID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vi /etc/snapraid.conf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is similar to my configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;parity /mnt/parity/parity01/snapraid.parity

content /var/snapraid/content
content /mnt/data/disk01/content
content /mnt/data/disk02/content
content /mnt/data/disk03/content
content /mnt/data/disk04/content

disk d1 /mnt/data/disk01/
disk d2 /mnt/data/disk02/
disk d3 /mnt/data/disk03/
disk d4 /mnt/data/disk04/

exclude *.bak
exclude *.unrecoverable
exclude /tmp/
exclude /lost+found/
exclude .AppleDouble
exclude ._AppleDouble
exclude .DS_Store
exclude .Thumbs.db
exclude .fseventsd
exclude .Spotlight-V100
exclude .TemporaryItems
exclude .Trashes
exclude .AppleDB

block_size 256

autosave 250
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Create a directory for the content file on your root drive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /var/snapraid
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now run the first sync to calculate parity for your drives. This may take a long time depending on how much data you have.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo snapraid sync
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that everything is set up and the first sync of the data array is in progress, how should parity be kept in sync? I originally found an article similar to this one on &lt;a href="https://zachreed.me"&gt;zackreed.me&lt;/a&gt;, where Zack Reed had written a very nice script that uses SnapRAID's &lt;code&gt;diff&lt;/code&gt; command, checks the number of changed files that command reports, and compares it against a threshold. If the threshold is breached, no further action is taken and the user is emailed to let them know there are more changed or deleted files than the threshold allows. For the past few years I've used the script to run nightly sync jobs and a weekly partial &lt;code&gt;scrub&lt;/code&gt; of my array to ensure everything is up to date and there is no "bit rot". You can find the &lt;a href="http://zackreed.me/updated-snapraid-sync-script/"&gt;original script here&lt;/a&gt;, and the &lt;a href="https://zackreed.me/snapraid-split-parity-sync-script/"&gt;updated script, which supports split parity, here&lt;/a&gt;.&lt;/p&gt;
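&lt;p&gt;The decision at the heart of such a script can be sketched in a few lines of Python. This is a hypothetical reconstruction, not Zack's script: the summary-line format it parses (e.g. '3 removed') is an assumption about &lt;code&gt;snapraid diff&lt;/code&gt; output, and the threshold value is arbitrary.&lt;/p&gt;

```python
import re

def count_changes(diff_output):
    """Pull counters like '3 removed' out of snapraid diff output.
    The summary-line format assumed here is an approximation."""
    counts = {}
    for action in ("added", "removed", "updated", "moved"):
        match = re.search(r"(\d+)\s+%s" % action, diff_output)
        counts[action] = int(match.group(1)) if match else 0
    return counts

def sync_is_blocked(diff_output, threshold=50):
    # Block the automatic sync (and email the user instead) when too
    # many files were removed or updated since the last parity run,
    # since syncing would bake those deletions into the parity.
    counts = count_changes(diff_output)
    return counts["removed"] + counts["updated"] > threshold
```

&lt;p&gt;A cron job would run &lt;code&gt;snapraid diff&lt;/code&gt;, feed its output to a check like this, and only run &lt;code&gt;snapraid sync&lt;/code&gt; when the check passes.&lt;/p&gt;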

&lt;h2&gt;
  
  
  Concluding
&lt;/h2&gt;

&lt;p&gt;This is the base setup I use for my home server's data array, but I actually have more than just 4 data drives, so I currently employ SnapRAID's recommended 2-parity setup: for every 4 data drives, I have at least 1 parity drive. At the moment that's 6 data drives (ranging from 2 TB to 4 TB) and 2 parity drives at 4 TB each (SnapRAID requires your parity drives to be at least as big as your biggest data drive). A couple of these drives are quite old and I expect them to fail soon; when they do, I'll be sure to write a quick post on running the recovery to replace the drives and rebuild the data destroyed by the failing drive (I've already had practice with this one, so hopefully next time will go much better, or I'll replace the drives before they fail!).&lt;/p&gt;

&lt;p&gt;This post also does not cover the &lt;code&gt;pool&lt;/code&gt; feature of SnapRAID, which joins multiple drives together into one big "folder". However, I find this feature lacking in SnapRAID and prefer to use MergerFS as my drive pooling solution (coming in a future blog post).&lt;/p&gt;

&lt;p&gt;One final note is that it's possible to use SnapRAID on encrypted volumes as well. You could entirely encrypt data drives and parity drives, automatically mount and decrypt them at boot with a key file securely stored under your root account on your server after successfully entering a passphrase to unlock your root filesystem, and it can all be done remotely through SSH. This will probably be coming in a future blog post as well.&lt;/p&gt;

&lt;p&gt;For any questions or comments, reach out through the Disqus comments below.&lt;/p&gt;

</description>
      <category>fileserver</category>
    </item>
    <item>
      <title>Elasticsearch - IDs are hard</title>
      <dc:creator>Josh Wood</dc:creator>
      <pubDate>Wed, 01 Aug 2018 21:16:00 +0000</pubDate>
      <link>https://dev.to/jmlw/elasticsearch-ids-are-hard-4923</link>
      <guid>https://dev.to/jmlw/elasticsearch-ids-are-hard-4923</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LikRpIpB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.joshmlwood.com/content/images/2018/07/elasticsearch-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LikRpIpB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.joshmlwood.com/content/images/2018/07/elasticsearch-2.png" alt="Elasticsearch - IDs are hard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sometimes RTFM (read the f****** manual) is really the best solution, but when building quickly and being agile, there's not always time to read every page of the manual.&lt;/p&gt;

&lt;p&gt;I learned recently that Elasticsearch (and, coincidentally, Amazon DynamoDB) enforces a limit on document IDs. I discovered this because generated document IDs were used to map DynamoDB documents to Elasticsearch documents. For Elasticsearch, the limit on the document ID is 512 &lt;em&gt;bytes&lt;/em&gt;. If you are creating document IDs, make sure you account for this limit.&lt;/p&gt;

&lt;p&gt;Specifically, the error encountered was &lt;code&gt;id is too long, must be no longer than 512 bytes but was: 513&lt;/code&gt;. Taking a look at the &lt;a href="https://github.com/elastic/elasticsearch"&gt;Elasticsearch source code on GitHub&lt;/a&gt;, or more specifically the &lt;a href="https://github.com/elastic/elasticsearch/blob/d56de9890d895cd3038aa12c6d320512f6e88b9c/server/src/main/java/org/elasticsearch/action/index/IndexRequest.java#L182"&gt;IndexRequest.java&lt;/a&gt; class, it is fairly clear how this error is generated. An index request validates the document being processed to ensure it conforms to Elasticsearch's internal constraints, and if it does not, it returns a descriptive error for the constraint that was violated.&lt;/p&gt;

&lt;p&gt;Here are two examples of measuring an ID's size in bytes, in the languages I use most frequently at work, Python and Java:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def get_len_bytes(a_string):
    bytes_of_a_string = bytes(a_string, 'utf-8')
    return len(bytes_of_a_string)

public int getLengthBytes(String aString) {
    byte[] utf8Bytes = aString.getBytes("UTF-8");
    return utf8Bytes.length;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To handle this, my team and I discussed possible solutions to allow us to continue saving these documents, even for generated IDs that are too long. Some possible solutions we came up with were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make a hash of the document ID (this would mostly guarantee unique keys, and the document ID is still idempotent)&lt;/li&gt;
&lt;li&gt;Truncate the document ID (less desirable as it's possible to generate duplicate document IDs)&lt;/li&gt;
&lt;li&gt;Reject documents where the document ID is too long&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the end we decided on the third option. For our use case, anything longer than 512 bytes is uncommon, so we can take this naive approach and push off handling over-long IDs to some point in the future.&lt;/p&gt;
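&lt;p&gt;A minimal guard along the lines of the option we chose might look like the following Python sketch (the function and constant names are illustrative, not our production code). The important detail is to measure the UTF-8 encoded length rather than the character count, since multi-byte characters inflate the byte size:&lt;/p&gt;

```python
MAX_ID_BYTES = 512  # Elasticsearch's limit on the document _id

def validate_document_id(doc_id):
    """Reject IDs whose UTF-8 encoding exceeds the Elasticsearch limit."""
    encoded = doc_id.encode("utf-8")
    if len(encoded) > MAX_ID_BYTES:
        raise ValueError(
            "id is too long, must be no longer than 512 bytes but was: %d"
            % len(encoded))
    return doc_id
```

&lt;p&gt;A 512-character ASCII ID passes, but a 513-character one, or a shorter string full of multi-byte characters, is rejected before it ever reaches Elasticsearch.&lt;/p&gt;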

&lt;p&gt;As noted earlier, we were even finding size limitations with DynamoDB, AWS's managed NoSQL document store. DynamoDB limitations are laid out in &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#limits-partition-sort-keys"&gt;Partition &amp;amp; Sort Key Limits&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The exception we encountered in this case was a generic exception thrown by DynamoDB, &lt;code&gt;ValidationException&lt;/code&gt;. Looking at the exception more closely, it was similar to &lt;code&gt;One or more parameter values were invalid: Aggregated size of all range keys has exceeded the size limit of 1024 bytes (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: SOME_AWS_REQUEST)&lt;/code&gt;. Basically, this means that if you are writing a single record then the range key is too long and should be shortened or the record thrown away.&lt;/p&gt;

&lt;p&gt;However, you will most likely see this error when attempting a &lt;code&gt;batch_write_item&lt;/code&gt; (Python) or &lt;code&gt;batchWriteItem&lt;/code&gt; (Java). Here, the error means that given the list of records, the total bytes of all range keys in all records is larger than 1024 bytes, so the request cannot be processed. Oh, and don't forget, batch operations in DynamoDB can only handle up to 100 total items anyway, so if you expect similar sizes of range keys, you have about 120 bytes per range key available.&lt;/p&gt;

&lt;p&gt;I came up with a pretty terrible solution to handle both of these cases. First, I built a &lt;code&gt;chunk&lt;/code&gt; function which takes a list of things and a &lt;code&gt;chunk_size&lt;/code&gt;, then returns a list of lists where each nested list is at most &lt;code&gt;chunk_size&lt;/code&gt; long. "What about the 'aggregated size of all range keys' error you talked about...?" For that case, each chunk from the previous function is handed to a &lt;code&gt;chunk_by_bytes&lt;/code&gt; function. It is given a list of items to chunk, a field name to chunk by, and a maximum size for the concatenation of all values of that field from the given list. It returns a list of one or more lists in which the concatenated field values do not exceed the given size. This approach was good enough to resolve the errors we saw, except where a single record was too large. In those cases the data is simply dropped and logged so it can be reviewed later.&lt;/p&gt;
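&lt;p&gt;The two helpers described above might be sketched as follows. The names match the description, but the exact behavior of the real script is assumed, and the caller is still responsible for dropping and logging any single item that blows the byte budget on its own:&lt;/p&gt;

```python
def chunk(items, chunk_size):
    """Split a list into sublists of at most chunk_size items."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def chunk_by_bytes(items, field, max_bytes):
    """Regroup items so that the UTF-8 size of `field`, summed per
    group, never exceeds max_bytes. A single oversized item still ends
    up in its own group and must be handled by the caller."""
    groups, current, current_bytes = [], [], 0
    for item in items:
        size = len(item[field].encode("utf-8"))
        if current and current_bytes + size > max_bytes:
            # Current group is full; start a new one.
            groups.append(current)
            current, current_bytes = [], 0
        current.append(item)
        current_bytes += size
    if current:
        groups.append(current)
    return groups
```

&lt;p&gt;Each group produced this way is then small enough, in both item count and aggregate key bytes, to hand to a batch write call.&lt;/p&gt;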

&lt;p&gt;So this just leads to another RTFM moment. Partition keys (hash key) can be at most 2048 bytes while sort keys (range key) can be at most 1024 bytes. Don't forget that a DynamoDB item can only be as large as 400 kilobytes, which includes the name(s) of your attribute(s) in UTF-8 encoding. This is important to keep in mind, especially if you attempt to save entire Google Vision API results to a single DynamoDB record, and just swallow exceptions without reporting them, as a coworker of mine discovered recently while building a prototype.&lt;/p&gt;

&lt;h1&gt;
  
  
  In conclusion
&lt;/h1&gt;

&lt;p&gt;Every technology you touch imposes its own limits on your data. You must make sure your data conforms to those limits, or can be encoded in some way that fits within them. If you take the encoding route, decoding will be simple for your application, but the downside is a reduced ability to search or query the encoded data. It also means your records are much harder to work with and use for debugging when a human needs to interact with them. Your use case and your need for debuggability will determine whether the data should be reformatted, rejected, or encoded, and which approach works best for you, your team, and your application. Personally, I would be as transparent as possible and only keep "good" data, in a format that's easy for both machine and human to read, as well as to mock. Anything else will likely make debugging and observability difficult or impossible.&lt;/p&gt;

</description>
      <category>elasticsearch</category>
    </item>
    <item>
      <title>Spring Data Elasticsearch and GeoPoints</title>
      <dc:creator>Josh Wood</dc:creator>
      <pubDate>Tue, 05 Jul 2016 15:33:00 +0000</pubDate>
      <link>https://dev.to/jmlw/spring-data-elasticsearch-and-geopoints-21dm</link>
      <guid>https://dev.to/jmlw/spring-data-elasticsearch-and-geopoints-21dm</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1524146128017-b9dd0bfd2778%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1524146128017-b9dd0bfd2778%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" alt="Spring Data Elasticsearch and GeoPoints"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some backstory... I was working on a Java app with an Angular 1.x frontend. The base project was generated by &lt;a href="http://jhipster.github.io/" rel="noopener noreferrer"&gt;JHipster&lt;/a&gt; and uses PostgreSQL for the DB and Spring Boot.&lt;/p&gt;

&lt;p&gt;The application relies heavily on being able to return a list of items that are "near" a location provided by the client. So really just a geospatial search. To accomplish this, I decided to use Elasticsearch's built-in geo functions. I figured the easiest approach would be to let my JPA entity, already annotated as a Hibernate entity, double as my search-engine entity, as long as I used a string to represent my geopoint. Setting that up was pretty simple: just concatenate latitude, a comma, and longitude (or the reverse, depending on the use case) to conform to Elasticsearch's requirements. Now we can store it in the database without a serializer and deserializer, and it should work out of the box with Elasticsearch.&lt;/p&gt;

&lt;p&gt;I wrote a couple of tests with dummy data to make sure that I could do geo-based searches and get back exactly what I expected, i.e. not return a lat/lon in New York when I search for results within 1 kilometer of the Chicago Loop. When I ran the tests, all I saw was red. I was very disheartened by this. I had followed exactly what the Spring Data Elasticsearch developers did in their tests and thought I had it set up correctly. I made sure my geopoint string was annotated with &lt;code&gt;@GeoPointField&lt;/code&gt;, and that my entity was annotated as an Elasticsearch &lt;code&gt;@Document&lt;/code&gt;. But alas, no matter what I tried, Elasticsearch could not process my string as a geopoint.&lt;/p&gt;

&lt;p&gt;Then I took another look at the documentation and the tests on GitHub for Spring Data Elasticsearch. You can look at the &lt;a href="https://github.com/spring-projects/spring-data-elasticsearch/blob/b2f0300856cb3dd9bfeade3e1db26da96bc2d88a/src/test/java/org/springframework/data/elasticsearch/core/geo/LocationMarkerEntity.java" rel="noopener noreferrer"&gt;LocationMarkerEntity.java&lt;/a&gt; entity from their tests as an example. In their tests they are able to return items using geospatial queries against the geopoint String fields. The only difference between my entity and theirs was some additional options on the &lt;code&gt;@Document&lt;/code&gt; annotation. I had assumed that the &lt;code&gt;@GeoPointField&lt;/code&gt; annotation on my string would make it work without a hitch, however that's not the case. As you can see below, the Spring Data Elasticsearch source also includes the &lt;code&gt;type = "geo-annotation-point-type"&lt;/code&gt; option in the &lt;code&gt;@Document&lt;/code&gt; annotation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package org.springframework.data.elasticsearch.core.geo;

import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.GeoPointField;

@Document(indexName = "test-geo-index", type = "geo-annotation-point-type", shards = 1, replicas = 0, refreshInterval = "-1")
public class LocationMarkerEntity {
    // some code omitted for brevity
    @GeoPointField
    private String locationAsString;

    @GeoPointField
    private double[] locationAsArray;
    // omitted
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I searched the &lt;a href="http://docs.spring.io/spring-data/elasticsearch/docs/current/reference/html/" rel="noopener noreferrer"&gt;Spring Data Elasticsearch docs&lt;/a&gt; but found no reference to this type parameter. A cursory search of the Elasticsearch docs didn't turn anything up either. I also looked through the &lt;a href="https://github.com/spring-projects/spring-data-elasticsearch" rel="noopener noreferrer"&gt;Spring Data Elasticsearch source&lt;/a&gt; to see if this parameter was defined in the &lt;code&gt;@Document&lt;/code&gt; annotation, but didn't find it. I likely missed it in the docs or in the source, but it led to a frustrating few days as I tried to work out why my geo indexing wasn't behaving correctly.&lt;/p&gt;

&lt;p&gt;Hopefully this will help anybody else having the same issue with the Spring Data Elasticsearch framework.&lt;/p&gt;

</description>
      <category>elasticsearch</category>
      <category>springframework</category>
    </item>
  </channel>
</rss>
