<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Stephan</title>
    <description>The latest articles on DEV Community by Stephan (@stephan007).</description>
    <link>https://dev.to/stephan007</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F169965%2F561bae40-290b-4cb6-9ffb-7ae827989ca5.jpg</url>
      <title>DEV Community: Stephan</title>
      <link>https://dev.to/stephan007</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/stephan007"/>
    <language>en</language>
    <item>
      <title>Tracking players using ML solutions</title>
      <dc:creator>Stephan</dc:creator>
      <pubDate>Mon, 30 Dec 2019 08:59:04 +0000</pubDate>
      <link>https://dev.to/stephan007/tracking-players-using-ml-solutions-kc0</link>
      <guid>https://dev.to/stephan007/tracking-players-using-ml-solutions-kc0</guid>
      <description>&lt;p&gt;A challenging problem is tracking individuals using machine learning solutions.   I will investigate several possible solutions for my open source sports analysis project.&lt;/p&gt;

&lt;p&gt;For each solution I will use the same short basketball video which starts with several isolated players and ends with multiple occlusions.   The output video is in slow motion so we can more easily observe what's happening.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Occlusion occurs if an object you are tracking is hidden (occluded) by another object. Like two persons walking past each other, or a car that drives under a bridge.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;First the players need to be localised; this is often done using a convolutional neural network (for example Mask R-CNN).  In the second phase the tracking happens, where key points of the human body are often linked to a unique person and visualised.&lt;/p&gt;

&lt;p&gt;If we had multiple cameras we could track each player using face recognition.   See my article on &lt;a href="https://www.linkedin.com/pulse/face-recognition-action-devoxx-stephan-janssen/"&gt;Face Recognition at Devoxx&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As always, please don't hesitate to suggest any other possible techniques or solutions which I should investigate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0rgem5l2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/179457/71562554-35814200-2a82-11ea-924e-087089f2ab65.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0rgem5l2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/179457/71562554-35814200-2a82-11ea-924e-087089f2ab65.png" alt="OpenCVTracking"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Tracking with OpenCV&lt;/h2&gt;

&lt;p&gt;OpenCV has built-in support for tracking objects and the actual implementation is very straightforward.  With just a couple of lines of code you can track a moving object as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;cv2&lt;/span&gt;

&lt;span class="c1"&gt;# ...
&lt;/span&gt;
&lt;span class="c1"&gt;# Create MultiTracker object
&lt;/span&gt;&lt;span class="n"&gt;multiTracker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MultiTracker_create&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Initialize MultiTracker 
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;pBox&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;players_boxes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Add rectangle to track within given frame  
&lt;/span&gt;  &lt;span class="n"&gt;multiTracker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TrackerCSRT_create&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pBox&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# ...
&lt;/span&gt;
&lt;span class="c1"&gt;# get updated location of objects in subsequent frames
&lt;/span&gt;&lt;span class="n"&gt;success&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;boxes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;multiTracker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# draw new boxes on current frame
&lt;/span&gt;
&lt;span class="c1"&gt;# ...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see from the annotated video it does a pretty good job right up until a player passes (overlaps) another player (= occlusion).&lt;/p&gt;

&lt;p&gt;A total of 6 persons were tracked; only the referee and one yellow player were tracked correctly until the end.  Hmm, not the result I was hoping for.&lt;/p&gt;
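&lt;p&gt;One way to quantify why occlusion breaks these trackers is to measure how much two bounding boxes overlap. Below is a minimal intersection-over-union (IoU) sketch in plain Python; the boxes are assumed to be (x1, y1, x2, y2) tuples and the helper name is my own:&lt;/p&gt;

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    # Coordinates of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap at all
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

&lt;p&gt;When the IoU between two player boxes climbs towards 1, the players occlude each other and a correlation tracker such as CSRT can easily latch onto the wrong person afterwards.&lt;/p&gt;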

&lt;p&gt;Video result can be viewed on &lt;a href="https://www.youtube.com/watch?v=6b__GMsoW4k"&gt;YouTube&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uULjNXrV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/179457/71574448-04882800-2ae9-11ea-88df-b422d0fb71dc.png" class="article-body-image-wrapper"&gt;&lt;img width="970" alt="KeyPointsTracking" src="https://res.cloudinary.com/practicaldev/image/fetch/s--uULjNXrV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/179457/71574448-04882800-2ae9-11ea-88df-b422d0fb71dc.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Tracking with Key Points&lt;/h2&gt;

&lt;p&gt;The next experiment is with COCO keypoint R-CNN, which can easily be enabled using Detectron2.&lt;/p&gt;

&lt;p&gt;As you can see from the output below, the rectangle around each person keeps the same colour as long as it's tracking the same person.   However, when occlusion happens we get the same result as with OpenCV... chaos.&lt;/p&gt;

&lt;p&gt;Video result can be viewed on &lt;a href="https://www.youtube.com/watch?v=QjRLdIkTbo4"&gt;YouTube&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Tracking with Pose Flow&lt;/h2&gt;

&lt;p&gt;Pose Flow shows a unique number for each person it tracks and draws the same coloured rectangle around the same person, in addition to the key points of the human body.&lt;/p&gt;

&lt;p&gt;It starts out really confident, but again (as with the previous experiments) when occlusion appears it basically loses track of the overlapping players.&lt;/p&gt;
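&lt;p&gt;This identity-switch failure mode is easy to reproduce with even the simplest tracking scheme. Below is a toy greedy matcher (my own simplification, not the Pose Flow algorithm) that links each detection to the nearest track centroid from the previous frame; as soon as two players cross, the nearest-centroid assumption hands out the wrong IDs:&lt;/p&gt;

```python
import math

def match_ids(prev_tracks, detections):
    """Greedily assign each detection the ID of the nearest previous centroid.

    prev_tracks: dict mapping track id to its (x, y) centroid in the previous frame
    detections:  list of (x, y) centroids detected in the current frame
    Returns a dict mapping track id to its (x, y) centroid in the current frame.
    """
    unused = dict(prev_tracks)
    assigned = {}
    for det in detections:
        if not unused:
            break
        # Pick the closest remaining track; this is exactly what goes
        # wrong when two players cross paths (occlusion)
        best_id = min(unused, key=lambda i: math.dist(unused[i], det))
        assigned[best_id] = det
        del unused[best_id]
    return assigned
```

&lt;p&gt;Pose Flow and KeyTrack replace this naive distance heuristic with pose similarity across frames, which is much more robust, but as the video shows still not immune to heavy occlusion.&lt;/p&gt;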

&lt;p&gt;Video result can be viewed on &lt;a href="https://www.youtube.com/watch?v=JeBNX8YHBY4"&gt;YouTube&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The full scientific details of the Pose Flow algorithm can be downloaded &lt;a href="https://arxiv.org/pdf/1802.00977.pdf"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Shout out to &lt;a href="https://www.linkedin.com/in/jgilewski/"&gt;Jarosław Gilewski&lt;/a&gt; for his &lt;a href="https://github.com/jagin/detectron2-pipeline"&gt;Detectron2 Pipeline&lt;/a&gt; project which allowed me to rapidly run the above simulations!&lt;/p&gt;

&lt;h2&gt;The Pose Track challenge&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;PoseTrack is a large-scale benchmark for human pose estimation and articulated tracking in video. We provide a publicly available training and validation set as well as an evaluation server for benchmarking on a held-out test set. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://posetrack.net/leaderboard.php"&gt;PoseTrack&lt;/a&gt; organises every year tracking and detection competitions for single frame and multiple frames pose datasets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W0SEAMYL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/179457/71574487-2bdef500-2ae9-11ea-83bf-96638158d126.png" class="article-body-image-wrapper"&gt;&lt;img width="1089" alt="PoseTrackingCompetition" src="https://res.cloudinary.com/practicaldev/image/fetch/s--W0SEAMYL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/179457/71574487-2bdef500-2ae9-11ea-83bf-96638158d126.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In 2018 the "Pose Flow" team finished in 13th position and the "Key Track" team won the multi-person tracking challenge.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Pose tracking is an important problem that requires identifying unique human pose-instances and matching them temporally across different frames of a video. However, existing pose tracking methods are unable to accurately model temporal relationships and require significant computation, often computing the tracks offline.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9G6MqL2C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/179457/71574573-9b54e480-2ae9-11ea-9a12-ec64e0afbb04.png" class="article-body-image-wrapper"&gt;&lt;img width="1149" alt="KeyTrack" src="https://res.cloudinary.com/practicaldev/image/fetch/s--9G6MqL2C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/179457/71574573-9b54e480-2ae9-11ea-9a12-ec64e0afbb04.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The Key Track solution&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;KeyTrack introduces Pose Entailment, where a binary classification is made as to whether two poses from different time steps are the same person. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Unfortunately I was unable to find an (open source) Key Track implementation.  If you know where to find it please let me know and I will try it out.&lt;/p&gt;
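&lt;p&gt;Until an implementation surfaces, the core question is at least easy to state in code: given two poses (lists of key points) from different time steps, decide whether they belong to the same person. A toy stand-in for that binary decision (purely my own illustration; KeyTrack uses a learned transformer-based classifier, not a distance threshold) could look like this:&lt;/p&gt;

```python
import math

def same_person(pose_a, pose_b, threshold=25.0):
    """Toy 'pose entailment': True when the mean key-point distance is small.

    pose_a, pose_b: equal-length lists of (x, y) key points from two frames.
    threshold: pixel distance below which we call it the same person
               (an arbitrary value, chosen only for illustration).
    """
    dists = [math.dist(a, b) for a, b in zip(pose_a, pose_b)]
    mean = sum(dists) / len(dists)
    return threshold > mean
```

&lt;p&gt;The interesting part of the real method is that the classifier learns temporal pose relationships instead of relying on a fixed threshold like this.&lt;/p&gt;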

&lt;p&gt;The scientific paper can be downloaded &lt;a href="https://arxiv.org/pdf/1912.02323.pdf"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;The ideal camera setup?&lt;/h2&gt;

&lt;p&gt;One idea is to change the angle of the stationary camera(s) to get a bird's-eye view, hopefully limiting the number of occlusions.&lt;/p&gt;

&lt;p&gt;The ideal setup would be a top-down stationary camera above the middle of the court. The IP-enabled camera would need a super-wide-angle lens.  Similar to the &lt;a href="https://sites.uclouvain.be/ispgroup/Softwares/APIDIS"&gt;European APIDIS&lt;/a&gt; basketball project, in which the Université catholique de Louvain was involved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5tPdRcme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/179457/71574513-4c0eb400-2ae9-11ea-8c67-823cb533a875.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5tPdRcme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/179457/71574513-4c0eb400-2ae9-11ea-8c67-823cb533a875.jpg" alt="MegaVideo_xl"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The APIDIS project used seven 2-megapixel colour IP cameras (&lt;a href="https://sales.arecontvision.com/product/MegaVideo+Series/AV2100"&gt;Arecont Vision AV2100&lt;/a&gt;) recording at 22 fps, with a timestamp for each frame, at 1600x1200 pixels.&lt;/p&gt;

&lt;p&gt;And (as shown below) an additional two cameras, each recording a side of the basketball court.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CNPciUpp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/179457/71574531-66e12880-2ae9-11ea-8051-658e0398a61f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CNPciUpp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/179457/71574531-66e12880-2ae9-11ea-8051-658e0398a61f.png" alt="basketballcourt2"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
    </item>
    <item>
      <title>The journey towards creating a Basketball mini-map</title>
      <dc:creator>Stephan</dc:creator>
      <pubDate>Sat, 21 Dec 2019 15:22:24 +0000</pubDate>
      <link>https://dev.to/stephan007/the-journey-towards-creating-a-basketball-mini-map-73o</link>
      <guid>https://dev.to/stephan007/the-journey-towards-creating-a-basketball-mini-map-73o</guid>
      <description>&lt;p&gt;One specific goal of the open source basketball analytics machine learning project is to provide a mini-map of the players. Basically a top-down view of the court with the different players represented as coloured circles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309746-f14fcb00-240b-11ea-864c-4c1333ea82b0.png" class="article-body-image-wrapper"&gt;&lt;img alt="2dColouredMap" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309746-f14fcb00-240b-11ea-864c-4c1333ea82b0.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Eventually we could also draw the players' movements on the 2D view to detect patterns of basketball plays.&lt;/p&gt;

&lt;p&gt;Let's have a closer look at how this can be accomplished using Python, OpenCV and machine learning libraries.&lt;/p&gt;

&lt;p&gt;Suggestions and comments are always very welcome to improve this open source project. I've also included a fully working tutorial for this article so you can experiment with the provided code (link in the footer of the article).&lt;/p&gt;

&lt;h1&gt;Where to place the camera?&lt;/h1&gt;

&lt;p&gt;I've done two camera experiments, one where the camera is positioned in the corner and another where it's placed in the middle (as shown in the pictures below).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309764-2f4cef00-240c-11ea-9c6b-3ee0540865d0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309764-2f4cef00-240c-11ea-9c6b-3ee0540865d0.jpeg" alt="court"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The best result for image transformations was achieved where the camera was positioned in the middle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309754-0fb5c680-240c-11ea-933c-74ee165c619c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309754-0fb5c680-240c-11ea-933c-74ee165c619c.jpg" alt="3DBasketballMiddleView (1)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Read also my article on how to &lt;a href="https://www.linkedin.com/pulse/how-record-basketball-game-budget-stephan-janssen/" rel="noopener noreferrer"&gt;record a basketball game on a budget&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;Identify the players with Detectron2&lt;/h1&gt;

&lt;p&gt;With Mask R-CNN models you can easily identify objects in an image. #Yolo&lt;/p&gt;

&lt;p&gt;I played with Yolo last week but wanted to experiment with &lt;a href="https://github.com/facebookresearch/detectron2" rel="noopener noreferrer"&gt;Detectron2&lt;/a&gt; (powered by PyTorch). This is an open source project from Facebook that implements state-of-the-art object detection algorithms. It's amazing what it can detect, so let's have a closer look.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309771-3d9b0b00-240c-11ea-8dd4-76717f16cd8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309771-3d9b0b00-240c-11ea-8dd4-76717f16cd8b.png" alt="persons1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Funnily enough, the model thinks the right basketball hoop is a TV with 56% probability. It also correctly found a chair with 61% probability.&lt;/p&gt;

&lt;p&gt;We'll need to filter out the persons and work only with the players that are on the court. The picture used has all the players grouped together because it's the start of the game; as a result only 8 out of 10 players were found.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://competitions.codalab.org/competitions/19507" rel="noopener noreferrer"&gt;COCO Panoptic Segmentation&lt;/a&gt; model detects the ceiling, walls and floor and colours them accordingly. This will be very interesting input for the court detection, because we can then limit the "search" to the floor polygon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309777-54416200-240c-11ea-87fa-1214248b60d0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309777-54416200-240c-11ea-87fa-1214248b60d0.png" alt="persons2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Detectron2 also supports Human Pose Estimation which we'll use in the future to classify basketball actions of players.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309785-628f7e00-240c-11ea-8747-5c0681d55c2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309785-628f7e00-240c-11ea-8747-5c0681d55c2a.png" alt="persons3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Retrieving the position of each player is accomplished using the following Python code.&lt;/p&gt;

&lt;p&gt;The DefaultPredictor returns a list of rectangle coordinates (pred_boxes) for each identified object. The object classes are stored in pred_classes, where person objects are marked as 0.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309801-a1253880-240c-11ea-8022-ac95a596b131.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309801-a1253880-240c-11ea-8022-ac95a596b131.png" alt="code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because the automatic court detection is not yet ready, I had to provide the polygon coordinates of the court manually.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;pts_src&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;258&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;       &lt;span class="c1"&gt;# left bottom - bottom corner
&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;308&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;     &lt;span class="c1"&gt;# middle bottom corner
&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;798&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;280&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;     &lt;span class="c1"&gt;# right bottom - bottom corner
&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;798&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;220&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;     &lt;span class="c1"&gt;# right bottom - top corner
&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;612&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;176&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;     &lt;span class="c1"&gt;# top right rorner
&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;186&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;168&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;     &lt;span class="c1"&gt;# top left corner
&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;201&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;        &lt;span class="c1"&gt;# left bottom - top corner
&lt;/span&gt;    &lt;span class="p"&gt;])&lt;/span&gt;   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Drawing this polygon onto the image allowed me to debug my court coordinates and adjust them where needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309807-b8642600-240c-11ea-91a1-66b08d2452ec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309807-b8642600-240c-11ea-91a1-66b08d2452ec.png" alt="court_poly"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;Representing a player on the court&lt;/h1&gt;

&lt;p&gt;We will draw a blue circle for each player by iterating over the predicted coordinates of the found objects (boxes). We should only include Person objects positioned within the court polygon, using the Point(player_pos).within(court) check.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Use the boxes info from the tensor prediction result
#
# x1,y1 ------
# |          |
# |          |
# |          |
# --------x2,y2
#
&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;shapely.geometry&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Point&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Polygon&lt;/span&gt;

&lt;span class="n"&gt;color&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;   &lt;span class="c1"&gt;# BLUE
&lt;/span&gt;&lt;span class="n"&gt;thickness&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="n"&gt;radius&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;

&lt;span class="n"&gt;i&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;box&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;pred_boxes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

  &lt;span class="c1"&gt;# Include only class Person
&lt;/span&gt;  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;pred_classes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  

    &lt;span class="n"&gt;x1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;box&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;y1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;box&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="n"&gt;x2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;box&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;y2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;box&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="n"&gt;xc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;x2&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;x1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;player_pos&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;xc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;court&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Polygon&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;src_pts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Draw only players that are within the basketball court
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nc"&gt;Point&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;player_pos&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;within&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;court&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
      &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;circle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;im&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;player_pos&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;radius&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;thickness&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;lineType&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;shift&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great, we have now marked 8 players on the basketball court, with two hidden in the back 🏀💪🏻&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309816-de89c600-240c-11ea-8f77-fd32159d4f57.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309816-de89c600-240c-11ea-8f77-fd32159d4f57.png" alt="playersoncourt"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;Image transformations&lt;/h1&gt;

&lt;p&gt;Using homography image transformation we can morph the above image onto a 2D court image shown below.&lt;/p&gt;

&lt;p&gt;We declare the corresponding court coordinates (the same 7 points, starting with the left bottom - bottom corner, etc.) but now from the 2D image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Four corners of the court + mid-court circle point in destination image 
# Start top-left corner and go anti-clock wise + mid-court circle point
&lt;/span&gt;&lt;span class="n"&gt;dst_pts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;355&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;       &lt;span class="c1"&gt;# left bottom - bottom corner
&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;317&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;351&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;      &lt;span class="c1"&gt;# middle bottom corner
&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;563&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;351&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;      &lt;span class="c1"&gt;# right bottom - bottom corner
&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;629&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;293&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;      &lt;span class="c1"&gt;# right bottom - top corner
&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;628&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;        &lt;span class="c1"&gt;# top right rorner
&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;          &lt;span class="c1"&gt;# top left corner
&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;299&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;         &lt;span class="c1"&gt;# left bottom - top corner
&lt;/span&gt;    &lt;span class="p"&gt;])&lt;/span&gt;   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now for the homography call, which behind the scenes boils down to matrix multiplication.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Calculate Homography
&lt;/span&gt;
&lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findHomography&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;src_pts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dst_pts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;img_out&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warpPerspective&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;im&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img_dst&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;img_dst&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
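&lt;p&gt;For intuition, here is a minimal plain-Python sketch (not part of the project code; apply_homography and H_scale are illustrative names) of the per-point arithmetic the 3x3 homography matrix performs: multiply by the homogeneous point [x, y, 1], then divide by the resulting w.&lt;/p&gt;

```python
# Minimal sketch (plain Python, no OpenCV) of what warpPerspective does per pixel:
# a 3x3 homography H maps (x, y) via [xh, yh, w] = H . [x, y, 1], then divides by w.

def apply_homography(H, point):
    """Map a single (x, y) point with a 3x3 homography matrix H (list of rows)."""
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

# Example: a pure scaling homography doubles every coordinate.
H_scale = [[2, 0, 0],
           [0, 2, 0],
           [0, 0, 1]]
print(apply_homography(H_scale, (10, 20)))  # (20.0, 40.0)
```

&lt;p&gt;The matrix returned by cv2.findHomography plays exactly the role of H here; warpPerspective applies it to every pixel of the source image.&lt;/p&gt;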



&lt;p&gt;The output image (img_out) shows the player dots within a 2D view of the court!! 😱&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309833-01b47580-240d-11ea-801c-2670906113f5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71309833-01b47580-240d-11ea-801c-2670906113f5.png" alt="transformed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The basketball mini-map solution is almost here.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Mask strategy
&lt;/h2&gt;

&lt;p&gt;One approach to get the player coordinates on the transformed basketball court image is via a colour mask.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;lower_range&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;                         &lt;span class="c1"&gt;# Set the Lower range value of blue in BGR
&lt;/span&gt;&lt;span class="n"&gt;upper_range&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;155&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;155&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;                     &lt;span class="c1"&gt;# Set the Upper range value of blue in BGR
&lt;/span&gt;&lt;span class="n"&gt;mask&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;inRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img_out&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;lower_range&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;upper_range&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;     &lt;span class="c1"&gt;# Create a mask with range
&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bitwise_and&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img_out&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;img_out&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mask&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mask&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;   &lt;span class="c1"&gt;# Performing bitwise and operation with mask in img variable                            
&lt;/span&gt;
&lt;span class="n"&gt;mask&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;inRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;lower_range&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;upper_range&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  
&lt;span class="nf"&gt;cv2_imshow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mask&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;      
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
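&lt;p&gt;For clarity, this is the per-pixel test cv2.inRange performs, written out in plain Python (in_range is an illustrative name, not a project function): a pixel is kept only when every BGR channel falls inside the lower/upper bounds.&lt;/p&gt;

```python
# Plain-Python sketch of the per-pixel test cv2.inRange performs:
# a pixel passes only if every channel lies inside [lower, upper].

def in_range(pixel, lower, upper):
    """Return 255 if each BGR channel of pixel is within [lower, upper], else 0."""
    for c in range(3):
        if not (upper[c] >= pixel[c] >= lower[c]):
            return 0
    return 255

lower_range = (255, 0, 0)        # lower bound of blue in BGR
upper_range = (255, 155, 155)    # upper bound of blue in BGR

print(in_range((255, 80, 40), lower_range, upper_range))   # 255, kept by the mask
print(in_range((30, 200, 200), lower_range, upper_range))  # 0, filtered out
```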



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71524874-acc2a480-28cf-11ea-9ba7-b23797b21232.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71524874-acc2a480-28cf-11ea-9ba7-b23797b21232.png" alt="mask"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can retrieve the coordinates of the non-zero pixels in the mask and use them to draw a circle for each player on the 2D basketball court image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;#get all non zero values
&lt;/span&gt;&lt;span class="n"&gt;coord&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findNonZero&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mask&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Radius of circle 
&lt;/span&gt;&lt;span class="n"&gt;radius&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;

&lt;span class="c1"&gt;# Blue color in BGR 
&lt;/span&gt;&lt;span class="n"&gt;color&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

&lt;span class="c1"&gt;# Line thickness of 2 px 
&lt;/span&gt;&lt;span class="n"&gt;thickness&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;

&lt;span class="n"&gt;court_img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;imread&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./court.jpg&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;pos&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;coord&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="n"&gt;center_coordinates&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pos&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;pos&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;circle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;court_img&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;center_coordinates&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;radius&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;thickness&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 

&lt;span class="nf"&gt;cv2_imshow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;court_img&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Update (29 Dec 2019)
&lt;/h3&gt;

&lt;p&gt;While studying the player trail tracking visualisation I ran into this &lt;a href="https://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/" rel="noopener noreferrer"&gt;ball tracking example&lt;/a&gt;. That demo uses the OpenCV findContours method to retrieve the coordinates of the masked ball. So instead of using cv2.findNonZero(mask), which returns all the non-zero pixels in the mask, I can now retrieve just eight coordinates (one per player) within the mask using the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;cnts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findContours&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mask&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;copy&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; 
                        &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RETR_EXTERNAL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                        &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CHAIN_APPROX_SIMPLE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;cnts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;imutils&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;grab_contours&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cnts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
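&lt;p&gt;Each contour then needs to be collapsed to a single marker point. In practice you would compute the centroid with cv2.moments; the sketch below (contour_center is an illustrative name, and the two contours are made up) simply averages the contour points, which is close enough for drawing one dot per player.&lt;/p&gt;

```python
# Sketch of collapsing each player contour into a single marker point.
# A real pipeline would use cv2.moments per contour; averaging the
# contour's points gives a similar centre for compact player blobs.

def contour_center(contour):
    """Average the (x, y) points of one contour into a single centre point."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Two hypothetical player contours (lists of x, y points on the mask).
players = [
    [(10, 10), (14, 10), (14, 14), (10, 14)],
    [(40, 30), (44, 30), (44, 34), (40, 34)],
]
print([contour_center(c) for c in players])  # [(12.0, 12.0), (42.0, 32.0)]
```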



&lt;p&gt;Great, a (draft) workable solution has arrived :)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71556013-1f986080-2a33-11ea-957c-31758c958104.png" class="article-body-image-wrapper"&gt;&lt;img alt="Basketball" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71556013-1f986080-2a33-11ea-957c-31758c958104.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See also video example output on &lt;a href="https://www.youtube.com/watch?v=tpavRDeDlTI" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is still missing?
&lt;/h1&gt;

&lt;p&gt;We still need to identify the players per team, which can be achieved using colour detection.&lt;/p&gt;
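&lt;p&gt;One possible colour-detection sketch for that step: compare a player's average jersey colour against each team's reference colour and pick the nearest. The team names and BGR values below are hypothetical, not taken from the project code.&lt;/p&gt;

```python
# Nearest-reference-colour team assignment (hypothetical team colours, BGR).

TEAM_COLOURS = {
    "team_blue":   (255, 0, 0),
    "team_yellow": (0, 255, 255),
}

def squared_distance(c1, c2):
    """Squared Euclidean distance between two BGR colours."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def classify_team(jersey_colour):
    """Return the team whose reference colour is nearest to jersey_colour."""
    return min(TEAM_COLOURS, key=lambda t: squared_distance(TEAM_COLOURS[t], jersey_colour))

print(classify_team((240, 20, 10)))   # team_blue
print(classify_team((10, 230, 250)))  # team_yellow
```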

&lt;p&gt;If we can identify each individual player we could also do player tracking on the mini map.&lt;/p&gt;

&lt;p&gt;I did create a &lt;a href="https://github.com/stephanj/basketballVideoAnalysis/tree/master/mini-map-tutorial" rel="noopener noreferrer"&gt;full tutorial&lt;/a&gt; which takes you step by step through the above journey.&lt;/p&gt;

&lt;p&gt;Hopefully this is enough to experiment with, and maybe you can come up with some practical suggestions on how to finalise the 2D mapping!&lt;/p&gt;

&lt;p&gt;Peace,&lt;/p&gt;

&lt;p&gt;Stephan&lt;/p&gt;

</description>
      <category>python</category>
      <category>machinelearning</category>
    </item>
    <item>
<title>Open Source Sports Video Analysis using Machine Learning</title>
      <dc:creator>Stephan</dc:creator>
      <pubDate>Mon, 16 Dec 2019 08:32:45 +0000</pubDate>
      <link>https://dev.to/stephan007/open-source-sports-video-analysis-using-maching-learning-2ag4</link>
      <guid>https://dev.to/stephan007/open-source-sports-video-analysis-using-maching-learning-2ag4</guid>
      <description>&lt;p&gt;The mission is to develop an &lt;a href="https://github.com/stephanj/basketballVideoAnalysis/wiki" rel="noopener noreferrer"&gt;open source machine learning solution&lt;/a&gt; which will use computer vision to analyse (home made) sports videos. &lt;/p&gt;

&lt;p&gt;For starters I want to focus on basketball games, but the solution should also be applicable to any sport with players and a court.&lt;/p&gt;

&lt;p&gt;Further documentation, code examples and eventually a working open-source solution will get published on GitHub.&lt;/p&gt;

&lt;p&gt;Feel free to contact me if you want to help out, have suggestions or know about existing open source projects that we can (re)use.&lt;/p&gt;

&lt;h1&gt;
  
  
  Project Goals
&lt;/h1&gt;

&lt;p&gt;Short-term (1 and 2) and long-term (3 and 4) project goals.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Player tracking per team.&lt;/li&gt;
&lt;li&gt;Video mapping onto 2D basketball court.&lt;/li&gt;
&lt;li&gt;Game play action detection (with tagging) and analytics.&lt;/li&gt;
&lt;li&gt;More advanced game analytics like lay-up, dunk, pick &amp;amp; roll, running distance, etc.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Basically similar to the football analytics video shown below, but for basketball and open source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=hs_v3dv6OUI" rel="noopener noreferrer"&gt;&lt;img alt="FootbalAnalysis" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70863800-28c20180-1f4c-11ea-9774-fd1bcb79e464.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Machine Learning Models
&lt;/h1&gt;

&lt;p&gt;Based on the &lt;a href="https://web.stanford.edu/class/ee368/Project_Spring_1415/Reports/Cheshire_Halasz_Perin.pdf" rel="noopener noreferrer"&gt;Player Tracking and Analysis of Basketball Plays&lt;/a&gt; paper, the following machine learning models need to be created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70849391-f34ee280-1e7d-11ea-8044-db5b78b07ba9.png" class="article-body-image-wrapper"&gt;&lt;img alt="Models" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70849391-f34ee280-1e7d-11ea-8044-db5b78b07ba9.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1) Court Detection - find the lines of the court&lt;br&gt;
2) Person Detection - detect individuals ✅&lt;br&gt;
3) Player Detection and Color Classification - detect the players standing on the court and separate them into two teams&lt;br&gt;
4) Player Tracking - keep position information frame by frame&lt;br&gt;
5) Mapping via Homography - translate player positions onto a 2D court&lt;/p&gt;
&lt;h2&gt;
  
  
  Court Detection
&lt;/h2&gt;

&lt;p&gt;Explained in &lt;a href="https://people.cs.nctu.edu.tw/~yushuen/data/BasketballVideo15.pdf" rel="noopener noreferrer"&gt;Court Reconstruction for Camera Calibration in Broadcast Basketball Videos&lt;/a&gt; [2]&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The video frames that we obtained from Youtube were initially converted from the BGR to the HSV (hue, saturation&lt;br&gt;
and value) color model. We then focused on the H-plane in order to create a binary model of the system. Then, we&lt;br&gt;
proceeded to perform erosion and dilation of the image in order to get rid of artifacts that were not related to the court. Subsequently, we made use of the Canny edge detector to detect the lines in our system. Finally, we performed the Hough transform in order to detect the straight lines in the system.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70851421-323c6280-1e95-11ea-80b9-dde97f12cf1d.png" class="article-body-image-wrapper"&gt;&lt;img alt="Court" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70851421-323c6280-1e95-11ea-80b9-dde97f12cf1d.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Court detection strategies
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71198821-f0e0f400-2294-11ea-8253-3d6ff20fcbf9.png" class="article-body-image-wrapper"&gt;&lt;img alt="Courts" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71198821-f0e0f400-2294-11ea-8253-3d6ff20fcbf9.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the images above, these are not NBA courts.  The many criss-crossing lines that mark such courts will make them very hard to auto-detect.&lt;/p&gt;

&lt;p&gt;Let's have a closer look at a few strategies.  &lt;/p&gt;
&lt;h4&gt;
  
  
  Naive Court Detection
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Converting the image to HSV&lt;/li&gt;
&lt;li&gt;Isolating pixels within a given hue range&lt;/li&gt;
&lt;li&gt;Developing a bitwise-AND mask&lt;/li&gt;
&lt;li&gt;Using Canny edge detection&lt;/li&gt;
&lt;li&gt;Using Hough Transformation&lt;/li&gt;
&lt;/ul&gt;
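&lt;p&gt;The final step of this pipeline, the Hough transform, can be illustrated with a toy plain-Python accumulator (hough_peak is an illustrative name; OpenCV's cv2.HoughLines does the real work): every edge pixel votes for all (theta, rho) line parameters passing through it, and collinear pixels pile their votes into one bin.&lt;/p&gt;

```python
import math

# Toy sketch of the Hough transform voting step used in line detection.
# rho = x*cos(theta) + y*sin(theta); the most-voted (theta, rho) bin is a line.

def hough_peak(points, theta_steps=180):
    """Return the (theta index in degrees, rho) bin with the most votes."""
    votes = {}
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            key = (t, rho)
            votes[key] = votes.get(key, 0) + 1
    return max(votes, key=votes.get)

# Pixels on the horizontal line y = 5 peak at theta = 90 degrees, rho = 5.
points = [(x, 5) for x in range(0, 50, 5)]
t, rho = hough_peak(points)
print(t, rho)  # 90 5
```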

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71198482-29cc9900-2294-11ea-927b-277d6298a972.png" class="article-body-image-wrapper"&gt;&lt;img alt="Masking" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71198482-29cc9900-2294-11ea-927b-277d6298a972.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Python code "court_detection1.py" is included in this project.&lt;/p&gt;

&lt;p&gt;A basketball court (with its many extra lines) will be very difficult to detect using the above strategies.&lt;/p&gt;
&lt;h5&gt;
  
  
  Binary segmentation using auto encoders
&lt;/h5&gt;

&lt;p&gt;An autoencoder for sports-field segmentation will be required, as explained in &lt;a href="https://www.researchgate.net/publication/330534530_Classificazione_di_Azioni_Cestistiche_mediante_Tecniche_di_Deep_Learning" rel="noopener noreferrer"&gt;Classification of Actions&lt;/a&gt; by Simone Francia (see section 3.2.2: Autoencoder model of the basketball court).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71194460-3b11a780-228c-11ea-8463-4dc84c2b4e5a.png" class="article-body-image-wrapper"&gt;&lt;img alt="court" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71194460-3b11a780-228c-11ea-8463-4dc84c2b4e5a.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h5&gt;
  
  
  Field Segmentation Datasets
&lt;/h5&gt;

&lt;p&gt;In order for training to work, a 100,000-frame dataset of basketball courts is required.&lt;br&gt;
To do this, about 1000 frames need to be extracted from each game, which are then used to build the dataset. &lt;br&gt;
The size of the dataset can be increased through simple data augmentation techniques.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71196279-07d11780-2290-11ea-87d7-63d342130ddd.png" class="article-body-image-wrapper"&gt;&lt;img alt="CourtMarkers" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71196279-07d11780-2290-11ea-87d7-63d342130ddd.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Through the OpenCV function cv2.polylines it is possible to create n points {p1, p2, .., pn } on the image plane. These points are then used to draw a polygon.&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;draw_poly_box&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Draw polylines bounding box.

    Parameters
    ----------
    frame : OpenCV Mat
        A given frame with an object
    pts : numpy array
        consists of bounding box information with size (n points, 2)
    color : list
        color of the bounding box, the default is green

    Returns
    -------
    new_frame : OpenCV Mat
        A frame with given bounding box.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;new_frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;copy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;temp_pts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;int32&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;temp_pts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;temp_pts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reshape&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;polylines&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new_frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;temp_pts&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;thickness&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;new_frame&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This polygon, annotated manually, is interpreted as the field (basketball court) and colored white inside, while the outside is black.&lt;/p&gt;
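&lt;p&gt;The inside/outside decision behind that binary mask can be sketched with a classic ray-casting test (inside_polygon is an illustrative name; OpenCV's cv2.fillPoly does this for a whole image): a pixel is inside when a horizontal ray from it crosses the polygon's edges an odd number of times.&lt;/p&gt;

```python
# Ray-casting point-in-polygon test, plain Python, no OpenCV.
# Inside pixels would be painted white in the court mask, the rest black.

def inside_polygon(x, y, polygon):
    """Return True if (x, y) lies inside the polygon (list of vertices)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            # x-coordinate where the edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

court = [(0, 0), (10, 0), (10, 6), (0, 6)]   # a simple rectangular court
print(inside_polygon(5, 3, court))   # True  (white pixel)
print(inside_polygon(15, 3, court))  # False (black pixel)
```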

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71196872-a8bfd280-2290-11ea-97f5-f6fc4beedea6.png" class="article-body-image-wrapper"&gt;&lt;img alt="BlackWhiteCourt" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71196872-a8bfd280-2290-11ea-97f5-f6fc4beedea6.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Data Augmentation for the Dataset Field
&lt;/h5&gt;

&lt;blockquote&gt;
&lt;p&gt;The annotation of the field has been carried out for one frame every second, and being the videos of 25 fps, it is equivalent to annotating a frame every 25. The annotation of 1000 frames is not sufficient to create a robust auto-coder model; for this reason, some Data Augmentation solutions have been adopted in order to provide the autoencoder model with a sufficient number of examples for training.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Every court image can also be rotated by an angle ranging from -15 to 15 degrees. From each original court image two additional variants are created, choosing a random angle within this interval.&lt;/p&gt;
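&lt;p&gt;A sketch of that augmentation step (rotate_point and augment are illustrative names; a real pipeline would rotate the image and its annotation together, e.g. with cv2.warpAffine): rotate each annotated court point around the image centre by a random angle drawn from -15 to 15 degrees.&lt;/p&gt;

```python
import math
import random

# Rotation augmentation sketch: spin annotated court points around the
# image centre by a random angle between -15 and 15 degrees.

def rotate_point(point, centre, angle_deg):
    """Rotate (x, y) around centre by angle_deg degrees."""
    x, y = point[0] - centre[0], point[1] - centre[1]
    a = math.radians(angle_deg)
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return (rx + centre[0], ry + centre[1])

def augment(points, centre):
    """Create two extra annotation sets, each with its own random angle."""
    variants = []
    for _ in range(2):
        angle = random.uniform(-15, 15)
        variants.append([rotate_point(p, centre, angle) for p in points])
    return variants

corners = [(0, 0), (100, 0), (100, 60), (0, 60)]
print(len(augment(corners, (50, 30))))  # 2
```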

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71199270-e3783980-2295-11ea-92f6-552c7277afea.png" class="article-body-image-wrapper"&gt;&lt;img alt="rotated" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71199270-e3783980-2295-11ea-92f6-552c7277afea.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Persons Detection
&lt;/h2&gt;

&lt;p&gt;Object detection locates the presence of an object in an image and draws a bounding box around that object, in our case this would be a person.  &lt;/p&gt;

&lt;p&gt;Common object detection model architectures are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Region-based Convolutional Neural Networks (R-CNN)&lt;/li&gt;
&lt;li&gt;Fast R-CNN&lt;/li&gt;
&lt;li&gt;Faster R-CNN&lt;/li&gt;
&lt;li&gt;Mask R-CNN&lt;/li&gt;
&lt;li&gt;SSD (Single Shot MultiBox Detector)&lt;/li&gt;
&lt;li&gt;YOLO (You Only Look Once)&lt;/li&gt;
&lt;li&gt;Objects as Points&lt;/li&gt;
&lt;li&gt;Data Augmentation Strategies for Object Detection &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find a &lt;a href="https://github.com/arunponnusamy/object-detection-opencv" rel="noopener noreferrer"&gt;working example by Arun Ponnusamy&lt;/a&gt; using &lt;a href="https://pjreddie.com/darknet/yolo/" rel="noopener noreferrer"&gt;Yolo&lt;/a&gt; and &lt;a href="https://opencv.org/" rel="noopener noreferrer"&gt;OpenCV&lt;/a&gt;.  The result image is shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70849159-02806100-1e7b-11ea-8c16-e68f8865e9ea.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70849159-02806100-1e7b-11ea-8c16-e68f8865e9ea.jpg" alt="persons"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another approach is building a convolutional neural network with a framework such as TensorFlow.  More details on the different model architectures can be found in &lt;a href="https://heartbeat.fritz.ai/a-2019-guide-to-object-detection-9509987954c3" rel="noopener noreferrer"&gt;A 2019 Guide to Object Detection&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Mask R-CNN allows us to segment the foreground object from the background, as shown in this &lt;a href="https://github.com/matterport/Mask_RCNN" rel="noopener noreferrer"&gt;Mask R-CNN example&lt;/a&gt; and the image below.  This will help in the next model, where we'll detect a player and, based on color classification, link them to a team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70865878-af81d900-1f62-11ea-85d1-44db19a0f7f3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70865878-af81d900-1f62-11ea-85d1-44db19a0f7f3.jpg" alt="output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Player Detection and Color Classification
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70851455-ac6ce700-1e95-11ea-9023-cac328f030e6.png" class="article-body-image-wrapper"&gt;&lt;img alt="PlayerDetection" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70851455-ac6ce700-1e95-11ea-9023-cac328f030e6.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Excerpt from &lt;a href="https://www.cs.ubc.ca/~murphyk/Papers/weilwun-pami12.pdf" rel="noopener noreferrer"&gt;Learning to Track and Identify Players from Broadcast Sports Videos&lt;/a&gt; [4].&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In order to reduce the number of false positive detections, we use the fact that players of the same team wear&lt;br&gt;
jerseys whose colors are different from the spectators, referees, and the other team. Specifically, we train a&lt;br&gt;
logistic regression classifier [32] that maps image patches to team labels (Team A, Team B, and other), where image patches are represented by RGB color histograms. We can then filter out false positive detections (spectators and referees) and, at the same time, group detections into their respective teams. Notice that it is possible to add color features to the DPM detector and train a player detector for a specific team [33]. However, [33] requires a larger labeled training data, while the proposed method only needs a handful examples.&lt;br&gt;
After performing this step, we significantly boost precision to 97% while retaining a recall level of 74%.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70867575-317bfd00-1f77-11ea-8eff-e02e7c49014f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70867575-317bfd00-1f77-11ea-8eff-e02e7c49014f.png" alt="classification"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/stephanj/basketballVideoAnalysis/tree/master/mask-rcnn" rel="noopener noreferrer"&gt;Mask R-CNN application&lt;/a&gt; allows us to extract the segmented image of each identified person. Extracting the dominant colors per segmented image should allow us to classify the players by team. However, for some unknown reason the Python code used to accomplish this doesn't identify the yellow jersey color (yet).&lt;/p&gt;
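&lt;p&gt;As a rough illustration (not the project's actual script), the dominant-colour idea can be sketched with a tiny NumPy k-means over the segmented pixels; the jersey and skin values below are made up for the example.&lt;/p&gt;

```python
import numpy as np

def dominant_color(pixels, k=2, iters=10):
    """Tiny k-means over an (N, 3) float array of RGB pixels;
    returns the centre of the largest cluster."""
    # Deterministic init: pick k pixels spread across the brightness range
    order = pixels.sum(axis=1).argsort()
    centers = pixels[order[np.linspace(0, len(pixels) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        # Assign every pixel to its nearest centre, then recompute centres
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return centers[counts.argmax()]

# Synthetic "segmented player": mostly yellow jersey pixels plus some skin tones
jersey = np.tile([255.0, 220.0, 0.0], (400, 1))
skin = np.tile([200.0, 150.0, 120.0], (100, 1))
color = dominant_color(np.vstack([jersey, skin]))
```

&lt;p&gt;The returned centre of the biggest cluster can then be matched against each team's known jersey colour.&lt;/p&gt;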

&lt;p&gt;See also &lt;a href="https://web.stanford.edu/class/ee368/Project_Spring_1415/Reports/Cheshire_Halasz_Perin.pdf" rel="noopener noreferrer"&gt;Player Tracking and Analysis of Basketball Plays&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Players Tracking
&lt;/h2&gt;

&lt;p&gt;Excerpt from &lt;a href="https://www.cs.ubc.ca/~murphyk/Papers/weilwun-pami12.pdf" rel="noopener noreferrer"&gt;Learning to Track and Identify Players from Broadcast Sports Videos&lt;/a&gt; [4].&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Face recognition is infeasible in this domain, because image resolution is too low even for human to identify players. Recognising jersey numbers is possible, but still very challenging. We tried to use image thresholding to detect candidate regions of numbers, and run an OCR to recognise them. However, we got very poor results because image thresholding cannot reliably detect numbers, and the off-the-shelf OCR is unable to recognise numbers on deformed jerseys. Frequent pose and orientation changes of players further complicate the problem, because frontal views of faces or numbers are very rare from a single camera view. We adopt a different approach, ignoring face and number recognition, and instead focusing on identification of players as entities. We extract several visual features from the entire body of players. These features can be faces, numbers on the jersey, skin or hair colors. By combining all these weak features together into a novel Conditional Random Field (CRF), the system is able to automatically identify sports players, even in video frames taken from a single pan-tilt-zoom camera.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The open source &lt;a href="https://github.com/MVIG-SJTU/AlphaPose" rel="noopener noreferrer"&gt;Alpha Pose project&lt;/a&gt; can detect a human body within an image and provide a full description of a human pose.&lt;/p&gt;

&lt;p&gt;Alpha Pose is the “first real-time multi-person system to jointly detect human body, hand, and facial key points on single images,” using 130 key points in total.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70863757-c537d400-1f4b-11ea-9e29-9015f5b07560.png" class="article-body-image-wrapper"&gt;&lt;img alt="BodyPose" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70863757-c537d400-1f4b-11ea-9e29-9015f5b07560.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we have identified a body pose, the player's direction can be calculated and the positions can be mapped onto a 2D playing plane/field/court as shown above.&lt;/p&gt;

&lt;h2&gt;
  
  
  Court Mapping via Homography
&lt;/h2&gt;

&lt;p&gt;How can we map a player in a video onto a 2D court?&lt;/p&gt;

&lt;p&gt;A homography is a perspective transformation of a plane (in our case a basketball court) from one camera view into another. Basically, with a perspective transformation you can map 3D points onto a 2D image using a transformation matrix.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70849554-488bf380-1e80-11ea-96ce-acbbd0dd2e50.png" class="article-body-image-wrapper"&gt;&lt;img alt="Mapping" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70849554-488bf380-1e80-11ea-96ce-acbbd0dd2e50.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By having the dimensions of the court, we are able to find a 3x3 homography matrix that is computed using an affine transform. Each player’s position is then multiplied by the homography matrix that projects them into the model court.&lt;/p&gt;
&lt;/blockquote&gt;
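&lt;p&gt;In other words: estimate a 3x3 matrix from point correspondences, then apply it to each player position. A minimal pure-NumPy sketch of that idea (a direct linear transform, roughly what OpenCV's findHomography does for four exact correspondences; the court coordinates below are invented):&lt;/p&gt;

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography mapping src -> dst
    (both (N, 2) arrays, N >= 4)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # The homography is the null vector of A (smallest singular vector)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply H to a single 2D point (homogeneous divide)."""
    x, y, w = H @ [pt[0], pt[1], 1.0]
    return np.array([x / w, y / w])

# Four image corners of the court and their (made-up) model-court corners
img_pts = np.array([[100.0, 300], [1180, 310], [1000, 700], [260, 690]])
court_pts = np.array([[0.0, 0], [28, 0], [28, 15], [0, 15]])  # metres

H = estimate_homography(img_pts, court_pts)
player_img = [640.0, 500]              # player's feet in the image
player_court = project(H, player_img)  # position on the 2D model court
```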

&lt;p&gt;See also this scientific paper on &lt;a href="https://www.groundai.com/project/a-two-point-method-for-ptz-camera-calibration-in-sports/1" rel="noopener noreferrer"&gt;A Two-point Method for PTZ Camera Calibration in Sports&lt;/a&gt; [22]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70851334-1edcc780-1e94-11ea-8ede-1ec30cb5d861.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70851334-1edcc780-1e94-11ea-8ede-1ec30cb5d861.jpg" alt="A Two-point Method for PTZ Camera Calibration in Sports"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Python example of how to use the OpenCV homography algorithm can be seen below. It is based on an &lt;a href="https://www.learnopencv.com/homography-examples-using-opencv-python-c/" rel="noopener noreferrer"&gt;article by Satya Mallick&lt;/a&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;    &lt;span class="c1"&gt;#!/usr/bin/env python
&lt;/span&gt;
        &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;
        &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;

            &lt;span class="c1"&gt;# Read source image.
&lt;/span&gt;        &lt;span class="n"&gt;im_src&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;imread&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;book2.jpg&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="c1"&gt;# Four corners of the book in source image
&lt;/span&gt;        &lt;span class="n"&gt;pts_src&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([[&lt;/span&gt;&lt;span class="mi"&gt;141&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;131&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;480&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;159&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;493&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;630&lt;/span&gt;&lt;span class="p"&gt;],[&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;601&lt;/span&gt;&lt;span class="p"&gt;]])&lt;/span&gt;


        &lt;span class="c1"&gt;# Read destination image.
&lt;/span&gt;        &lt;span class="n"&gt;im_dst&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;imread&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;book1.jpg&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="c1"&gt;# Four corners of the book in destination image.
&lt;/span&gt;        &lt;span class="n"&gt;pts_dst&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([[&lt;/span&gt;&lt;span class="mi"&gt;318&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;],[&lt;/span&gt;&lt;span class="mi"&gt;534&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;372&lt;/span&gt;&lt;span class="p"&gt;],[&lt;/span&gt;&lt;span class="mi"&gt;316&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;670&lt;/span&gt;&lt;span class="p"&gt;],[&lt;/span&gt;&lt;span class="mi"&gt;73&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;473&lt;/span&gt;&lt;span class="p"&gt;]])&lt;/span&gt;

        &lt;span class="c1"&gt;# Calculate Homography
&lt;/span&gt;        &lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findHomography&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pts_src&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pts_dst&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Warp source image to destination based on homography
&lt;/span&gt;        &lt;span class="n"&gt;im_out&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warpPerspective&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;im_src&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;im_dst&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="n"&gt;im_dst&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;

        &lt;span class="c1"&gt;# Display images
&lt;/span&gt;        &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;imshow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Source Image&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;im_src&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;imshow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Destination Image&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;im_dst&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;imshow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Warped Source Image&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;im_out&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;waitKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  More Advanced Machine Learning Models
&lt;/h2&gt;

&lt;p&gt;In a future version of the project we could also consider adding Start of Game (SoG), Track Ball and Goal (score) machine learning models.&lt;/p&gt;

&lt;h3&gt;
  
  
  Start of Game (SoG)
&lt;/h3&gt;

&lt;p&gt;If we want the solution to automatically analyse a video we could also consider adding a Start of Game (SoG) model. This identifies the players at a certain position on the court, which flags the beginning of a game or quarter when doing basketball analysis.&lt;/p&gt;

&lt;h3&gt;
  
  
  Track Ball
&lt;/h3&gt;

&lt;p&gt;Tracking the ball will be a requirement when we want to achieve scoring analytics. Some very interesting research studies have been published on this subject: &lt;a href="https://www.sciencedirect.com/science/article/pii/S123034021830146X" rel="noopener noreferrer"&gt;A deep learning ball tracking system in soccer videos&lt;/a&gt; [5].&lt;/p&gt;

&lt;p&gt;An example of tracking a large ball using OpenCV can be found &lt;a href="https://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
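&lt;p&gt;That tutorial masks the ball's colour range and tracks the largest blob. A dependency-free sketch of the same idea (colour threshold plus centroid on a raw RGB array; the orange-ish range is an arbitrary choice):&lt;/p&gt;

```python
import numpy as np

def track_ball(frame, lo=(180, 60, 0), hi=(255, 160, 80)):
    """Return the (row, col) centroid of pixels inside an RGB colour
    range, or None when no pixel matches."""
    lo = np.array(lo); hi = np.array(hi)
    mask = np.all((frame >= lo) & (frame <= hi), axis=2)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic 100x100 green "court" with an orange ball centred at (40, 60)
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[:] = (30, 120, 40)
frame[35:46, 55:66] = (230, 120, 30)  # 11x11 orange blob
pos = track_ball(frame)
```

&lt;p&gt;Running this per frame yields the ball trajectory; in real footage an HSV colour space and contour filtering (as in the linked tutorial) are more robust.&lt;/p&gt;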

&lt;h3&gt;
  
  
  Pose estimator
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70861945-90208700-1f35-11ea-8b2b-8fd29b9bc912.png" class="article-body-image-wrapper"&gt;&lt;img alt="Shot" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70861945-90208700-1f35-11ea-8b2b-8fd29b9bc912.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Alpha Pose is the “first real-time multi-person system to jointly detect human body, hand, and facial key points (in total 130 key points) on single images,”. The solution is capable of taking in an image and detecting key points (eyes, nose, various joints, etc.) on all human figures in the image. This allows the full description of a human pose in an image.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://www.mvig.org/research/alphapose.html" rel="noopener noreferrer"&gt;Alpha Pose&lt;/a&gt; can potentially be a building block to detect shots, layups, dunks etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70861877-7e8aaf80-1f34-11ea-9049-a48fb16f8135.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70861877-7e8aaf80-1f34-11ea-9049-a48fb16f8135.gif" alt="posetrack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A must read related document is &lt;a href="http://openworks.wooster.edu/cgi/viewcontent.cgi?article=10456&amp;amp;context=independentstudy" rel="noopener noreferrer"&gt;Sports Analytics With Computer Vision&lt;/a&gt; [7].&lt;/p&gt;

&lt;p&gt;And another great article giving an overview on the available &lt;a href="https://heartbeat.fritz.ai/a-2019-guide-to-human-pose-estimation-c10b79b64b73" rel="noopener noreferrer"&gt;human pose estimation solutions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71117004-8ae16780-21d5-11ea-9560-2fa224b30ab2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71117004-8ae16780-21d5-11ea-9560-2fa224b30ab2.png" alt="download (1)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Shot Detection
&lt;/h3&gt;

&lt;p&gt;Another very interesting machine learning model that we can use is the open source project on &lt;a href="https://github.com/browlm13/Basketball-Shot-Detection" rel="noopener noreferrer"&gt;basketball shot detection and analysis&lt;/a&gt; shared by &lt;a href="https://www.linkedin.com/in/rembert-daems/" rel="noopener noreferrer"&gt;Rembert Daems&lt;/a&gt; (Thanks for the info).  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This program is able to detect when a shot occurs and fill in the balls flight from captured data. It calculates the balls initial velocity and launch angle. It is able to estimate the balls flight perpendicular to the camera plane (The z axis) using a single camera. The program is also able to detect when the balls flight is interrupted by another object and will drop those data points.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70861665-990f5980-1f31-11ea-866e-5d14fa02db0b.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70861665-990f5980-1f31-11ea-866e-5d14fa02db0b.gif" alt="shot_2"&gt;&lt;/a&gt;&lt;/p&gt;
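&lt;p&gt;The physics behind that project boils down to a projectile fit: given the ball's (t, x, y) samples, a linear fit on x(t) and a quadratic fit on y(t) recover the initial velocity and launch angle. A hedged NumPy sketch on synthetic, noise-free samples:&lt;/p&gt;

```python
import numpy as np

G = 9.81  # m/s^2

def launch_parameters(t, x, y):
    """Fit x(t) linearly and y(t) quadratically; return the initial
    speed, the launch angle in degrees, and the quadratic coefficient."""
    vx = np.polyfit(t, x, 1)[0]    # slope of x(t) = horizontal velocity
    a, b, _ = np.polyfit(t, y, 2)  # y = a t^2 + b t + c
    vy = b                         # initial vertical velocity
    speed = np.hypot(vx, vy)
    angle = np.degrees(np.arctan2(vy, vx))
    return speed, angle, a

# Synthetic shot: vx = 5 m/s, vy = 8 m/s, released at 2 m height
t = np.linspace(0, 1.2, 25)
x = 5.0 * t
y = 2.0 + 8.0 * t - 0.5 * G * t**2
speed, angle, a = launch_parameters(t, x, y)
```

&lt;p&gt;For an unobstructed flight the quadratic coefficient should sit near -g/2; large residuals against the fitted parabola are one way to flag the interrupted-flight case the project describes.&lt;/p&gt;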

&lt;h3&gt;
  
  
  Actions recognition
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/simone-francia-365aa910a/" rel="noopener noreferrer"&gt;Simone Francia&lt;/a&gt; developed a &lt;a href="https://github.com/simonefrancia/SpaceJam" rel="noopener noreferrer"&gt;basketball action recognition dataset&lt;/a&gt; as shown in the video below. I've contacted Simone to get more details on how his dataset can be used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=PEziTgHx4cA" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70864235-8eb08800-1f50-11ea-9ddc-7195557748bb.png" alt="defence"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Goal (Score)
&lt;/h3&gt;

&lt;p&gt;Eventually we also want to have a model which can identify when a player scores (in basketball this can be a free throw worth one point, or a two- or three-point field goal). As confirmed by the ML6 presentation, different Goal models will need to be combined. For example, audio peaks could be an indication that a goal was made; of course, tracking the ball towards the hoop and "entering" the "ring" are all events which could identify a goal.&lt;/p&gt;
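&lt;p&gt;The audio cue could be prototyped as simple short-time energy peaks; the window size and threshold factor below are arbitrary choices applied to synthetic audio.&lt;/p&gt;

```python
import numpy as np

def energy_peaks(signal, win=1024, factor=4.0):
    """Indices of windows whose RMS energy exceeds `factor` times
    the median window energy."""
    n = len(signal) // win
    frames = signal[: n * win].reshape(n, win)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return np.nonzero(rms > factor * np.median(rms))[0]

# Synthetic crowd noise with one loud burst in window 30
rng = np.random.default_rng(42)
audio = 0.05 * rng.standard_normal(64 * 1024)
audio[30 * 1024 : 31 * 1024] += 0.8 * rng.standard_normal(1024)
peaks = energy_peaks(audio)
```

&lt;p&gt;Such peaks would only be one weak signal; they'd still need to be combined with the ball-tracking events before calling it a goal.&lt;/p&gt;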

&lt;p&gt;Another idea (when possible) is to do OCR of the scoreboard and see when the score increases as a confirmation of a goal. Of course, the majority of home-made videos do not include the scoreboard.&lt;/p&gt;

&lt;p&gt;Further research is required and suggestions are always welcome.&lt;/p&gt;

&lt;h1&gt;
  
  
  Ensemble Learning
&lt;/h1&gt;

&lt;p&gt;Once we have the different models working we'll most likely need to "stack" them: basically, using the output from one model as the input for another, or combining similar models to obtain better predictive performance.&lt;/p&gt;

&lt;p&gt;See also &lt;a href="https://www.analyticsvidhya.com/blog/2018/06/comprehensive-guide-for-ensemble-models/" rel="noopener noreferrer"&gt;A Comprehensive Guide to Ensemble Learning (with Python codes)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One technique of ensemble learning is stacking.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Stacking is a way to ensemble multiple classifications. The point of stacking is to explore a space of different models for the same problem. The idea is that you can attack a learning problem with different types of models which are capable to learn some part of the problem, but not the whole space of the problem. So, you can build multiple different learners and you use them to build an intermediate prediction, one prediction for each learned model. Then you add a new model which learns from the intermediate predictions the same target.&lt;/p&gt;

&lt;p&gt;This final model is said to be stacked on the top of the others, hence the name. Thus, you might improve your overall performance, and often you end up with a model which is better than any individual intermediate model. &lt;/p&gt;
&lt;/blockquote&gt;
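&lt;p&gt;A toy NumPy illustration of that mechanism (two deliberately weak "stumps" whose predictions a linear meta-model learns to combine; the AND-shaped dataset is contrived so that neither base learner can solve it alone):&lt;/p&gt;

```python
import numpy as np

# Toy data: class 1 only when BOTH features are positive (an AND),
# which no single-feature "stump" can get fully right.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(float)

# Level-0 models: one decision stump per feature
pred0 = (X[:, 0] > 0).astype(float)
pred1 = (X[:, 1] > 0).astype(float)

# Level-1 (meta) model: least squares on the stacked base predictions
meta_X = np.column_stack([pred0, pred1, np.ones(len(X))])
w, *_ = np.linalg.lstsq(meta_X, y, rcond=None)
stacked = (meta_X @ w > 0.5).astype(float)

acc_stump = max((pred0 == y).mean(), (pred1 == y).mean())
acc_stacked = (stacked == y).mean()
```

&lt;p&gt;Here each stump is right about 75% of the time, while the stacked model learns that only the combination of both intermediate predictions signals class 1.&lt;/p&gt;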

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70852196-09b96600-1e9f-11ea-8f7a-d289dc7ec9e5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70852196-09b96600-1e9f-11ea-8f7a-d289dc7ec9e5.png" alt="stacking"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.geeksforgeeks.org/stacking-in-machine-learning/" rel="noopener noreferrer"&gt;https://www.geeksforgeeks.org/stacking-in-machine-learning/&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Training Data
&lt;/h1&gt;

&lt;p&gt;Over the past 7 years I've recorded my son's basketball games and these are available on &lt;a href="https://www.youtube.com/channel/UCXKvUa3P3C_FmboKzNSs3cg/videos" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt;.  We have thousands of hours of training data which we can use to test and train our machine learning models!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70849594-e8498180-1e80-11ea-9450-b59bfcc9fc9b.png" class="article-body-image-wrapper"&gt;&lt;img alt="YouTubeData" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70849594-e8498180-1e80-11ea-9450-b59bfcc9fc9b.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Machine Learning Hardware (on a budget)
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71237275-6387b900-2301-11ea-93d7-157b5ac136d9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F71237275-6387b900-2301-11ea-93d7-157b5ac136d9.jpg" alt="Jetson-Nano_3QTR-Front_Left_trimmed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This &lt;a href="https://developer.nvidia.com/embedded/jetson-nano-developer-kit" rel="noopener noreferrer"&gt;NVIDIA Jetson Nano board&lt;/a&gt; could be an interesting and inexpensive solution to train and deploy the Sports Analytics machine learning software.&lt;/p&gt;

&lt;p&gt;According to NVIDIA this is a small, powerful computer that lets us run multiple neural networks in parallel for applications like "image classification, object detection, segmentation, and speech processing."&lt;/p&gt;

&lt;h1&gt;
  
  
  Architecture
&lt;/h1&gt;

&lt;p&gt;When it comes to the architecture we're back on solid ground and have enough experience to make something beautiful.&lt;/p&gt;

&lt;p&gt;The machine learning models can get integrated using an architecture explained by &lt;a href="https://ml6.eu/" rel="noopener noreferrer"&gt;ML6&lt;/a&gt; at &lt;a href="https://www.youtube.com/watch?v=171cDTeK3D0" rel="noopener noreferrer"&gt;Devoxx Belgium 2019&lt;/a&gt; [3].&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70849711-87bb4400-1e82-11ea-8c75-a94f2fee1446.png" class="article-body-image-wrapper"&gt;&lt;img alt="GoogleArchitecture" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70849711-87bb4400-1e82-11ea-8c75-a94f2fee1446.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the architecture is based on Google Cloud, but I'm convinced it can also be accomplished using Amazon, IBM or even Microsoft cloud services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data crunching
&lt;/h2&gt;

&lt;p&gt;As explained in [3], some models will only need one video frame, while others will need multiple frames sorted by timestamp (time-series) to analyse, for example, player movement.&lt;/p&gt;

&lt;p&gt;A possible approach could be accomplished as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70849976-a3741980-1e85-11ea-8cf3-ce58567f6180.png" class="article-body-image-wrapper"&gt;&lt;img alt="datacrunching" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F179457%2F70849976-a3741980-1e85-11ea-8cf3-ce58567f6180.png"&gt;&lt;/a&gt;&lt;/p&gt;
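&lt;p&gt;As a tiny illustration of the single-frame versus time-series split, assuming per-frame results arrive as (timestamp, detections) tuples, the time-series models could consume overlapping windows:&lt;/p&gt;

```python
def sliding_windows(frames, size=3, step=1):
    """Group a time-ordered list of per-frame results into overlapping
    windows for time-series models (single-frame models just iterate)."""
    return [frames[i : i + size]
            for i in range(0, len(frames) - size + 1, step)]

# Hypothetical per-frame output: (timestamp in seconds, detected players)
frames = [(0.00, ["p1", "p2"]),
          (0.04, ["p1", "p2", "p3"]),
          (0.08, ["p1"]),
          (0.12, ["p1", "p2"])]
windows = sliding_windows(frames)
```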

&lt;h1&gt;
  
  
  Commercial solutions
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.secondspectrum.com/index.html" rel="noopener noreferrer"&gt;Second Spectrum&lt;/a&gt;. See also this non-technical &lt;a href="https://www.ted.com/talks/rajiv_maheswaran_the_math_behind_basketball_s_wildest_moves" rel="noopener noreferrer"&gt;TED talk by Rajiv Maheswaran&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://playsight.com/" rel="noopener noreferrer"&gt;PlaySight&lt;/a&gt; has an AI solution to analyse sport games.&lt;/li&gt;
&lt;li&gt;Any others?&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  ML Sporting references
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://web.stanford.edu/class/ee368/Project_Spring_1415/Reports/Cheshire_Halasz_Perin.pdf" rel="noopener noreferrer"&gt;Player Tracking and Analysis of Basketball Plays&lt;/a&gt; [1]&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://people.cs.nctu.edu.tw/~yushuen/data/BasketballVideo15.pdf" rel="noopener noreferrer"&gt;Court Reconstruction for Camera Calibration in Broadcast Basketball Videos&lt;/a&gt; [2]&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=171cDTeK3D0" rel="noopener noreferrer"&gt;Video Analytics for Football games by Sven Degroote at Devoxx Belgium 2019&lt;/a&gt; [3]&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.cs.ubc.ca/~murphyk/Papers/weilwun-pami12.pdf" rel="noopener noreferrer"&gt;Learning to Track and Identify Players from Broadcast Sports Videos&lt;/a&gt; [4]&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.sciencedirect.com/science/article/pii/S123034021830146X" rel="noopener noreferrer"&gt;A deep learning ball tracking system in soccer videos&lt;/a&gt; [5]&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/browlm13/Basketball-Shot-Detection" rel="noopener noreferrer"&gt;Shot Detection project on GitHub&lt;/a&gt; [6]&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://openworks.wooster.edu/cgi/viewcontent.cgi?article=10456&amp;amp;context=independentstudy" rel="noopener noreferrer"&gt;Sports Analytics With Computer Vision&lt;/a&gt; [7]&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/MVIG-SJTU/AlphaPose" rel="noopener noreferrer"&gt;An accurate multi-person pose estimator&lt;/a&gt; [8]&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Mask R-CNN
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.pyimagesearch.com/2018/11/19/mask-r-cnn-with-opencv/" rel="noopener noreferrer"&gt;Mask R-CNN with OpenCV example&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Homography References
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://pysource.com/2018/06/05/object-tracking-using-homography-opencv-3-4-with-python-3-tutorial-34/" rel="noopener noreferrer"&gt;Object tracking using Homography – OpenCV 3.4 with python 3&lt;/a&gt; [20]&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://zbigatron.com/mapping-camera-coordinates-to-a-2d-floor-plan/" rel="noopener noreferrer"&gt;Mapping Camera Coordinates to a 2D Floor Plan&lt;/a&gt; [21]&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.groundai.com/project/a-two-point-method-for-ptz-camera-calibration-in-sports/1" rel="noopener noreferrer"&gt;A Two-point Method for PTZ Camera Calibration in Sports&lt;/a&gt; [22]&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Face Recognition in Action @ Devoxx</title>
      <dc:creator>Stephan</dc:creator>
      <pubDate>Fri, 29 Nov 2019 17:12:10 +0000</pubDate>
      <link>https://dev.to/stephan007/face-recognition-in-action-devoxx-4c67</link>
      <guid>https://dev.to/stephan007/face-recognition-in-action-devoxx-4c67</guid>
      <description>&lt;p&gt;Every Devoxx Belgium edition produces 4000+ pictures by our event photographer Dimitris (and in the past by An). These pictures are then published into &lt;a href="https://www.flickr.com/photos/bejug/albums"&gt;Flickr albums&lt;/a&gt; and can be freely used by speakers, attendees and the event organiser.&lt;/p&gt;

&lt;p&gt;Finding a specific speaker in the different albums often feels like going down a visual rabbit hole. After scrolling through many Flickr pages and wasting a bit of time, you often find the hidden gem.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This time consuming effort was crying for an automated approach!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For many years I had been searching for a technical solution which would provide an API that I could integrate in the different conference apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  A first low level Python solution
&lt;/h2&gt;

&lt;p&gt;In 2018 Alex van Boxel introduced me to Lien Wuyts. She was interested in doing her dissertation on exactly this subject. With support from Alex, Jan-Kees and myself she worked on a Python prototype where an existing paparazzi model was used to find speaker faces.&lt;/p&gt;

&lt;p&gt;A first rough solution was demo'ed during Devoxx Belgium 2018 in an informal BOF session. Together with Jan-Kees she also provided more details during a Devoxx interview.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/E3M1PK24lLo"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  An Out of the Box Solution
&lt;/h2&gt;

&lt;p&gt;The solution from Lien remained an experimental prototype. Luckily Jan-Kees van Andel was still interested in this project and he suggested that the JPoint team tackle this opportunity during a team-building weekend. After that weekend he informed me they had a working solution based on Amazon Rekognition.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It works!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Tim van Eijndhoven (a colleague of Jan-Kees) took ownership of the project and he provided a REST interface which I could use from within the new Angular CFP.DEV web app.&lt;/p&gt;

&lt;p&gt;The integration was very straightforward. I post the speaker avatar image using a REST interface and the "face-recognition backend" returns a list of Flickr URLs. These are then listed in the CFP.DEV admin for review. Each picture also has a similarity percentage and even includes the rectangle coordinates where the face was found within the picture.&lt;/p&gt;
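&lt;p&gt;For illustration only, a response from such a backend could be consumed as below; every field name here is invented, not the actual API.&lt;/p&gt;

```python
import json

# Hypothetical response shape from the face-recognition backend;
# all field names are made up for this sketch.
response = json.loads("""
[
  {"url": "https://www.flickr.com/photos/bejug/123",
   "similarity": 97.4,
   "box": {"left": 0.42, "top": 0.18, "width": 0.06, "height": 0.09}},
  {"url": "https://www.flickr.com/photos/bejug/456",
   "similarity": 88.1,
   "box": {"left": 0.10, "top": 0.55, "width": 0.05, "height": 0.08}}
]
""")

# Keep only confident matches for the admin review list
matches = [m for m in response if m["similarity"] >= 90]
```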

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kzagrWFI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ukly69xza4lm2tpntc9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kzagrWFI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ukly69xza4lm2tpntc9u.png" alt="CFP.DEV"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My PHP CFP.DEV plugin then displays these speaker images on a related WordPress page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--058gulpz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/dgiqc1ie52os3f80rvqq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--058gulpz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/dgiqc1ie52os3f80rvqq.jpg" alt="CFP.DEV"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  WordPress Page
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Amazon Rekognition model is a scary beast because it works frighteningly well. It finds speakers' faces in the most unexpected places. During several Devoxx Belgium program committee meetings we tested the system with all sorts of pictures, and each time we were amazed by the results.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where's Wally?
&lt;/h2&gt;

&lt;p&gt;Or... where's &lt;a href="https://twitter.com/mariofusco"&gt;Mario Fusco&lt;/a&gt;? The recognition API returned pictures of speakers where we often had no idea whether they were included. After searching for several minutes you could find the speaker, often hidden away with only a portion of the face visible 😱&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GzkoMneN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/7y6koq1dudu3gneh60po.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GzkoMneN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/7y6koq1dudu3gneh60po.jpeg" alt="Where's Mario?"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or some pictures were returned (for example this one of James Birnie) where only a partial speaker face was shown on the Twitter wall.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mjWcdZoq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/12ksely9xfprz9jqy7bw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mjWcdZoq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/12ksely9xfprz9jqy7bw.jpeg" alt="Devoxx Belgium Team"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or Louis Jacomet was found because he photobombed our end-of-devoxx-belgium pizza picture 😂🍕&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---CeE6wYX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/1p1yxvz6i21r71v9iadr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---CeE6wYX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/1p1yxvz6i21r71v9iadr.jpeg" alt="Louis Jacomet"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find other gems by visiting the &lt;a href="https://devoxx.be/speakers"&gt;Devoxx Belgium speakers page&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Privacy &amp;amp; GDPR
&lt;/h2&gt;

&lt;p&gt;Because of the amazing accuracy we decided to use this for speakers only. Every user who submits a talk needs to accept the CFP terms &amp;amp; conditions. As a result, we have permission to publish their talk on &lt;a href="https://www.youtube.com/channel/UCCBVCTuk6uJrN3iFV_3vurg"&gt;YouTube&lt;/a&gt; and share their event photos on &lt;a href="https://www.flickr.com/photos/bejug/albums"&gt;Flickr&lt;/a&gt; 👍🏼&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Details
&lt;/h2&gt;

&lt;p&gt;The "Face recognition" project was obviously presented at Devoxx Belgium 2019 by both Tim van Eijndhoven and Roy Braam.&lt;/p&gt;

&lt;p&gt;Make sure to watch the presentation if you want more technical details.&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/M2LH6DEIDXU"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Arun Gupta was of course very excited to interview Tim and Roy about their AWS solution.&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/FVhj92vhThs"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Thanks again guys for the effort, much appreciated.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;We now have a nice working solution which could obviously be offered as a service by JPoint; we'll see what happens on that front 😎&lt;/p&gt;

&lt;p&gt;But let's brainstorm for a minute: what else could we do with this face-recognition solution if we set the privacy concerns aside?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We could have a camera at the entrance of each room and track which attendee goes to which talk. Based on this info we could recommend talks to attendees who go to similar sessions. Of course this could already be done just by analysing their schedule favourites instead of tracking faces, which would be too scary.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We could allow attendees to upload their photo and we then return all the related Flickr photos where they're included.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Roy van Rijn suggested on Twitter that this solution could also be used when a person wants to enforce their "right to be forgotten". We can now easily find all pictures that include a person and censor him/her out.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We could have a camera at the main entrance and welcome speakers when they enter the venue, or use it as a notification to inform me that a speaker has arrived. That would actually be cool, but we can already do this when they scan their eTicket.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What else could we do moving forward?&lt;/p&gt;

&lt;p&gt;Let me know in the comments below 😎👌&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>aws</category>
      <category>devoxx</category>
    </item>
    <item>
      <title>Create the smallest Docker image using JHipster 6 and Java 11+</title>
      <dc:creator>Stephan</dc:creator>
      <pubDate>Tue, 21 May 2019 13:57:14 +0000</pubDate>
      <link>https://dev.to/stephan007/create-the-smallest-docker-image-using-jhipster-6-and-java-11-35m7</link>
      <guid>https://dev.to/stephan007/create-the-smallest-docker-image-using-jhipster-6-and-java-11-35m7</guid>
      <description>&lt;p&gt;This article will explain how to create the smallest Docker image possible using JHipster 6 and Java 11+.&lt;/p&gt;

&lt;p&gt;Make sure to first read "Better, Faster, Lighter Java with Java 12 and JHipster 6" by Matt Raible.&lt;/p&gt;

&lt;p&gt;Today (Monday 13th of May 2019) Mohammed Aboullaite (from Devoxx Morocco) gave an awesome related talk, "Docker containers &amp;amp; java: What I wish I’ve been told!", with lots of interesting info. Make sure to check out his slide deck.&lt;/p&gt;

&lt;h1&gt;
  
  
  Setup your Java 11 development environment
&lt;/h1&gt;

&lt;p&gt;You can skip this part if you already have Java 11 running on your development machine.&lt;/p&gt;

&lt;p&gt;SDKMAN is a great tool for installing multiple versions of Java. It also allows you to switch very easily between different Java versions. #MustHave&lt;/p&gt;
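&lt;p&gt;If you don't have it yet, SDKMAN can be installed with its official one-liner (as always, inspect a script before piping it into your shell):&lt;/p&gt;

```shell
# Install SDKMAN and load it into the current shell session
curl -s "https://get.sdkman.io" | bash
source "$HOME/.sdkman/bin/sdkman-init.sh"
```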

&lt;p&gt;After installation you can list all the available Java SDK versions.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sdk list java
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can select (and install) the Java 11 SDK version as follows:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sdk use java 11.0.3-zulu
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now we can change the Maven pom.xml java.version property from 1.8 to 11.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;java.version&amp;gt;11&amp;lt;/java.version&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The SDK files are located in a hidden .sdkman directory, which makes them a bit harder to reuse in IDEA. Adding a symbolic link is a pragmatic solution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /Library/Java/JavaVirtualMachines

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /Users/stephan/.sdkman/candidates/java/11.0.3-zulu 11.0.03-zulu
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now you can add JDK 11 to IDEA.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DVBOwQDM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/8ivvwhncskw9ntbmj77b.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DVBOwQDM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/8ivvwhncskw9ntbmj77b.jpeg" alt="IDEA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  The 'broken' Dockerfile from JHipster
&lt;/h1&gt;

&lt;p&gt;JHipster provides a Dockerfile which is located in src/main/docker :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; openjdk:11-jre-slim-stretch&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; SPRING_OUTPUT_ANSI_ENABLED=ALWAYS \&lt;/span&gt;
    JHIPSTER_SLEEP=0 \
    JAVA_OPTS=""

# Add a jhipster user to run our application so that it doesn't need to run as root
&lt;span class="k"&gt;RUN &lt;/span&gt;adduser &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /bin/sh jhipster

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /home/jhipster&lt;/span&gt;

&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; entrypoint.sh entrypoint.sh&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;755 entrypoint.sh &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chown &lt;/span&gt;jhipster:jhipster entrypoint.sh
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; jhipster&lt;/span&gt;

&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["./entrypoint.sh"]&lt;/span&gt;

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;

&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; *.war app.war&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I have several issues with this Dockerfile; the main one is... it doesn't work 😂&lt;/p&gt;

&lt;p&gt;Make sure to read the addendum on why the Dockerfile is broken.&lt;/p&gt;

&lt;p&gt;1) The adduser command gives an error when building the Docker image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;Option d is ambiguous &lt;span class="o"&gt;(&lt;/span&gt;debug, disabled-login, disabled-password&lt;span class="o"&gt;)&lt;/span&gt;
Option s is ambiguous &lt;span class="o"&gt;(&lt;/span&gt;shell, system&lt;span class="o"&gt;)&lt;/span&gt;
adduser &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--home&lt;/span&gt; DIR] &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--shell&lt;/span&gt; SHELL] &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--no-create-home&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--uid&lt;/span&gt; ID]
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--firstuid&lt;/span&gt; ID] &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--lastuid&lt;/span&gt; ID] &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--gecos&lt;/span&gt; GECOS] &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--ingroup&lt;/span&gt; GROUP | &lt;span class="nt"&gt;--gid&lt;/span&gt; ID]
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--disabled-password&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--disabled-login&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--add_extra_groups&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; USER
   Add a normal user


adduser &lt;span class="nt"&gt;--system&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--home&lt;/span&gt; DIR] &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--shell&lt;/span&gt; SHELL] &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--no-create-home&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--uid&lt;/span&gt; ID]
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--gecos&lt;/span&gt; GECOS] &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--group&lt;/span&gt; | &lt;span class="nt"&gt;--ingroup&lt;/span&gt; GROUP | &lt;span class="nt"&gt;--gid&lt;/span&gt; ID] &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--disabled-password&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--disabled-login&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--add_extra_groups&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; USER
   Add a system user


adduser &lt;span class="nt"&gt;--group&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--gid&lt;/span&gt; ID] GROUP
addgroup &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--gid&lt;/span&gt; ID] GROUP
   Add a user group


addgroup &lt;span class="nt"&gt;--system&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;--gid&lt;/span&gt; ID] GROUP
   Add a system group


adduser USER GROUP
   Add an existing user to an existing group


general options:
  &lt;span class="nt"&gt;--quiet&lt;/span&gt; | &lt;span class="nt"&gt;-q&lt;/span&gt;      don&lt;span class="s1"&gt;'t give process information to stdout
  --force-badname   allow usernames which do not match the
                    NAME_REGEX configuration variable
  --help | -h       usage message
  --version | -v    version number and copyright
  --conf | -c FILE  use FILE as configuration file
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The command '/bin/sh -c adduser -D -s /bin/sh jhipster' returned a non-zero code: 1&lt;/p&gt;

&lt;p&gt;2) The Dockerfile should add a JAR file, not a WAR file (see the packaging field in the Maven pom.xml).&lt;/p&gt;

&lt;p&gt;The entrypoint.sh script should also use a JAR file instead of a WAR.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;

echo "The application will start in ${JHIPSTER_SLEEP}s..." &amp;amp;&amp;amp; sleep ${JHIPSTER_SLEEP}
exec java ${JAVA_OPTS} -noverify -XX:+AlwaysPreTouch -Djava.security.egd=file:/dev/./urandom -jar "${HOME}/app.war" "$@"

DockerFile V2
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; openjdk:11-jre-slim-stretch&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; SPRING_OUTPUT_ANSI_ENABLED=ALWAYS \&lt;/span&gt;
    JHIPSTER_SLEEP=0 \
    JAVA_OPTS=""

# Add a jhipster user to run our application so that it doesn't need to run as root
&lt;span class="k"&gt;RUN &lt;/span&gt;adduser &lt;span class="nt"&gt;--home&lt;/span&gt; /home/jhipster &lt;span class="nt"&gt;--disabled-password&lt;/span&gt; jhipster

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /home/jhipster&lt;/span&gt;

&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; entrypoint.sh entrypoint.sh&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;755 entrypoint.sh &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chown &lt;/span&gt;jhipster:jhipster entrypoint.sh
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; jhipster&lt;/span&gt;

&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["./entrypoint.sh"]&lt;/span&gt;

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;

&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; *.jar app.jar&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This produces a Docker image of 340MB, but can we make it smaller?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8yvPJXSc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/37i55fblr6c42nirm2kc.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8yvPJXSc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/37i55fblr6c42nirm2kc.jpeg" alt="alpine"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  From Debian to Alpine Linux (to Distroless)
&lt;/h1&gt;

&lt;p&gt;The JHipster Dockerfile uses an OpenJDK 11 runtime image based on Debian, which partially explains why the image is 340MB. Switching to Alpine Linux is a better strategy!&lt;/p&gt;

&lt;p&gt;Mohammed from Devoxx MA suggested looking into an even smaller option using Google's "Distroless" Docker images. #NeedMoreTimeToInvestigate&lt;/p&gt;

&lt;p&gt;HINT: Consider watching this very interesting Voxxed Days Zurich 2019 presentation by Matthew Gilliard on Java containers. He takes a Hello World example and deploys it using different strategies, including native images.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/8SdrYGIM384"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h1&gt;
  
  
  Azul's OpenJDK Zulu
&lt;/h1&gt;

&lt;p&gt;Azul provides an Alpine Linux OpenJDK distribution for Java 11: the best of both worlds!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FhOdO0g---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/z1urxzxxqbh2eq6mi9h5.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FhOdO0g---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/z1urxzxxqbh2eq6mi9h5.jpeg" alt="Azul"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The Azul runtime integrates and natively supports the musl library, which makes the integration more efficient (in terms of the footprint and runtime performance).&lt;/p&gt;

&lt;p&gt;See also the Portola Project, whose goal is to provide a port of the JDK to the Alpine Linux distribution, and in particular the musl C library.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Let's Strip with JLink
&lt;/h1&gt;

&lt;p&gt;Now that we're (finally) on Java 9+ we can take advantage of the Java module system. This means we can create a custom runtime image which only includes the Java modules used by our application.&lt;/p&gt;

&lt;p&gt;To find out which modules are used, we can run jdeps on our project JAR file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;jdeps &lt;span class="nt"&gt;--list-deps&lt;/span&gt; myapp-1.0.0.jar

java.base
java.logging
java.sql
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It looks like the app only requires 3 Java modules. Unfortunately this is not correct; more on this later.&lt;/p&gt;
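&lt;p&gt;A handy shortcut (assuming a JDK 11+ jdeps) is the --print-module-deps option, which emits the modules as a single comma-separated line, ready to paste into jlink's --add-modules. If you only have the one-module-per-line output shown above, joining it yourself is a one-liner:&lt;/p&gt;

```shell
# jdeps --print-module-deps myapp-1.0.0.jar
# would print e.g.: java.base,java.logging,java.sql

# Joining jdeps' one-module-per-line output into that comma-separated form:
printf 'java.base\njava.logging\njava.sql\n' | paste -sd, -
```

&lt;p&gt;Either way, remember that static analysis misses modules that are only loaded reflectively at runtime.&lt;/p&gt;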

&lt;p&gt;The next step is to create a custom runtime with jlink, adding the 3 required modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ jlink --output myjdk --module-path $JAVA_HOME/jmods --add-modules java.base,java.sql,java.logging
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The above command creates a myjdk directory containing everything needed to run our JAR file.&lt;/p&gt;
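&lt;p&gt;A quick sanity check of the freshly built runtime (assuming the jlink command above succeeded) is to list its modules and check how big it is:&lt;/p&gt;

```shell
# List the modules baked into the custom runtime
./myjdk/bin/java --list-modules

# How big did it get? A stripped runtime is typically a few tens of MB
du -sh myjdk
```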

&lt;h1&gt;
  
  
  The Final Dockerfile
&lt;/h1&gt;

&lt;p&gt;After running the JHipster application on a production machine I noticed several modules were still missing to run the Spring Boot web app on Java 11.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java.desktop       // For Java Beans getter's and setters
java.management    // JMX 
jdk.management     // JDK-specific management interfaces for the JVM
java.naming        // JNDI
jdk.unsupported    // sun.misc.Unsafe
jdk.crypto.ec      // SSL
java.net.http      // HTTP
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Obviously, depending on your project's functionality, you'll need to add more modules.&lt;/p&gt;

&lt;p&gt;Now that we know which Java modules are required we can create the following Dockerfile.&lt;/p&gt;

&lt;p&gt;Part 1: take Azul's Zulu OpenJDK image and create a custom JVM in the /jlinked directory.&lt;/p&gt;

&lt;p&gt;Part 2: use Alpine Linux, copy the jlinked JDK into /opt/jdk and start the Java app.&lt;br&gt;
Undertow forced me to run Spring Boot as root because it could otherwise not open some sockets. Further investigation is needed; suggestions are always welcome.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Part 1&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; azul/zulu-openjdk-alpine:11 as zulu&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ZULU_FOLDER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; /usr/lib/jvm/&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; jlink &lt;span class="nt"&gt;--compress&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nt"&gt;--strip-debug&lt;/span&gt; &lt;span class="nt"&gt;--no-header-files&lt;/span&gt; &lt;span class="nt"&gt;--no-man-pages&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nt"&gt;--module-path&lt;/span&gt; /usr/lib/jvm/&lt;span class="nv"&gt;$ZULU_FOLDER&lt;/span&gt;/jmods &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nt"&gt;--add-modules&lt;/span&gt; java.desktop,java.logging,java.sql,java.management,java.naming,jdk.unsupported,jdk.management,jdk.crypto.ec,java.net.http &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nt"&gt;--output&lt;/span&gt; /jlinked

&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Part 2&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; alpine:latest&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=zulu /jlinked /opt/jdk/&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apk update
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/cache/apk/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; CFP_JAVA_OPTS="-Xmx512m"&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; CFP_PERFORMANCE_OPTS="-Dspring.jmx.enabled=false -Dlog4j2.disableJmx=true"&lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; /opt/jdk/bin/java $CFP_JAVA_OPTS $CFP_PERFORMANCE_OPTS -XX:+UseContainerSupport \&lt;/span&gt;
                           -noverify -XX:+AlwaysPreTouch -Djava.security.egd=file:/dev/./urandom -jar /app.jar

&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; target/*.jar /app.jar&lt;/span&gt;

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The above example is heavily inspired by the Dockerfile provided by ALF.io.&lt;br&gt;
We now have a 180MB Docker image which we can deploy to production 😎💪🏻&lt;/p&gt;
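&lt;p&gt;For completeness, building and running the image would look roughly like this (a sketch: it assumes the Dockerfile lives in the project root, since it ADDs target/*.jar relative to the build context, and "myapp" is just an illustrative tag):&lt;/p&gt;

```shell
# Build the JAR, then the image
./mvnw package -Pprod
docker build -t myapp .

# Run it (the Dockerfile exposes port 80)
docker run -p 80:80 myapp

# Check the resulting image size
docker images myapp
```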
&lt;h1&gt;
  
  
  Can we Go Faster?
&lt;/h1&gt;

&lt;p&gt;On my to-do list is investigating Application Class Data Sharing (AppCDS); if configured correctly the app can have a 25% faster startup time!&lt;/p&gt;
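&lt;p&gt;For reference, the three AppCDS steps on Java 11 look roughly like this (a sketch based on the JDK 11 flags, untested with this particular app):&lt;/p&gt;

```shell
# 1. Record which classes the app loads during a trial run
java -XX:DumpLoadedClassList=classes.lst -jar app.jar

# 2. Dump those classes into a shared archive
java -Xshare:dump -XX:SharedClassListFile=classes.lst \
     -XX:SharedArchiveFile=app.jsa --class-path app.jar

# 3. Start the app with the archive mapped in
java -Xshare:on -XX:SharedArchiveFile=app.jsa -jar app.jar
```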

&lt;p&gt;CDS was a commercial-only feature of the Oracle JDK from version 7 onwards, but it has also been available in OpenJ9 and has been included in OpenJDK since version 10.&lt;/p&gt;

&lt;p&gt;Another strategy to investigate is using an exploded JAR file; I'm not sure whether that will give any noticeable improvement in startup time.&lt;/p&gt;
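&lt;p&gt;For a Spring Boot app the exploded variant would look something like this (a sketch; JarLauncher is Spring Boot's standard launcher class for unpacked fat JARs):&lt;/p&gt;

```shell
# Unpack the fat JAR once, ideally at image build time
unzip -q app.jar -d app

# Launch from the exploded directory instead of the JAR
cd app
java org.springframework.boot.loader.JarLauncher
```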
&lt;h1&gt;
  
  
  Can we Go Smaller?
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;Absolutely!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Imagine if JHipster could produce a Quarkus and/or Micronaut project based on your JDL. This would mean we could create a native image thanks to GraalVM.&lt;/p&gt;

&lt;p&gt;Producing an even smaller Docker image with blazing fast startup... a stellar combination with Google Cloud Run!&lt;/p&gt;
&lt;h1&gt;
  
  
  TheFutureLooksBright
&lt;/h1&gt;

&lt;p&gt;Comments and suggestions are very welcome!&lt;/p&gt;

&lt;p&gt;Cheers,&lt;/p&gt;

&lt;p&gt;Stephan&lt;/p&gt;

&lt;p&gt;Part 2 of this article series is now available @ &lt;a href="https://dev.to/stephan007/the-jhipster-quarkus-demo-app-1a1n"&gt;https://dev.to/stephan007/the-jhipster-quarkus-demo-app-1a1n&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ga-2wFU0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/kdhn4qntyjyjffjuwwi8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ga-2wFU0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/kdhn4qntyjyjffjuwwi8.png" alt="Docker"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Addendum
&lt;/h1&gt;
&lt;h2&gt;
  
  
  Jib
&lt;/h2&gt;

&lt;p&gt;Immediate response on my article came from Christophe, thanks for the feedback!&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--p27eGW9g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/882662745040138251/ulDKs1Vg_normal.jpg" alt="Christophe Bornet profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Christophe Bornet
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @cbornet_
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4t6ys1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      &lt;a href="https://twitter.com/Stephan007"&gt;@Stephan007&lt;/a&gt; &lt;a href="https://twitter.com/brunoborges"&gt;@brunoborges&lt;/a&gt; &lt;a href="https://twitter.com/alpinelinux"&gt;@alpinelinux&lt;/a&gt; &lt;a href="https://twitter.com/AzulSystems"&gt;@AzulSystems&lt;/a&gt; &lt;a href="https://twitter.com/Docker"&gt;@Docker&lt;/a&gt; &lt;a href="https://twitter.com/OpenJDK"&gt;@OpenJDK&lt;/a&gt; &lt;a href="https://twitter.com/alfio_event"&gt;@alfio_event&lt;/a&gt; Interesting. Note that &lt;a href="https://twitter.com/java_hipster"&gt;@java_hipster&lt;/a&gt;  doesn't use the Dockerfile anymore and it has been removed from master. We now use jib instead.
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      06:58 AM - 13 May 2019
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1127830445209600000" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-reply-action.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1127830445209600000" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-retweet-action.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      0
      &lt;a href="https://twitter.com/intent/like?tweet_id=1127830445209600000" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-like-action.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
      7
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;It seems JHipster now uses Jib instead of the provided Dockerfile. I'll need to investigate what the resulting image looks like and whether it's smaller?!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Jib builds optimised Docker and  OCI images for your Java applications without a Docker daemon - and without deep mastery of Docker best practices. It is available as plugins for  Maven and  Gradle and as a Java library.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./mvnw package -Pprod verify jib:dockerBuild
More details @ https://www.jhipster.tech/docker-compose/#-building-and-running-a-docker-image-of-your-application
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  "3 Days Ago"
&lt;/h1&gt;

&lt;p&gt;Another response to the article informed me that the JHipster team had switched to OpenJDK 11 on Alpine just 3 days earlier. That's what I love about JHipster: they're at the top of their game!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IjB6qddI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/r25l6yal3xlcx0i9op66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IjB6qddI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/r25l6yal3xlcx0i9op66.png" alt="OpenJDK11"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  "Distroless" Docker Images
&lt;/h1&gt;

&lt;p&gt;My Devoxx Morocco friend Mohammed (a Docker Champion) suggested in a Twitter reply looking at Google's Distroless Docker images. Looks very promising indeed; need more time to investigate 😄&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q8HDFv-p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1027696201293094914/8gZi8z3c_normal.jpg" alt="Mohammed Aboullaite profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Mohammed Aboullaite
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @laytoun
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4t6ys1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      &lt;a href="https://twitter.com/Stephan007"&gt;@Stephan007&lt;/a&gt; &lt;a href="https://twitter.com/alpinelinux"&gt;@alpinelinux&lt;/a&gt; &lt;a href="https://twitter.com/AzulSystems"&gt;@AzulSystems&lt;/a&gt; &lt;a href="https://twitter.com/Docker"&gt;@Docker&lt;/a&gt; &lt;a href="https://twitter.com/OpenJDK"&gt;@OpenJDK&lt;/a&gt; &lt;a href="https://twitter.com/alfio_event"&gt;@alfio_event&lt;/a&gt; As an alternative to alpine (and the issues with Musl) you can use a very lightweight linux distro from Google (part of the distroless  project &lt;a href="https://t.co/EO1VcsYxeh"&gt;github.com/GoogleContaine…&lt;/a&gt;)&lt;br&gt;&lt;br&gt;The base image is glibc based and it size is almost 8M! You can achieve similar results with openjdk  hotspot
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      07:38 AM - 13 May 2019
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1127840532363919361" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-reply-action.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1127840532363919361" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-retweet-action.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      0
      &lt;a href="https://twitter.com/intent/like?tweet_id=1127840532363919361" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-like-action.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
      4
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;h1&gt;
  
  
  Illegal Reflective Access via Undertow
&lt;/h1&gt;

&lt;p&gt;Spring Boot uses Undertow, which depends on JBoss XNIO-NIO. As a result, Java 11 throws an illegal reflective access warning.&lt;/p&gt;

&lt;p&gt;Switching from Undertow to Jetty might resolve this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WARNING: An illegal reflective access operation has occurred

[dvbe19-cfp-app-64676889d-4g8nv dvbe19-app] WARNING: Illegal reflective access by org.xnio.nio.NioXnio$2 (jar:file:/app.jar!/BOOT-INF/lib/xnio-nio-3.3.8.Final.jar!/) to constructor sun.nio.ch.EPollSelectorProvider()

[dvbe19-cfp-app-64676889d-4g8nv dvbe19-app] WARNING: Please consider reporting this to the maintainers of org.xnio.nio.NioXnio$2

[dvbe19-cfp-app-64676889d-4g8nv dvbe19-app] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations

[dvbe19-cfp-app-64676889d-4g8nv dvbe19-app] WARNING: All illegal access operations will be denied in a future release
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
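Until the library itself is fixed, a common JVM-level mitigation (my assumption; not something this article tried) is to explicitly open the internal package to unnamed modules at launch, which makes XNIO's access legal and silences the warning:

```shell
java --add-opens java.base/sun.nio.ch=ALL-UNNAMED -jar app.jar
```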



&lt;p&gt;And another reflective access warning, this time from XStream. For this one we don't have an alternative (yet).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[INFO] --- maven-war-plugin:2.2:war (default-war) @ cfp ---
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.thoughtworks.xstream.core.util.Fields (file:/Users/stephan/.m2/repository/com/thoughtworks/xstream/xstream/1.3.1/xstream-1.3.1.jar) to field java.util.Properties.defaults
WARNING: Please consider reporting this to the maintainers of com.thoughtworks.xstream.core.util.Fields
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
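The XStream warning comes from reflectively touching the protected `defaults` field of `java.util.Properties`. A minimal plain-Java reproduction (my own sketch, not from the article; safe to run on any JDK because the access attempt is wrapped):

```java
import java.lang.reflect.Field;
import java.util.Properties;

// Minimal reproduction of the kind of access XStream performs: reflectively
// reaching the protected 'defaults' field of java.util.Properties.
// Looking the field up is always allowed; it is the setAccessible(true) call
// that triggers the "illegal reflective access" warning on JDK 9-16 and
// throws InaccessibleObjectException on recent JDKs, hence the try/catch.
public class ReflectiveAccessDemo {

    static String probe() {
        try {
            Field defaults = Properties.class.getDeclaredField("defaults");
            try {
                defaults.setAccessible(true); // the illegal reflective access
                return defaults.getName() + ":accessible";
            } catch (RuntimeException denied) { // InaccessibleObjectException on newer JDKs
                return defaults.getName() + ":denied";
            }
        } catch (NoSuchFieldException e) {
            return "missing";
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```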



&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.okta.com/blog/2019/04/04/java-11-java-12-jhipster-oidc"&gt;https://developer.okta.com/blog/2019/04/04/java-11-java-12-jhipster-oidc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://spring.io/blog/2018/12/12/how-fast-is-spring"&gt;https://spring.io/blog/2018/12/12/how-fast-is-spring&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.gilliard.lol/2018/11/05/alpine-jdk11-images.html"&gt;https://blog.gilliard.lol/2018/11/05/alpine-jdk11-images.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/java/javase/11/vm/class-data-sharing.html"&gt;https://docs.oracle.com/en/java/javase/11/vm/class-data-sharing.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.google.com/presentation/d/1d2L6O6WELVT6rwwhiw_Z9jBnFzVPtku4URPt4KCsWZQ/edit#slide=id.g5278af057a_0_124"&gt;https://docs.google.com/presentation/d/1d2L6O6WELVT6rwwhiw_Z9jBnFzVPtku4URPt4KCsWZQ/edit#slide=id.g5278af057a_0_124&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;"Docker containers &amp;amp; java: What I wish I've been told!" Video @ &lt;a href="https://www.docker.com/dockercon/2019-videos?watch=docker-containers-java-what-i-wish-i-had-been-told"&gt;https://www.docker.com/dockercon/2019-videos?watch=docker-containers-java-what-i-wish-i-had-been-told&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>java</category>
      <category>angular</category>
    </item>
    <item>
      <title>The JHipster Quarkus demo app</title>
      <dc:creator>Stephan</dc:creator>
      <pubDate>Tue, 21 May 2019 12:46:24 +0000</pubDate>
      <link>https://dev.to/stephan007/the-jhipster-quarkus-demo-app-1a1n</link>
      <guid>https://dev.to/stephan007/the-jhipster-quarkus-demo-app-1a1n</guid>
      <description>&lt;p&gt;Last weekend I wrote an article on creating the &lt;a href="https://www.linkedin.com/pulse/create-smallest-docker-image-using-jhipster-6-java-11-stephan-janssen/"&gt;smallest possible Docker image for my JHipster&lt;/a&gt; application. The result was a 180Mb Docker image which starts, on average, in 56 seconds on Google Cloud.&lt;/p&gt;

&lt;p&gt;On this rainy Sunday here in Belgium I decided to create a JHipster application with the fastest possible startup.&lt;/p&gt;

&lt;h1&gt;
  
  
  Executive summary
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;59Mb footprint and 0.056s startup time 😱💪🏻&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Quarkus to the rescue!
&lt;/h1&gt;

&lt;p&gt;Ever since Red Hat announced Quarkus I wanted to play with this new project and today was that day.&lt;/p&gt;

&lt;p&gt;My ambition is basically to take an existing Spring Boot app (generated by JHipster) and replace it with a Quarkus native version. Let's see how far we can get.&lt;/p&gt;

&lt;p&gt;I wanted to mimic the package structure JHipster uses, which is very logical. Under the service package you'll also find the DTOs and mappers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P4-wFFgz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://media.licdn.com/dms/image/C4D12AQEtVjmCP2TvKg/article-inline_image-shrink_1500_2232/0%3Fe%3D1564012800%26v%3Dbeta%26t%3DeSBQHT7fg5OMNRTK2Iq4fr2u1Ac_3f7xqrWDIIrRVkM" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4-wFFgz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://media.licdn.com/dms/image/C4D12AQEtVjmCP2TvKg/article-inline_image-shrink_1500_2232/0%3Fe%3D1564012800%26v%3Dbeta%26t%3DeSBQHT7fg5OMNRTK2Iq4fr2u1Ac_3f7xqrWDIIrRVkM" alt="Package structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain: Hibernate with Panache
&lt;/h1&gt;

&lt;p&gt;Let's start by creating a (conference) Event domain object which has name and description fields.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;com.devoxx.hipster.domain&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;io.quarkus.hibernate.orm.panache.PanacheEntity&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;javax.persistence.*&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;javax.validation.constraints.NotNull&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;javax.validation.constraints.Size&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="nd"&gt;@Cacheable&lt;/span&gt;
&lt;span class="nd"&gt;@Entity&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"hipster_event"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Event&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;PanacheEntity&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@NotNull@Size&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;min&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="nd"&gt;@Column&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nullable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Hibernate with Panache reminds me of Lombok (no getters and setters needed), and in addition you can also use bean validation. With a simple @Cacheable annotation you can activate Infinispan caching.&lt;/p&gt;
&lt;h1&gt;
  
  
  EventRepository
&lt;/h1&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;com.devoxx.hipster.repository&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.devoxx.hipster.domain.Event&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;io.quarkus.hibernate.orm.panache.PanacheRepositoryBase&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;javax.enterprise.context.ApplicationScoped&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="nd"&gt;@ApplicationScoped&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;EventRepository&lt;/span&gt; &lt;span class="kd"&gt;implements&lt;/span&gt; &lt;span class="nc"&gt;PanacheRepositoryBase&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Event&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;&lt;span class="nc"&gt;Integer&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;Event&lt;/span&gt; &lt;span class="nf"&gt;findByName&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;){&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;firstResult&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;The finder methods are created in the EventRepository. For my simple CRUD web application, findByName is currently just an example method. I'm hoping to add paging and sorting functionality here in later versions.&lt;/p&gt;

&lt;p&gt;Basic domain objects could be handed directly to Angular, but when introducing more complex and confidential fields (like emails or OAuth secrets) you want a DTO mapper between the domain and web REST packages.&lt;/p&gt;
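What a generated DTO mapper boils down to is a field-by-field copy that deliberately leaves confidential fields behind. A plain-Java sketch (my own illustration; class and field names are simplified stand-ins for the real JHipster classes):

```java
// Hand-written equivalent of what a generated DTO mapper does:
// copy the fields you want to expose, skip the confidential ones.
// Event/EventDTO here are simplified stand-ins, not the article's classes.
public class MapperSketch {

    static class Event {
        String name;
        String description;
        String organiserEmail; // confidential: must never reach the browser
    }

    static class EventDTO {
        String name;
        String description;
    }

    static EventDTO toDto(Event event) {
        EventDTO dto = new EventDTO();
        dto.name = event.name;
        dto.description = event.description;
        return dto; // organiserEmail is intentionally not copied
    }

    public static void main(String[] args) {
        Event event = new Event();
        event.name = "Devoxx Belgium 2019";
        event.description = "From developers for developers";
        event.organiserEmail = "do-not-leak@example.com";
        System.out.println(toDto(event).name);
    }
}
```

MapStruct generates exactly this kind of boilerplate at compile time from the mapper interface shown below.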
&lt;h1&gt;
  
  
  MapStruct
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;MapStruct is a code generator that greatly simplifies the implementation of mappings between Java bean types based on a convention over configuration approach.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;JHipster heavily depends on MapStruct, so I needed to investigate whether it was possible to use it with Quarkus. The actual &lt;a href="https://quarkus.io/"&gt;Quarkus website&lt;/a&gt; doesn't mention it, but Google did turn up some recent work to make MapStruct part of the Quarkus ecosystem. Great!&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;com.devoxx.hipster.service.mapper&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.devoxx.hipster.domain.Event&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.devoxx.hipster.service.dto.EventDTO&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;org.mapstruct.Mapper&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="nd"&gt;@Mapper&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;QuarkusMappingConfig&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;EventMapper&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nc"&gt;EventDTO&lt;/span&gt; &lt;span class="nf"&gt;toDto&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Event&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="nc"&gt;Event&lt;/span&gt; &lt;span class="nf"&gt;toEntity&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;EventDTO&lt;/span&gt; &lt;span class="n"&gt;eventDTO&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;The EventMapper needs a reference to a QuarkusMappingConfig interface which tells MapStruct to use CDI for dependency injection. There is Spring DI support for Quarkus, but I'm not sure if MapStruct already supports it.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;com.devoxx.hipster.service.mapper&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;org.mapstruct.MapperConfig&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="nd"&gt;@MapperConfig&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;componentModel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"cdi"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;QuarkusMappingConfig&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Domain model, DTO and Mappers DONE 👍🏼&lt;/p&gt;
&lt;h1&gt;
  
  
  Service Layer
&lt;/h1&gt;

&lt;p&gt;The EventService is very lightweight and I was tempted to move this logic into the EventResource class, but having a clean separation between these layers will eventually be a good thing. So here we go...&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@ApplicationScoped&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;EventService&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Inject&lt;/span&gt;
    &lt;span class="nc"&gt;EventMapper&lt;/span&gt; &lt;span class="n"&gt;eventMapper&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="cm"&gt;/**
     * Get all events.
     *
     * @return list of event DTOs.
     */&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;EventDTO&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getAll&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;Stream&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Event&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Event&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;streamAll&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;map&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;eventMapper&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toDto&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;)&lt;/span&gt;
                     &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;collect&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Collectors&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toList&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;The final Service also includes code for getting one specific event (by id) and saving a DTO.&lt;/p&gt;
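Those elided methods follow the same shape as getAll(). A plain-Java, in-memory sketch (my own illustration with hypothetical names, not the article's Panache-backed code) of what a by-id lookup and a save might look like:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// In-memory stand-in for the Panache-backed service: findOne returns an
// Optional so the resource layer can translate "absent" into a 404, and
// save upserts by id. Names are hypothetical, not the article's code.
public class EventServiceSketch {

    static class EventDTO {
        final long id;
        final String name;
        EventDTO(long id, String name) { this.id = id; this.name = name; }
    }

    private final Map<Long, EventDTO> store = new HashMap<>();

    Optional<EventDTO> findOne(long id) {
        return Optional.ofNullable(store.get(id));
    }

    EventDTO save(EventDTO dto) {
        store.put(dto.id, dto);
        return dto;
    }

    public static void main(String[] args) {
        EventServiceSketch service = new EventServiceSketch();
        service.save(new EventDTO(1L, "Devoxx Belgium 2019"));
        System.out.println(service.findOne(1L).map(d -> d.name).orElse("not found"));
    }
}
```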
&lt;h1&gt;
  
  
  EventResource
&lt;/h1&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Path&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"api/events"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nd"&gt;@ApplicationScoped&lt;/span&gt;
&lt;span class="nd"&gt;@Produces&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"application/json"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nd"&gt;@Consumes&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"application/json"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;EventResource&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Inject&lt;/span&gt;
    &lt;span class="nc"&gt;EventService&lt;/span&gt; &lt;span class="n"&gt;eventService&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;//...&lt;/span&gt;

    &lt;span class="nd"&gt;@GETpublic&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;EventDTO&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getEvents&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;EventDTO&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;allEvents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;eventService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getAll&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;allEvents&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;WebApplicationException&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"No events available"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;HttpURLConnection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;HTTP_NOT_FOUND&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;allEvents&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;//...&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;The EventResource again holds no real surprises. Quarkus uses RESTEasy, and that's a switch I need to flip in my head coming from Spring REST, but I'll survive and learn.&lt;/p&gt;
&lt;h1&gt;
  
  
  Functional Testing
&lt;/h1&gt;

&lt;p&gt;Writing some functional tests which consume the REST endpoints is again a fun experience and looks as follows.&lt;/p&gt;

&lt;p&gt;Talking about fun: Quarkus also supports Kotlin. I should give that a try next.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@QuarkusTest&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;EventEndpointTest&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Test&lt;/span&gt;
    &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;testGetOneEvent&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;given&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/api/events/1"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;then&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;HttpURLConnection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;HTTP_OK&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;assertThat&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;containsString&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Devoxx"&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
                      &lt;span class="n"&gt;containsString&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"for developers"&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;

    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="nd"&gt;@Test&lt;/span&gt;
    &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;testGetAllEvents&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;//List all, the database has initially 2 events&lt;/span&gt;
        &lt;span class="n"&gt;given&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/api/events"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;then&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;HttpURLConnection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;HTTP_OK&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;assertThat&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"size()"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;

    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h1&gt;
  
  
  Let's FlyWay
&lt;/h1&gt;

&lt;p&gt;JHipster uses Liquibase but Quarkus (for now) only supports Flyway (Axel, thanks for the invite but I already have a headache just looking at beer 🤪).&lt;/p&gt;

&lt;p&gt;Next to adding the Flyway Maven dependency, you need to activate it by adding the following line to the application.properties file.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Flyway minimal config properties
quarkus.flyway.migrate-at-start=true
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;In the resources/db/migration directory you then add the SQL migration scripts. Don't forget to add a PostgreSQL sequence generator, otherwise the new domain objects will not get any ids.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;SEQUENCE&lt;/span&gt; &lt;span class="n"&gt;hibernate_sequence&lt;/span&gt; &lt;span class="k"&gt;START&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;hipster_event&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt;   &lt;span class="nb"&gt;INT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;hipster_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'Devoxx Belgium 2019'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'The developers conference from developers for developers'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
       &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'Devoxx UK 2019'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'The developers conference in London'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Now that we have all the logic in place, let's see how we can run this baby.&lt;/p&gt;
&lt;h1&gt;
  
  
  GraalVM
&lt;/h1&gt;

&lt;p&gt;You need to download GraalVM 1.0 rc16 and Apache Maven 3.5.3+.&lt;/p&gt;

&lt;p&gt;Note: GraalVM v19.0 is not yet supported by Quarkus but looks like the Red Hat team is on it @ &lt;a href="https://github.com/quarkusio/quarkus/issues/2412"&gt;https://github.com/quarkusio/quarkus/issues/2412&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the project has been compiled and packaged by Maven, you can start the Quarkus application in development mode:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mvn quarkus:dev
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;What's really cool is that Quarkus supports hot reload of the project. Whenever an HTTP request hits the application, Quarkus reloads the changed code on the fly, which only takes a few milliseconds. Finally having hot reload without setting up ZeroTurnaround's JRebel is a very nice bonus.&lt;/p&gt;

&lt;p&gt;The most important output is listed below...&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFO  [io.qua.dep.QuarkusAugmentor] (main) Beginning quarkus augmentation
INFO  [io.qua.fly.FlywayProcessor] (build-6) Adding application migrations in path: file:/Users/stephan/java/projects/quarkushipster/backend/target/classes/db/migration/
INFO  [io.qua.fly.FlywayProcessor] (build-6) Adding application migrations in path: file:/Users/stephan/java/projects/quarkushipster/backend/target/classes/db/migration
INFO  [io.qua.dep.QuarkusAugmentor] (main) Quarkus augmentation completed in 703ms
INFO  [org.fly.cor.int.lic.VersionPrinter] (main) Flyway Community Edition 5.2.4 by Boxfuse
INFO  [org.fly.cor.int.dat.DatabaseFactory] (main) Database: jdbc:postgresql:quarkus_hipster (PostgreSQL 10.5)
INFO  [org.fly.cor.int.com.DbValidate] (main) Successfully validated 1 migration (execution time 00:00.013s)
INFO  [org.fly.cor.int.sch.JdbcTableSchemaHistory] (main) Creating Schema History table: "public"."flyway_schema_history"
INFO  [org.fly.cor.int.com.DbMigrate] (main) Current version of schema "public": &amp;lt;&amp;lt; Empty Schema &amp;gt;&amp;gt;
INFO  [org.fly.cor.int.com.DbMigrate] (main) Migrating schema "public" to version 1.0.0 - HIPSTER
INFO  [org.fly.cor.int.com.DbMigrate] (main) Successfully applied 1 migration to schema "public" (execution time 00:00.050s)
INFO  [io.quarkus] (main) Quarkus 0.15.0 started in 1.781s. Listening on: http://[::]:8080
INFO  [io.quarkus] (main) Installed features: [agroal, cdi, flyway, hibernate-orm, jdbc-postgresql, narayana-jta, resteasy, resteasy-jsonb]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;OOOOOooooh Quarkus started my simple CRUD Java application in 1.781 seconds.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Quarkus 0.15.0 started in 1.781s.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And it's not even running in native mode yet 😱&lt;/p&gt;
&lt;h1&gt;
  
  
  Going Native
&lt;/h1&gt;

&lt;p&gt;Before you can build a native package you need to install the GraalVM native-image tool. Change the shell directory to the GraalVM bin directory and type the following:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gu install native-image
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Now you can create a native image of the Java application, which will take a few tweets and one coffee (around 3 minutes, depending on your computer).&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mvn package -Dnative
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;The Maven command will create a {project}-{version}-runner application in the target directory. You can start the application simply by executing it in a shell.&lt;/p&gt;

&lt;p&gt;The native app at first started in "only" around 5.056 seconds, but it turned out I had to update my /etc/hosts file and add my hostname to the localhost entries.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;127.0.0.1       localhost   Stephans-MacBook-Pro.local
::1             localhost   Stephans-MacBook-Pro.local
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
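&lt;p&gt;The root cause seems to be the runtime resolving the machine's hostname at startup. If you want to check this on your own machine, here is a small diagnostic sketch (not part of the project) that times the same lookup:&lt;/p&gt;

```java
import java.net.InetAddress;

public class HostnameCheck {
    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        // This is the lookup the JVM (or native image) performs at startup;
        // a matching /etc/hosts entry makes it near-instant, while a
        // missing entry can force a slow DNS round trip.
        InetAddress local = InetAddress.getLocalHost();
        long millis = (System.nanoTime() - start) / 1_000_000;
        System.out.println(local.getHostName() + " resolved in " + millis + " ms");
    }
}
```

&lt;p&gt;With a matching /etc/hosts entry this resolves in a few milliseconds; without one it can block for seconds.&lt;/p&gt;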


&lt;p&gt;Once that was added, the native app started in 0.056s and the application is only 56 MB 😱&lt;/p&gt;
&lt;h1&gt;
  
  
  And now the FrontEnd
&lt;/h1&gt;

&lt;p&gt;I had to relax a bit first after the speed shock, but now let's generate the Angular 7 app using JHipster.&lt;/p&gt;
&lt;h2&gt;
  
  
  JHipster Client
&lt;/h2&gt;

&lt;p&gt;We only need to create the Angular side of the project, which you can do as follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ jhipster --skip-server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Once created, we can import the JDL (JHipster Domain Language) file, which will generate all the related Angular CRUD pages and logic. The current JDL only has the Event domain model in it, with two fields: name &amp;amp; description.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ jhipster import-jdl jhipster.jdl 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
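&lt;p&gt;For reference, the JDL for such a single-entity model is tiny; a sketch with just the two fields mentioned above (no validations or pagination options):&lt;/p&gt;

```
entity Event {
  name String,
  description String
}
```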


&lt;p&gt;This is almost too easy: the previous command produces state-of-the-art Angular TypeScript code in less than 5 minutes. An average developer would need a day (or more) to build this by hand, and would probably bill the customer a week!&lt;/p&gt;

&lt;p&gt;You run the Angular JHipster web app using &lt;code&gt;npm start&lt;/code&gt;, then open your browser and point it to &lt;a href="http://localhost:9000"&gt;http://localhost:9000&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This is what you get:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XDyNc27U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://media.licdn.com/dms/image/C4D12AQGxNJRkic1Hsg/article-inline_image-shrink_1500_2232/0%3Fe%3D1564012800%26v%3Dbeta%26t%3DkAwaLlmGB_XHnRxFutmoijl-VgJ4ct9uTduP14sWxWk" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XDyNc27U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://media.licdn.com/dms/image/C4D12AQGxNJRkic1Hsg/article-inline_image-shrink_1500_2232/0%3Fe%3D1564012800%26v%3Dbeta%26t%3DkAwaLlmGB_XHnRxFutmoijl-VgJ4ct9uTduP14sWxWk" alt="JHipster welcome page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To show the conference event data I had to implement a few mock REST endpoints in Quarkus for the user authentication and account details.&lt;/p&gt;
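&lt;p&gt;As an illustration, such a mock can be a plain JAX-RS resource; a minimal sketch (the &lt;code&gt;/api/account&lt;/code&gt; path and the returned fields are assumptions based on what the JHipster client requests, not the actual project code):&lt;/p&gt;

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical mock of the account endpoint the JHipster client calls on startup.
@Path("/api")
public class MockAccountResource {

    @GET
    @Path("/account")
    @Produces(MediaType.APPLICATION_JSON)
    public String account() {
        // Hard-coded account details; real authentication comes later (RBAC & JWT).
        return "{\"login\": \"admin\", \"authorities\": [\"ROLE_ADMIN\", \"ROLE_USER\"]}";
    }
}
```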

&lt;p&gt;I also disabled the ROLE_USER authorities in the event.route.ts file because this is not yet configured.&lt;/p&gt;

&lt;p&gt;Once those changes were made I could enjoy my CRUD logic... euh, wait... what? Damn... the browser doesn't like a page served from port 9000 calling REST backend endpoints on port 8080: Cross-Origin Resource Sharing (CORS).&lt;/p&gt;

&lt;p&gt;Hmm, how will Quarkus handle CORS?&lt;/p&gt;

&lt;p&gt;Google pointed me to an example CorsFilter I had to add to the Quarkus project, and that did the trick, great!&lt;/p&gt;
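&lt;p&gt;For reference, such a filter is usually a small JAX-RS response filter; a minimal sketch (the allowed origin, methods and headers below are assumptions you should adapt to your setup):&lt;/p&gt;

```java
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

// Adds the CORS headers the browser needs to allow the Angular app
// (port 9000) to call the Quarkus backend (port 8080).
@Provider
public class CorsFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext request, ContainerResponseContext response) {
        response.getHeaders().putSingle("Access-Control-Allow-Origin", "http://localhost:9000");
        response.getHeaders().putSingle("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
        response.getHeaders().putSingle("Access-Control-Allow-Headers", "origin, content-type, accept, authorization");
    }
}
```

&lt;p&gt;Because the class is annotated with &lt;code&gt;@Provider&lt;/code&gt;, RESTEasy should pick it up automatically without further registration.&lt;/p&gt;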

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3uvIxkQJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://media.licdn.com/dms/image/C4D12AQHhDT3w7MIE9g/article-inline_image-shrink_1500_2232/0%3Fe%3D1564012800%26v%3Dbeta%26t%3D6BNW1RNOKN-rEhV5g7PoCSVOiydIIkyd11zLj4q8NO8" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3uvIxkQJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://media.licdn.com/dms/image/C4D12AQHhDT3w7MIE9g/article-inline_image-shrink_1500_2232/0%3Fe%3D1564012800%26v%3Dbeta%26t%3D6BNW1RNOKN-rEhV5g7PoCSVOiydIIkyd11zLj4q8NO8" alt="Angular CRUD page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We now have a (non-secure) Angular web app created by JHipster talking to a Quarkus backend with a startup time of less than 2 seconds &amp;amp; hot reload of both the web and the Java modules.&lt;/p&gt;
&lt;h1&gt;
  
  
  What's next?
&lt;/h1&gt;

&lt;p&gt;Another rainy weekend should allow me to add RBAC &amp;amp; JWT (which Quarkus supports), but in the meantime you can check out the project from&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vJ70wriM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/devoxx"&gt;
        devoxx
      &lt;/a&gt; / &lt;a href="https://github.com/devoxx/quarkusHipster"&gt;
        quarkusHipster
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
Quarkus JHipster demo project&lt;/h1&gt;
&lt;p&gt;This is a basic JHipster Angular CRUD application using Quarkus as the backend service.&lt;/p&gt;
&lt;p&gt;Checkout also my related &lt;a href="https://www.linkedin.com/pulse/jhipster-quarkus-demo-app-stephan-janssen" rel="nofollow"&gt;LinkedIn article&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The backend code is very straight forward and uses the following Quarkus (extensions) :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;RESTEasy to expose the REST endpoints&lt;/li&gt;
&lt;li&gt;Hibernate ORM with Panache to perform the CRUD operations on the database&lt;/li&gt;
&lt;li&gt;MapStruct for DTO mapping&lt;/li&gt;
&lt;li&gt;FlyWay version control for the database tables&lt;/li&gt;
&lt;li&gt;ArC, the CDI inspired dependency injection tool with zero overhead&lt;/li&gt;
&lt;li&gt;The high performance Agroal connection pool&lt;/li&gt;
&lt;li&gt;Infinispan based caching&lt;/li&gt;
&lt;li&gt;All safely coordinated by the Narayana Transaction Manager&lt;/li&gt;
&lt;li&gt;A PostgreSQL database; see below to run one via Docker&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This demo application is based on the Quarkus example project  'hibernate-orm-panache-resteasy' provided by the RedHat team @ &lt;a href="https://github.com/quarkusio/quarkus-quickstarts"&gt;https://github.com/quarkusio/quarkus-quickstarts&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Thanks to the Quarkus (Red Hat), JHipster, GraalVM teams for their amazing work!&lt;/p&gt;
&lt;h2&gt;
Requirements&lt;/h2&gt;
&lt;p&gt;To compile and run this demo you will need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;GraalVM &lt;code&gt;1.0 rc16&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Apache…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/devoxx/quarkusHipster"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;Or on GitLab @ &lt;a href="https://gitlab.com/voxxed/quarkushipster"&gt;https://gitlab.com/voxxed/quarkushipster&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I do accept Merge Requests 😎👍🏼&lt;/p&gt;

&lt;p&gt;Thanks again to the GraalVM, Quarkus and JHipster teams for making this magic possible!&lt;/p&gt;

&lt;h1&gt;
  
  
  Addendum
&lt;/h1&gt;

&lt;p&gt;After publishing the article I got some interesting feedback on Twitter, looks like the startup time is still way too slow 😎&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--hvhrWiSg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/2734209260/0babdf58e01a274a7c5ca468f716d4d1_normal.jpeg" alt="Sanne profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Sanne
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @sannegrinovero
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4t6ys1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      &lt;a href="https://twitter.com/Stephan007"&gt;&lt;/a&gt;&lt;a href="https://twitter.com/Stephan007"&gt;@Stephan007&lt;/a&gt; &lt;a href="https://twitter.com/QuarkusIO"&gt;&lt;/a&gt;&lt;a href="https://twitter.com/QuarkusIO"&gt;@QuarkusIO&lt;/a&gt; &lt;a href="https://twitter.com/juliendubois"&gt;@juliendubois&lt;/a&gt; &lt;a href="https://twitter.com/angular"&gt;@angular&lt;/a&gt; &lt;a href="https://twitter.com/GetMapStruct"&gt;@GetMapStruct&lt;/a&gt; &lt;a href="https://twitter.com/FlywayDb"&gt;@FlywayDb&lt;/a&gt; &lt;a href="https://twitter.com/Infinispan"&gt;@Infinispan&lt;/a&gt; &lt;a href="https://twitter.com/springboot"&gt;@springboot&lt;/a&gt; hi &lt;a href="https://twitter.com/Stephan007"&gt;&lt;/a&gt;&lt;a href="https://twitter.com/Stephan007"&gt;@Stephan007&lt;/a&gt; ! Many thanks for the great article, but the boot times you got are way too slow 😅 something is wrong, I'll check it out ... cc/ &lt;a href="https://twitter.com/emmanuelbernard"&gt;@emmanuelbernard&lt;/a&gt; &lt;a href="https://twitter.com/QuarkusIO"&gt;&lt;/a&gt;&lt;a href="https://twitter.com/QuarkusIO"&gt;@QuarkusIO&lt;/a&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      10:07 AM - 20 May 2019
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1130414651135746050" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-reply-action.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1130414651135746050" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-retweet-action.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      0
      &lt;a href="https://twitter.com/intent/like?tweet_id=1130414651135746050" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-like-action.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
      4
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;I should be getting around 0.015s in native mode, but for some unknown reason (DNS resolution?) my native app only started after around 5 seconds. According to Emmanuel it might be related to slow or unavailable DNS resolution.&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--GoqR4FIJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1110868233488420865/kgwE0i1K_normal.png" alt="Emmanuel Bernard profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Emmanuel Bernard
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @emmanuelbernard
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4t6ys1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      &lt;a href="https://twitter.com/Stephan007"&gt;@Stephan007&lt;/a&gt; &lt;a href="https://twitter.com/SanneGrinovero"&gt;@SanneGrinovero&lt;/a&gt; &lt;a href="https://twitter.com/shelajev"&gt;@shelajev&lt;/a&gt; &lt;a href="https://twitter.com/QuarkusIO"&gt;@QuarkusIO&lt;/a&gt; &lt;a href="https://twitter.com/juliendubois"&gt;@juliendubois&lt;/a&gt; &lt;a href="https://twitter.com/angular"&gt;@angular&lt;/a&gt; &lt;a href="https://twitter.com/GetMapStruct"&gt;@GetMapStruct&lt;/a&gt; &lt;a href="https://twitter.com/FlywayDb"&gt;@FlywayDb&lt;/a&gt; &lt;a href="https://twitter.com/Infinispan"&gt;@Infinispan&lt;/a&gt; &lt;a href="https://twitter.com/springboot"&gt;@springboot&lt;/a&gt; I’ve seen something like that on crappy networks. We do suspect some slow/unavailable DNS resolution of sort. Seems to be a general Java problem though vs just Quarkus.
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      13:31 - 20 May 2019
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1130466137152675840" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-reply-action.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1130466137152675840" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-retweet-action.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      0
      &lt;a href="https://twitter.com/intent/like?tweet_id=1130466137152675840" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-like-action.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
      1
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;The above issue was resolved by adding my hostname to the localhost entries in /etc/hosts!&lt;/p&gt;

&lt;p&gt;You can follow the related discussion on my Twitter timeline.&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;
      &lt;div class="ltag__twitter-tweet__media"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NQ3aThP1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/D6_bwo4XYAAqUvl.jpg" alt="unknown tweet media content"&gt;
      &lt;/div&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--lfySgJWO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1068853881415917568/EeHmM4pj_normal.jpg" alt="Stephan profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Stephan
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        &lt;a class="comment-mentioned-user" href="https://dev.to/stephan007"&gt;@stephan007&lt;/a&gt;

      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4t6ys1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      JHipster meets Quarkus, a demo project journey article with bleeding fast startup and small footprint! &lt;a href="https://t.co/1uCuMBNU0P"&gt;linkedin.com/pulse/jhipster…&lt;/a&gt; 
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      06:44 AM - 20 May 2019
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1130363667428630528" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-reply-action.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1130363667428630528" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-retweet-action.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      53
      &lt;a href="https://twitter.com/intent/like?tweet_id=1130363667428630528" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-like-action.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
      123
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


</description>
      <category>java</category>
      <category>quarkus</category>
      <category>angular</category>
      <category>graalvm</category>
    </item>
  </channel>
</rss>
